5
JsonBoa
1d

Just read up about the issue with AI interviewers and interviewees.
Schools and companies were using AI to interview potential students and employees.
The applicants reasoned that if AI is a good enough judge of character and intellect, then it must be good enough at expressing both of those things, and so they deep-faked themselves into the perfect applicants.

Assuming that learning institutions act rationally, they must believe that an automated selection process will be a net positive.

Now, learning institutions might want to use AI as a tool to select applicants because it is objectively better than humans at selecting the best humans... or because it is enough cheaper that the savings more than make up for the lemons that get through the gauntlet. Occam's razor rejects the former in favor of the latter.

The highest-ranking learning institutions would hardly lower standards without putting up a fight. If they were just cash-strapped and struggling to cut costs, it would make little sense to cut corners on their most lucrative line of business (application fees).
Thus, the institutions must believe that the interview is just a technicality in their admissions process. So much so that they can literally automate this step and be no worse off.

That's it. Learning institutions either believe that interviews in their admissions processes are so formulaic that they can be automated with no loss; or that their human interviewers are so plastic that machines can do their job just as well.
In both cases, applicants could just let ChatGPT be interviewed in their place. It would be a net positive for both sides.

Comments
  • 4
    cheaper to have AI do it than pay someone to do it

    they wouldn't lower standards? hah. they went after students for reporting on security vulnerabilities where I got my education. they had no interest in fixing that you could just download everyone's social insurance numbers. lots of interest in ruining their lives for legitimately thinking the school would care about the issue though
  • 1
    This is the dystopian future all the movies talked about. Garbage.
  • 3
    @jestdotty Sounds like the Missouri F12 hacker.

    Never report security holes unless you've been paid to pen test, too risky. People are stupid.
  • 2
Why are we using an autocomplete to interview for positions? Are they all insane? Using a language model to select people is like using a facial recognition model to select people. Sure, the fact that the object in the picture has a face is a good indicator they are a smart human and not an apple, but you can't rule out false positives or false negatives with any degree of certainty. Similarly, just because someone can type doesn't mean they are smart.

We're losing touch with reality -_-
  • 3
    @jestdotty people who detect vulnerabilities are below their standards, they (institutions) want mindless corporate drones that can consult for medical corporations and say that a 33% fatality rate on a treatment plan is actually "an absolute majority of a win!"
  • 0
    @Hazarth it will select for good breeding

which just means: are you amenable to being brainwashed by false education? If yes, compliant slave for the workplace I guess

    "good breeding" actually meant amendable to aristocracy education process to have a specific culture of viewing others as lesser than you, and here are the machiavellian tactics of how to dispose of any undesirables