- cross-posted to:
- technology@lemmy.world
Employers are letting artificial intelligence conduct job interviews. Candidates are trying to beat the system.
“And when they got on the phone, Ty assumed the recruiter, who introduced herself as Jaime, was human. But things got robotic.”
If regulators are trying to come up with AI rules, this is where to start.
It should be a law that no LLM/“AI” is allowed to pass itself off as human. They must always state, up front, what they are. No exceptions.
I would argue that AI also shouldn’t be allowed to make legally binding decisions, like deciding whom to hire. Since a computer can’t be held accountable for its decisions, there’s nothing stopping it from blatantly discriminating.
It should be illegal to use an AI in the hiring process that can’t accurately explain its decisions. There’s too much risk of bias in the training data to empower a black-box system. ChatGPT can fabricate answers, so anything powered by it is out.
They also should not harm a human being or, through inaction, allow a human being to come to harm.
Yes. I assume anyone from a company is a bot right out of the gate.