Is it OK to use AI during a job interview?

As AI tools enter the interview room, where does smart preparation end and cheating begin?
The debate over whether it is ethical to use artificial intelligence to get a job – specifically during interviews in the hiring process – doesn’t have a single answer. It is similar to the dilemma faced by teachers who discover that students are using technology in exams.
Is it unacceptable? Is it inevitable, given that AI is already used in so many other areas of work? Or is it self-deception that unravels once the candidate faces real problems in their new job?
Smart preparation or unfair advantage?
There are precedents for this dilemma. One came to light several months ago, when it was revealed that Amazon was cracking down on AI tools, such as coding assistants, that could give candidates an advantage in job interviews.
But that’s not the only case. With tech already accepted in many aspects of the job search process, from CV writing to interview preparation, many candidates use AI in virtual interviews during the selection process.
Now that candidates are leveraging the power of tools such as ChatGPT to polish their CVs, write cover letters, or even whisper answers during interviews, HR professionals and hiring managers are faced with an ethical dilemma: where is the line between smart preparation and an unfair advantage?
The question, then, does not seem to be whether this use can be prevented in recruitment processes, but rather what constitutes correct and ethical use of AI during job interviews.
The answer seems to lie in transparency. Does the company you are applying to prohibit this use outright? Or are there nuances – does it allow AI to gather certain data or information, to support a response, or to complete a specific exercise during the interview? (Unlikely, but not impossible.)
In any case, it is essential that, if the candidate uses digital tools, they do so honestly and transparently. Trying to hide it will only break the initial trust – and will also be completely obvious!
This isn’t just a theoretical dilemma. Some of the world’s largest organisations have already made their opinion on this matter very clear.
Amazon draws the line
Amazon has taken one of the hardest stances to date. Internal documents seen by Business Insider reveal that the company now bans the use of generative AI during interviews. Candidates are explicitly warned not to use teleprompter-style tools that provide real-time prompts or support – what Amazon describes as an “unfair advantage”.
The company argues that such assistance undermines interviewers’ ability to assess a candidate’s authentic skill and judgment. Interview guidelines now include tips for spotting suspicious behaviour: typing while being asked questions, overly rehearsed answers, wandering eyes that suggest screen-switching. A slip here could lead to disqualification.
Employers, of course, have every right to prohibit job applicants from using GenAI tools during the interview. They want candidates to come as they are, not as their chatbot-enhanced alter ego.
With AI tools now accessible to the masses, job seekers have found in them a convenient co-pilot. From helping draft CVs to brainstorming interview answers, AI promises polish, precision, and the occasional illusion of eloquence.
Companies such as Nvidia and Jasper acknowledge this reality; they’re also watching closely the use of AI in job interviews. Jasper’s Chief People Officer, Alex Shapiro, noted that while ChatGPT can help candidates strengthen their applications, unedited content risks accuracy, authenticity, and credibility. Cover letters wholly generated by AI often lack the human spark that makes candidates memorable. Without genuine motivation or personality, a letter becomes forgettable – or worse, suspect.
When AI goes too far
This clampdown isn’t paranoia. Alarm bells rang at Amazon after a startup’s viral video showed how its coding assistant had helped someone land a role at the tech giant. The firm behind the tool, Final Round, proudly called it “a magical teleprompter” that whispers ideal responses to candidates mid-interview.
Not everyone’s impressed, however.
“If you want to look like a flesh-bound chatbot, then by all means use an AI teleprompter,” said TMT analyst Ian Silvera, warning that reliance on such tools only erodes long-term competency.
Cheney Hamilton of Bloor Research echoed the sentiment: “You might get through the test, but you’re not actually proving you understand the material.”
Interviews, after all, aren’t multiple-choice exams. They’re about chemistry, communication, and character. And while AI might help a candidate tick the technical boxes, it can’t replicate lived experience, instinct, or cultural fit.
The slippery slope of simulation
The use of AI in interviews, particularly live ones, has drawn comparisons to cheating. Some liken it to taping crib notes behind a monitor or flicking between browser tabs for answers. But this new era of “augmented authenticity” carries heavy risks. Among the main concerns are:
- Perception of dishonesty: Veteran interviewers are quick to detect signs of distraction or unnatural delivery. Even a quick glance away can spark suspicion.
- Trust erosion: Being flagged as inauthentic undermines one of the most prized leadership traits – trustworthiness.
- Minimal upside, major downside: The perceived performance boost from using AI mid-interview is marginal, estimated at 5% to 10%, while the risk of total disqualification is very real.
- False confidence: Candidates who lean too heavily on AI may find themselves out of their depth once hired, especially if they lack the real skills they claimed to possess.
The takeaway: if AI is helping you sound smarter than you are, it may also be setting you up to fail.

So, what’s acceptable use of AI in a job hunt? The general consensus is that it’s all about how – and when – it’s used. Acceptable uses include:
- Using AI to brainstorm or improve the language in application materials
- Editing AI-generated content to reflect genuine experience and voice
- Preparing for interviews with simulated Q&A sessions – but not bringing a script to the real thing
- Highlighting AI literacy in appropriate roles, supported by examples and training credentials
Job candidates should avoid crossing the line by:
- Submitting AI-written materials without review or personalisation
- Claiming unverified skills or achievements in CVs or cover letters
- Employing real-time AI assistance during live interviews, especially when banned
- Assuming recruiters won’t spot generic, robotic, or inconsistent messaging
More firms tighten the reins
Amazon isn’t alone. AI startup Anthropic recently issued a blanket ban on AI use during the hiring process. They want to see applicants’ own thinking, unfiltered by tech.
“We encourage AI use in the role,” the company said, “but not during the application process.”
Meanwhile, a survey from Capterra found that 41% of UK applicants had used AI to exaggerate or fabricate skills in job applications – a figure that will concern any HR leader trying to separate fact from fiction in a stack of CVs.
Used wisely, AI can be a powerful ally. It can sharpen language, surface examples, and streamline the application process. But it cannot, and should not, replace authenticity.
The key for business and HR leaders is to promote transparency: let candidates know what is acceptable and what is off-limits; create space for AI literacy to shine; draw a firm line at deception.
The ethical debate around AI in hiring is far from over, but one thing is clear: authenticity still matters. Candidates who rely too much on AI may ace the first round, but they risk falling flat when the training wheels come off.