AI in Hiring: The Big Breakthroughs (and Risks) You Need to Know

AI is transforming the way companies recruit, and its impact is only growing. At Step, we work closely with students, graduates, and employers, so we’ve been tracking this trend for a while. Last year, we explored AI’s role in graduate hiring in our whitepaper, What Does the Rising Use of AI Mean for Early Careers? Now, new research—including a report from the Information Commissioner’s Office (ICO) and insights from Shoosmiths—has reinforced many of our findings, shedding light on both the benefits and risks of AI-driven hiring.

With regulators and legal experts now weighing in, AI in recruitment is no longer a futuristic concept—it’s happening now. So, what do these latest insights mean for employers and early-career job seekers?


How AI in recruitment has changed

Over the past year, AI-powered tools have become even more embedded in hiring processes. Employers are using AI to speed up CV screening, automate candidate matching, and even conduct AI-assisted interviews. But, as we flagged in our whitepaper, these technologies raise concerns around bias, privacy, and transparency.

The ICO’s AI Tools in Recruitment Audit Outcomes Report highlights how AI is being used—sometimes effectively, but often problematically. Key concerns include inaccurate candidate assessments, AI-driven hiring bias, and excessive data collection. The ICO also stresses that businesses using AI must have robust risk management and data protection policies in place (ICO Report).

Shoosmiths echoes these concerns, urging businesses to ensure fair and lawful processing of personal data. Their advice? Employers need to carry out risk assessments, be transparent about how AI makes hiring decisions, and avoid collecting unnecessary personal data (Shoosmiths).

What this means for graduates & candidates

At Step, we have seen first-hand how AI is changing job applications. Many graduates are now using AI tools to write CVs, draft cover letters, and even prepare for interviews. Our whitepaper explored this trend in detail, and the ICO’s latest research backs up what we found.

Key areas of concern include:

  • Bias in AI recruitment tools: Some AI-driven applicant tracking systems have been found to unintentionally exclude candidates from underrepresented backgrounds. The ICO’s audit flagged AI systems that filtered out applicants based on protected characteristics (ICO Report).
  • Privacy concerns: Many AI recruitment tools collect more personal data than necessary, raising serious questions about how this information is stored and used (ICO Report).
  • Lack of transparency: Candidates often don’t know how AI is assessing their applications, making it harder to challenge unfair rejections (ICO Report).

Shoosmiths emphasises that companies must document AI-driven decisions to ensure fair and transparent hiring. They also stress the need for human oversight to prevent AI from making flawed or biased decisions (Shoosmiths).

But AI isn’t going away—so how can graduates adapt? Here’s what we recommend:

  • Optimise your CV for AI screening: Use clear formatting and industry-specific keywords so AI can pick up on your relevant skills.
  • Be prepared for AI-assisted interviews: Some employers are now using AI-driven video assessments—know how these work and how your responses might be analysed.
  • Check employer AI policies: Research how a company uses AI in hiring and adjust your application approach accordingly.


How employers can use AI ethically & effectively

For businesses, AI can streamline recruitment—but only if it’s used responsibly. The latest research suggests that companies should:

  • Reduce bias in AI models: AI tools should be regularly tested to ensure fair and accurate hiring outcomes (ICO Report).
  • Be transparent about AI’s role: Candidates deserve to know when AI is involved in hiring decisions. Employers should clearly communicate how AI is used and ensure human oversight (Shoosmiths).
  • Enhance data protection: AI systems should only collect necessary candidate data and comply with privacy laws (ICO Report).
  • Conduct regular risk assessments: Shoosmiths recommends Data Protection Impact Assessments (DPIAs) to evaluate potential risks in AI hiring tools (Shoosmiths).

Applying these principles allows businesses to benefit from AI while maintaining fair and ethical recruitment practices.


The road ahead

AI is changing recruitment, particularly for early-career hiring. While challenges like bias, transparency, and privacy must be addressed, AI also presents opportunities to make hiring more efficient and data-driven.

At Step, we closely monitor these developments to support students, graduates, and employers in navigating them. Our team provides human oversight on every application, ensuring a smooth and informed process for all involved.

To explore our original insights on AI in early careers, check out our full whitepaper here: What Does the Rising Use of AI Mean for Early Careers?

Want to find out more about how Step can support your recruitment needs? Get in touch with us here.