Artificial intelligence is increasingly embedded in the recruitment process, from CV screening and candidate ranking to automated assessments and interview preparation. AI tools offer significant gains in efficiency and consistency, but they also introduce new regulatory, ethical and security risks that organisations cannot afford to overlook.
Recruitment AI tools are classified as “high-risk AI systems” under the EU AI Act. Organisations must now ensure:
- Transparency in AI usage
- Human oversight in hiring decision-making
- Bias mitigation and fairness
- Data protection and privacy compliance
- Clear communication with candidates
- Robust fraud prevention processes
This workshop provides practical, actionable guidance to help organisations meet these obligations confidently.
AI in Recruitment: Opportunities and Risks
By the end of this workshop, participants will be able to:
- Explain how AI is currently used in recruitment and the associated risks
- Identify algorithmic bias and transparency gaps
- Understand EU AI Act obligations for hiring processes
- Evaluate AI recruitment tools for compliance and ethical integrity
- Detect and prevent AI-enabled candidate fraud
- Respond appropriately when AI use is suspected in live interviews
- Design an Acceptable AI Use policy for candidates
- Embed human oversight into AI-supported decision-making
- Communicate AI use clearly to build candidate trust
- Produce internal documentation supporting responsible AI governance
The EU AI Act and Its Implications for Hiring
- Understanding “high-risk AI systems” and why most recruitment AI tools fall into this category
- What organisations need to do to remain compliant – 8 practical steps you can take to keep your hiring processes on the right side of the new legislation
Ethical and Transparent Use of AI in Hiring
- The principles of responsible AI use in recruitment: fairness, accountability, transparency, privacy and human-in-the-loop oversight
- Designing your Acceptable AI Use for Candidates policy
- How to communicate your organisation’s policy on candidate use of AI in a clear and accessible way
- Ethical design of assessments and interview processes
- Avoiding over-reliance on AI and maintaining human judgment
What Does AI-Enabled Candidate Fraud Look Like?
- Common types of candidate fraud (CV falsification, identity fraud, deepfake interviews, outsourced test-takers, AI-generated documents)
- How technology has changed candidate behaviours and the risks for employers
- Tools and techniques for detecting fraud throughout the hiring cycle
- Designing fraud-resistant recruitment processes
All attendees receive a digital toolkit, including:
- AI governance templates
- Best practice guides
- Compliance checklists
- AI audit tools
- Draft Acceptable Candidate AI Use policy
This programme directly supports the Sustainable People Practices framework by strengthening fair, transparent and bias-aware recruitment systems under the People Practices & Processes pillar. It reinforces the Governance, Risk & Regulation pillar by addressing compliance, documentation requirements and risk management obligations arising under the EU AI Act. In addition, it contributes to developing a Workforce for the Future by building internal capability to adopt and oversee emerging technologies responsibly and ethically. By embedding structured, ethical AI oversight into recruitment processes, organisations reduce regulatory and reputational risk, enhance organisational resilience and support long-term sustainable workforce development.