
HR & Hiring Managers Beware: How to Detect Malicious AI-Use in Recruitment

ChatGPT-written resumes. Canned interview answers. Fabricated references. HR and hiring managers already contend with plenty of red flags in their everyday work. Now, AI-generated deepfake personas – appearing in live online interviews, no less – may have to be added to that list.

Imagine this: You are about to start a video interview with a jobseeker. They join the call, but their camera is turned off. You request that they turn it on. They say it’s broken. Something feels off, so you politely insist that they show themselves on camera. They log off and on twice, “attempting” to get their camera to work, and when they’re finally visible, their virtual background and overall video raise more red flags. When you ask them to raise one hand in front of their face, they abruptly leave the call.

This is exactly what happened to Bettina Liporazzi of letsMake.com. In a recent LinkedIn post she shared part of that interview, and it drew over 800 comments. In the past, this recruitment story would have felt like it belonged in a futuristic sci-fi movie. Yet today, with AI tools developing at breakneck speed, this scenario is unfortunately a reality.

This appears to be a rising problem, especially in tech recruitment. One company, Vidoc Security, shared how its co-founders were nearly scammed twice by imposters applying for the same remote coding job.

The FBI has raised the alarm on this matter too, warning that fraudsters are using deepfakes in many scenarios, including to apply for sensitive jobs – especially remote ones – where they can access company networks and sensitive data. Once hired, these bad actors may intend to steal trade secrets, funds, or personal employee and customer data. They may even plan to install malware and then demand a ransom from the company.

As HR professionals, we must heed this warning. In fact, CNBC recently published an article reporting that fake jobseekers are flooding US companies hiring for remote jobs. Moreover, research firm Gartner predicts that by 2028, with the rise of AI-generated profiles globally, one in four job candidates will be fake.

We all carry a critical responsibility to safeguard sensitive information housed within our organizations against bad actors. Our stakeholders trust and expect us to maintain a high standard of vigilance in this regard.

So, while it is natural for us to get excited about how AI can supercharge our work, we must ask important questions, address vulnerabilities in our processes, and above all, be prepared for what awaits us in the future.

To this end, we researched some tips and strategies that can guide us in evolving our hiring practices in this brave new world of AI:

  1. Require an in-person or on-camera interview – even for remote jobs: Sometimes, solutions to new-age problems require that we go back to basics. It would be wise for organizations to communicate in advance that, even for remote jobs, at least one in-person or on-camera interview will be required, without the use of virtual backgrounds. If a candidate refuses, treat that as a red flag. In Bettina Liporazzi’s case, she asked the jobseeker to put their hand up by their face, and that prompted the scammer to abort their mission. In the case of Vidoc Security, details in the online interviews did not add up against the resumes: one candidate said they went to university in Poland, yet couldn’t speak any Polish in the interview, while the second claimed to have pursued higher education in Serbia, yet spoke no Serbian at all.

  2. Be diligent about fact-checking, references, background checks and onboarding: The team at Vidoc offers a free “Deepfake Fraud Prevention” eBook with 17 strategies for detecting fake workers. Among them: always double-check details like addresses and phone numbers, as scammers often provide fake information. Consider, too, what kind of organizational information a worker will have access to, and from this perspective, request background checks wherever necessary. Even for fully remote jobs, consider investing the time and funds into on-site onboarding. Nothing matches in-person bonding, and this added step can foil a potential fraudster’s plan, making it a win-win for your employees and your company.

  3. Be straightforward and even bold with your candidates: Too often, as “nice Canadians” we worry about potentially offending a job candidate. However, it is more important to safeguard your stakeholders. After all, it is far more damaging in the long run to bring someone into your workforce who ultimately engages in deceptive practices. The key lies in transparent communication and a balanced approach (e.g. not assuming every job candidate is a potential scammer), where you apply empathy while also trusting your gut instinct. That may feel like a tall task as we all venture into the unknown. For this, the HRPA recommends that the first step in protecting against deepfakes should be education, which can close the knowledge gap and give HR/hiring managers and leaders a working understanding of the critical telltale signs.

  4. Develop company-wide ethical guidelines on AI usage for the hiring process and for job candidates: Clear expectations and policies for AI use on both sides of the hiring equation contribute to building trust through transparency. Be up front about the AI tools your company uses in the recruitment process, and in turn, encourage candidates to be up front about the AI tools they are using too. When there are no surprises, this openness can foster a culture of honesty where everyone benefits.

  5. Be ready to get uncomfortable, chart new territory, and ask hard questions: Change is never easy, especially at the speed with which everything is developing. Here are a few questions that can guide internal discussions around AI and help your organization better prepare for what’s to come:
    • Let’s evaluate our current process for verifying candidate identities. Do our protocols keep pace with potential and emerging AI threats?
    • Where is our organization vulnerable to malicious AI use, and what needs to change to address this?
    • Where do our HR tools incorporate AI-driven efficiencies? And where do they uphold anti-fraud safeguards?
    • Where is our use of AI fostering genuine innovation, and where is it hindering it?

As we venture forward, it is important to ask ourselves: Where’s the line? Let’s start having candid conversations. Let’s routinely update our policies to keep pace with emerging AI advancements. And most importantly, let’s embrace a willingness and passion to adapt. In this way, we can keep both our organization and our greatest asset – our people – safe.


If you are looking to add some exceptional Bilingual (French/English) new hires to your team, then tap into our proprietary candidate network today. We are connected to the best talent across Canada, and it would be our pleasure to help you find your next company superstar! Call or email us today at 416-236-3303 or [email protected]. We look forward to serving you!

About the Author