AI Generated Scams: Changing Culture, Improving Training


February 16, 2024

Artificial intelligence is a blessing and a curse. For every benefit, it seems, there’s a downside. One of them, increasingly sophisticated cybercrime, is already a huge and worsening problem.

Not only can AI crack even the most complex passwords, write emails in the style of your boss, ceaselessly search networks for vulnerabilities, and create harder-to-detect malware. It's also great at producing astoundingly realistic deepfakes that use deep learning to make social engineering attacks more effective. Examples abound. Many of them are very entertaining, and more than a bit frightening for how well they're executed.

But while experts can usually spot telltale signs of manipulation, the average person often can't. And that's a problem. Why? Because if you're going to hand over money or sensitive information to someone, or take any action at their behest, it's most likely going to be someone you know, think you know, or at least know of: your boss, your Aunt Mary, the President of the United States.

Yes, even the leader of the free world was the subject of a recent AI-generated scam. According to a Politico report, a robocall began circulating around New Hampshire “that spoofed President Joe Biden’s voice and told voters to cast ballots in November” instead of on the January 23 primary date.

“The call ended with a phone number that belongs to Kathy Sullivan, the former New Hampshire Democratic Party chair who is the treasurer of a super PAC encouraging people to write in Biden’s name on tomorrow’s ballot.”


The attorney general’s office deemed it “an unlawful attempt to disrupt the New Hampshire presidential primary election and to suppress New Hampshire voters.” But all it could do was “urge voters to disregard the calls.” Which probably means some voters fell for it. And that’s just one minor instance on the grand scale of fakery.

“Voice cloning is one of the first instances where we’re witnessing deepfakes with real-world utility, whether that’s to bypass a bank’s biometric verification or to call a family member asking for money,” one researcher explained. “No longer are only world leaders and celebrities at risk, but everyday people as well. This work represents a significant step in developing and evaluating detection systems in a manner that is robust and scalable for the general public.”

For years, well before AI was a reality or even a buzzword, social engineering was at the core of countless cyber attacks. But exploiting human psychology, as these nefarious intrusions do, once required human-to-human manipulation. That's less and less the case, as increasingly capable, self-learning machines take over—and do it far better than humans ever could.

“The AI-granted ability to generate human-like texts in an instant, virtually clone loved ones’ voices based on just snippets of audio, and scale behavioral-driven attacks with the click of a button has increasingly democratized access to cybercrimes that were previously only relegated to the realm of the most sophisticated bad actors,” this report notes. “Compounding matters, the proliferation of digital banking and real-time money movement has made it increasingly easier for bad actors to request and receive the payments they want without any of the frictions that the Hollywood-style handing over of a briefcase full of cash in a neutral location might entail.”

So what can and should be done? We asked Mindsight’s cybersecurity guru Mishaal Khan to weigh in. Operational AI-driven safeguards are still largely theoretical and not yet effective, he says, but someday they’ll be available to help stem this scourge. In the meantime, it’s all on us humans.


“We think a breach is a breach of data, but what they’re doing is they’re breaching protocol and policy and vulnerability to authority,” he says. “So the solution is training and awareness. Make sure your company is offering security awareness training that includes training on deepfakes, voice cloning, and social engineering in general. A lot of training doesn’t have that. You might also include video-on-demand style training where they’ll show you a video of somebody’s image and voice being cloned, and tell you how it was done. The point of this is if you know what’s out there, you will be more conscious of it.”

There also needs to be a cultural change, Khan adds, in which people are actively and consistently encouraged to validate everything, question their bosses, and adopt a more or less zero-trust mentality. That doesn’t mean disbelieving everything, but it does mean verifying and thinking critically about what an authority figure is asking or demanding. That in turn requires brushing up on policies and protocols, so red flags are more apparent in video and voice calls that might be scams.

“That’s what’s going to stop these types of scams,” Khan says. “Because they rely on psychological vulnerability. So if you resolve that by creating a culture of questioning and verifying within your organization, you create a defense mechanism for social engineering.”

About Mindsight

Mindsight is industry recognized for delivering secure IT solutions and thought leadership that address your infrastructure, cybersecurity, and communications needs. Our engineers are expert level only – and they’re known as the most respected and valued engineering team based in Chicago, serving medium-sized to enterprise organizations around the globe. That’s why clients trust Mindsight as an extension of their IT team.
