AI Will Increase the Quantity—and Quality—of Phishing Scams

A piece I coauthored with Fredrik Heiding and Arun Vishwanath in the Harvard Business Review:

Summary. Gen AI tools are rapidly making these emails more advanced, harder to spot, and significantly more dangerous. Recent research showed that 60% of participants fell victim to artificial intelligence (AI)-automated phishing, which is comparable to the success rates of non-AI phishing messages created by human experts. Companies need to: 1) understand the asymmetrical capabilities of AI-enhanced phishing, 2) determine the company or division’s phishing threat severity level, and 3) confirm their current phishing awareness routines.

Here’s the full text.

Posted on June 3, 2024 at 7:04 AM

Comments

What price common sense June 3, 2024 8:24 AM

@ALL

It’s not just those who do good that see the advantages in technology.

Good or bad people “look for an advantage” to reduce not just costs but risk.

It kind of shows why we are still ahead of AGI and the LLMs and ML systems.


Mark Wright June 4, 2024 7:53 AM

It used to be easy, at least for a tech-savvy individual, to simply look at the text of a message or email and spot obvious signs that it was phishing or spam. Sadly, with LLMs that is no longer the case. It’s no longer sufficient to look for typos, poor grammar, incorrect facts, and other such clues to determine that something might be phishing.

LLMs generate text with perfect spelling and grammar. While LLMs still hallucinate and make mistakes, they have significantly improved since ChatGPT was first made public. Vendors have built models with ever-larger context windows that can analyze a wealth of information about an individual and generate a very personal, targeted attack. Multi-modal AIs can consume images and use details inferred from those images to construct ever more convincing phishing attacks.

My point being, it’s no longer sufficient for algorithms to merely look at the content of a message or email to detect phishing, and soon it may not even be enough for a tech-savvy human being. The only real path forward may be analyzing the metadata and looking for general trends to detect phishing campaigns. Sadly, this means the big tech companies will need ever more data in order to detect these phishing campaigns, which means eroding our privacy ever further.
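To make the commenter’s point concrete, here is a minimal, hypothetical sketch of the kind of metadata-only check such a system might start from: it ignores the message body entirely and flags simple header inconsistencies (a Reply-To or Return-Path domain that differs from the From domain). Real campaign detection would aggregate far richer signals across many messages; the function and flag names here are illustrative assumptions, not anyone’s actual implementation.

```python
from email import message_from_string
from email.utils import parseaddr

def domain(addr: str) -> str:
    """Extract the domain part of an address header, lowercased ('' if absent)."""
    return parseaddr(addr)[1].rsplit("@", 1)[-1].lower()

def metadata_flags(raw: str) -> list[str]:
    """Return header-level warning flags for one raw email.

    These heuristics look only at metadata, never the body text.
    """
    msg = message_from_string(raw)
    flags = []
    from_dom = domain(msg.get("From", ""))
    reply_dom = domain(msg.get("Reply-To", ""))
    path_dom = domain(msg.get("Return-Path", ""))
    # A Reply-To pointing somewhere other than the claimed sender is suspicious.
    if reply_dom and reply_dom != from_dom:
        flags.append("reply-to-domain-mismatch")
    # So is a bounce address on a different domain than the From header.
    if path_dom and path_dom != from_dom:
        flags.append("return-path-domain-mismatch")
    return flags

raw = (
    "From: IT Support <help@example.com>\n"
    "Reply-To: attacker@evil.test\n"
    "Return-Path: bounce@evil.test\n"
    "Subject: Reset your password\n\n"
    "Click here.\n"
)
print(metadata_flags(raw))
# → ['reply-to-domain-mismatch', 'return-path-domain-mismatch']
```

Heuristics like these survive perfect LLM-generated prose because they key off transport-level facts the attacker can’t polish away as easily as grammar.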

I don’t believe AI is the solution here. It’s just another arms race, as with deepfakes: eventually they’ll be so good that they’re virtually impossible to detect. We might be able to make strides for now, but ultimately we’re going to lose the war against AI.

Anonymous June 4, 2024 10:44 AM

“phishing is evolving from mere emails to a plethora of hyper-personalized messages, including falsified voice and video.”

I guess I will only consume media that the AI will consider real.
