SafeHome.org may receive compensation from some providers listed on this page. Learn More
We may receive compensation from some providers listed on this page. Learn More
I hate online scams, but I’ve always prided myself on staying one step ahead of them. Until recently, anyone with digital safety training could.
But the situation has changed now that AI has entered the picture. Cybercrime is taking unprecedented and ugly forms that are exhausting our efforts to root them out. And unlike the rash of institutional ransomware scams we saw in 2021-2022 (think Colonial Pipeline), the victims of these AI-fueled grifts are mostly individuals and families.
Here’s a runthrough of the most unsettling cyber scams I’ve seen to date. As always, stick around until the end for some tips on staying safe.
You’ve probably heard of DALL-E, ChatGPT’s artistic cousin. Instead of chatting with you, DALL-E makes art based on your prompts. For example, you can tell DALL-E: “Make me a statue of Michelangelo’s David, but David in the form of an giant pepperoni pizza.” And, like a genie with unlimited wishes to grant, DALL-E will perform your bidding and spit out an image of a Pizza David in a few seconds.
Well, AI can also generate and mimic voices, as loved ones around the world are just now finding out the hard and horrible way. 75-year-old Gregg Card is one of them. When Card recently got a call from his grandson Brandon, who said he was in jail without his wallet or cellphone and needed to make bail, he sent the money right away — $,2200.1 After Brandon hit Grandpa Gregg up for another lump sum, Card’s bank got suspicious. Their suspicions were justified. It turns out the caller wasn’t Brandon at all. It was a thug-controlled bot with Brandon’s voice.
Card isn’t an outlier. In just the first quarter of 2023, victims of imposter scams have filed 178,080 complaints with the FTC. Gift card scams executed over the phone topped the list. When you add AI bots to the mix, bots that can impersonate our family and friends, expect that number to skyrocket.
When her 15-year-old daughter Briana called out of the blue saying she’d screwed up, Jennifer DeStefano thought she’d had a skiing accident. Briana was on a school trip in the mountains. But then a man’s voice jumped on the line demanding $1 million in ransom and ugly consequences if DeStefano didn’t comply.
DeStefano went into shock but still managed to contact the police, who must have told her to call her daughter immediately. Briana, it turned out, was fine. The whole “kidnapping” was virtual, arranged, again, with the help of some fancy voice-mimicking AI good enough to fool a mother.
The tools to create audio deepfakes aren’t difficult to come by. A $5 monthly software subscription and about a minute of audio is all it takes, according to UC Berkeley computer science professor Hany Farid.2 In this case, the lowlife probably scraped the audio off Briana’s TikTok account. Parents, take note. Here are a few tips for keeping creeps from swiping up your kids’ data.
Prompt injection attacks are probably the scariest cybernews not to hit the press this year. They’re technically easy to create and catastrophically difficult to control. Here’s how they work.
You feed an AI chatbot, like the Bing assistant, a prompt, like “Hi, can you find me a cheap flight to Madrid in May?” And Bing finds you a cheap flight.
Enter the hacker looking to ooze his way into your convo and pocketbook. He’s already injected a prompt of his own into a website that you happen to have open in a separate tab. No one knows the prompt is there — not you, not the website owner — because the code is invisible, a small string of it colored white on a white background. That rogue code jumps into the Bing chat box like a flea and hijacks your conversation.
I realize this sounds crazy, but it isn’t. Programmers at GitHub actually engineered just such a virus to impersonate a Microsoft employee offering deals on laptops.3 Prompt injections like this may already be infesting the web, in other words. And that isn’t even the worst part.
I’ve written about some pretty scary ChatGPT phishing scams, where grifters open what amounts to scam farms. They use large language model (LLM) software like ChatGPT to run hundreds of simultaneous scams at once with hundreds of potential victims. This is bad, very bad. But for a sophisticated scam like this to work, we still have to engage with cybercriminals. We have to accept their invitations to chat and willingly hand over our details or invest in their schemes.
But what if we didn’t have to engage with a thief for him to bleed our accounts dry or steal our Social Security number? If crooks could hotwire our online assistants, which have access to all our personal details, we wouldn’t. With a few lines of code embedded in a spam email or SMS, bad actors could initiate a private chat with our assistants — you know, one bot to another.
The truly scary thing about phishing 2.0 attacks is that you wouldn’t even need to click on a message for them to work. They’d deploy themselves automatically.
What would you say to an “AI-powered cryptocurrency investment advisor, powered by Elon Musk AI.”4 The minds behind TruthGPT, a new Musk-powered crypto coin, say investors will earn a huge ROI. The Texas DOJ, on the other hand, says “investment fraud.” TruthGPT, according to the Lone Star State, doesn’t even deserve to be classified as a Ponzi scheme. It’s just outright fraud backed by a cubic ton of misinformation.
The red flags? “Elon Musk AI” would have to top the list. Elon Musk has patented a lot of exciting stuff, but not, as far as I’m aware, an AI version of himself. This is an example of the kind of high-falutin’ but meaningless language that tends to congeal around investment scams.
I won’t even go into the “animated avatars of Elon Musk” used to promote this rigmarole, because I’ve seen enough of Musk these days to last me a lifetime.
Personally, I wouldn’t mind a little AI-powered investment advice. And one day I might take it. But until Merrill Lynch, or some name I trust, offers it, I’m going to keep my money in a non-Musk-powered bank.
I’m anything but a Luddite, but do I have a healthy fear of entities a million times cleverer than me conspiring, without malice or forethought, to do terrible things to me and my loved ones. Because of my fear, I’m wary of giving any AI bot access to my sensitive details. And I’m extra careful to protect those details from attacks I can’t see coming. That’s why I urge you to …
It scares me to no end that I won’t even have to click on or download anything for a bot to sweet talk my laptop into giving up my private details. (See No. 4 above.) I can’t really do anything about that. But I can know the second a sleazeball tries to use my details to take out a mortgage in my name or put my driver’s license down on a phony credit card. I can know that for 10 bucks a month. That’s about how much a top-notch identity theft protection service costs these days.
True confession: I’ve been using Macs since I was 12, so I never thought much about malware protection. I use a virtual private network (VPN) too. The combination, I’ve always thought, was enough to keep me safe. Not anymore. Once fraudsters start pumping out prompt injections (No. 3 above), it’s going to be a brave new world, and I want those virus filters to be running full-bore in the background 24/7. There are a handful of affordable identity theft protection services that also offer a VPN. Aura, with its all-in-one app, is one of them. NortonLifelock is another.
VPNs aren’t virus detectors, but a quality VPN seals off our connections to any websites we’re communicating with. It also encrypts all the data flowing in and out of our devices. This makes it much more difficult for the dime-a-dozen cybercriminals likely to employ the new wave of AI attacks to steal anything useful.
AI-fueled cybercrime is getting weirder and weirder, and it doesn’t seem like this is fazing criminals. If anything, it’s just proving that lowlifes with laptops will use anything at their disposal to get what they want — even an AI voice machine that can impersonate a terrorized 15-year-old girl. Just talking about this stuff makes me feel icky.
That icky feeling isn’t going to go away any time soon. In fact, the eerie silence on the AI scam front may just be the quiet before the storm. If that’s true, you may want to finally take the plunge and invest in a little digital security, such as a VPN, some antivirus software, and identity theft protection. Because this time, maybe for the first time, it feels like our wits alone might not be enough to keep the crooks at bay.
The Washington Post. (2023, Mar 5). They thought loved ones were calling for help. It was an AI scam.
https://www.washingtonpost.com/technology/2023/03/05/ai-voice-scam/?utm_source=pocket_saves
CNN. (2023, Apr 29). ‘Mom, these bad men have me’: She believes scammers cloned her daughter’s voice in a fake kidnapping.
https://edition.cnn.com/2023/04/29/us/ai-scam-calls-kidnapping-cec/index.html?utm_source=pocket_saves
Github. Indirect Prompt Injection Threats.
https://greshake.github.io/
Decrypt. (2023, May 3). Fake Elon Musk Coin, AI Scams Raise Ire of Texas Regulators.
https://decrypt.co/138785/texas-cease-and-desist-ai-scams-fake-elon-musk-truthgpt-coin