Clumsy scam emails are easy to spot. New AI-aided scams are far more convincing.
The Wall Street Journal reports “Artificial Intelligence Helps Scammers Outsmart You–and Your Bank.” The link is free.
Artificial intelligence makes it harder to detect scammers
Poor grammar and clumsy wording no longer give scam messages away. The bad guys have become better writers and conversationalists, say bank and tech investigators, and can now hold a conversation without revealing that they are a bot.
Scammers can use ChatGPT and other AI tools to impersonate your voice and identity. In recent years, criminals have used AI-based software to impersonate senior executives demanding wire transfers.
“Your spidey-senses will no longer prevent you from being a victim,” said Matt O’Neill, a former Secret Service agent and cofounder of the cybersecurity firm 5OH Consulting.
These recent frauds are often similar to older scams, but AI lets scammers target larger groups of people and use more personal data to convince you that the scam is real.
Fraud-prevention officials say these tactics are harder to detect because they bypass the traditional indicators of a scam, such as malicious links and poor grammar or wording. Criminals today fake driver’s licenses and other identification to open new bank accounts, and add computer-generated faces and graphics to pass identity-verification checks. All of these methods, officials say, can be difficult to stop.
JPMorgan Chase is using large language models in its fight against identity fraud. Carisma Fields, the bank’s vice president of external communication, said it has also stepped up efforts to educate clients about scams.
Password risks, amplified
Previously, criminals had to guess or steal passwords through phishing attacks or data breaches, often targeting high-value accounts one at a time. Now scammers can quickly cross-reference stolen passwords and test them across platforms. O’Neill said they can use AI systems to write code that automates different aspects of their ploys.
If scammers get your email address and a commonly reused password from a data breach at a tech company, AI tools can quickly check whether those credentials also unlock your bank, social-media, or shopping accounts.
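The attack described above only works because people reuse passwords: one breached credential then opens every account that shares it. As a minimal illustrative sketch (using made-up account data, not any bank's or breach-checking service's actual API), here is the reuse check from the defender's side, flagging any password used on more than one site:

```python
from collections import defaultdict

def find_reused_passwords(accounts: dict[str, str]) -> dict[str, list[str]]:
    """Group site names by shared password and return only the passwords
    used on two or more sites. Any password in the result is exactly what
    credential-stuffing tools exploit after a breach."""
    by_password = defaultdict(list)
    for site, password in accounts.items():
        by_password[password].append(site)
    return {pw: sites for pw, sites in by_password.items() if len(sites) > 1}

# Hypothetical example: one password guards email, bank, and a shopping site.
accounts = {
    "email": "hunter2",
    "bank": "hunter2",
    "shop": "hunter2",
    "forum": "unique-passphrase-42",
}
print(find_reused_passwords(accounts))  # {'hunter2': ['email', 'bank', 'shop']}
```

Real defenses work at larger scale: banks rate-limit login attempts and screen credentials against known breach corpora, and a password manager makes every entry in a mapping like this unique, so one breach cannot cascade.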
This is the lead-in to the WSJ article; the link above is free.
The Journal deserves a big thank you for providing free links. I try to be honest about what I use. I only use a few free links to refer to the Journal.
The Financial Times, by contrast, threatened to sue me if I quoted more than one sentence from an article, even in a rebuttal.
I cancelled my subscription to the FT and have not referred to one of its articles since. I don’t see what the FT gains from this.
Consider this a public-service announcement from the Wall Street Journal.
Thank you for your support