...
Anthropic wrote in the report that in cases like this, AI “serves as both a technical consultant and active operator, enabling attacks that would be more difficult and time-consuming for individual actors to execute manually.” For example, Claude was specifically used to write “psychologically targeted extortion demands.” Then the cybercriminals figured out how much the data — which included healthcare data, financial information, government credentials, and more — would be worth on the dark web and made ransom demands exceeding $500,000, per Anthropic.
“This is the most sophisticated use of agents I’ve seen … for cyber offense,” Klein said. ...
In another case study, Claude helped North Korean IT workers fraudulently get jobs at Fortune 500 companies in the U.S. in order to fund the country’s weapons program. ...
Another case study involved a romance scam. A Telegram bot with more than 10,000 monthly users advertised Claude as a “high EQ model” for help generating emotionally intelligent messages, which were in practice used for scams. It enabled non-native English speakers to write persuasive, complimentary messages to gain the trust of victims in the U.S., Japan, and Korea, then ask them for money. ...
See the full story here: https://www.theverge.com/ai-artificial-intelligence/766435/anthropic-claude-threat-intelligence-report-ai-cybersecurity-hacking