Malicious digital twins: the next cybersecurity threat

Artificial intelligence is back in the spotlight after an alert from Trend Micro, an IT security company, warning that AI could generate so-called "evil twins": digital copies of real people capable of powering scams that will become increasingly difficult to detect.

AI-generated human clones could create havoc with cyber forgery

This is the warning from Trend Micro, a leading cybersecurity firm, which says these malevolent digital twins, AI-generated replicas of real individuals, will soon become major cybersecurity threats by making fraud far harder to identify.

“As generative AI gets increasingly woven into the fabric of businesses and the services they utilize, it will create new threats that require constant vigilance,” says Jon Clay, VP of Threat Intelligence at Trend Micro. “Hyper-personalized attacks will require a coordinated effort across industries to mitigate. Business leaders should remember that today, no cybersecurity risk exists in isolation. All security questions ultimately affect business risks, relevant to future strategies.”

Privacy concerns and the emergence of AI-born cyber threats

Two days ago, the Italian Privacy Authority blocked access to DeepSeek, an AI-powered chatbot developed by two Chinese companies, Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence, citing a lack of safeguards for sensitive data.

Trend Micro warns that the situation is set to worsen: attackers will leak or otherwise exploit personal data and use it to train large language models (LLMs). These LLMs can then construct replicas of a victim's knowledge, personality, and writing style.

When combined with compromised biometric data and deepfake video and audio technology, these clones can be used for identity fraud or to manipulate a victim's friends, colleagues, and family members.

Deepfake tech generates bigger fraud threats

Deepfake technology combined with AI could weaponize hyper-personalization in large-scale attacks. Business email compromise and business process compromise (BEC/BPC) scams would lean more heavily on impersonating employees, luring victims, and trapping them in carefully planned social engineering schemes, for example by deploying LLM-powered chatbots that mimic real people using open-source intelligence.

"Improved pre-attack reconnaissance will make attacks more likely to succeed," warns Trend Micro. "It also makes possible the mass creation of seemingly realistic social media personas to disseminate misinformation and conduct scams."

To keep pace with evolving threats, Trend Micro recommends a risk-based approach to cybersecurity, including centralized identification of assets, more efficient risk assessment, and pre-emptive threat mitigation.

"This is alarming but entirely expected news," Pierluigi Paganini, CEO of Cyberhorus and Associate Professor of Cybersecurity at Luiss Guido Carli University, tells ANSA. "The underground scene is already bustling with platforms capable of producing deepfake content and phishing emails that would pass undetected by most internet users. Concern over the use of these technologies against businesses continues to grow."

Artificial intelligence drives much of today's progress, but this latest development underscores the urgent need to balance innovation against the demands of data protection.

"AI will replace human involvement in many processes, and monitoring its operations will introduce new security risks," Paganini adds. "Far too many businesses and organizations remain unprepared for these emerging threats. That is why advanced defense strategies must be the top priority: risk-based security models, LLM security improvements, and continuous user education."

Ironically, AI itself, which has so far proved invaluable in delivering solutions across many fields, could also become the most significant weapon in the fight against these threats.

"AI can be one of the most powerful allies in cybersecurity," Paganini concludes. "It can bring impressive improvements to most of the defensive applications where it is put to work."
