*Image: wtinews, Hackers AI Attacks*
In a blog post, Microsoft noted, "Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, trying to understand their potential value for their operations and how they can bypass security controls."
One group drawing particular attention is Strontium, also known as APT28 or Fancy Bear, which is linked to Russian military intelligence. The group has a history of high-profile operations, including interference in the 2016 U.S. presidential election.

Strontium has been using large language models (LLMs) to research satellite communication protocols, radar imaging technologies, and other complex technical details. The group has also used LLMs for basic scripting tasks such as file manipulation and data selection, which could help automate parts of its operations.
A North Korean hacking group called Thallium has likewise used LLMs to research security vulnerabilities, handle basic scripting tasks, and draft content for phishing campaigns. Meanwhile, an Iranian group named Curium has used LLMs to craft sophisticated phishing emails and code designed to evade antivirus software. Hackers affiliated with the Chinese government are also involved, using LLMs for research, coding, translation, and refining their existing tools.
While LLMs have not yet been tied to any major breach, Microsoft and OpenAI are already taking preventive steps, shutting down accounts and assets associated with these hacking groups to head off potential large-scale attacks.
As the use of AI in cyberattacks evolves, Microsoft is warning about future risks such as voice impersonation. The rise of AI-powered fraud, especially in speech synthesis, poses a distinct danger: even a short voice sample can be enough to train a model to imitate someone.
In response to these emerging threats, Microsoft is turning to AI as a defensive tool. The company is building Security Copilot, an AI assistant designed to help cybersecurity professionals detect breaches and make sense of the enormous volume of data that cybersecurity tools generate daily. Microsoft is also reassessing its software security in light of recent hacking incidents.
As the threat environment continues to change, vigilance and cooperation will be crucial to countering the potential misuse of AI in cyber warfare. By staying informed, exercising caution, and supporting responsible AI development, the tech industry aims to secure a safer digital future for everyone.