Hackers' AI Attacks: Using Large Language Models (LLMs) to Make Cyberattacks More Powerful

Image: wtinews, Hackers AI Attacks

Microsoft and OpenAI are warning about a worrying trend in cyber warfare. New research shows that hackers are now using advanced language models, such as ChatGPT, to make their cyberattacks more powerful. The joint study by Microsoft and OpenAI points to an unsettling reality: hackers, including state-funded groups, are using cutting-edge AI to sharpen their attacks and get around security measures.

In a blog post, Microsoft wrote, "Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, trying to understand their potential value for their operations and how they can bypass security controls."

One group drawing particular attention is Strontium, also known as APT28 or Fancy Bear, which is linked to Russian military intelligence. The group has a history of high-profile hacking operations, including interference in the 2016 U.S. presidential election.

Strontium is using large language models (LLMs) to research satellite transmission protocols, radar imaging technologies, and other complex technical details. It is also using LLMs for basic scripting tasks, such as file manipulation and data selection, which could help automate parts of its routine work, as the sketch below illustrates.
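To give a sense of what "basic scripting tasks" means in practice, here is a minimal, benign sketch of the kind of file-selection script an LLM can generate on request. The directory paths and file extensions are hypothetical, and this illustrates the task category only; it is not code attributed to any threat actor.

```python
# Illustrative example of a routine scripting task an LLM can automate:
# walk a directory tree, select files matching given extensions, and copy
# them into a staging folder. Paths and extensions are hypothetical.
import shutil
from pathlib import Path

def collect_files(source_dir: str, dest_dir: str, extensions: set[str]) -> int:
    """Copy files whose suffix is in `extensions` from source_dir to dest_dir."""
    src = Path(source_dir)
    dst = Path(dest_dir)
    dst.mkdir(parents=True, exist_ok=True)

    copied = 0
    for path in src.rglob("*"):
        if path.is_file() and path.suffix.lower() in extensions:
            # Note: files with the same name in different subfolders overwrite
            # each other here; acceptable for a sketch, not for production.
            shutil.copy2(path, dst / path.name)
            copied += 1
    return copied

if __name__ == "__main__":
    # Hypothetical paths: select plain-text and CSV files from a sample folder.
    count = collect_files("./sample_data", "./selected", {".txt", ".csv"})
    print(f"Copied {count} matching files")
```

Tasks of this shape are trivial for an experienced operator to write by hand; the concern in the report is that LLMs remove even that small cost and speed up iteration.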

A North Korean hacking group called Thallium is also using LLMs to research publicly reported vulnerabilities, handle basic scripting tasks, and draft content for phishing campaigns. Meanwhile, an Iranian group named Curium is using LLMs to craft convincing phishing emails and to write code designed to evade antivirus software. Hackers with ties to the Chinese government are involved as well, using LLMs for research, coding, translation, and refining their existing tools.

LLMs have not yet been tied to any major attacks, but Microsoft and OpenAI are already taking preventive steps, shutting down accounts and assets associated with these hacking groups to head off potential large-scale attacks.

As the use of AI in cyberattacks evolves, Microsoft is also cautioning about future risks such as voice impersonation. The rise of AI-powered fraud, especially in speech synthesis, presents a distinct danger: even a short voice sample can be enough to train a model to mimic anyone.

In response to these emerging threats, Microsoft is turning to AI as a defensive tool. The company is building Security Copilot, an AI assistant designed to help cybersecurity professionals detect breaches and make sense of the overwhelming volume of data that security tools generate every day. Microsoft is also reassessing its own software security in light of recent hacking incidents.
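Microsoft has not published Security Copilot's internals, but the general pattern behind such assistants is straightforward: feed raw security telemetry to an LLM with a triage prompt and return a readable summary. The sketch below illustrates that pattern using OpenAI's Python client; the model name, prompt wording, and sample log lines are assumptions for illustration, not details of Security Copilot itself.

```python
# Minimal sketch of the LLM-assisted log triage pattern that assistants like
# Security Copilot build on. The prompt, model choice, and sample log lines
# are hypothetical; this is not Microsoft's implementation.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def triage_logs(log_lines: list[str]) -> str:
    """Ask an LLM to flag suspicious activity in a batch of log lines."""
    prompt = (
        "You are a security analyst. Review the following log lines, "
        "flag anything suspicious, and suggest one next step per finding:\n\n"
        + "\n".join(log_lines)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice for this sketch
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    sample = [
        "2024-02-14T03:12:09Z sshd: Failed password for root from 203.0.113.7",
        "2024-02-14T03:12:11Z sshd: Failed password for root from 203.0.113.7",
        "2024-02-14T03:12:14Z sshd: Accepted password for root from 203.0.113.7",
    ]
    print(triage_logs(sample))
```

The value of this pattern is compression: an analyst drowning in thousands of log lines per minute gets a short, prioritized summary instead of raw telemetry.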

As the threat environment continues to change, vigilance and industry cooperation will be crucial to countering the misuse of AI in cyber warfare. By staying informed, exercising caution, and supporting responsible AI development, the tech industry aims to secure a safer digital future for everyone.



"WASHINGTON, July 20 (Reuters) - Hackers and propagandists are wielding artificial intelligence (AI) to create malicious software."Read more

Raphael Satter

July 21, 2023, 2:38 AM GMT+5:30



"important for us to understand how AI can be potentially misused in the hands of threat actors. "Read more

Microsoft Threat Intelligence

February 14, 2024
