Russia, China and other US rivals in cyberspace are using new artificial intelligence (AI) tools to improve their hacking skills, according to a report from Microsoft and OpenAI.

We've heard before about the abuse of AI chatbots (e.g. ChatGPT) by cybercriminals, but this report is the first to correlate leading government hacking groups with specific uses of large language models (LLMs).

Recently, there has been heated debate about the dangers of the fast-growing technology, and many countries are trying to place restrictions on its use.


According to the Microsoft report, AI tools are being used by at least two hacking groups linked to the Chinese government, as well as by groups from Russia, Iran and North Korea.

"Cybercrime groups, government hackers and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand their potential value to their operations and the security controls they may need to bypass," Microsoft wrote.

Microsoft said it has cut off the hacking groups' access to OpenAI's AI-based tools, such as ChatGPT. It also said it would alert the makers of other tools used by the hackers and would share more details about the groups and the techniques they use.

Fortunately, no major AI-powered attacks have taken place yet, but hackers are working on it.

Sherrod DeGrippo, a Microsoft executive, acknowledged that the company cannot see everything hackers are doing and stressed that suspending some accounts would not deter attackers from creating new ones.

"Microsoft doesn't want to make it easy for threat actors to run campaigns," she said. "That's our role, to hit them as they evolve."


Among the state hacking groups using AI tools are:

  • A top Russian group linked to the military intelligence agency GRU: this group has used AI to research satellite and radar technologies that may be relevant to the war in Ukraine.
  • North Korean hackers: they used AI to learn more about publicly reported vulnerabilities, including a 2022 bug in Microsoft's support tools.
  • A group from Iran: its hackers have used AI tools to find new ways to trick people and to evade detection.
  • Two Chinese groups: hackers from one state-backed group tried to generate programs and content, while another Chinese group "assesses the effectiveness of LLMs in sourcing information on potentially sensitive subjects, high-profile individuals and profiles, regional geopolitics, US influence and domestic matters," Microsoft wrote.


Unfortunately, artificial intelligence (AI) can increase the effectiveness of hacking efforts in several ways. First, it can automate procedures, allowing hackers to execute attacks at a larger scale and with greater speed.

Second, AI can help personalise attacks. Using machine learning, it can analyse data and adapt attacks based on the findings, making them more effective.

Third, AI can improve the detection of weaknesses in a system. Using deep learning algorithms, it can discover new vulnerabilities that a hacker can exploit.

Finally, it can facilitate the development of more complex attacks. With the ability to learn and adapt, AI can help hackers create more sophisticated and harder-to-detect attacks.

Source: www.washingtonpost.com