Nation-State Hackers Using OpenAI’s ChatGPT to Boost Cyber Operations, Microsoft Says


Nation-state hackers are using artificial intelligence to refine their cyberattacks, according to a report published by Microsoft Corp. on Wednesday. Russian, North Korean, Iranian and Chinese-backed adversaries were detected adding large language models, such as OpenAI’s ChatGPT, to their toolkits, often in the preliminary stages of their hacking operations, researchers found. Some groups were using the technology to improve their phishing emails, gather information on vulnerabilities and troubleshoot their own technical issues, according to the findings.

It’s the largest indication yet that state-sponsored cyber-espionage groups, which have haunted businesses and governments for years, are improving their tactics based on publicly available technologies, like large language models. Security experts have warned that such an evolution would help hackers gather more intelligence, boost their credibility when trying to dupe targets and more rapidly breach victim networks. OpenAI said Wednesday it had terminated accounts associated with state-sponsored hackers. 


“Threat actors, like defenders, are looking at AI, including LLMs, to enhance their productivity and take advantage of accessible platforms that could advance their objectives and attack techniques,” Microsoft said in the report.

No significant attacks have yet included the use of LLM technology, according to the company. Policy researchers warned in January 2023 that hackers and other bad actors online would find ways to misuse emerging AI technology, including to help write malicious code or spread influence operations.

Microsoft has invested $13 billion in OpenAI, the buzzy startup behind ChatGPT. 

Hacking groups that have used AI in their cyber operations included Forest Blizzard, which Microsoft says is linked to the Russian government. North Korea’s Velvet Chollima group, which has impersonated non-governmental organizations to spy on victims, and China’s Charcoal Typhoon hackers, who focus primarily on Taiwan and Thailand, have also used such technology, Microsoft said. An Iranian group linked to the country’s Islamic Revolutionary Guard has leveraged LLMs to create deceptive emails, one of which was used to lure prominent feminists and another that masqueraded as an international development agency.

Microsoft’s findings come amid growing concern among experts and the public about the grave risks AI could pose to the world, including disinformation and job loss. In March 2023, more than a thousand people, including prominent leaders of major tech companies, signed an open letter warning about the risks AI could pose to society; the letter has since gathered more than 33,000 signatures.

Another suspected Russian hacking group, known as Midnight Blizzard, previously compromised emails from Microsoft executives and members of the cybersecurity staff, the company said in January.

