OpenAI and Microsoft have published findings on emerging threats in the rapidly evolving domain of artificial intelligence, showing that threat actors are incorporating AI into their arsenals and treating it as a tool to boost their productivity in offensive operations.
They also announced the principles shaping Microsoft's policy and actions to mitigate the risks associated with the use of its AI tools and APIs by advanced persistent threats (APTs), advanced persistent manipulators (APMs), and the cybercrime syndicates it tracks.
Despite threat actors' adoption of AI, research has yet to uncover any particularly novel or unique AI-enabled tactics attributable to their misuse of these technologies. This indicates that while threat actors' use of artificial intelligence is evolving, it has not yet produced unprecedented methods of attack or abuse, Microsoft said in a blog post.
However, OpenAI and its partner, along with their associated networks, are monitoring the situation to understand how the threat landscape may evolve as AI technologies are integrated into attacks.
They are committed to staying ahead of potential threats by scrutinizing how AI can be used maliciously, ensuring readiness for any new techniques that may emerge in the future.
“The goal of Microsoft’s partnership with OpenAI, including the publication of this research, is to ensure the safe and responsible use of AI technologies such as ChatGPT, while adhering to the highest standards of ethical practice to protect the community from potential misuse. As part of this commitment, we’ve taken measures to disrupt assets and accounts associated with threat actors, improved the protection of OpenAI LLM technology and users from attack or misuse, and designed guardrails and security mechanisms around our models,” Microsoft said in a blog post. “Additionally, we are also deeply committed to using generative artificial intelligence to disrupt threat actors and harnessing the power of new tools, including Microsoft Copilot for Security, to advance defenders everywhere.”
The principles outlined by Microsoft include:
- Identifying and taking action against malicious threat actors' use of its AI services.
- Notifying other AI service providers of detected misuse.
- Collaborating with other stakeholders.
- Maintaining transparency with the public and stakeholders about actions taken under these threat actor principles.