OpenAI Flags ‘High’ Cybersecurity Risks in New AI Models, Calls for Global Safeguards

Sapatar / Updated: Dec 11, 2025, 17:16 IST

OpenAI has issued a blunt warning in its latest cybersecurity report, stating that its newest large-scale AI models carry a “high” risk of being misused for cyberattacks. The company’s internal assessments, carried out over months of red-team simulations, revealed that the latest generation of AI systems can significantly accelerate offensive cyber operations—even for users without advanced technical expertise.

● Internal Tests Show AI Can Speed Up Hacking Workflows

According to the report, OpenAI’s evaluation teams demonstrated that the models could streamline steps involved in reconnaissance, vulnerability scanning, exploit discovery, and social engineering. While the AI did not generate highly novel exploits, it reduced the time and effort needed for attackers to assemble functioning attack chains, something cybersecurity experts consider a dangerous multiplier for existing threats.

● Malware and Phishing Assistance Identified as High-Risk Areas

The assessment highlighted two areas where risk was notably elevated: malware creation and phishing. OpenAI found that the models could provide structured guidance on crafting obfuscated code, building payloads, or customizing phishing messages that closely mimic legitimate communications. The company stressed that safeguards are in place, but sophisticated users may still find ways to bypass restrictions.

● OpenAI Calls for Stronger Governance and Industry Cooperation

In response, OpenAI urged governments, regulators, and industry leaders to develop stronger guardrails around the deployment of advanced AI. The company emphasized that the risk does not stem solely from OpenAI's systems but from a broader wave of increasingly capable AI platforms entering the market. It recommended cross-industry threat monitoring, mandatory safety evaluations, and standardized cybersecurity benchmarks for powerful AI models.

● Cybersecurity Experts Say Threat Landscape Is Evolving Rapidly

Security analysts have echoed OpenAI’s concerns. Experts warn that AI’s ability to automate manual tasks, write code efficiently, and mimic human communication could dramatically widen the attack surface. Organizations already battling a surge in ransomware and AI-generated scams may face an even more challenging future as malicious actors gain access to these tools.

● Governments Worldwide Watching AI Risks Closely

The warning comes as governments around the world move quickly to establish rules for advanced AI. The U.S., EU, Australia, and several Asian nations are exploring mandatory risk-management frameworks for frontier models. OpenAI's statement is expected to influence regulatory discussions, particularly around AI model access, monitoring, and high-risk deployment controls.

● Balancing Innovation and Safety Remains a Key Challenge

OpenAI reiterated that while advanced AI offers enormous benefits in cybersecurity defense, medicine, and automation, unchecked misuse could undermine digital safety at scale. The company said it remains committed to improving model safety and transparency as it continues developing next-generation AI systems.