Artificial intelligence tools are being pushed to their limits as the crypto industry grapples with more than $3.4 billion in hacking losses recorded in 2025. Developers are increasingly deploying AI-driven agents to monitor and secure smart contracts that collectively manage over $100 billion in digital assets.
Unlike previous years marked by numerous smaller breaches, 2025 losses were concentrated in a handful of large-scale attacks. Three major incidents accounted for nearly 70 percent of the total value stolen. The most prominent was a hack targeting the Bybit exchange, which resulted in losses of roughly $1.4 billion, making it one of the largest crypto thefts on record.
The scale of these breaches has intensified pressure on developers to strengthen on-chain security. Smart contracts, which automatically execute financial transactions on blockchain networks, are central to decentralized finance platforms. Any vulnerability in their code can expose investor funds to immediate risk.
Traditional manual audits, while still important, are increasingly seen as insufficient on their own. Audits can be time-consuming and costly, and once a contract is deployed, new attack methods may emerge that were not anticipated during review. This has led teams to explore continuous monitoring systems powered by AI.
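To make the idea of continuous monitoring concrete, here is a minimal sketch of one common building block: flagging transfers whose value deviates sharply from a contract's recent history. The `Transfer` type, the sample values, and the 3-sigma cutoff are illustrative assumptions for this sketch, not a description of any production system.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Transfer:
    # Hypothetical record of an observed on-chain transfer.
    tx_hash: str
    value_eth: float

def flag_anomalies(history: list[float], incoming: list[Transfer],
                   sigmas: float = 3.0) -> list[Transfer]:
    """Return transfers whose value exceeds mean + sigmas * stdev
    of the recent history -- a simple statistical anomaly check."""
    mu, sd = mean(history), stdev(history)
    threshold = mu + sigmas * sd
    return [t for t in incoming if t.value_eth > threshold]

# Typical recent transfer sizes for a contract, then two new transfers,
# one of which is wildly out of range.
history = [1.0, 0.8, 1.2, 0.9, 1.1, 1.0, 0.95]
incoming = [Transfer("0xaa", 1.05), Transfer("0xbb", 250.0)]

flagged = flag_anomalies(history, incoming)
print([t.tx_hash for t in flagged])  # ['0xbb']
```

Real monitoring systems layer far richer signals on top of this (call graphs, known exploit signatures, simulation of pending transactions), but the core loop is the same: watch live activity, compare it against a learned baseline, and alert before funds leave the contract.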
OpenAI, in collaboration with industry partners including Paradigm and OtterSec, has been testing an evaluation framework known as EVMbench. The system assesses whether AI agents can detect vulnerabilities in Ethereum Virtual Machine (EVM) based smart contracts under realistic conditions. In controlled environments, AI agents analyze code, identify potential flaws and simulate exploit attempts to evaluate contract resilience.
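The kind of flaw such agents look for can be sketched with a deliberately simplified static check. The example below flags a classic reentrancy smell in Solidity source: an external value-transfer call that runs before the caller's balance is updated. The regexes, the `balances` field name, and the sample contracts are assumptions for illustration; actual evaluation frameworks reason over bytecode and live execution traces, not text patterns.

```python
import re

# Pattern for an external call that forwards value, e.g. msg.sender.call{value: x}("")
EXTERNAL_CALL = re.compile(r"\.call\{value:")
# Pattern for the state update that should happen BEFORE the external call.
STATE_UPDATE = re.compile(r"balances\[msg\.sender\]\s*[-=]")

def flag_reentrancy(source: str) -> bool:
    """Return True if an external value-transfer call appears before any
    update to the caller's balance -- a common reentrancy indicator."""
    call = EXTERNAL_CALL.search(source)
    update = STATE_UPDATE.search(source)
    if call is None:
        return False
    return update is None or call.start() < update.start()

VULNERABLE = """
function withdraw(uint amount) external {
    (bool ok, ) = msg.sender.call{value: amount}("");
    require(ok);
    balances[msg.sender] -= amount;
}
"""

SAFE = """
function withdraw(uint amount) external {
    balances[msg.sender] -= amount;
    (bool ok, ) = msg.sender.call{value: amount}("");
    require(ok);
}
"""

print(flag_reentrancy(VULNERABLE))  # True
print(flag_reentrancy(SAFE))        # False
```

The gap between this toy check and a capable AI agent is exactly what frameworks like EVMbench try to measure: whether a model can find flaws that do not match any known textual pattern, and whether it can then demonstrate a working exploit against them.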
Early findings suggest that AI tools can detect irregularities faster than traditional methods and are highly effective at identifying exploitable weaknesses. However, results also indicate that AI agents are currently more successful at exploiting vulnerabilities than at safely repairing them. Some models have demonstrated the ability to exploit more than 70 percent of tested flaws, a sharp increase compared with earlier AI systems.
This dual-use nature of AI presents a growing concern. The same systems designed to protect decentralized finance infrastructure could also be leveraged by attackers to scan code at scale and automate exploit strategies. As AI models become more advanced, the speed and sophistication of potential attacks may increase.
Industry leaders have noted that AI agents may soon interact directly with blockchain networks, approving transactions or managing wallets autonomously. While this could reduce user errors and improve efficiency, it also raises questions about oversight and misuse.
The rapid integration of AI into crypto security highlights a broader shift in how digital finance is defended. Continuous monitoring, automated testing and real-time risk analysis are becoming essential. At the same time, the industry faces the challenge of ensuring that powerful AI security tools do not become equally powerful offensive weapons.