A Look at AI Adoption Trends in Cybersecurity

Security Leaders Balance Promise and Risk as AI Tools Emerge for Threat Detection

Mr. Steven Sim Kok Leong, Chair, OT-ISAC Advisory Committee
February 28, 2025

Discussions about securing artificial intelligence systems dominated cybersecurity conversations in 2024. But a more critical question emerged: How can AI strengthen cybersecurity?

Machine learning has played a key role in cybersecurity since 2015, enabling anomaly detection and behavioral analytics in endpoint detection and response, network detection and response, and extended detection and response systems. These tools now integrate with security information and event management platforms, using machine learning to reduce both false positives and false negatives. This approach, now termed "Assistive AI," enhances threat detection and analysis capabilities.
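The core idea behind that anomaly detection is simple: learn a behavioral baseline per host or user, then flag observations that deviate sharply from it. The sketch below illustrates this with a plain z-score test over hypothetical per-host telemetry; production EDR/NDR/XDR systems use far richer models, so treat this as an illustration of the concept, not any vendor's method.

```python
from statistics import mean, stdev

def anomaly_scores(baseline, observations, threshold=3.0):
    """Flag observations more than `threshold` standard deviations
    from the baseline mean -- the core idea behind the behavioral
    analytics in EDR/NDR/XDR anomaly detection."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [(x, abs(x - mu) / sigma > threshold) for x in observations]

# Hypothetical telemetry: outbound connections per hour for one host.
baseline = [12, 15, 11, 14, 13, 12, 16, 14, 13, 15]
flagged = anomaly_scores(baseline, [14, 13, 250])
# 250 connections/hour is far outside the learned baseline and is flagged.
```

Tuning `threshold` is exactly the false-positive/false-negative trade-off the article describes: a lower threshold catches more attacks but generates more noise for analysts.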

Generative AI, which took the world by storm in late 2022, prompted security professionals to examine its cybersecurity applications. Organizations began using it to create incident reports and deliver concise analyses. The challenge with gen AI lies in its primarily assistive nature. Some vendors claimed their AI solutions enabled machine-speed incident response, but human oversight remains necessary. Machine speed is thus not achievable at this stage unless strong confidence in the output can be assured, even with retrieval-augmented generation, or RAG.
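RAG improves that confidence by grounding the model's answer in retrieved evidence rather than its parametric memory. A minimal sketch of the retrieval-and-prompt step, using naive keyword overlap as a stand-in for the embedding search a real pipeline would use; the incident notes are hypothetical:

```python
def retrieve(query, corpus, k=1):
    """Rank documents by keyword overlap with the query -- a toy
    stand-in for the vector search a real RAG pipeline performs."""
    qwords = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda d: len(qwords & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, corpus):
    """Assemble a prompt that instructs the model to answer only
    from retrieved context, reducing the room to hallucinate."""
    context = "\n".join(retrieve(query, corpus))
    return (f"Context:\n{context}\n\n"
            f"Question: {query}\n"
            "Answer using only the context above.")

# Hypothetical incident notes to ground a report request.
notes = ["host srv01 beaconed to a known c2 domain at 02:14",
         "phishing email delivered macro-enabled attachment"]
prompt = build_prompt("summarize the srv01 c2 beaconing incident", notes)
```

Even with this grounding, a human still has to verify the output before it drives a response action, which is why the article argues machine speed remains out of reach.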

Discussions around agentic AI have increased since late 2024 and continued into early 2025. This technology represents a potential game-changer because it enables autonomous threat detection and response - distinct from the automated processes in security orchestration, automation and response systems. With agentic AI, the industry is moving closer to achieving true machine-speed security operations.
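The distinction from SOAR is that a playbook executes a fixed sequence, while an agent selects its own action and decides when it is confident enough to act without a human. A minimal sketch of that "guardrailed autonomy" pattern, with a hypothetical action catalog keyed by MITRE ATT&CK technique ID; a real agent would reason over far richer context:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    technique: str     # e.g. a MITRE ATT&CK technique ID
    confidence: float  # detection confidence in [0, 1]

# Hypothetical action catalog; illustrative only.
PLAYBOOK = {"T1059": "isolate_host", "T1110": "lock_account"}

def agentic_triage(alert, autonomy_threshold=0.9):
    """Act autonomously only when detection confidence is high;
    otherwise escalate to a human analyst. The threshold is the
    guardrail that separates machine-speed response from the
    manual MDR loop described above."""
    action = PLAYBOOK.get(alert.technique, "open_ticket")
    if alert.confidence >= autonomy_threshold:
        return ("auto", action)
    return ("escalate_to_human", action)

mode, action = agentic_triage(Alert("srv01", "T1059", 0.97))
```

Here a high-confidence command-and-scripting alert is handled at machine speed, while anything below the threshold still lands in an analyst's queue.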

Most companies have already deployed EDR and SIEM with machine learning capabilities, and this adoption has helped enterprises defend against behavioral attacks including those that live off the land, or LOTL. Despite these advances, the response remains largely manual, requiring teams of analysts to provide managed detection and response services. This human element creates significant delays, potentially enabling attackers using agentic AI to complete their objectives before defenders can respond. The cybersecurity industry needs to match adversaries' capabilities by adopting similar AI technologies to counter autonomous attacks - fight AI with AI.

Attackers need to succeed just once, even after failing 999 times, while defenders must succeed every time. Hackers don't care if their algorithms fail on a victim's network - they can always try again. Security teams, by contrast, must ensure their solutions protect systems without disrupting business operations.

In a rush to ride the gen AI wave, many vendors released products, services and solutions in 2024, and user enterprises have started piloting these solutions as AI adoption becomes a common ask from their boards.

Onus on End Users

Many cybersecurity vendor solutions have unfortunately fallen short. In the rush to be among the first to market, vendors tend to overlook three key considerations in gen AI solutions: usage confidence, friction and governance. Enterprises have expressed disappointment, while vendors advised waiting for future releases that promised improvements.

By late 2024 and early 2025, some EDR, NDR and XDR vendors had begun showcasing agentic AI capabilities. Hopes are high, but until MSSPs themselves reduce the staffing needed for MDR, we cannot clearly say agentic AI has effectively reduced manual overhead.

To close this gap, user enterprises need to continue testing and providing feedback to vendors. The cybersecurity ecosystem can mature only when both vendors and enterprises are committed to innovation, adopt a growth mindset, and take user needs and feedback into consideration.

Agentic AI Needs Guardrails

Vendors must provide thorough user acceptance testing guidelines that demonstrate agentic AI benefits compared to standard solutions. These guidelines require clear methodologies to verify that the AI tool is not only complete, robust and trustworthy but also free of hallucinations.

Cybersecurity vendors also must prove that their agentic AI solution can withstand cyberattacks. With greater capabilities come greater responsibilities. A misguided autonomous system could potentially shut down an entire enterprise by quarantining all servers and endpoints after incorrectly identifying malware infections.
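One concrete guardrail against that "quarantine everything" failure mode is a blast-radius limit: the autonomous system may act on its own only when the proposed action touches a small fraction of the fleet. A minimal sketch under that assumption; the 5% cap is an illustrative default, not a recommended value:

```python
def safe_quarantine(candidates, fleet_size, max_fraction=0.05):
    """Refuse autonomous quarantine when the action would affect
    more than `max_fraction` of the fleet -- a simple blast-radius
    guardrail. Returns (approved_hosts, needs_human_review)."""
    if fleet_size and len(candidates) / fleet_size > max_fraction:
        return ([], True)   # too broad: require human sign-off
    return (list(candidates), False)
```

Quarantining one host out of a hundred proceeds at machine speed; a sweep that would isolate half the estate is held for an analyst, limiting the worst case to an approval delay rather than an enterprise-wide outage.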

I am cautiously optimistic about AI's evolution. I think assistive AI is maturing in 2025 with increased adoption on the cybersecurity front, but agentic AI is still in the early stages of development. Implementation requires appropriate guardrails and sandbox environments for testing. Many cybersecurity leaders at large enterprises are adopting a wait-and-see approach due to limited time and resources, preferring to let industry leaders pioneer adoption before committing.

Security professionals have avoided solutions that might cause career-destroying, self-inflicted denial-of-service incidents with catastrophic operational impacts, especially for critical infrastructure. Despite these concerns, the outlook remains balanced. Similar to cloud adoption, AI integration has followed the Gartner Hype Cycle with progressive maturity. The industry is steadily progressing up the slope of enlightenment toward productive implementation.

About the Author

Steven Sim Kok Leong, Chair, OT-ISAC Advisory Committee

Steven Sim has worked for more than 27 years in the cybersecurity field with large end-user enterprises and critical infrastructures. He has undertaken a global CISO role, driven award-winning CSO50 security governance and management initiatives, and headed incident response, security architecture, technology and operations at local, regional and global levels.