New insights reveal how AI-driven development is outpacing security – and what organizations must do to adapt.
Security Journey, a leading provider of application security education, has released a new report outlining the security challenges posed by AI adoption in software development and the steps organizations must take to close the growing gap between how software is built and how it is secured.
The report, Closing the Security Gap in AI, captures insights from a roundtable held in June 2025, featuring leading voices in application security, development, and AI. The panel explored how AI tools, particularly large language models and code generation assistants, are transforming software workflows, often at the expense of security. Developers are releasing code faster, but often without fully understanding the implications of using AI in the development lifecycle.
Security Journey’s roundtable participants – including security leaders, engineers, and educators from across the industry – discussed the real-world consequences of AI-generated vulnerabilities, the risks of over-relying on automation, and the cultural and structural changes required to support secure AI adoption.
The report pinpoints where organizations must adapt to secure their use of AI, including:
- Governance must reflect reality: AI policies are often developed without a clear understanding of how teams truly engage with the technology. When governance is overly rigid or reactive, it drives employees toward shadow AI – exacerbating risk rather than mitigating it.
- Developers need greater support and accountability: AI is shifting more decision-making onto developers, many of whom lack the security knowledge to assess risks. Organizations must provide proactive education and just-in-time support.
- Security culture needs to evolve with the tech: Teams will only prioritize security if it is integrated into their daily routines and reinforced by peers. Positive reinforcement, clear defaults, and internal champions can help normalize secure behavior.
- AI is accelerating talent gaps: Overreliance on AI tools is preventing junior developers from building foundational experience. Organizations risk losing long-term expertise unless they invest in both human and technical development.
- Security may get worse before it gets better: Threat actors are already exploiting vulnerabilities in AI-generated code. As organizations struggle to keep pace, the frequency of incidents may continue to rise. The path forward demands education, rigorous testing, and a shift in security culture.
“This isn’t a tooling problem – it’s a people problem,” said Dustin Lehr, AppSec Advocate at Security Journey. “From boardrooms to codebases, the pressure to adopt AI is accelerating. It’s transforming how software is created, but developers remain accountable for securing it. If we don’t match the speed of AI adoption with equally aggressive education and governance, we risk exposing organizations to systemic vulnerabilities. Developers need more than policies – they need training, support, and a culture that empowers secure choices. This report doesn’t just highlight the challenges – it offers a roadmap to close the gap.”

