
AI coding speeds up, but security teams fall behind

Thu, 23rd Apr 2026

ProjectDiscovery has published findings from its 2026 AI Coding Impact Report, which points to a gap between faster software delivery and security teams' ability to keep pace.

The company surveyed 200 cybersecurity practitioners and leaders from mid-sized and large organisations in North America and Western Europe. Most respondents worked at businesses with 1,001 to 5,000 employees.

The results show that software engineering teams are moving faster as AI-assisted coding becomes more common. All respondents reported increased engineering delivery over the past 12 months, and 49% said most or all of that increase came from AI-assisted coding tools.

Security teams, however, are under strain. The findings show that 62% said it is becoming harder to keep up with the volume of code that needs security review, while only 40% said they are managing the increased workload well.

Pressure appeared more acute in mid-sized organisations, where 69% said it is becoming harder to keep up with the growing volume of code requiring security review.

Manual Burden

A central finding is how much time security staff spend checking results rather than fixing problems. Two-thirds of practitioners, or 66%, said they spend more than half their time manually validating findings instead of resolving underlying vulnerabilities.

The most common weekly tasks were triaging alerts, cited by 60% of respondents, followed by coordinating fixes at 53% and validating exploitability at 46%. Those activities take time away from remediation work.

This suggests that AI-driven gains in software production may not translate into similar improvements in security operations. If teams remain tied up reviewing alerts and confirming whether issues are real, vulnerability backlogs may continue to grow even as coding output rises.

Trust Question

The survey also found that trust remains a major barrier to AI adoption within security teams. While respondents acknowledged the potential for AI-based tools to help manage increasing workloads, many said they would not rely on systems they could not closely inspect.

For AI-based penetration testing tools, 57% said they would need a full audit trail of actions taken before they could trust the technology. The finding points to demand for visibility and accountability in tools used by security practitioners, particularly where automated decisions could affect risk assessments or remediation priorities.

The report also highlighted concerns about risks introduced or amplified by AI-assisted coding. Practitioners ranked exposure of secrets as the leading issue, with 78% citing it as their top challenge.

The findings indicate that the spread of AI tools in engineering is not being matched evenly across security functions. That imbalance could leave companies with a widening internal gap, as more code is produced at greater speed but review and remediation do not scale at the same rate.

The data reflects a workforce facing growing pressure from higher code volumes, more alerts, more false positives and more manual proof-of-concept work. ProjectDiscovery argues that this combination is stretching security teams' bandwidth.

Rishi Sharma, Chief Executive Officer and Co-Founder of ProjectDiscovery, said the issue lies after vulnerabilities are identified rather than in discovery itself. "The industry spends a lot of oxygen talking about finding more vulnerabilities, but our data shows the real bottleneck is downstream. We have a validation and remediation systems problem."

He added: "Practitioners do not need more scanners piling on more alerts. They need fewer tools that deliver evidence instead of noise, and they need AI that can help teams scale innovation and risk management at the same pace."

ProjectDiscovery is known for its open-source security tools, including the vulnerability scanner Nuclei and other products used to map attack surfaces and identify exploitable weaknesses. It also offers Neo, a platform that brings together static application security testing, dynamic application security testing and automated penetration testing.

The survey findings come as companies across sectors continue to push AI tools into software development workflows. For security leaders, the challenge is no longer simply whether AI can increase coding output, but whether the controls, auditability and evidence needed to manage the resulting risks are keeping up.

Among respondents, the clearest demand was for systems that reduce manual verification and provide traceable evidence of what automated tools have done.