Rethinking Code Security: Why Software Development Needs a New Foundation in the Age of AI

In its 2025 GenAI Code Security Report, Veracode delivered a harsh truth: while AI produces functional code, it introduces security vulnerabilities in 45% of cases. Despite advancements in AI model size and power, the security performance of these models has remained flat for the last two years. The bottom line? The code that AI co-pilots generate is not becoming more secure. This conclusion comes after analyzing 80 curated coding tasks across more than 100 large language models (LLMs).
Sure, Generative AI is revolutionizing how we code. It helps us move faster, automates mundane tasks, and accelerates development in ways we never imagined. But when only 55% of AI-generated code is deemed secure, it raises a critical question: Are we building on broken foundations?
The Problem: Training AI on Insecure Foundations
One major reason behind these security vulnerabilities is how AI models are trained. They learn from vast amounts of open-source code, much of which contains security flaws. The result? AI becomes adept at understanding syntax and logic but doesn't inherently grasp the principles of secure coding.
Here’s what Veracode’s report uncovered:
- Flat Security Performance: Despite the continuous growth of AI models, security performance has stagnated for the past two years.
- Vulnerability Gaps: While AI models are good at catching common issues (like SQL injection and basic cryptography flaws), they struggle with more complex vulnerabilities, leading to inconsistent security across the stack.
- The “Full Power” Problem: Many general-purpose programming languages offer developers—and by extension, AI—broad access to system resources by default. This wide-open access conflicts with the “least privilege” principle, which is foundational for security.
The Solution: Language-Level Security
This problem can't be solved by simply retraining the models. We need a more fundamental shift. Just as languages have evolved to address memory unsafety and unstructured control flow, it's time for them to evolve to address the security crisis. Noumena Protocol Language (NPL) was designed for exactly this purpose. It cuts through these problems by making security a first-class citizen—not a layer you hope the AI gets right. Here's how NPL addresses the challenge:
- Compiler-Enforced Trust: With NPL, authorization isn't an afterthought. It's baked directly into the language and compiler: if a permission isn't explicitly defined, it's denied. This Zero Trust principle eliminates entire classes of vulnerabilities by design (the first sketch after this list shows the idea).
- Contextual Authorization: Instead of asking "What role does this user have?", NPL's Relational Authorization model asks, "What is the dynamic relationship between these parties, in this context?" This lets you build nuanced, precise access logic that is far harder to misconfigure (the second sketch after this list illustrates the idea).
- A Truly AI-Ready Stack: Because NPL is simple, declarative, and enforces security from the ground up, AI models can generate secure, compliant code natively without learning bad habits. The compiler acts as a final safety net, ensuring that even AI-generated code must comply with your security rules.
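
To make the first point concrete, here is a minimal TypeScript sketch, not NPL syntax, of what deny-by-default authorization looks like when a type checker enforces it rather than a runtime convention. All names are hypothetical; NPL bakes the same principle directly into its own language and compiler.

```typescript
// Conceptual sketch in TypeScript (not NPL): deny-by-default authorization.
// An action can only be invoked with a grant whose type matches the party
// the action explicitly declares; anything not granted fails to compile.

type Party = "issuer" | "payee";

// A grant names exactly one party it was issued to.
interface Grant<P extends Party> {
  party: P;
}

// The "pay" action declares that only the issuer may call it.
function pay(grant: Grant<"issuer">, amount: number): void {
  if (amount <= 0) {
    throw new Error("amount must be strictly positive");
  }
  console.log(`issuer pays ${amount}`);
}

const issuerGrant: Grant<"issuer"> = { party: "issuer" };
const payeeGrant: Grant<"payee"> = { party: "payee" };

pay(issuerGrant, 100);    // OK: the permission is explicitly declared
// pay(payeeGrant, 100);  // Rejected at compile time: never granted to the payee
```

The mechanics matter less than the default: an undeclared permission is a compile error, not a bug waiting for a code review to catch it.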
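
A second, equally hypothetical TypeScript sketch illustrates the relational idea: access is decided by the caller's relationship to this specific agreement, not by a system-wide role. NPL expresses rules like this in the language itself; the sketch below only demonstrates the concept with runtime checks.

```typescript
// Conceptual sketch in TypeScript (not NPL): relational authorization.
// Who may act is determined by the caller's relationship to *this* loan
// instance (lender or borrower), not by a global role.

interface User {
  id: string;
}

class Loan {
  constructor(
    private readonly lender: User,
    private readonly borrower: User,
    private outstanding: number,
  ) {}

  // Only the borrower of this particular loan may repay it.
  repay(caller: User, amount: number): void {
    if (caller.id !== this.borrower.id) {
      throw new Error("only this loan's borrower may repay it");
    }
    this.outstanding = Math.max(0, this.outstanding - amount);
  }

  // Only the lender of this particular loan may forgive the remainder.
  forgive(caller: User): void {
    if (caller.id !== this.lender.id) {
      throw new Error("only this loan's lender may forgive it");
    }
    this.outstanding = 0;
  }
}

const alice: User = { id: "alice" }; // lender
const bob: User = { id: "bob" };     // borrower
const loan = new Loan(alice, bob, 1000);

loan.repay(bob, 250);      // allowed: bob is the borrower on this loan
// loan.repay(alice, 250); // rejected: alice is the lender, not the borrower
```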
Time for a New Approach
The findings from the Veracode report serve as a wake-up call. Relying on traditional languages and hoping AI will get it right isn’t enough. It’s time to stop patching over security flaws and start building on a new, more secure foundation. NPL isn’t just a tool for fixing AI's security shortcomings—it’s a blueprint for a future where security is built in from the start.
Ready to see how NPL and AI can work together to build secure, compliant applications? Check out the whitepaper below or get in touch with us.