
Capabilities & Limitations of AI in Application Security

Infusing AI into AppSec processes and tools offers a practical enhancement, particularly for threat modeling and tool utilization.

Written by: Chris Romeo
Thu, Mar 21 2024

Integrating artificial intelligence (AI) is no longer a novel idea. It's fueling conversations, opinions, and forecasts. But as we cut through the AI hype, it's pivotal to ground ourselves in the reality of its capabilities and limitations within AppSec.

The current AI market is being portrayed as the savior of everything. My response is, let's tap the brakes a bit. We must evaluate what AI is – and is not – suited for.

For example, bolting a chatbot onto your product does not mean you are an AI company – you aren't using AI. You just added a chatbot. I have yet to find a chatbot embedded in a product that offers me any real value; the exception is ChatGPT, and ChatGPT isn't integrated into anything – it is a complete solution on its own.

I prefer the concept of infusing AI into solutions. This means using it for what it's good at: distilling complex data sets into comprehensible insights. But it's no magic wand. It excels at pattern recognition and data summarization; generating innovative ideas remains a uniquely human trait. AI in AppSec – and in threat modeling – isn't about concocting groundbreaking thoughts from a digital ether; it's about bolstering the foundational elements of security: people, processes, tools, and governance.

Enhancing, Not Replacing, the Human Element

Business logic is where human intellect rules. AI can support threat modeling by offering a vast repository of threat intelligence, but it cannot architect secure solutions in isolation. The human factor remains irreplaceable, steering the design and ensuring the application's design and architecture align with security and privacy imperatives. Design is where innovation lives, and AI is not good at innovation.

You would never ship an intern's work without scrutiny and revision. The same applies to AI output: it needs expert human review.

Even with the advances in artificial intelligence, the design process still requires a human component. AI agents cannot yet conduct security architecture reviews unassisted or independently improve the process. What AI is well-suited to provide is a broad catalog of threats and mitigations to act on. Its real value is augmenting human knowledge and experience by some measurable amount – say up to 20% – making dev teams more efficient at threat mitigation. But it is not a replacement for human judgment. Only humans can drive the innovation process; AI is not capable of thinking up anything genuinely new.

Take GitHub Copilot, for example. It's not about reinventing the wheel of secrets detection or dependency management – we already have tools for those tasks. Instead, Copilot and its ilk enhance developer productivity by providing scaffolding. AI agents help developers and security professionals become more efficient, not render them redundant.

AI Infusion: Seamless Integration at Just the Right Time

At Devici, we have a concept we call 'AI Infusion': integrating AI so seamlessly into AppSec tools that its presence becomes indistinguishable from the tool's inherent functionality. This aligns with our design philosophy – making your workflow more productive without requiring you to interact with standalone AI systems.

Here's how it works. When a developer builds a threat model, they identify the threats they already know about. Feeding an LLM (large language model) the developer's description of a specific element in the threat model, however, tends to surface more threats than the developer identified alone. As the developer evaluates the AI's results, they gain new knowledge of potential threats.
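
To make the shape of that interaction concrete, here is a minimal sketch in Python of how a developer's description of one threat-model element might be sent to an LLM for candidate threats. It is illustrative only, not Devici's implementation; the OpenAI client usage, the model name, and the prompt wording are all assumptions.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def suggest_threats(element_description: str) -> str:
    """Ask an LLM for candidate threats against one threat-model element.

    Illustrative prompt and model choice; every suggestion still gets
    reviewed by the developer before it lands in the threat model.
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any capable chat model works
        temperature=0.2,      # keep suggestions relatively stable
        messages=[
            {
                "role": "system",
                "content": (
                    "You are an application security assistant. Given a "
                    "description of one component in a threat model, list "
                    "plausible threats grouped by STRIDE category, each "
                    "with a one-line suggested mitigation."
                ),
            },
            {"role": "user", "content": element_description},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(suggest_threats(
        "A public REST endpoint that accepts JSON payment details and "
        "writes them to a Postgres database over an internal network."
    ))
```

The important part is the loop around the call: the developer's own description goes in, a broader candidate list comes out, and a human decides what actually belongs in the model.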

This makes our products smarter by integrating AI support into the places where threat modeling teams need it most. We’re inserting AI into the fabric of the process to enhance the specific activity the person is performing when they need more knowledge.

Governance: Beware of Consistency & Privacy Issues

There's a lot of buzz around AI and governance. My caution about bringing AI into this realm concerns consistency and privacy. Here's why. ChatGPT and other generative AI solutions can produce answers, content, and ideas when asked. But every time you ask, they produce a different answer – and that's not counting the times you explicitly ask them to regenerate a response. Ask the same question twice in a row and you get two different answers. Governance is about repetition and consistency, so generative AI has some maturing to do before it's a solid governance solution.

This doesn't even account for AI hallucination. There are reports of AI inventing fake quotes and non-existent case references in a legal brief that lawyers used ChatGPT to write. So much of the AI we use today is just digesting and regurgitating information, often to the detriment of reality. It cannot replace the human factor – only augment it. Ensuring the information AI produces is true, and is not someone else's intellectual property or content, is a huge issue. There is a reason OpenAI is willing to pay its customers' legal fees around claims of copyright infringement.

Educating Through AI

AI's potential to educate and elevate developers' security acumen is another facet we're passionate about. The aim is not to rank but to enrich and empower developers, using AI-driven insights to tailor educational interventions that bolster their security expertise. AI tools make digesting and prioritizing information from appsec or scanning tools more efficient and manageable. They then present it clearly and concisely, enabling developers, engineers, and DevOps team members to act faster – and often learn while doing so.
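
As a rough illustration of that "digest and prioritize" step, the sketch below collapses a SARIF report from a scanner into a short, ranked summary that could then be handed to an LLM (or a human) for a plain-language explanation. The file layout follows standard SARIF 2.1.0; the function name, ranking rules, and file name are assumptions for illustration, not any particular product's pipeline.

```python
import json
from collections import Counter
from pathlib import Path

# Order used only for sorting the digest; SARIF "level" values
# run note < warning < error.
_LEVEL_RANK = {"error": 0, "warning": 1, "note": 2, "none": 3}


def digest_sarif(path: str, top_n: int = 10) -> str:
    """Collapse a SARIF report into a short, LLM-friendly digest.

    Assumes a standard SARIF 2.1.0 layout; real reports may need more
    defensive parsing than this sketch does.
    """
    sarif = json.loads(Path(path).read_text())
    findings = []
    for run in sarif.get("runs", []):
        tool = run.get("tool", {}).get("driver", {}).get("name", "unknown tool")
        for result in run.get("results", []):
            findings.append((
                result.get("level", "warning"),
                result.get("ruleId", "unknown-rule"),
                tool,
            ))

    counts = Counter(findings)
    ranked = sorted(
        counts.items(),
        key=lambda item: (_LEVEL_RANK.get(item[0][0], 3), -item[1]),
    )

    lines = [f"{len(findings)} findings total"]
    for (level, rule, tool), count in ranked[:top_n]:
        lines.append(f"- {count}x {rule} ({level}, reported by {tool})")
    return "\n".join(lines)


if __name__ == "__main__":
    print(digest_sarif("scan-results.sarif"))
```

A compact digest like this is what you would feed to an LLM for the clear, concise presentation described above, rather than dumping the raw scanner output on the team.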

Also, as more experienced team members add knowledge and prompts to the LLM, that information becomes more accessible and indexable to newer developers. 

AI can also evaluate work done by developers to provide them with targeted education and resources to help them improve. This is not about stack-ranking developers (that would not be a company I want to work for). Instead, it is important to uncover specific areas where a team member could use improvement and serve up the data and information to assist them.

Looking Ahead: A Pragmatic AI Trajectory in AppSec

There is no shortage of predictions about AI's trajectory in AppSec, yet we adopt a measured stance. We anticipate a market evolution where the AI hyperbole settles, giving way to nuanced, value-adding AI applications that fortify the AppSec framework, particularly in promoting automated guardrails and informed decision-making.

Here are a few areas everyone should be considering for the future of AI in application security:

  • We need to get a trust model in place for AI. How do we build that, and how long will it take to reach a point where we can measure trust in AI?

  • We must put guardrails in place. For example, I recently heard a story about a company that uses its access control setup as a guardrail. When developers create a new code file, they must add access control rules to a JSON policy file; those rules define the access control policy for the new endpoint. If the rules are not added, the commit is rejected. (A minimal sketch of such a check appears after this list.)

  • Remediation is where AI can have the most significant impact. But it's not about AI autonomously uncovering the security issue, building the fix, and committing the code. We need to focus on providing developers with suggestions and candidate code for review.
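
Below is a minimal sketch of the guardrail from the story above, written as a Python pre-commit check. The directory layout, the policy file name, and the convention of keying the policy by file path are assumptions for illustration, not the actual setup the company used.

```python
#!/usr/bin/env python3
"""Reject commits that add endpoint files without an access control entry.

Intended to run as a pre-commit hook or CI gate. File names, paths, and
the policy-keying convention are illustrative assumptions.
"""
import json
import subprocess
import sys
from pathlib import Path

POLICY_FILE = Path("access-policy.json")   # assumed policy file name
ENDPOINT_DIR = "src/routes/"               # assumed location of endpoint code


def staged_new_endpoint_files() -> list[str]:
    """Return endpoint source files newly added in the staged commit."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=A"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [f for f in out.splitlines() if f.startswith(ENDPOINT_DIR)]


def main() -> int:
    # Assumed convention: the policy file maps each endpoint file path
    # to its access control rules.
    policy = json.loads(POLICY_FILE.read_text()) if POLICY_FILE.exists() else {}
    missing = [f for f in staged_new_endpoint_files() if f not in policy]
    if missing:
        print("Commit rejected: no access control rules defined for:")
        for f in missing:
            print(f"  - {f}")
        print(f"Add an entry for each file to {POLICY_FILE} and retry.")
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Wired into a pre-commit hook or a CI job, a check like this turns the access control policy from a convention into an enforced gate.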

Embracing AI in AppSec is less about blind reliance and more about informed augmentation. It's a testament to the art and science of secure solution design, a fusion of the imaginative human spirit and the methodical precision of machine intelligence—each enhancing the other to forge a fortified front against the omnipresent threat landscape.

