The Present & Future Impact of AI for Threat Modeling 

Overview

AI is a hot topic. Most organizations are adding chatbots to their systems or software, but the potential of AI is far more profound and far-reaching than a simple chatbot. This raises questions about its impact on security practices and where threat modeling fits.

Join four of the leading voices in threat modeling for an engaging discussion about the role of AI in threat modeling both now and in the future. They'll tackle topics such as: 

• AI's impact now and in the next five years

• The roles tools and technology play

• The risks of using AI in threat modeling (we're going to threat model it!)

Speakers

  • Chris Romeo, CEO, Devici

  • Dr. Kim Wuyts, Cyber & Privacy Manager  

  • Brook Schoenfield, CTO, Resilient Software Security

  • Izar Tarandach, Senior Principal Security Architect 

Transcript

Chris Romeo 00:02

All right. Welcome. Welcome to our webinar here for Devici on AI and threat modeling. My name is Chris Romeo. I'm the CEO of Devici, and I get to be the moderator of this brilliant, charming, and fun panel. We were just talking in the preamble about brilliant, charming, and fun - well, all these people meet all three of those characteristics. But let's do a quick round of introductions for these folks. Everybody probably knows you already, but just in case there's a few that don't. Dr. Kim Wuyts, I'd love to start with you for a brief introduction.

Dr. Kim Wuyts 00:39

Sure, I am currently a Cyber and Privacy Manager at one of the big four. But before that, I did a lot of research on privacy threat modeling, and one of the main outputs there was the LINDDUN privacy threat modeling framework.

Chris Romeo 00:59

Okay, and when I think privacy threat modeling, I think Dr. Kim Wuyts is the person who really began this idea for all of us. I love to point that out every time I introduce you, Kim, or I'm part of your introduction.

Dr. Kim Wuyts 01:19

I do have to point out that there is a bigger team there too that deserves some credit. I'm happy that I'm part of that. Definitely.

Chris Romeo 01:29

And you were the driving force, though. All right. How about Brook Schoenfield, if you would introduce yourself, please.

Brook Schoenfield 01:37

Right now I'm the CTO for a little software and consulting company called Resilient Software Security, with my dear friends Damilare Fagbemi and Michael Greene. Before that, I was a serial leader of software security programs and general InfoSec, bouncing back and forth, and author of a whole bunch of books about all of that - whatever I could figure out to throw down. And quoting all of you; you all show up in my books. So yeah.

Chris Romeo 02:24

All right. Thanks, Brook. How about Izar Tarandach?

Izar Tarandach 02:28

Me? I do security stuff, and I want to be Brook when I grow up. And the way to do that is... well, I only have one book versus his many. I've done a thing or two in security. Nowadays, I am a Senior Principal Security Architect, I've written a tad about threat modeling, and I'm a happy participant in the Security Table podcast with Chris and Matt Coles.

Chris Romeo 03:00

And we can't go any further until we address the shirt that you're wearing at the moment, which says, "Go away or I'll replace you with a very short prompt," in very bold letters. So, you know, in the spirit of AI, that's a great place to start. And I'm guessing - are you going to be wearing this shirt at holiday gatherings so that family members know not to ask you questions about IT, or what was your ultimate desire?

Izar Tarandach 03:29

That's the one for when I say I only do Linux, I don't do Windows.

Chris Romeo 03:34

That's a different shirt.

Izar Tarandach 03:34

That's usually good enough. No, this is an evolution of the old "Go away or be replaced by a very short script." And in these times that we live in, you know, a bash script is not enough. You can't say that you're going to substitute people with something in bash or something like that. So it's more appropriate to say, "Go away, or I'll replace you with a very short prompt."

Chris Romeo 03:37

Good stuff. And so I want to just remind our audience here on LinkedIn, you can comment to ask questions. We'd love to see your questions come in throughout this conversation, and we'd love to see your thoughts on the things we're talking about. If you disagree with us on something, please put it in the comments, and we'll probably put it up on screen and talk about it, because we want to ensure everybody gets the most out of this experience. And so I'm going to start with a question for Brook, because as we were preparing for this, we were talking about who the experts in artificial intelligence are and who's qualified to speak about this topic. So Brook, I'd love to get your thoughts on that as a place to start.

Brook Schoenfield 04:38

Well, there are two sides to that question. Everyone who's interacted with it, or no one - because we don't actually know how it works. If you dig into it, and I've been digging as best I can, I probably just know enough to be dangerous here. We don't know how it works. We actually don't know how it works. And you might be surprised - we know how to train it. We know how to get it going, we know how to get results. We know how to measure the error; I mean, there are all different ways we measure the probabilistic behavior. How much error? Should we change the error? How many tokens can we give it? There are lots of measures here, don't get me wrong. But we actually don't know what's going on inside the probabilistic engine. There's lots of research on that, but we don't. So there are people who are deep into all of these subjects - thank goodness, not me - and they're really close to the forest. They're inside the forest. And then there are us observers on the outside, who really don't understand all of it. We may understand pieces, but we don't understand all of it. And none of us knows how it works. So if you're feeling a little impostor syndrome - oh my gosh, I wonder if I could even comment or think about this - let it go. Because, yeah, you're joining the rest of us.

Chris Romeo 06:17

We're all in the same boat, then.

Brook Schoenfield 06:19

Enjoy the ambivalence.

Chris Romeo 06:21

That's our current understanding of this. So let's dive into the threat modeling pool here and start - Izar, I'm going to come to you first with this question. From your perspective, how is AI currently being used in the discipline of threat modeling today?

Izar Tarandach 06:39

Badly.

Chris Romeo 06:43

I was hoping for a little more than a one word answer, but, you know...

Izar Tarandach 06:47

Very badly. So the thing is - okay, of course, I'm joking. But the thing is that, as people tend to approach threat modeling and threat modeling tooling in general, there has been, for as long as I can remember, this constant search for the security tool that's going to be the silver bullet. The one where you click a button and you get this magical thing called the threat model, right? And the pushback of threat modelers - people who thought about threat modeling methodologies and technology - was always that threat modeling is, at its root, a conceptual exercise. You are not reviewing code, you are not testing features, you are looking at an idea and saying what could go wrong in there, even before it becomes code. And for tools to be able to do that, it was always constrained by what you give the tool. And then the realization that giving a perfect representation - if such a thing existed, and it doesn't - a perfect representation of what it is that you want to build, to a tool, so that the tool could make the right inferences, was always a big part of the problem. Because the best representation that you can give a tool is actually what you wrote. You write the system; that's the representation of the system in code, right? And then you get caught in that loop: do I have to actually write the system so that it can be formalized and passed to the tool, so that the tool can tell me what the threats would be? And the promise of AI is that all of a sudden you come in and say, hey, now I don't have to be so formal. I can just discuss ideas in natural language with this magic box, and somehow something useful comes out the other side. And in my personal experience - and I've been playing with this quite a lot lately - it's not that easy. There is the wow factor of just going into ChatGPT and saying, "This is my system, what are the threats?" and getting this list. But then when you go deep into the list, you start seeing that sometimes it's too generic, sometimes it's too much, sometimes it's "what the hell is this thing talking about?" And that's where I put the "very badly" in the use of AI.
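
(As a rough illustration of the "just ask ChatGPT" pattern Izar is describing, here's a minimal sketch. It assumes the OpenAI Python client; the model name and the system description are placeholders for illustration, not anything the panelists actually used.)

```python
# Minimal sketch of the naive "describe the system, ask for threats" pattern.
# Assumes the OpenAI Python client (pip install openai); the model name and
# the system description below are placeholders for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

system_description = """
Browser -> REST API -> PostgreSQL.
Users authenticate via OAuth; the API also calls a third-party payment service.
"""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a threat modeling assistant."},
        {
            "role": "user",
            "content": f"This is my system:\n{system_description}\nWhat are the threats?",
        },
    ],
)

# The reply is often the generic "wow factor" list Izar describes; a human
# still has to judge which items are specific and valid for this system.
print(response.choices[0].message.content)
```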

Chris Romeo 09:26

Yeah, makes sense. So Kim, I'm gonna come to you now and get your perspective on this. From the things that you're looking at and what you're seeing happening across our industry, how do you see threat modeling and AI bumping into each other right now?

Dr. Kim Wuyts 09:42

Yeah, I think the example Izar gave about the LLMs - when ChatGPT became publicly available, you saw a lot of people playing with it and saying, "Look, I put in this prompt with three sentences about a scenario, and look, I got like 20 useful threats." And that's great, because that saves you some time, but it's just the low-hanging fruit. And even then you're not entirely sure that it makes that much sense; you still need to reason about it to check whether it's actually useful and valid output. So I think where we are now is that it gives you a good initial thing - people suffer from the blank page syndrome, and you get something to get you going. But it's not really automation, as people like to think about that AI part of threat modeling, at least according to what I have been seeing.

Chris Romeo 10:47

Yeah, and so Brook, if we were to put, let's say, six people in a room with a very diverse set of backgrounds and perspectives - we've got a developer, we've got a product person, the thing that we talked about in the Threat Modeling Manifesto, that's the perfect scenario. And on the other side of this competition, we put ChatGPT. What's the score gonna look like in that type of head-to-head threat modeling experience right now, in your view?

Brook Schoenfield 11:17

Well, literally, I've tried exactly what Izar and Kim have said, where I've thrown in a description of the system. And it's exactly as Kim describes it: you get this really high-level output. What it's doing, actually, is going to its bank of information and saying, "Oh, that seems to look sort of like this, so it's probably this kind of a problem." And there's no specificity; it's very general. And if you poke at it - exactly what you said, Izar - if you poke at it and say, "Well, now, are you sure that this particular system has this problem?" it'll either go, "Oh, I'm sorry" - it's very good at doing that - or it'll give you another round. I've even made them go around and back around and reiterate the same thing they said, only differently. So it goes in circles. So I think the score is going to be something like 20 for the AI and 80 for the humans. Especially if you have an experienced threat modeler - if you have Izar on the team, they're going to run up close to 100.

Izar Tarandach 12:39

That brings up another thing. Kim brought up the blank page issue that we mentioned in the Threat Modeling Manifesto. Now, another thing that we brought up in there, and that I think AI is basically going against, is the anti-pattern of the hero threat modeler - the one person on the team who's going to go and do the whole threat model, right? And now we have extended this hero threat modeler into another layer. We are saying, here, take the whole system, give me a threat model back. But that's just one point of view. And even though it's one point of view that has the whole internet's knowledge behind it, it's still one point of view. So all those other people in the room - all of a sudden, the way things are today, we are only giving them the opportunity to give input into how we describe the system. Not into how they think things could go wrong; that we are compartmentalizing into that one LLM, which all of a sudden becomes the hero of threat modeling.

Brook Schoenfield 13:47

You know, the one thing that I want to come back to, and I think it's really important, over and over and over again - and if you talk to anybody who teaches threat modeling, like I do, someone like Adam; I've checked this with Adam, I've checked it with all of you - we see it over and over again that people get threat modeling really fast. They get it; that's not a real big problem. And you've proved that a million times, Kim, with your ice cream and various other analogies, right. But they get stuck in only the places they know: "Oh, I'm going across the internet, I guess I should encrypt with TLS because that's an untrusted network," or "I've got compartmented, you know, user-to-user data, so I should authenticate." But it's hard for them to look comprehensively. And the one thing I want to highlight - both of you said it, but it's really important - is the blank page syndrome. Getting something started. Instead of asking for the threat model, asking the AI - and I haven't tried this - "What are all the domains I should look at for this system?" might actually be a huge win. Just getting started is a big deal for people. And on looking comprehensively after that, I'm in complete agreement: it just goes around in circles. And even something trained more specifically, with lots of issues and patterns, will still be constrained to those. It's not creative. It's only creative with the stuff that it already knows.

Izar Tarandach 15:35

No, I'm just going to throw a spoke in the wheel here. We've been talking about giving the description of the system to the LLM and getting threats on the other side. One thing that I did have some success in trying was taking that problem that we keep discussing all the time - the "think like an attacker" thing. Most of us have moved past it and said, this is not feasible, this is not doable. But all of a sudden, now we have a place where we can go and say, "Given this thing, how would you attack it?" And then I've got a list of possible attacks, which before you didn't have. Which is not going to create a threat model for you, but it might very well inform a threat model for you. Because people are more - I don't want to use the word ignorant - but people know less about how you could possibly attack something than about how you could possibly defend it. Right? Defending is closing stuff, and attacking is poking at those closures and seeing if you can pry them open. And I personally think that how you attack something is a much more significant set of knowledge than how you defend it.

Chris Romeo 16:50

Ken just asked an interesting question in regards to what Brook said here. Because Brook said you could ask the AI to tell you what domains - you know, give you a list of domains. And I think this is a pretty interesting question; I actually want to know the answer to this. "How would the AI know which domains unless it's reviewing the code? How does it get context? How does an AI get context on an existing system?"

Brook Schoenfield 17:13

I think you have to describe it, and it comes in the prompt. Again, we're back to: can you describe it enough, and does it know enough about those sorts of systems? That's a different problem in AI. And I'd like to see an AI trained on reading the code and coming up with the map - the attack surfaces and the defense points. That might be an interesting AI. I'm not building that, by the way. Devici, hint.

Chris Romeo 17:55

We're in the midst of it.

Brook Schoenfield 17:56

But that is a really interesting problem. There is a company - I'm not going to name them because I don't want to do any vendor commercials here, I hate doing that in these things - but there is at least one company whose tech I have seen, and they look at the code and produce a map from that code that is architectural. And it is a view. Going back to what you said, Izar, there is no correct view. That's the whole point. All maps are wrong, but many maps are useful, right? That's very important. I forget who said that - George something. But it doesn't matter; it's a truism. And producing a map that's useful from the code is a really useful thing. And I could see AI being able to do that, being applied to that problem really, really well. It's more discrete. So the way I'm thinking about AI in threat modeling, and all of security really, is instead of looking for the great grand "replace Izar, replace Kim, replace Chris" - let's not do that, at the moment. Let's look at really discrete problems and solve some things that will help the Izars, the Chrises, the Kims, but more importantly, help the hundreds of thousands of developers, the millions of developers, who don't have all that attack knowledge and don't have time to go get it.

Chris Romeo 19:35

Yeah, that's something that we've definitely seen throughout the lifetime of the security industry. I want to come back to Kim. I've got a question that kind of branches off some of the direction Brook was going in here, talking about training. I guess I'm curious on your thoughts, Kim: could we train a model in real time to be better at one specific threat model? Like, what if we funneled in all the design documents? I don't know if people still write design documents; maybe they don't. I'm a little more old school. But even all the user stories - what if we sent the LLM all of the user stories that people had written in the last couple of years, embedded into the LLM, and then in real time we asked the LLM to give us a threat model? So we almost trained it - it's almost like just-in-time training for the model by sending all the design documents. Do you think that would get us any closer?

Dr. Kim Wuyts 20:30

I think that would be a great win, because I think one of the burdens of the manual threat modeling process is building that model, building that representation of the system. So if you can use what is already there - the documentation, the user stories, the, I don't know, client-server view diagram, whatever you have - if you can bring that into this type of AI thing and it deduces what you need, a DFD or whatever you want to use to do your threat model, I think that's a huge gain. And that will make people want to jump into it more quickly, because I see how these LLMs and AI in general can be a great win, because they have all that knowledge. And if you ask the right questions, they hopefully - that's another discussion - but they should give you really easy access to the exact information you need. But that means that you need to be able to provide the right prompts, the right information about those models. And that means that you shift the work that you normally put into finding the threats into making a precise description of those models, because the system needs to have all the details, because you're relying solely on that hero threat modeler AI. So, coming back to the question: yes. If you can deduce that from existing documentation without much overhead, I think that would be a huge gain. Absolutely.
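
(A minimal sketch of what that "just-in-time context" could look like in practice - feeding existing design docs and user stories to the model before asking for threats. This assumes the OpenAI Python client; the folder name and model name are hypothetical, and a real system would chunk or retrieve documents rather than paste everything into one prompt.)

```python
# Sketch: stuff existing design documents and user stories into the prompt
# so the model works from what the team already wrote, not a blank page.
# Assumes the OpenAI Python client; "design_docs/" and the model name are
# placeholders. A production version would chunk/retrieve, not concatenate.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

docs = []
for path in sorted(Path("design_docs").glob("*.md")):  # hypothetical doc folder
    docs.append(f"## {path.name}\n{path.read_text()}")

context = "\n\n".join(docs)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder
    messages=[
        {
            "role": "system",
            "content": (
                "From the documents provided, describe the data flows "
                "(a DFD in text form) and list candidate threats. "
                "Do not invent components that are not in the documents."
            ),
        },
        {"role": "user", "content": f"Design documents and user stories:\n{context}"},
    ],
)

print(response.choices[0].message.content)
```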

Chris Romeo 22:07

Yeah, I want to highlight one of the points I took away from what you just said: there's danger in pushing the effort into creating the better prompt. What's the trade-off between just going and doing the threat model versus writing the perfect prompt that gets the LLM to give me the answers?

Izar Tarandach 22:28

I know this one. It's something that I've been struggling with since I started playing with this stuff, right. And what's been helping me a lot is - I don't know if you guys have been to Reddit or whatever, but there is this ELI5 thing, "explain it like I'm five," right. And I try to keep that in my head: I'm explaining this to something that's not exactly listening to me and thinking at the same time; it's going to get one block of information, decide something, say something. And in fact, that's something that I don't know if it exists out there, but I would love to see an LLM that actually asks for clarification. So I start giving it the data, and rather than giving me an answer, it says, "Explain this a bit more to me. Explain this to me like I'm five. Give me a bit more detail on this, give me a bit more detail on that." Right? And then, in the process of threat modeling, I would love to see not one LLM doing the whole thing, but small LLMs, small agents here and there, popping in along the process. So when I'm explaining to the system, "Hey, this is what I'm building," okay, it's going to keep asking me prompting questions, so that it gets a better prompt at the end that describes my system. And then when it's coming up with threats, it's going to throw the threats at me and say, "What do you think about this one?" Almost playing Elevation of Privilege with me, and saying, "Do you think that this one applies? If not, why?" Right? So it's almost like having - for the overuse of the term - a copilot while you're doing threat modeling, that's helping you in those very small tasks that you need to get a good threat model done. But it's not doing it for you. It's bringing you new information, it's bringing you knowledge, but it's not doing it for you.
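
(A small sketch of the "ask clarifying questions first" copilot Izar describes - the model asks one question at a time until it has enough detail, then proposes candidate threats for the humans to accept or reject. This is a hypothetical illustration using the OpenAI Python client; the model name, the "THREATS:" convention, and the prompt wording are assumptions, not an existing tool.)

```python
# Sketch of an LLM that asks for clarification before proposing threats.
# Hypothetical: the system prompt, the "THREATS:" marker, and the model
# name are illustrative assumptions, not a real product's behavior.
from openai import OpenAI

client = OpenAI()

messages = [
    {
        "role": "system",
        "content": (
            "You are helping a developer describe a system for threat modeling. "
            "If the description is incomplete, ask exactly ONE clarifying question, "
            "phrased simply (explain-like-I'm-five). Only when you have enough "
            "detail, reply starting with 'THREATS:' followed by candidate threats, "
            "each phrased as a question for the team to discuss."
        ),
    },
    {"role": "user", "content": input("Describe what you're building: ")},
]

while True:
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    text = reply.choices[0].message.content
    print(text)
    messages.append({"role": "assistant", "content": text})
    if text.strip().startswith("THREATS:"):
        break  # hand the candidate threats back to the humans in the room
    messages.append({"role": "user", "content": input("> ")})
```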

Chris Romeo 24:23

Yeah, and I'm not going to talk much about Devici here, but I will say that's the approach that we've taken. What I'm calling this is "AI infused." So we're not building a chatbot - everybody keeps adding chatbots to their products, and I'm like, that's not AI, sorry, you're not making my life any easier. What we're doing is figuring out how we can infuse AI at certain points in the threat modeling system to make the results better for you as the person - or faster, or whatever. And in a lot of cases you won't even know there's an LLM driving it. That's my goal, because I don't want you to know. I want to be able to tell you on a slide, maybe: "Hey, here's all the places LLMs are making threat modeling better for you." But it's not like a little - I mean, as much as I loved Clippy, I really don't want Clippy to come out. For those people who aren't old enough to remember this, Clippy was a Microsoft Office construct; they created this little cartoon character, and it would pop up on the screen and ask how it could help you. It was really the first chatbot, I think, that was ever in existence, but it provided limited value.

Izar Tarandach 25:22

It was just so annoying. Go and see.

Chris Romeo 25:38

Yeah, and it would bother you. That's right, it would bother you sometimes. And so that's what we're talking about when we reference Clippy. Don't interrupt me when I'm in a flow - and you know what I mean, right? Don't interrupt me, because I need to finish this doc right now.

Dr. Kim Wuyts 25:57

So I like the idea of the AI asking you questions to get more input. I also think that the other way around can be interesting: if you do AI-assisted manual threat modeling, it could tell you what questions you should ask yourself or your team - like guided threat modeling. That would also be an interesting take, I think.

Brook Schoenfield 26:23

Well, I think that's actually a space that what I've seen in the AI space has not covered, which is the thing that helps. So think about the major software security problem here, and let's bring it to threat modeling, or any part of it: most developers don't know where to go to start. And what seems to be missing - instead of being the all-knowing genie that seems to know everything, where you give me your three wishes and I'll make you rich and famous and your software will actually work - is something that just answers the basic questions: "What's my next step?" Okay, I started building this thing. "Is this the time to threat model? I don't know." What I know is you STRIDE and data flow diagram, and somehow that magic delivers you the threat model after the architecture is all done. Hey everybody, what's wrong with the statement I just said? "After the architecture is all done, after we've all designed things" - how many times have they said, "No, we're not ready to threat model"? And of course, they're making a huge mistake. Something they could just ask, "We've started - when do we do the threat model?" and that could just say to them, "Now is a good time; you're going to iterate." Just having that little piece of information, which each of you would say to them. Wouldn't it be great if they could just have a little copilot that would say, "Now. And here's where you start"?

Chris Romeo

Yeah, that's a good point. AI could definitely influence the process that people are going through for architecture as well. And it wouldn't take much to just give people the right nudge and push at the right time to help them. And maybe that's part of a bigger AI that helps design products, for both product managers and engineers. Trademark, patent pending.

Brook Schoenfield 28:34

There's a product idea there, listeners, there's a product idea. And to be clear, by the way, my companies are not building that. I don't know about GPT.

Izar Tarandach 28:45

Let's talk, Chris, if you do build it.

Brook Schoenfield 28:48

But we each get a few, you know, thousand shares in your company, please.

Chris Romeo 28:53

There you go. So we've talked about the present. And when we titled this webinar, we also advertised that we would look into the future. So let's change gears and go from the present into the future - what we think is going to happen, based on all the things you've studied in AI and the decades of knowledge and experience you have with threat modeling. Let's start in the short term. Let's keep our scope to the next 24 months: what is the impact of AI in the next 24 months on threat modeling specifically?

Izar Tarandach 29:30

There's going to be at least one big breach and the justification will be, "It's because we did what the AI told us to do."

Chris Romeo 29:39

Interesting. Tell me more.

Izar Tarandach 29:44

The over-reliance on AI, in all its facets in the security industry, at some point is going to lead to a negative outcome. And the question is just how big it's going to be. And then people will say, "Oh, we should have thought about this before."

Chris Romeo 30:03

This is an issue you and I have discussed and debated in other forums. Every time Izar and I talk about AI it always comes back to trust. For me, it comes back to trust. So how am I going to trust the output of this thing? And that's a great point.

Brook Schoenfield 30:15

And you shouldn't, by the way. So that's the thing I keep saying to everyone: the change from deterministic to probabilistic software. Let me just set the table here, if I may. We're going from something that, if it's wrong, given exactly the same conditions, will always be wrong - and given exactly the same conditions will always turn out the exact same answer, whether wrong or right. That's deterministic, right? Unless, you know, you get flaws in the hardware that are intermittent or periodic in some way. But nevertheless, it's deterministic. We have now moved into a world where I have an error rate - let's say it's 5%. So out of every 1,000 responses, 950 of them will be correct, and 50 of them will be wrong in some way. And as near as I can tell, people aren't prepared to get an answer and check it, because we're used to computers always computing the same way. We're not used to computers that will just give us something which could be wrong, because it's probabilistic, and we know there's an error rate, and how it will be wrong will be different every time. That, I think, is really important for us to understand as we go. So, you know, when it matters, I check the answers. And Pi - not to make Pi feel bad, because it's the most polite of all the chatbots, and I love it for that, and it's a wonderful conversationalist - but right now I'm working on a certain Python problem, and every single link it's given me has been wrong.

Chris Romeo 32:22

So Kim, what are your thoughts on this? This particular issue?

Dr. Kim Wuyts 32:25

Since we're diving into all the things that can go wrong, I want to extend that. So, the process behind the LLM - that's definitely an issue, because we don't understand it; we have no transparency. But it starts at the input, which is typically not moderated input. We typically don't have control. We have some solutions where you can say, "only collect information from the company" or something, but if you don't have that, the ChatGPTs and DALL-Es and whatever basically just scrape everything from the internet. So, first of all, you know that there is some issue, because not everything you read on the internet is true. Also, there are some lawsuits coming up, because it wasn't put there with the intention to be scraped and used for different purposes. So there's a lot of copyright infringement, privacy issues. So yeah, I think we will move to LLMs and AI where the input is more moderated. But then again, the question remains: where do you get it? Because what can you actually use?

Chris Romeo 33:41

Yeah, and that just reminds me of one of my favorite tweets that I've seen of all time. It says something to the effect of, "You can't believe everything you read on the internet. -- Abraham Lincoln."

Izar Tarandach 33:54

But actually, that reminds me of something that I asked Adam Shostack one of these days: why does he base his teachings on Star Wars and not Star Trek? What's so good about Star Wars and not Star Trek? And as always, his explanation really made me go, "Okay, I'll shut up and go home." He said that the difference between Star Wars and Star Trek is that in Star Wars, you have a problem, you have the hero's journey, and things get solved in the end - happy ending or not. But in Star Trek, every time they had a problem, there would be a gadget coming in that solves exactly that problem. And today, I think that the gadget that we are seeing that's supposed to solve every single problem is the LLM. So I think that we should be a bit more selective, a bit more careful, again, in the trust that we give - and not only the trust that we give to what comes out of the LLM. Now we have problems of trust in what goes in: garbage in, garbage out. We have problems of trust, as Kim said, in where this knowledge is coming from, and whether it's knowledge that I'm even allowed to use, for one reason or another. Right? So there are so many different points in here where you have to apply trust. And I don't see people doing that - if I weren't already an insomniac, I would be up at night.

Chris Romeo 35:36

So, has anybody seen - are there any examples anybody can think of, of people over-trusting AI at this point?

Brook Schoenfield 35:45

Well, I do want to throw in - and I don't have the specifics here - but my friend Helen Umberger, who's fabulous because she's in a highly regulated industry, has been studying some of the suits. And apparently, there was an insurance company that wanted to replace its claims adjusters with an AI. And they have a huge class action suit against them, because the AI denied tens of thousands of appropriate claims - because it wasn't ready for prime time, and there's an error rate too, as we've already talked about. So there are suits. I don't know the specifics, but I think if you just ask your friendly AI to search for you, it would probably find them for you.

Izar Tarandach 36:32

Matt Coles actually had a very, very nice one, when he said that there are some LLMs that are being trained on large repositories of public code, and they may or may not be taking license into account. So you could ask for a snippet of code, and it comes under the GPL, and all of a sudden you are cutting and pasting GPL code into your code. What does that do for you? Right?

Chris Romeo 36:58

Yeah, that's potentially a lawsuit in the future.

Dr. Kim Wuyts 37:01

Yeah, and not a security one, but there was this lawsuit - I don't remember which airline - that had a chatbot, and the chatbot gave wrong information. And, well, they were forced to pay the customer anyway, because they were responsible for what the chatbot was saying.

Chris Romeo 37:25

So I think where we're landing with these examples, in our conversation about trust and AI, is: this thing is not ready for prime time. It's not ready for us to put our security - I'm gonna make a bold statement here, and y'all can argue this point with me if you want to - but AI is not ready to be in the critical path of security decisions.

Izar Tarandach 37:48

Not only do I agree with that, but I will say there are some sectors where we are not yet hearing anything about the AI revolution, the LLM revolution. You're not hearing about airplane avionics that are incorporating LLMs. We are not hearing about medical devices that are incorporating LLMs. And I think that's a very clear indicator that the technology is just not ready for total prime time. And that goes to a discussion that we have had in the past about the agency of these things, right? And the whole discussion about prompt injection. Should I be able to give a prompt that bypasses whatever guards an LLM has, that makes that LLM take an action that carries a risk that I'm not ready to absorb? And the answer, I think, is a resounding no. So again, that to me says the technology is not ready for responsible use out there. Sure, the chatbots and whatnot - yeah, there'll be an error rate, and sometimes things will fail and blow up. That's all good, and...

Chris Romeo 39:04

So when the stakes are low.

Izar Tarandach 39:07

I wouldn't trust it where I wouldn't trust a person.

Dr. Kim Wuyts 39:11

Yeah, from a legal perspective, at least in Europe, it's not allowed either. You have GDPR, and you have the AI Act, which says that for high-risk application contexts, automated decision making is not allowed. There need to be human checks. But of course, if that human check is just me pressing a button because I trust what the system is telling me, then, well, yeah.

Izar Tarandach 39:38

I'm not familiar with the AI Act. Do you think they are finding ... so hard recently, in the AI Act.

Dr. Kim Wuyts 39:44

I haven't - I'd need to look into that. They have different levels of risk, and they define some things about that. Yeah, I can add it in the chat later; I'll link to that.

Brook Schoenfield 39:59

So, what I remember reading about is that some time ago - this was in the mid-teens - an ML, which is not an LLM, so let's be clear, machine learning - machine learning was better at reading radiographs and identifying lung cancer, by a huge percentage, than a doctor. So when we get really specific, if we can make the task specific enough, we get a higher level of trust. It still makes mistakes, so it still needs oversight, right. So no disagreement there. But I think we have to understand that when we get specific, these things have been pretty good for a while now at doing particular jobs, like reading a radiograph. Which isn't really reading the radiograph the way you or I do, you know - it's reading a whole bunch of samples of the pixels of the radiograph and comparing those. So let's be clear about what's going on here: it does not see the way you and I see. It reads a bunch of samples - thousands of samples - of something and says, "Oh, this mathematical pattern is on this side, and this one isn't." And it's gotten pretty good at those jobs. And when we get specific, I think we can have more trust. Do I want to let go of all trust? Are you kidding? And just press the button, as you said, Kim? No way. If you're using an ML to judge whether I have lung cancer - and no, I don't smoke, and have never smoked, but you know, you can still get it, just less likely - but still, if I were getting a radiograph read in that way, I would want it to be checked by a doctor, a very experienced doctor. And, you know, having it do the first pass is good. So, again, assistance seems really, really interesting. I'm really worried about low code - think about threat modeling, where you have people who are using low-code or no-code AI to write for them, and they don't have the experience at all to even know what a threat model is. I'm not saying they're clueless - I don't believe in calling people clueless or stupid or any of those things, you know that. But, you know, that's not their domain of knowledge. We're gonna get a lot of crappy code and a lot of un-threat-modeled code. We already have a huge...

Dr. Kim Wuyts 42:38

I was gonna say: but because of the AI, people think that it's okay.

Izar Tarandach 42:45

That's the trust again.

Chris Romeo 42:46

So we have a false sense of security, then in the marketplace right now with what AI can actually do for us.

Izar Tarandach 42:54

Yep.

Brook Schoenfield 42:56

I think that's the bottom line, for all listeners - whoever may be listening to my brilliant co-panelists and dear Chris: watch your level of trust. Just watch it. Because, yeah - do you really want to get that cancer diagnosis based just on even a really specific ML? Or do you want a really great oncologist to back it up? Yeah, I'm going to suggest the latter.

Chris Romeo 43:30

You want a combo of both, right? And this is how I've been thinking about AI and how it can have the biggest impact, I think, for the next five years, maybe even longer than that. I want the AI to make Izar 20% better at developing secure code, and Kim 20% better at threat modeling - even though I know Kim is a brilliant threat modeler. Tooling can help all of us; maybe it's making Kim a 20% more efficient threat modeler - maybe it's not better, it's just more efficiency. And so that's the model where I see AI having the biggest impact: it's not going to replace anything that we do. I don't see a world where we just have the AI do the threat model and we would all sign off on it and say, "Yeah, that's perfect." But there's a world where that AI can help us be better at what we do. And I think that's the near-term value proposition.

Brook Schoenfield 44:29

And in fact, there are tools. Think of, you know, the thing Stuart McClure is doing - again, I don't want to name a product - where, you know, static analysis has bedeviled us for 20 years, because there's big promise and it just often doesn't deliver. Okay, we need this tool. We desperately need to look at our code and find vulnerabilities where we've made mistakes or just don't understand. We desperately need that tool, and people are applying AI in a very specific way, to say: I'm going to train you on all the places that code looks vulnerable, and all the places that are false positives, and give you all the information you need, in order to train you to be able to find those and reduce them. That's a very interesting problem. Because again, it's assistive and more efficient, rather than just saying, "AI, go write secure code for me." And I actually spent about an hour and a half talking to a product manager at GitLab about this. It was in the spring - or maybe it was in January, I don't remember, it was at some conference I spoke at. No, it was the beginning of the fall. And I got a chance to talk to this brilliant guy, and he said, "Yeah, we're not there yet, but we're working on it. We're trying to find specific enough cases." Exactly as you say, Chris - and I know I'm moving over to threat modeling, but I think it's similar - if we can find specific enough cases, we can start getting some real value. Anyway.

Izar Tarandach 46:07

But that's the thing. I think that we closed an interesting loop here, because we started talking about LLMs as the solution for threat modeling - as in, give it a scenario, get a threat model back, right. Now, the LLMs - the way that they benchmark themselves, if I understand correctly, and I could be completely wrong here - is by the number of billions of data points that they have been trained on. So they're trying to be as generic as possible, in terms of embracing the whole thing. Or they are extremely tuned, as in Brook's example of the ML - not the LLM, the ML - which is something that does one thing specifically very, very well, because it has a number of parameters that are very well defined that it can hone in on, right. So if we move to perhaps fewer data points, but better trained, so that those capabilities emerge, then we can have those smaller agents that help us along the process of threat modeling, to do the things that we need to get done better, right. So that we can end up as better threat modelers with a better threat model - more efficient, faster, more embracing. I love the idea of "AI infused," Chris, because to me that totally gives the idea of somebody holding my hand and going through the process, in the difficult parts of it, and offering me some backup. But still, I'm leading the process. So I think that we closed the whole loop between where we are now, what we would like this thing to do, and how we get to a better place: by tuning both expectations and the models, together.

Chris Romeo 47:57

Yeah, and when you think special purpose, too - imagine if we could train an LLM with all of the threat models that the four of us collectively have worked on in our careers. We don't have access to them - we couldn't even if we wanted to, because we didn't keep them; some of them happened organically on a whiteboard, because someone said, "Can you look at what we're doing?" and we said, "Oh, let's start going through this," right. But imagine if we could take all of that data - all those models over decades and decades - and feed them into an LLM and come up with a special-purpose LLM. It's not good at driving a car. It's not good at writing Python code. It's good at threat modeling, and at taking a diagram and drawing connections to the threats that go with it. I think that's a future we could aim towards as AppSec people, as threat modeling people, and as budding AI people.

Izar Tarandach 48:50

Well, I think you said it. Unfortunately, my big fear - and I'm just going to throw this out here - is that we're actually not going to go down that path. Some of us will, but many humans will unfortunately go down the path of "it's computers, they're always right, and this thing seems to be right, and I'm just going to trust it." And I'm actually afraid of that future, for security reasons.

Chris Romeo 49:29

Yeah, that's a more frightening future, where people become reliant on it - like, if we lose the ability to think as a species, we're in big trouble. And I don't know that I'm gonna go that far. But if you think about 100 years in the future, 200 years in the future, could an LLM replace critical thinking? I would already say, in my lifetime - and I know I'm opening Pandora's box here - I've watched critical thinking skills, from my youth to where we are today, and there are very drastic differences in the ability to think critically and solve problems. I know I just opened a big, big, big issue, but let's deal with it.

Izar Tarandach 50:18

But my problem is not that we are losing that capability, it's that we may be giving up that capability, because we are relying too much on today's gadgets. Yeah, that's even worse.

Chris Romeo 50:29

And that's the danger with threat modeling, if we kind of zero back in now: people become reliant on thinking that the LLM has the answer. We all know there have been many times in our careers where we threat model something - and I know each of you has had this experience, and that's why I'm going to share it - somebody will come up with a threat, and you're like, "I never thought of that in my whole life. I've been doing this for 25 years, and I thought I knew every threat in that category, and you just blew my mind with something I'd never even fathomed."

Brook Schoenfield 51:02

I'm in the middle of delivering a 70-page threat model for consulting work, right. Very fancy, everything. And I realized I forgot to go ask about - you know, mostly it's one of the big providers for authentication, but they also have a local authentication store. And I forgot to go down that rat hole and figure out what the threats were to their local authentication store and how well they've done it. And I realized I had written most of the threat model and had to, embarrassed, go back and fix it - because I make mistakes. I'm just a limited human being. You know, have any of you - I know you are all perfect when you threat model, but I'm not.

Chris Romeo 51:44

No, no way. Nobody is. And that's why, coming back around full circle to the manifesto that we talked about, the diversity of the team is important. And to Izar's point, which he made here during this conversation, the LLM is one member of the team. It's one member of an eight-person, functionally diverse team with different expertise and different experiences. And that's what allows people to come up with threats we've never heard of. Yes, we make mistakes. And Brook, I love the fact you brought that up, because it's so important. Collectively, if we counted on this call right now, we probably have 100 years of threat modeling experience - maybe a little bit less than that, but close to it, right. But people need to know we're not perfect, we make mistakes, and the LLM is going to make the same style of mistakes. And so, bringing it all the way back around: the LLM is one seat at the table. That might be the title of my next TED Talk, "The LLM Is One Seat at the Table."

Izar Tarandach 52:47

It's a very good one.

Chris Romeo 52:49

It's your idea, so you have first dibs on the TED Talk, if you want.

Izar Tarandach 52:55

Actually, LLMs are not at the head of the table, it's more like...

Brook Schoenfield 52:59

It's really not at the head of the table - it shouldn't be - and we're too fast to put them there, because they seem so all-knowing. We're too fast. You know, it might be that primate thing - I hate to go all anthropology on you - but it might be that primate thing where, you know, the thing that seems to know more, as primates we go, "Oh, you should be the alpha, you should lead." And we have to be really careful, because it's not a being that we're talking to. It's a bunch of artificially created synapses that we don't even understand how they work.

Dr. Kim Wuyts 53:38

But when the internet - when websites came, people were just looking at Google and taking whatever knowledge they got as "this is true." But now we go a step beyond and take everything that the LLM or whatever is saying not just as knowledge but as valid decisions, and that's a problem.

Izar Tarandach 54:01

Brook's prompt just put me in mind of the opening scenes of 2001 - the apes around the monolith. That's all I can see.

Chris Romeo 54:13

Yeah, Adrian just had a great comment here: "LLM is an apprentice slash trainee." That's the model. That's the way we need to think about these things. But we're almost out of time here.

Izar Tarandach 54:26

In the last Security Table, we came to the conclusion that the best way to treat an LLM is as a junior programmer.

Chris Romeo 54:32

Yeah, that's true. That's where we landed with that as well. Maybe that's the same thing. I don't think of anybody as a junior threat modeler, but somebody who's newer to threat modeling - maybe that's the perception we need to take when we think about AI and threat modeling. It's as if we had somebody who was new to the discipline as a member at the table.

Brook Schoenfield 54:53

And new people often have insights that we, you know, don't have, so that's not necessarily a bad thing.

Chris Romeo 55:03

Yeah, definitely there's something that we can take away from the LLM being a member of the threat modeling team. It may point out something that somebody else invented a long time ago that we just missed in the industry, because we just never knew. So it could provide us with some value there. I'd love to get to a key takeaway - we can inspire the audience, give them a key takeaway from this conversation. And so, Izar, I'm always gonna go to you first.

Izar Tarandach 55:34

Trust but really, really, really, really verify.

Chris Romeo 55:41

Yeah, that would fit on a t-shirt too. Trust, but really, really, really, really verify. I like it. Kim, what about you? What's your final thought?

Dr. Kim Wuyts 55:49

Yeah, it's basically the same. I was gonna go for AI as assistant, not as decision maker. Because, well, if we don't understand something, we cannot trust it.

Izar Tarandach 56:01

That's good.

Chris Romeo 56:02

Brook, we'll give you the final word.

Brook Schoenfield 56:04

I don't have anything to add here for once. That's it.

Chris Romeo 56:10

Awesome.

Brook Schoenfield 56:10

What my friends said.

Chris Romeo 56:11

Awesome. Well, this has been a great conversation. Thank you, Izar, Kim, Brook, for sharing your brilliance with the audience here. A great amount of depth, and hopefully people are thinking - that's really what we're trying to do right now, just get people thinking about this stuff: understand what's possible, but then start thinking about some of these things, because these are all issues that we're gonna have to resolve over the next couple of years. So, folks, thanks for tuning in to this Devici webinar on The Present & Future Impact of AI on Threat Modeling, and we look forward to having you join us for another conversation soon. Have a great day.
