Unlocking the Power of Threat Modeling for Secure and Private Design

Learn how to apply secure and private by design to the things you build. Whether you're just starting with threat modeling or looking to enhance your current practices, this session will provide the necessary insights and tools. Register now to secure your spot and take the first step towards a more secure and private software development lifecycle.

Speakers

  • Chris Romeo, CEO, Devici

  • Dr. Kim Wuyts, Cyber & Privacy Manager, PwC

  • Matt Coles, Threat Modeling Author

Transcript

Chris Romeo  00:02

Hey folks, my name is Chris Romeo. I'm the CEO of Devici. And welcome to our webinar here. It's called Unlocking the Power of Threat Modeling for Secure and Private Design. And as always, I'm super excited to be joined by Dr. Kim Wuyts. Kim, I think it'd be great if you just gave a quick bio to help people that maybe haven't met you before – haven't heard you speak. I know you've spoken on a lot of big stages now – RSA, global OWASP keynotes, and a lot of other things. But just in case folks don't know who you are, give us a quick bio.

 Dr. Kim Wuyts  00:37

Yeah, sure. Um, I am currently Manager of Cyber & Privacy at PwC in Belgium. And before that, for a long time, I was a researcher in academia, in Belgium, at KU Leuven. And there, I mainly focused on threat modeling for privacy. I also developed the LINDDUN privacy threat modeling approach. So yeah, this topic today is kind of my two passions combined – privacy engineering and threat modeling. So I'm really excited to be here. Yeah.

 Chris Romeo  01:10

And I have lots of questions about privacy engineering, too. So I'm still reading books and trying to figure this out as I go forward.

 Dr. Kim Wuyts  01:19

Yeah.

 Chris Romeo  01:21

So I guess the kind of focus we wanted to do with our time here in this webinar is – we wanted to explore secure by design and private by design. And then also, how does threat modeling fit together with these two things? A lot of attention is being paid to secure by design right now because of all the work that CISA has been doing – their secure by design alerts, and their secure by design document, and their pledge that people are signing, swearing they're going to do secure by design forever, or something like that.

 Dr. Kim Wuyts  02:03

Okay, I think you need to repeat the last part, because there was some glitch there.

 Chris Romeo  02:08

Yeah. We're just gonna talk about secure by design, how you apply these things, and kind of how they all come together. But I think a good place to start, though, is what is secure by design? What's private by design? So, Kim, I'd love to get your perspective on both of those things.

 Dr. Kim Wuyts  02:27

Okay. I think the essence is basically the same for both, just from different perspectives. But the by design part, to me, means that you integrate security or privacy early on – ideation phase, design phase – and then it's the shift left thing, and then take it all through the entire development lifecycle. I think you can explain more about security by design than I can, because, well, I watched The Security Table podcast, and you've covered it a couple of times already. I can give some background on privacy by design. It was a term that was introduced in the mid 90s by Ann Cavoukian, Privacy Commissioner of Ontario, in Canada, I think. So it's been around for quite some time already. She also laid out some foundational principles. Well, one of them is obviously integrated, embedded early on in the development lifecycle. But it's also about having users, individuals, being in control. Having some transparency. There's the privacy by default thing, which is maybe a bit different than security by default. The privacy by default was mainly about having that default setting set to the most privacy-friendly option. So if you imagine, think about Facebook or whatever, where – in the old days, when you made a post, it would be by default public. And that's not privacy by default; it should be by default only with your friends. So that's the by default thing. And then it kind of evolved. And the concept of privacy by design – well, actually, data protection by design – has even been included now in the GDPR and in other privacy or data protection legislation. So yeah.
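Kim's "privacy by default" point can be sketched in a few lines of code. This is purely illustrative – the `Post` and `Visibility` names are invented for this example – but it shows the core idea: the most privacy-friendly value is the default, and wider sharing requires an explicit opt-in.

```python
from dataclasses import dataclass
from enum import Enum

class Visibility(Enum):
    PUBLIC = "public"
    FRIENDS = "friends"
    PRIVATE = "private"

@dataclass
class Post:
    body: str
    # Privacy by default: nobody has to remember to restrict a post;
    # making it public is a deliberate, explicit choice.
    visibility: Visibility = Visibility.FRIENDS
```

So `Post("hello")` is visible to friends only, while going public takes an explicit `Post("hello", Visibility.PUBLIC)`.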

 Chris Romeo  04:40

Yeah. And that's good. Thank you for setting that stage for us here, and mentioning The Security Table. Speaking of The Security Table, I'm happy to add Matt Coles into our conversation here. He literally saw The Security Table bat signal in the sky. And he came rushing into the conversation. Matt, I think you probably just heard us talking about defining secure by design, privacy by design, and Kim focuses in on privacy by design. Matt has a funny voice, but it's okay. It's life – we live and we work at home. This is our lives – how we go. Matt, from your perspective, how about you tee up secure by design? I know we've talked about it a lot on The Security Table. But for those that maybe haven't heard our ramblings, give us your take on it.

 Matt Coles  05:31

Yeah, so, secure by design is hopefully similar to what Kim was talking about with privacy by design, but secure by design is basically making sure that you bake in, and you design your system, from the beginning to have basic security properties. Right? So whether that's defense in depth, or fail secure, or secure communications and making sure that your data is protected – having your basic defense in depth practices, you know, as part of the system from day one, and not something you bolt on later. You certainly can add to it. But with a secure by design approach, we want to start from a position of security.
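The "fail secure" property Matt mentions can be illustrated with a minimal sketch. All names here are hypothetical; the point is the shape of the logic – every error path and every unknown state resolves to denial, never to access.

```python
def is_authorized(user, resource, lookup_policy):
    """Fail secure: errors and missing policies deny access by default."""
    try:
        policy = lookup_policy(resource)
    except Exception:
        return False              # policy backend failed -> deny
    if policy is None:
        return False              # no policy defined -> deny by default
    return user in policy.get("allowed", [])
```

The insecure alternative – returning `True` when the policy lookup fails so the app "keeps working" – is exactly the bolted-on failure mode secure by design tries to prevent.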

 Chris Romeo  05:33

So Kim, you mentioned starting early in the process as well – I think you both talked about the importance of that. I guess let's flip it around and look at the other side of the table here. Starting with privacy – what happens if we don't think about privacy from the very beginning? What are the downstream ramifications once the system gets to production? What are the challenges from a privacy side?

 Dr. Kim Wuyts  06:53

Yeah, so I was just thinking, I forgot to mention one of the foundational principles, which is kind of the answer to this question, so perfect. One of the things there is also that we should aim for a positive sum approach, meaning that it's not just an add on, and it's not something that will conflict with functionality or security or something – it should strengthen each other – privacy and the rest of the things that make the system. And if you bolt on privacy later on, if you have your system ready, and then you decide, "Well, now we want to do this in an anonymous way" – oh, it's gonna break the system, basically. You have to either be very creative, or change some parts of the system. So that's why it makes sense to do it early on. Same for security. I mean, it's not a new concept that you cannot just sprinkle it on later on. There's no magical solution that you can plug in and play. You need to really embed that in the design to do it properly. And to have that most ideal combination of security, privacy, safety, functionality, all of those.

 Chris Romeo  08:05

So then, what's the rework perspective from a privacy engineering side? I feel like we talk about security all the time. And Matt, I'm gonna come to you to talk about the security side, but I didn't mean for this to be security versus privacy either. That wasn't my intention. Kim, from your perspective, and what you've seen, how much rework – how much extra work do we have to do if we try to bolt privacy on? Because I don't think I've ever heard anybody answer this question. So this is not for the audience. This is for me, I want to know.

 Dr. Kim Wuyts  08:40

Okay, well, first of all, Matt knows so much about privacy, too. So you shouldn't just ask him the security questions. But yeah, so what changes? I'm gonna have to give the consultancy answer – it depends. I think one of the core concepts for privacy is minimization or minimality. So what changes, hopefully early on, is that you think about, "Do I really need all this data? Maybe I shouldn't be collecting everything. And if I need to collect everything, maybe I shouldn't process everything. And if I do need to process everything, then maybe after processing, it's sufficient to just store the aggregated version, or delete some of the stuff that I processed and just keep the summary." So, when you create the processes, already think about how you can reduce the privacy risk – which, by the way, also has a positive effect on security, because the less information you have, the less you can lose. Um, so yeah, that's already one of those things. And of course, people often think, like, privacy – we have PETs, privacy-enhancing technologies. So again, it's going to be the solutions that we implement. We do some de-identification stuff, differential privacy – I mean, there's plenty of words you can use there. But to me, that's just like the backup plan. That's when you have gone through the design strategies and principles, and you've done everything like minimization, separating data – as in maybe already processing some stuff on the phone before, and then only sharing the aggregated stuff. So that's definitely, I think, something that is different compared to security.
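The collect-process-aggregate-delete pipeline Kim describes can be sketched in a few lines. This is a hypothetical example (the function and field names are invented), but it shows the minimization pattern: keep only the summary the purpose requires, and delete the raw, potentially identifying records once they've been processed.

```python
from statistics import mean

def ingest(raw_readings):
    """Data minimization: retain the aggregate, discard the raw data."""
    summary = {"count": len(raw_readings), "mean": mean(raw_readings)}
    raw_readings.clear()   # raw records are gone once summarized -
    return summary         # less data held means less data to lose
```

As Kim notes, this helps security too: the summary is all an attacker can ever exfiltrate.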

 Matt Coles  10:51

If I may – I think, Kim, what you're highlighting, and it's consistent, I think, with security as well, is you need to look at this systemically, right? One of the big things – and it goes to the rework question as well – is, you really need to be looking at your system and understanding all of its pieces, all of its moving parts. Where the data is – not just what the data is, but where the data is coming in, where it's going, how it's being passed around within the environment, within the system. And so, if you don't think about it holistically, if you don't think about it comprehensively, then you're gonna end up with piecemeal solutions or part solutions, right? And so rework for security – I mean, you can look at some basic design principles, you know, that often end up being patched or reworked over time, especially if you have older systems, right? Insecure communications – you can always add security on the communications channel. But realistically, or more practically, you really want to look at this more comprehensively and have a consistent solution across the board, so that you're not answering the question, "Well, what about this? What about that?" But in many cases, and especially for the privacy cases that Kim was talking about – those require a holistic solution. Right? They can't be done piecemeal, because data collection and data processing and data aggregation are not something you can bolt a fix onto. Not really, right?

Chris Romeo  12:29

That was gonna be my next question, about that particular angle. Right? It seems like with security, we prescribe that people do things, do these measures early – but if they don't, there's still a pathway to success. But it seems like with privacy, if I don't do those things up front, and I just let data go wherever it goes and be processed wherever it is, it's hard to walk that back at a later time, because once the data is out there, it's out there. You can't say, "Well, now we're going to implement private by design." The data has already been pretty much considered public now, as a weakness of your system, right?

Dr. Kim Wuyts  13:13

Yeah, yeah, exactly, exactly. So if you want to maintain that, you should have a plan upfront so that you know how to manage that – how to, I don't know, even do data tagging or whatever, or some kind of way to keep track of all the data. You can still do that at a later stage, but then you will need to re-process all the data you already have and figure out how to do that, or how to do that in a privacy-friendly way. And as you said, Chris, then basically, for a while there, you have been doing it in a privacy-violating way. So yeah.
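The data tagging Kim mentions is often done by attaching metadata – purpose, collection date, retention period – to each record so it can be managed later. A minimal sketch, with entirely hypothetical names and fields:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class TaggedRecord:
    value: str
    purpose: str           # why this data was collected
    collected: date
    retention_days: int    # how long we may keep it

    def expired(self, today):
        return today > self.collected + timedelta(days=self.retention_days)

def purge(records, today):
    """Keep only records still within their retention period."""
    return [r for r in records if not r.expired(today)]
```

With tags like these in place from day one, "delete what we no longer need" is a one-line sweep instead of the after-the-fact re-processing Kim warns about.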

Matt Coles 13:52

And by the way, just going back to the systemic discussion, right? So Chris, you're right that for security we can often bolt on or rework and patch, you know, pieces here and there. But we know that is significantly error prone, right? Fixing cross site scripting issues or buffer overflows is a great example. Right? You can patch what people report, or what you identify from vulnerability scans or other things. But if you're not doing it in a systemic fashion, you're going to miss stuff. Right? There will be those interfaces that you don't catch – that somebody doesn't immediately discover, or that remain undiscovered or unreported. And so, you know, unless you're doing it systemically, it's error prone. So it's really important to consider it that way. I just want to address that I did see a question in the chat from Leandro about earlier, when I was talking about secure design, secure by design – I did not mention threat modeling. I did not mention threat modeling by design. So, threat modeling is not a design principle. It is certainly a tool that we use. But it's not itself a design principle around secure by design. There are other ways to get to secure by design.

Chris Romeo  15:15

That leads us into our next question, though – it was a natural flow, you kind of led us right down the path here. Because we – certainly all three of us – spent a lot of time thinking about threat modeling, and we have for the last number of decades. And so when we think about threat modeling, what is the role of threat modeling in secure by design and private by design? Matt, I want you to go ahead and take this one first, because it got teed up for you multiple times.

Matt Coles  15:39

Sure. Yeah. So I mean, threat modeling – especially if you do it early, and doing secure by design and privacy by design early, doing that comprehensively, understanding the system as it's being designed, and ideally addressing the gaps before the design goes into production – threat modeling is a perfect activity for that. It is an early activity. It does not require code. It needs a design that is far enough along that you can analyze it for threats, so that you can look at the design and identify: do you have any insecure or – I don't know what the word is, non-private, I guess – insecure or non-private behaviors or system constructs. And you can do that early. You can also do that collaboratively, right? It's not like a code scanning tool where you let the tool run and it spits out results – although there are methods for that. But by and large, the activity is something you can do early, something you can do consistently, and something that helps with looking comprehensively and systemically at your system and your architecture. And so threat modeling is great for that. Again, there are other ways to do this, but they're later in the process, right? So you could do code analysis, you could do security scanning, you could do pen testing – but with all of those, you've already missed the boat on the architecture. And now you're going to talk about rework, or patching, or piecemeal modifications. And as we just described around the privacy side, that may be re-architecting entirely how you do data collection, what notices you present to your users. I mean, there's a host of things that you've missed the window on. You know, it's less efficient, and therefore, we know, more costly if you don't do it sooner.

Chris Romeo  17:39

Yeah. Kim, what do you think? How do you see threat modeling, secure by design, and private by design, interact in your world?

Dr. Kim Wuyts  17:47

Yeah, it's what Matt said. And I actually think I'm gonna quote you on this, Chris – I think you once said it's the vehicle that will guide us through secure by design or privacy by design. Because it's, yeah, it's really the tool that will give you guidance, will help you analyze the system. And those steps – I mean, I've been trying to talk about security by design or privacy by design in a different way, but it always comes down to: okay, I want to analyze the system, so I need to understand the system, I need to see what can go wrong, and then I can fix it. So those essential steps of threat modeling. I mean, maybe you don't call it threat modeling, but yes, the essential building blocks you will always need, I think. And Matt touched upon the later stages, like pen testing – I think in some of the standards, it's also mentioned that you can use the outcome of threat modeling, for instance, as input for what you should look for when you do pen testing. So the results that come from your threat model will kind of ripple through the different phases too. So yeah.

Chris Romeo  19:02

I knew you were a fan already. But it really is a full lifecycle solution – starting with secure by design, private by design at the beginning, all the way to your example of red team pen testing. And, you know, red teamers and pen testers are some of the best threat modelers out there. Because they have to think about what they're gonna do before they go invest the time building a solution or a scenario or something. They have to go through and consider what their options are before they decide to invest in, you know, a couple of different options that they think are the highest profile. So, yeah, I think it's fun to hear you explain how there is that full lifecycle there.

Matt Coles 19:53

If I may, it actually goes beyond that. If you think about the lifecycle as being a circle, or as a loop, right – as we like to talk about development as loops – threat modeling comes in even on the other side, right? As soon as you know you have some issue to fix, or you're adding a new feature, right? Threat modeling is an activity that you can drop in at the time you need it. As you're thinking about that new feature, as you're thinking about that rework, as you're thinking about that bug fix or that enhancement. You know, especially if you have the original threat model, you can take it and now build your new one and predict what's going to happen. Right? If you look at it as part of the design process, or the engineering process – when you're doing engineering processes, you're going to design your new concept and do design analysis. And you're going to do some sort of prediction or a simulation, and you're going to do all this prototyping and concept development and whatnot. Well, threat modeling is an activity that fits nicely in that process, whether it's new, or you're coming back around for the second time, or the third time.

Chris Romeo  20:00

So, what you're saying is my LinkedIn tagline is right. "Threat modeling is life" is what you just described.

Matt Coles  21:12

As an engineer. Absolutely.

Chris Romeo  21:14

If you ever saw Ted Lasso, there's a character on Ted Lasso that says football is life. It's his life mantra, I guess. For me, it's threat modeling is life. But in your example, you're talking about using threat modeling at various stages and various points around that circle, which I think does have a lot of value there. When you reach that cultural type of impact, that's where things start to get really fun as a security team, because the team is bringing things to you. And you're like, "Oh, hey, cool, good one – I wouldn't have thought of that. I never thought of it." I want to remind our audience that we are taking questions through the LinkedIn comment process. So if you've got a question or comment or something, please type it in, and we'll react to it in real time as we see it come by. We'd love to interact with your thoughts on these topics, too, as we go forward. So let's talk about some common challenges, then. And also, we can't just give threats – we have to give threats and mitigations, because that's our philosophy. We can't just give you what's broken, we have to tell you how to fix it. So when we think about some of the common obstacles, let's talk in general about secure by design. I want to get both of your thoughts on this, so I'm not declaring one of you stuck to secure and one to private. I want both of your thoughts on each of these things. But when you think about obstacles to organizational advancement of secure by design, what are some of those blockers? Just start with what some of the blockers are – of secure by design.

Matt Coles

Yeah, yeah. So, I think the biggest blockers I can think of, that I've seen, I guess, over the years and in talking to folks, are, in part, one, a lack of understanding. So secure design principles, you know, are not always obvious, right? So defense in depth as a property, least privilege as a property, you know, fail secure and these other practices, other properties of a system – we as security professionals, you know, this is our bread and butter, but it's not always obvious. You know, why wouldn't you want to run everything as root, right? Now, that's changing, but that I think is a common barrier to secure by design. So, lack of understanding or lack of experience as to why that's important, and how to prioritize it. And actually, maybe that's the other piece – number two is the inability to properly prioritize it. So which is more important, least privilege or a feature? Right? And so secure by design – and I would, just as an aside, since we're gonna talk about mitigations – when you build a feature, it should have security properties. Therefore, you know, least privilege and defense in depth, etc., should be part of your feature design. Right, should be part of your feature discussion. So there are those, and then of course, there's the iron triangle, right? That time, resources and capability, right? How much time and effort does it take to build in least privilege? Well, if it's cheap in terms of time for your developers to run everything as root and not worry about privilege management, and not worry about what SELinux properties do I want to set on my containers, and, you know, etc., etc. – then sure. And that becomes a barrier to building in some basic security properties that they'll have to patch later. Right. So you either take it now or you take it later. Right? So I think there's lack of understanding, some lack of experience, and then the inability to properly prioritize that within the process.

Chris Romeo

Yeah, let me get Kim's take on this, and then we'll talk about potential mitigations. Because I have one that I want to throw on the table that I think might wrap all of these things together into one overall mitigation. But Kim, what are your thoughts on obstacles, challenges? And we're specifically talking about secure by design.

Dr. Kim Wuyts  25:50

Yeah. I think I'm gonna take maybe a bit more of an abstract version of what Matt has sort of said. I think one of the main blockers for security by design is the fact that it's considered a blocker. The fact that it's, well –

Chris Romeo  26:11

Very meta of you.

Dr. Kim Wuyts  26:16

Yeah, the fact that it's been considered as something that is taking too much time, too much effort, slowing down the process. So why bother – we will fix it when the issue occurs.

Chris Romeo  26:30

Yeah, I think that's a solid summary, though – you kind of bubbled it all up to the one issue. Because we are guilty of that, and maybe we as security people are even guilty of portraying that, you know, putting it out in front there. So I'm going to talk about one mitigation, and then there's a couple of questions that have popped in here that look juicy that I want to dig into. But let's talk about mitigations. So what are your thoughts on paved road, or paved roads, as the overall solution? And I'll set the stage just in case people haven't heard this term before. Paved road being this idea that security and privacy teams can build components that are easy for developers to include into a solution that they're building. And things like access control, authentication – things we don't want people rebuilding over and over again. They should be able to take something off the shelf, plug it in, and push the easy button and have it start working for them. So paved roads – are they the solution, Matt, to the things that you were just describing here? How would you see those paved roads interacting with your list of potential obstacles?

Matt Coles  27:43

Paved roads and guardrails? I think they are certainly a necessary building block. So they're necessary building blocks – if you have a component that's been pre-vetted to have certain properties or certain capabilities, or that has been designed specifically to work in a particular environment, and a developer can take that, it makes it easier. It's an easy path to implementation. And you get sort of a checkbox – I want to be careful, like it doesn't eliminate the need for other work. But it does eliminate the bulk of the core of that problem, right? It's: take the solution that we already know meets the security and potentially privacy properties that we're looking for, and integrate it. Otherwise, do X, Y, Z, and so on down the list. Right? And so I think it's critically important that organizations can provide something like that to their development teams, especially if they have a well known architecture base, or a technology base that they're familiar with. You know, just sort of as an aside – one of the early things that we talk about from a security, like, day one, or Security 101 perspective is don't build your own crypto, right? We want people to use well vetted, validated crypto algorithms, and implementations of those algorithms. But what we fail to bring into that, and what paved roads – these pre-vetted, predefined components – do, is bring in not just the validated crypto algorithms, but also the wrapper and the way that those are integrated into a system in a way that is hard to get wrong. And that's the missing piece. Right? It's great to have a well-vetted implementation of AES. But if you don't have one that also manages the keys that go with it, or does proper memory management, or other things that the developer has to think about when they integrate that AES algorithm, then there's lots to go wrong, and that's where vulnerabilities come from.
Paved roads help to bound that conversation.
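To make the shape of such a component concrete, here's a small illustrative sketch – not a real library, and deliberately using HMAC message authentication rather than AES encryption, since Python's standard library provides it. The class name is invented; the point is that key generation, algorithm choice, and constant-time comparison all live inside the paved-road component, so callers can't get those details wrong.

```python
import hashlib
import hmac
import secrets

class PavedRoadSigner:
    """Illustrative 'paved road' wrapper: the risky decisions are made
    once, inside the component, instead of by every product team."""

    def __init__(self, key=None):
        # Key management is the component's problem, not the caller's.
        self._key = key or secrets.token_bytes(32)

    def sign(self, message: bytes) -> bytes:
        # Algorithm choice (SHA-256) is fixed by the component.
        return hmac.new(self._key, message, hashlib.sha256).digest()

    def verify(self, message: bytes, tag: bytes) -> bool:
        # Constant-time comparison baked in - a detail callers often miss.
        return hmac.compare_digest(self.sign(message), tag)
```

A real paved-road crypto component would wrap a vetted library and handle rotation, storage, and memory hygiene as well – this only sketches the "hard to get wrong" interface Matt describes.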

Chris Romeo  30:01

Kim, is this a concept that's floating around privacy engineering these days – are paved roads and guardrails things that people are thinking about now? Is it something that's brand new? Like, where is it on the spectrum?

Dr. Kim Wuyts  30:16

Well, maybe I haven't found those resources yet, but I am not sure whether privacy is there yet. I think privacy is still a bit less mature than security. And it's still struggling with getting the solutions it already has useful, usable. Yeah. So I think – well, first of all, let's quote one of our other threat modeling friends, Avi, who said, "Security at the expense of usability is at the expense of security," which also applies to privacy. So I think it goes beyond the paved roads and the guardrails. It also applies, as you said, Matt, to the solutions, the crypto libraries, but also to the way we can do threat modeling, and the tools and support that are there, and guidance that makes it easier to link that back to developers. So I think it's the whole process – the whole privacy engineering, security engineering process, all the different steps – that could use as much usability as we can get. I can only speak for myself as a privacy engineer, but I guess most security engineers have the same thing: let's focus on making it as secure or privacy respecting as we can, and then maybe let's see if it makes sense for the developers or for whomever we're giving that requirement to. While actually, one of the priorities should be: let's make sure it's workable. So that reasonable aspect, Chris, that you talk about often, I think that makes a lot of sense. And finding that balance between making it reasonable and usable – I think those are the two key terms to make it actually successful. I'm sorry, it's getting late here, and I'm losing my words, apparently.

Chris Romeo

Izar – our friend, Izar Tarandach – is the one who originally came up with reasonable, pushed reasonable. And then I attached myself to the project and started using it; I probably use it more than he does now. But he was the one that I heard say it first. I want to interact with both of these questions, but I want to start with a question that we have from Katia here. What are your best tips for building threat modeling into the product engineering process? And then what are the key enablers to succeed with it, especially for threat modeling for privacy, which may be less known? So I know you've both had a lot of experience from this perspective. And so I'm curious to see what are some of the things that you would put forth – starting off with: what are tips for building threat modeling into the product engineering process? Kim, do you want to give us some thoughts on this one first?

Dr. Kim Wuyts

I think my getting started tip would be start small. Don't go for the big, full fledged, heavy threat modeling process, or having that, I don't know, tool that has a lot of guidance but maybe isn't doing things the way that you want it to. So I think the most important thing is to start small. Figure out how it works for you, what you need. Do you want, I don't know, do you want a report? Do you want it to be – I'm really struggling today, I'm sorry – do you want it to be more embedded in the development lifecycle and talk with the developers more closely? What do you need in your organization? And then you can expand on that and grow and find some tools, which I know some people here in this webinar can help you with. So I think the important thing is to not get too caught up in "Let's use this big threat modeling framework that I know my friend at another organization has been using." Just figure out what works for you, and start small.

Chris Romeo

Yeah, okay. So Matt, what would you add on top of that?

Matt Coles  34:53

So I agree with what Kim is saying I think it's important maybe that you get opinions from - there are others who have done this before - and there's a body of knowledge available to help. Right? So I suppose I can plug the the threat modeling capabilities as one opportunity here. You know, part of the threat modeling manifesto, the group that we created a set of capabilities to look for when it comes to building out a threat modeling practice. And whether you're doing threat modeling for privacy, or doing threat modeling for security - the things that you might aspire to, right? What capabilities do you need in your threat modeling process to be successful. So that can help you build out a roadmap where you start small, you start with a pilot program, you demonstrate success, you show value. And then you scale that out to the rest. There are other papers available from other organizations that we can probably provide links to as well or that are linked from the capabilities page on threatmodelingmanifesto.org. And so that's probably the best thing. And I guess it's important we talk about threat modeling as a practice, and independent of whether it's security or privacy. But there are some things that come in with privacy, I think, just in a little bit of learning I've done and talking to Kim extensively on this, that with security, you do need to have, you should have your engineers and you should have documentation and QA and you know, other folks within the expanded engineering organization be part of that discussion very, because threat modeling is, both a security analysis activity, but also it's an information sharing exercise. When it comes to privacy, the extent of the individuals you probably need or the roles that you need in that conversation, go well beyond what you need for security. Right? You probably need lawyers, you probably need data scientists, you probably need people who work with third party integrations, right? 
There are things that happen in the privacy space that are important, where security needs to be represented but isn't necessarily the core. So whatever process you implement for threat modeling needs to keep in mind the roles and responsibilities that people will have, making sure that the right people are connected to the process.

Chris Romeo  37:30

Makes sense. So the next question is in regards to zero trust. And I'm gonna show a threat model, because as you both know, I spent a lot of time last year researching zero trust. The question is: as we move toward zero trust architecture, do we see this as a barrier to secure by design, when threat modeling is moving away from boundary and perimeter protections?

Matt Coles  37:53

I don't want to tackle this, but go ahead, Chris.

Chris Romeo  37:55

Let me just show you this threat model that I did. I did extensive research last year into zero trust. I really wanted to understand it myself; I saw lots of people talking about it, but I didn't really understand it. So you can go look up my talk on YouTube from a couple of different conferences: zero trust threat modeling. I'm showing this inside of the Devici platform as well, so people can see what we do in Devici. But when you think about zero trust, and you think about these various trust boundaries, one of the things I realized in the research I did last year is that with zero trust, the trust boundaries are just different. It's no longer just an outside and an inside. Google said that in the BeyondCorp document; before it was even called zero trust, they came out and said there's no longer that demarcation point between the outside and the inside. Yes, we still want to use various segments of the network and try to protect them as best we can. But in a zero trust world, you've got the control plane, which I'm showing at the top: the policy engine and the policy administrator. You've got the data plane, which is your app, or your policy enforcement point. And then you've got various enclaves behind the policy enforcement point where you're delivering services, and you've got SIEM, CDM, PKI, identity management, all of these things rolling together. So from my perspective, yes, threat modeling has a lot of differences when we start thinking about zero trust, but the core is still the same. There are some different things that come into play here. But I apply secure by design the same way for zero trust as I do for web apps, as I do for mobile apps. Because secure by design and private by design.
It's really just using this type of process, where we're thinking about what patterns we want to reflect, and then using threat modeling as a vehicle to unlock the challenges: what are the things that we need to mitigate? All of that, for me, is still an agnostic solution; it isn't tied to any given stack, just like threat modeling can be applied to anything. But I'm curious: what do y'all think about this?
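The control plane / data plane split Chris describes can be sketched roughly in code. This is an illustrative toy, not any real product's API: the policy table, signal names, and request shape are all invented for the example.

```python
# Rough sketch of a zero trust access decision (names and signals are
# hypothetical). The policy engine in the control plane decides per
# request; the policy enforcement point in the data plane only enforces.

POLICY = {"payroll-service": {"alice"}}  # resource -> identities allowed

def policy_engine(user: str, device_trusted: bool, resource: str) -> bool:
    # No implicit trust from network location: every request is evaluated
    # on identity and device posture.
    return device_trusted and user in POLICY.get(resource, set())

def enforcement_point(request: dict) -> str:
    # Data-plane PEP: forward the request only if the control plane allows.
    ok = policy_engine(request["user"], request["device_trusted"],
                       request["resource"])
    return "allow" if ok else "deny"
```

The point of the sketch is that the decision never depends on where the request came from, only on per-request signals, which matches the "no inside/outside demarcation" idea above.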

Dr. Kim Wuyts  40:11

Yeah. So I haven't looked into it that closely, so maybe my interpretation is a bit naive. But to me, zero trust is kind of like a paranoid version of threat modeling. I mean, you just have more trust boundaries, or you're more protective of everything. So I don't think there's an inherent difference; it's just being more careful about the assumptions you make. But correct me if I'm wrong, if I'm interpreting the concept wrongly.

Matt Coles  40:46

No, you got it right, Kim, I think. So, two parts. First off, the LinkedIn user (we don't know who it was, it's an anonymous question) asks, "Is zero trust a barrier to secure by design?" No, it's a requirement. In zero trust, you must have secure by design, and that's what you're basing your trust on. Zero trust doesn't mean there is no trust; it just means you don't start from a trustworthy position. So the first part is: secure by design must occur, and therefore zero trust isn't a barrier to secure by design. It reinforces secure by design. That's number one. Number two, and this has been an evolving thing for me over the past 15 years or so of doing threat modeling: we may have done ourselves a disservice in the engineering world by having trust boundaries that were as expensive as they were. So we look at doing things like input validation and output sanitization on a web server, or on a database server, or on intermediate servers in the interior architecture. If you do it correctly, you do it at each layer: wherever you have input coming in, you do input validation, you do output sanitization, and data that goes through the system gets properly vetted, validated, and checked. If instead you had drawn a single boundary around your entire interior architecture and said, "everything that comes in I have to trust, everything that goes out I have to sanitize, and I don't do anything in the middle," you're probably doing it wrong. So we may have done ourselves a disservice by having expensive trust boundaries in the first place. In fact, when I've done threat modeling over the years, I almost ignore trust boundaries.
I use them as containers more than as groupings, more than as actual indicators that as soon as stuff comes into the boundary, we don't do anything with it because we trust it. But in zero trust, you can't. The trust boundary is the thing that you're building. And the secure design principle is that you do validation at every point in the process anyway.
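Matt's point, validate at every layer rather than trusting a single outer boundary, might look like the following minimal sketch. The layers, field rules, and function names are invented for illustration:

```python
# Each layer validates its own input instead of trusting that an outer
# perimeter already did. Layer names and the username rule are made up.

def validate_username(raw: str) -> str:
    # Shared check: short alphanumeric identifiers only.
    if not raw.isalnum() or len(raw) > 32:
        raise ValueError(f"invalid username: {raw!r}")
    return raw

def web_layer(raw: str) -> str:
    # Validation at the edge...
    return validate_username(raw)

def service_layer(username: str) -> str:
    # ...repeated here: this layer does not assume the edge already
    # checked the data (zero trust between components).
    return f"profile:{validate_username(username)}"

def storage_layer(username: str) -> dict:
    # The innermost layer re-validates before touching storage.
    return {"user": validate_username(username)}
```

In the single-perimeter style Matt warns against, only `web_layer` would check the data and the inner layers would blindly trust it; here, a value that bypasses the edge still gets rejected deeper in.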

Chris Romeo  43:09

Yeah. And as we saw, we did uncover lots of different threats that impact the zero trust architecture there. So I'm glad we got a chance to look at that, because I spent a lot of time doing that research last year, and I learned a lot along the way. So I guess we're coming towards the end of our time here. If somebody wants to sneak in another question, feel free. But if not, what's some advice that you would leave our audience with as we prepare to wrap up this conversation? Where do we get started with secure by design and private by design? What's the first step?

Matt Coles  43:48

Education. Be familiar, get familiar with the concepts, right? Yes, there are some academics involved here. Kim knows this better than the rest of us, I think, through her years of experience in the academic world and the solid research that she and her teams have worked on. But you have to be knowledgeable, right? And then that knowledge should be tied to the technologies that you're using: understand what properties your technologies have, and understand what the threats are that come from them. So, practical information resources: things like CWE, the Common Weakness Enumeration; common attack patterns; the MITRE ATT&CK framework. Sorry, Kim, privacy design tactics, but it's not that...

Dr. Kim Wuyts  44:41

Strategies.

Matt Coles  44:43

Thank you.

Dr. Kim Wuyts  44:45

So these have tactics, yeah.

Matt Coles  44:49

But matching them to your technology, right? If you're using C versus Go versus Rust, you have a different set of threats, so you have to worry about a different set of security properties and principles that you need to understand and know. And it's that which allows you to do the critical analysis, to bake things in upfront, and to get avoidance of risk, not acceptance of risk. For me, that's the biggest thing: the education, the knowledge, and putting it in context with what you're building. And that's maybe where guardrails come in, right? Whether you build that knowledge or your organization builds that knowledge, it's key to have.

Chris Romeo  45:31

Yeah, Kim, what do you think? 

Dr. Kim Wuyts  45:34

Yes and no. I mean, yes, I completely agree: to do it right, to do it properly, you will need to invest a lot of time in getting the background, the foundational knowledge. But on the other hand, I do not want to scare the audience by saying, let's first get everybody a PhD in security and privacy before you can get started. I think you can really start more lightweight. For privacy, for instance, just include one question whenever you build a new feature: "Do I really need all the data?" There are already some small steps you can take, and you can evolve into the more heavyweight stuff that will make it more successful, more robust. But don't let it scare you. You can start small, you can start lightweight, and you can evolve together with your skills and your team and build that knowledge.
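Kim's lightweight "Do I really need all the data?" question can even be mechanized as a small data-minimization filter. This is a hypothetical sketch; the field names and the idea that the feature only needs an email are invented for the example:

```python
# Data minimization sketch: a feature declares the fields it needs, and
# anything else the user submitted is dropped before it is stored.

NEEDED_FIELDS = {"email"}  # hypothetical: this feature only needs an email

def minimize(submitted: dict) -> dict:
    # Keep only the fields the feature actually requires.
    return {k: v for k, v in submitted.items() if k in NEEDED_FIELDS}
```

Asking the question per feature, and encoding the answer somewhere enforceable like this, is one concrete way to start small with privacy by design.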

Chris Romeo  46:43

Yeah, I'm glad you both enlightened me, because I thought the answer was going to be to go sign the pledge. Well hey, I'll just leave that out there for you. I'll leave it for another day to unpack what that means. But you can look up the CISA Secure by Design pledge and read all about it. And you can even listen to various podcasters describe it.

Matt Coles  47:04

You had to get it in there, didn't ya?

 Chris Romeo  47:04

I had to get it in there. Come on. It's been an honor to have Matt and Kim with us. And I just want to say, you know, Devici, we're putting these on, we're trying to bring more knowledge to the industry. But if you're in the market for a threat modeling tool, we have a free forever plan. One of the things that I'm really passionate about is how we bring threat modeling to the masses. And you don't even have to give us any money: there's a free forever plan. You can do three threat models inside of the Devici platform that are free forever, so you can just work on them and continue using the tool. Because I want to see more people threat model, and I think we're all going to agree about that. The more threat modeling we get happening in the world, the better we're going to be at secure by design and private by design. There are going to be fewer things to find downstream if we start doing this upfront. So check out the Devici platform, devici.com. There's a link right there - it'll take you to make a free forever account. Get in there and start threat modeling. I mean, what do you have to lose? It doesn't cost you anything. Dr. Wuyts, Matt Coles, thank you so much for being a part of this event with us. I just love any chance I get to interact with you both. You're both brilliant, so I love to learn from you. And that's what I just did here: I got a little course in privacy engineering and some secure by design flavor. Both awesome. Talk to you soon.

 Matt Coles  47:26

Thanks Chris, for having us.

 Dr. Kim Wuyts  48:28

Thank you
