 
Episode 33

Red Team 101: Offensive Security with Joe Vest

EPISODE SUMMARY

The 401 Access Denied team is joined by Joe Vest, co-author of Red Team Development & Operations, author of the original SANS SEC 564 Red Teaming and Threat Emulation course, and former technical lead for a DoD red team.

Joe gives us the nitty-gritty on the fundamentals. How can organizations know if they are handling risk well? What are the right ways to build and run a professional red team? And what security strategies does Joe find organizations lack the most?

More info on Joe’s practical guide to red teaming is available at Redteam.guide.


Mike Gruen

Mike is the Cybrary VP of Engineering / CISO. He manages Cybrary’s engineering and data science teams, information technology infrastructure, and overall security posture.


Joseph Carson:
Hello, everyone. Welcome back to another episode of 401 Access Denied. We're really excited to be here for another fantastic award-winning show, and we've got some great topics for discussion today. My name is Joseph Carson, Chief Security Scientist and advisory CISO at Thycotic, based in Tallinn, Estonia, and I'm really excited to be joined again by my co-host, Mike. Mike, do you want to give us an update on what we're expecting today?

Mike Gruen:
Mike Gruen, VP of Engineering and CISO here at Cybrary in DC, and today we're going to be talking with Joe Vest about pen testing and his new book. Joe, do you want to give an introduction and tell us a little bit about yourself?

Joe Vest:
Sure, sure. My name is Joe Vest. I've been doing IT and security for a long time. So, I was doing security, or IT, back when it was called information technology, before the word cyber was popular. So, I often still call it information security. I shifted over to security as my day job maybe in 2008 or so, plus or minus, and I've been doing some version of security since then. Today, actually, I'm about six weeks into joining HelpSystems as the tech director for the Cobalt Strike project. I spent the early days of my security career in the application security space, but then shifted quickly over to, I'll say, the threat space: penetration testing, red teaming, and such. And I've really just spent most of my security career on that threat emulation side.

Mike Gruen:
Awesome. Well, awesome.

Joseph Carson:
It's fantastic to have you here, and I'm really excited because... I mean, I've read your book, and I really, really enjoyed going through it. When I was getting a lot of requests from companies to do penetration testing, or risk assessments, or vulnerability assessments, and red teaming, in a lot of cases I really struggled to get to the point of saying no, because a lot of them did not understand what it meant. In a lot of cases they didn't understand what they were asking for. And a lot of the early discussion was actually educating them on what they were asking for, what red teaming was. And in many cases, organizations were not even ready. They were not ready to go down that path.

Joseph Carson:
I thought, when I read the book, that this is something I should actually give those companies and say, "Read this before you come and ask me for what you really need, so you can actually understand it." So, can you give us a bit of background into the idea for creating the book, what it helps organizations do, and who the intended audience is?

Joe Vest:
Oh, yeah. So, honestly, the concepts and ideas were created long before the book was written. I was never an author. I still don't consider myself a practiced or seasoned author. I just had a need to train new red teamers, to be honest, and to take a lot of these ideas and concepts, get them straight in my own head, and normalize them so I could speak better to this topic. So, taking you back a little bit: when I shifted over to red teaming, I was part of a DoD red team.

Joe Vest:
This was many, many years ago now, probably in the 2011-plus-or-minus timeframe, when I started working in that world. And I was going through this process, great experience, great time, really lots of good exposure to what I'd say is the threat emulation, red teaming space. And actually, at that time I was working on a lot of engagements where we helped... like purple teaming, more or less, but really large engagements with military and DoD personnel and such. And I actually worked with Ben Clark on the Red Team Field Manual ... we did that as well.

Joe Vest:
So, I helped him with that, and I say it was successful because I designed the cover. That was my first exposure to the silly world of creating a book and such. But as I went through this, I was starting to train and teach. I was going down the SANS route for a while, trying to figure this out, and it was just really demanding to try to do SANS. So I was teaching some of their classes. Eventually, I left and created my own company, and I was a consultant for four years. So I was depending on myself, which made it even more necessary to express these terms and concepts in a way that made sense to these clients.

Joe Vest:
I did propose to SANS to create a red teaming course. So I created their first red teaming course, which was a two-day course, Red Team Development and Operations, or red team... Oh, my gosh, I forgot the name of whatever I called it. It was a red teaming management course, if you will. It was a two-day course, and I ran it for almost two years trying to get through the beta pieces of that. And honestly, the book came from that. That was where I solidified all the ideas I taught students. It was very much non-technical. There were some technical aspects where we dove in really, really quickly, but we came back out, because then, and today, the gap I saw was the whole herding-cats syndrome. We've got a lot of really, really smart people.

Joe Vest:
It may sound crazy, but organizations don't hire you because you're really, really smart. You have to translate that into something that they need: some business risk, something they care about. I always say that we in the red teaming space, the penetration testing space, are always invited to someone else's playground. So you may have all the greatest tools and toys and everything, but if you're not sharing, or it doesn't have value, it doesn't matter what you bring, you're not going to be invited back. So that gap of how to come and play in a professional way and represent a realistic threat is really what this book came from. I basically tried to create a practical guide.

Joe Vest:
Basically, it's the same thing I was teaching anyone I was working with, say new red teamers: all the fundamentals they need to understand. I wasn't worried about the technical side. I hate to say it, but that's easy. You can go find and digest that; that's time and experience. But the interaction of how to build and run a professional red team is not so easy. And that's really where the book came from. So myself and James Tubberville wrote this. We jammed out that course in a few months, really just busted butt on that and knocked it out. Started teaching it, worked through a lot of the logistics, but the demands of running my own company and trying to go out and do the SANS instructor route were just too much. So I said, "Well, thanks, SANS. I'm gone." So, I left that, and I had all this great material.

Joe Vest:
So, James and I said, "Why don't we take this?" And we spent about three months, again, translating that material into the book. And that's what we see today. So that came out, oh, gosh, a little over a year ago, I guess. March or so was the one-year anniversary of when this thing came out. And I thought maybe 100 people might like it, but actually more people have commented, and I think it still fills a gap on the professionalism side of red teaming, on understanding what you really need to do.

Mike Gruen:
So before we get too far away from that, first of all, I forgot that it's already been a year. I said a new book, but it's been a year. I feel like anything that's happened after 2012 is new.

Joe Vest:
I hear you. I hear you.

Mike Gruen:
But what's the name of the book? Where can people get it?

Joe Vest:
Red Team Development and Operations, on Amazon. So it's, I think, 15 bucks now. We just dropped the price to keep it simple. I would love it to be a retirement plan. That would be awesome. You learn that it's not; any sort of IT or security book is just not that. So, it's fun. It also keeps me in check, to make sure I keep things in line with my perspective and understanding of what's going on, to challenge myself, to grow. I've already started to enhance some of the verbiage and the language I use since I wrote the book, just because I'm always in this space trying to deliver these messages on threat emulation, and how we actually portray and measure our ability to detect and respond to a threat. That's really what it all comes down to.

Joseph Carson:
Absolutely. I completely agree. A lot of times, even a lot of the writing that I do starts off with me creating notes, because I can't remember everything. It basically comes down to me putting things aside, documenting, so I can do a quick search and come back to it when I need to. So, that's great. I mean, it's impressive to do it in a couple of months, because I know writing books...

Mike Gruen:
Yeah, that was nuts.

Joseph Carson:
We've had quite a few authors on the show, and some of them have taken one to two years writing some of the material. So, it's pretty impressive. I think it comes from the training side of things, because you're putting it in such an organized, structured manner rather than doing it from scratch, and that can accelerate the path as well. But one of the things I wanted to ask is: there are so many different types of risk assessments, penetration tests, red teams, blue teams, and purple teaming is becoming quite popular recently. How does a company know where to start? What should they be looking for to know what type of penetration test they need? Is there a maturity model that organizations can look at to figure out the right path for them?

Joe Vest:
Yeah, so this is probably one of my number one focuses when it comes to how you handle risk well. I always start from the end. So, right to left: instead of "let's figure out what test we're going to do," I say, "Let's start at the end." And really, you're starting with what I call security operations. You have spent time, money, and energy to protect something. So the first question I ask is: what are you protecting against? And the big gap that I see in the world right now, which is why we see threats being successful, is that we are focusing on tools, malware, a technical aspect, a one-and-done exploit. So you see the news: "Oh, someone got phished, and then everything went bad."

Joe Vest:
Well, that's a story in itself that should never occur. It should never be one and done. So, you have to see what your security operations plan is really protecting against. And this really gets to the big gap I see: we're missing the intelligent threat actor. That's really what we're missing. So yeah, they have tools and stuff, and all of those change, so you have to understand that. But when you start to look at that, you can break down the three tests that can help guide you toward it: vulnerability assessments, penetration testing, and red teaming. I'll just keep those three together.

Joe Vest:
Again, I start right to left. And I say, okay, why would you do any of these? You're going to apply some mitigations, so you want to think of these in terms of your mitigations. If you started with a vulnerability assessment, which is geared towards identification of flaws, you're finding bugs and flaws. And if you mitigate those, what do you get? You reduce your attack surface. So honestly, a vulnerability assessment, which is flaw identification, is actually an effort in attack surface reduction. That's what I say. So, that's your benefit: you get to reduce the attack surface. Then you move into penetration testing. I like to call it attack path validation. So, you're actually validating attack paths through a network: you're doing attacks, you're moving through the network to understand how these things operate. You're finding flaws, but those flaws are in context to some sort of attack path scenario.

Joe Vest:
Do these things overlap? Of course, there are some gray areas where they align, but I keep them simple. Regardless of those gray areas, what do you get from the results of a penetration test? You should get a set of mitigations, and if you apply those mitigations you reduce the attack paths, or you're reducing the attack surface. So, I put vulnerability assessments and penetration testing in the attack surface reduction bubble, which is great. You want to reduce the attack surface. But if all we did was reduce attack surface, are we dealing with a threat? We have not dealt with the threat yet. You make it harder for them, but you can't reduce the attack surface to zero; that's not possible. I mean, zero-day attacks come out all the time.

Joe Vest:
I always charge everyone who's creating a defensive posture to say, "You need to create defenses for current and future threats." That's where we get into red teaming. And to be honest, I'm not a fan of the term red teaming anymore. I feel like that word has been taken from us to a degree. I prefer something in the realm of threat emulation, adversary emulation, one of these. Not that any of these matter, but we need a common definition and framework. But now, when you say, okay, I'll do a red teaming engagement: a red team engagement is a threat-driven scenario with the goal of measuring security operations' ability to prevent, detect, and respond to a threat.

Joe Vest:
So, it's the overall question: why did we spend all this money on this security program? People, processes, technology, everything. Let's create a scenario to exercise that. When we get results from that, we will find flaws, but our goal is not to reduce the flaws; it's to understand what I like to call our detection story: our ability to deal with a threat and prevent a threat from succeeding in its nefarious goals. And that's what we're trying to address. So when you have all three of those, you reduce the attack surface and make it harder for the threat. In essence, you reduce the noise that you have to deal with from a defensive posture. Then you start to understand, are we actually even focusing on the real things? Through adversary emulation, take these threat scenarios and walk through them to see: do our defenses work? Do our hypotheses, our assumptions, actually defend against what we think they defend against?

Joseph Carson:
Yeah, I like to call it the noise meter. Ultimately, what we're trying to do is force the attackers to create more noise. So, Mike, you were going to ask us something there?

Mike Gruen:
Well, the point of using the red team is as a way to validate. I think that's one of the most common questions: how do you know that you're doing a good job? How does your security team know that these things are doing what they're intended to do? And the only way to do that is to either get attacked or emulate an attack, and I think we'd all prefer to have it emulated and controlled. So, yeah, I like that idea of the red team, in that sense of helping to validate the cost that you've spent on all of those other things and how your team is...

Joseph Carson:
You want to make sure your security is working as well. You want to make sure of all of the measurements that you have. If all of a sudden somebody gains access to a network, you want to have the visibility. You want to know that the security you invest in is actually working. And if you don't do simulations, and emulations, and attack paths, you don't know. You assume your security is working just because you've got a dashboard and it says something green or red on it. But you don't effectively know until you actually put it through the tests.

Joseph Carson:
That's why car companies do crash tests: to check and see if all of the systems that have been put in place in the vehicle are actually working. Do you want to test the car after production, on the road, in real life? No. Car companies do so many simulations to make sure that everything they've put in place, all those sensors, all of those controls, the seat belts, airbags, everything they put in the car to save lives, is actually working. And we should take that same approach. I think, fundamentally, this is really the crash test of security. That's ultimately what we're trying to...

Mike Gruen:
Totally agree. I mean, I think the crash test analogy, just extending it a little bit further, makes a ton of sense, because at some point, I don't remember when, they went from testing head-on, front-end collisions to realizing that in the real world people actually swerve. And so, you don't tend to hit things head on; you actually tend to hit things with the corners of your cars. So, using that real-world information, adjusting the tests, that's where it matters to have experts who are constantly aware of how things are changing and how threats are evolving, and to involve them in those tests. Otherwise, you fully automate the tests, and you think you're doing a great job, and blah, blah, blah. But meanwhile, the threats have continued to move and change. And so, I think that analogy holds up.

Joe Vest:
So, the gap I see right now is when people are designing these defenses. Again, everything's done with good intentions, but sometimes things get missed. I like to say a group of intelligent people designing security solutions does not add up to understanding how the threat operates. So, we have a lot of smart people doing stuff, and it's just the wrong path. So often, I say, the threat is not included. The threat doesn't have a seat at the table when you're designing your security defenses across the entire program, whether it's preventative controls over to detection and response controls, the whole path. We need the threat to have a seat at the table. That's what red teaming allows us to do.

Joe Vest:
Another thing, I like to have silly little sayings, but I don't remember where I heard this, but you often hear, you got to think like the threat, think like the adversary. Well, I like to say, "Okay, so that's like saying, I want to think like a chef, and then I can cook gourmet meals." Thinking like something is not enough. You actually have to be able to act and perform those actions. So if you can't, or someone on your team cannot actually act as a threat would and say, "Hey, let's not just hit head on. Let's go to the side. Let's really analyze this from an intelligent threat actor's perspective." Well, then you've got to bring someone else in, and that's what the whole adversary emulation, threat emulation is about. It's to challenge those assumptions.

Joe Vest:
I'm a big proponent of detection strategies and detection engineering. So, actively, proactively creating detections, not just outsourcing it to a vendor. We do that a lot, and I know there are reasons to do it. But when you outsource things to a vendor, you've created a strategy that may not be in line with what the threat is thinking. And the threat has the ability to say, "Okay, I know you're doing this." Me, as an intelligent actor, I can modify my actions and bypass whatever detective controls you have.

Mike Gruen:
When you do that, I think there's a little bit of, I don't want to say myopia, but I can see a security team being focused maybe on some of the areas, but not really understanding where there might be other risk, or other information or data or things that a threat would really be after, and they just ignore it. Not consciously; they just don't even think of it. And so, I think that's again where bringing in a third party can be very eye-opening in terms of: what am I actually trying to protect? Am I protecting the right things? That type of stuff.

Joseph Carson:
Oh, yeah. It reminds me, actually... I read Sully's book, Highest Duty, a long time ago, and I've recently been re-listening to it on Audible, so basically going through the book again. And Joe, what you're mentioning resonates a lot with what Sully was saying in the book about airplane safety. A lot of airlines have outsourced, for example, maintenance to third-party companies. But one of the things he was raising in the book specifically was that they might be great engineers when it comes to fixing the engines and planes or doing the maintenance, but they don't know how the plane has been used. They don't know, basically, the experience, how it's all actually working together.

Joseph Carson:
One of the things he was getting to, which I thought was really interesting, and which actually reminded me of some of the concepts in your book, was that it's all about the connections. It's all about how everything works together. It's about that supply chain of components, and everything's a chain reaction. It's never one specific failure in the process. We see it in a lot of the recent attacks: maybe somebody made a poor choice in a password, somebody made a poor choice in what software they were running, or was running vulnerable software. That should not be the only issue. That's just one link in the entire chain reaction. Once that account gets abused, what other controls were in place to detect it and to show that it was being abused? What other controls are in place across that chain reaction?

Joseph Carson:
I think it's really important to show all these different associations. You're mentioning the threat intelligence thinking and doing the simulations, the adversary working together with defenders to make sure that the right indicators of compromise are detected and the right noises come to the surface. And we have to step back and think about all of this working together, all of those intersections, to really make sure that everything's working, basically interoperability and orchestration. I think those are so key in the industry today.

Joe Vest:
Oh, yeah. There's a concept I talk about in the book, and I talk about it pretty much whenever I give a presentation on this topic. There are a lot of attack diagrams that go through all these different phases and such. I like to boil these things down into three pieces: you get in, you stay in, and you act. So these three phases are something that I like to put in there. And the reason I put those in there is not just because it simplifies things. The get-in phase is what we focus on a lot: I phish and gain access to a user.

Joe Vest:
When you look at detection and opportunities, if I send a single phish, someone clicks, and it allows me to gain remote access, that's a really, really quick effort. And the artifacts generated are smaller in number. Are they good? Sure. But once I stay in, now I'm living in your network. I'm living and breathing in your network, I am constantly moving and adjusting, persisting, moving laterally, doing all this stuff. So the artifacts I'm leaving behind are tremendous. So that post-exploitation analysis of your capabilities is really, really important. I would argue more important.

Joe Vest:
Unfortunately, a lot of times when we look in the news and such, we see, oh, someone had this VPN password out there, or they exposed this. And those are terrible things. But those are a door into your network. I would argue if you have 1,000 people in your company, you've got 1,000 people who can just start typing on their computer and do anything that they want. So what's this magic wall that we have to break through? Those walls don't exist. Computers are on the internet. There are no real boundaries, there are just rules. Some rules are more open than others.

Joe Vest:
So, you really have to understand: if these are my assets I'm protecting, I need to make sure that no user on my network can elevate their privileges and do all of these bad things. Because when you compare a phishing attack that gains access as a user to a user sitting there doing it themselves, there's not much difference. Sure, I've got to trick a person, but I'm not a big fan of doing phishing engagements. Those are my least favorite things, because if I prove I can trick a person, okay, so a person can be fooled. We can all be fooled. I look at this and ask about the so-what factor. So what? Bob in accounting clicked on this email. He said, "Yes, yes, yes. Please run this, I want to do it. Give me that Excel spreadsheet," because that's what we do. And that is where you need to start measuring things. So, what this drives is, I really think we need strong threat-based scenarios.

Joe Vest:
I'm also not a big fan of the black-box, come-hack-me, show-me-what-you-got red team engagement. I'm like, that's a waste of time and energy. I like to have defined scenarios with defined goals that I need to trigger, where there are certain techniques, certain targets, where I want to have this really worked out together. A lot of people will start to call this purple teaming. Again, I'm not a fan of that word because of the connotations that come along with it, but it is a collaboration, because I actually don't think we have an offensive security field. We're all defenders. That's who we are. So, if you're on a red team, you are not in the offensive security space, you are a defender. And you've got to start thinking about yourself in those terms, because what you do matters in how you're trying to help these defenses, typically in the detection and response space. Again, that's one of our biggest gaps right now.

Mike Gruen:
I agree with you. Going back a little bit to the phishing engagements, I think from my perspective phishing engagements are a test not of whether or not I can get in or trick somebody, because I can. I mean, I know how to; there's no question. I think it's more about: do I have the reporting structures in place? It's, again, whether my employees, once they click the link and go, "Oh, nuts," know what to do after that. And similarly, at my last company, we did a lot of user analytics, and it was really about insider threat.

Mike Gruen:
But what we learned very quickly was there's very little difference between a malicious insider, someone who's been hacked, or some other thing on the network. So, it just goes back to that same point of just assume that somebody is compromised, or that maybe they're malicious. Like you have these users. If you have enough employees, there's probably someone who might be a little annoyed or frustrated or whatever, and not that you want to assume that you have that. But at the same time, there's really very little difference. And you need to have a system that's able to detect that as well, and it's the same stuff.

Joe Vest:
... having a goal-planned scenario. Where, if someone says, "You've got to phish in, we're not going to give you access, no assumed access," I ask, "Okay, is your goal to see if you can prevent this, or is your goal to see if you have that infrastructure in place, and this is a detection method to check that your users are going through the process?" If that is the goal, perfect, let's run through that. If those are not the goals, if the goal is to say, "Well, I really want to see if, once you gained access, you can move laterally here and do all these other things," then don't put that in the scenario.

Joe Vest:
I see this as a big flaw in our red team community, and in those who are requesting these services: not understanding that. So, you've always got to start with your goals in mind and design a scenario around that. And I am much more a fan of open scenarios, back-and-forth discussions, honest discussions, no blame, let's work through this on real-life production systems, because it can be done safely. And if you do this, you can actually have a realistic understanding of what does and does not work. That's easier said than done, because there's a lot of politics involved. So when things don't work, people just want to say, "Hey, we spent all this money. Why isn't this thing working?" There are egos, and we humans get in our own way on these things.

Joe Vest:
I've been there too many times on the red team side, where the defensive side, the blue team, just hates us right off the bat. And I had to spend a ton of my time building those relationships. It's always been really, really important to me to say, "Let's work together. I want to help you. Where are your gaps? What are you struggling with? Let's design our red team engagements around your areas of concern." And once you build those relationships, things change immensely.

Mike Gruen:
Yeah, I mean, I wonder how much the black box, the old-school thinking of, oh, we're a black box, come and hack us, has created that tension or animosity, that us versus them. When really, it shouldn't be us versus them. And just trying to do more open engagements automatically changes the dynamic of that conversation: hey, we're all on the same team. We just want to exercise our offense, or we want to exercise our defense, as a team approach.

Joseph Carson:
Just working in different tasks. ...as part of the same goal.

Mike Gruen:
What's the goal? Yes.

Joseph Carson:
It's the same goal that we all have. Sometimes you even buddy people up, so you cross-skill and people can actually get better at detecting. So, I think, Joe, absolutely, we're all on the same team, we all have the same goals, and a lot of organizations sometimes get into that competitive scenario, that mental thing where it's us versus them. But ultimately, at the same time, we all basically want to make organizations more resilient, make the internet a safer place for everyone to use, and get as much value as we possibly can. So, absolutely, I really like that approach, that ultimately we are all defenders. We might be wearing different hats or have different tasks at the time, but that's what we should get into.

Joseph Carson:
Another thing you mentioned as well: one thing I find at a lot of organizations, on the detection side of things, is that a lot of attackers, when they get that initial foothold, one thing they'll do is run certain enumeration and then delete the logs. And a lot of times we're not even looking for log deletion, looking for gaps in the logs where there may be a few hours missing. Organizations are not looking at that measurement side of things. They're not looking at what things are suspicious, so that they can use those as potential leads to investigate further. So, one question I want to ask you is about the skills side: the resources, the people that should be getting involved. What types of new skills do we need in this area to really develop it much further toward the concept you're referring to?
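
(A minimal sketch, in Python, of the log-gap idea Joseph describes: flag stretches of time where a host produced no events at all, which can hint at log deletion or tampering. The log format, the timestamp parsing, and the two-hour threshold are assumptions for illustration, not tied to any specific product.)

```python
# Hypothetical sketch: flag suspicious gaps in a host's event-log timeline.
# Assumes an ISO-8601 timestamp at the start of each line; adjust the parsing
# for your actual log source (Windows Event Log exports, syslog, etc.).
from datetime import datetime, timedelta

GAP_THRESHOLD = timedelta(hours=2)  # assumed threshold; tune to normal log volume

def find_log_gaps(path: str, threshold: timedelta = GAP_THRESHOLD):
    """Return (gap_start, gap_end) pairs where no events were recorded."""
    timestamps = []
    with open(path) as f:
        for line in f:
            try:
                # e.g. "2021-03-01T14:05:22 ..." -- assumed line format
                timestamps.append(datetime.fromisoformat(line.split()[0]))
            except (ValueError, IndexError):
                continue  # skip lines that don't start with a timestamp
    timestamps.sort()
    gaps = []
    for earlier, later in zip(timestamps, timestamps[1:]):
        if later - earlier > threshold:
            gaps.append((earlier, later))
    return gaps

if __name__ == "__main__":
    for start, end in find_log_gaps("host_events.log"):
        print(f"Possible log deletion or outage: no events between {start} and {end}")
```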

Joe Vest:
Well, so you've got two sides of this: you've got to have the cats and the cat herders. So you've got to have really, really sharp technical individuals who understand the things that are being attacked. If you're going after, say, a Windows environment or a Linux environment, you've really got to have an understanding of those components. And when I say understanding, I don't mean, oh, yeah, I know how to inject something into memory. I mean, I need to understand the relationships, how Windows works, how the communication paths work. All of these components, those relationships.

Joe Vest:
And then once you have that basic usage understanding, you can look for abuse cases. Because another thing I've started to realize is the word malware. We call this stuff malicious, but I don't want to say there's malware versus software; it's all software, and the person who wrote it just has different intent. I'll go back to PsExec as an example. You have this tool written by Sysinternals that does what it was designed to do through the APIs provided by Microsoft. So, I can use it for good or evil, and it seems silly to call it one or the other. But it really comes down to a fundamental understanding of the architecture of these components. So, understanding C# and really looking at Windows internals is really important, because again, what's the difference between loading, say, a DLL the normal way versus loading a DLL reflectively in memory?

Joe Vest:
There are some behaviors which lend themselves toward understanding techniques. And that's where we start to focus, to say, "Okay, these are general categories of approaches that threats use toward their more malicious goals." So that means we've got to have people who understand those technical pieces. But we also have to have the cat herders, who really understand how to take these scenarios and design them, how to identify the gaps in our detection strategies, and work with our defenders to see whether they even have detection strategies. And I would argue, push forward to have a proactive approach: not just buy a bunch of tools, but actually understand why we're implementing things, where the limits are, what's good or bad.

Joe Vest:
Palantir has the Alerting and Detection Strategy framework, Palantir's ADS framework. I don't know if you've heard of this. I'm not saying it's the best one, but it's a framework that you can use for mapping out detection strategies. You start with a hypothesis and say, "I think I can detect a threat doing these things." You create boundaries, you create an understanding of what works and what does not work. And it's a really, really good way to model a detection at an atomic level, or a small level, whatever you put those bounds around. So, we need someone on the other side to design threat scenarios to be able to test that, and to know whether we need a vulnerability assessment or a penetration test: what sort of security assessment is needed to measure that thing?
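
(As a rough illustration of the hypothesis-first idea Joe describes, here is a hedged Python sketch of a detection-strategy record. The field names loosely paraphrase the kinds of sections Palantir's published framework covers rather than reproducing an official schema, and the PsExec-style lateral-movement example content is invented for illustration.)

```python
# Hedged sketch of a detection-strategy record: start with a hypothesis, then
# document context, blind spots, false positives, and how the detection is
# validated. Not an official ADS schema; field names are an approximation.
from dataclasses import dataclass, field

@dataclass
class DetectionStrategy:
    goal: str                      # what threat behavior this aims to detect
    hypothesis: str                # "I think I can detect a threat doing X"
    technical_context: str         # how the underlying technology behaves
    blind_spots: list[str] = field(default_factory=list)
    false_positives: list[str] = field(default_factory=list)
    validation: str = ""           # how threat emulation proves the alert fires

service_exec = DetectionStrategy(
    goal="Detect remote service creation used for lateral movement",
    hypothesis="Service installs initiated from remote hosts generate events we can alert on",
    technical_context="PsExec-style tools create a service on the target to run a payload",
    blind_spots=["Execution via WMI or WinRM instead of a service"],
    false_positives=["Legitimate software-deployment tools installing services"],
    validation="Red team runs the technique in a controlled engagement and confirms the alert fires",
)
print(service_exec.goal)
```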

Joe Vest:
Again, I don't want to discount the need for the technical side. It's really, really important. But there are a lot of people, especially now, as I see security going through a new phase. Maybe because I've done this a while, I see new people coming in for whom all these tools and techniques from, say, the 2010-plus-or-minus years, when things were really starting to explode, now just exist. So they're coming into the space, and that's just part of it; it existed already. So now they're bringing a whole new set of skills and understanding to those. So you've got to be able to take those team members and push them in the right direction to test those detection hypotheses.

Mike Gruen:
I think it's interesting, all the parallels with technology overall. So, I started as a software developer, and all of the lessons that we learned, like, "Hey, you know what, if we have our QA team working more closely with the software developers, it creates less of an adversarial relationship and we get more done, we're on the same team, blah, blah, blah." We're learning that again with security. Same thing with tools. Yes, as a developer, I have all these things at my disposal now that I didn't have when I started, and that allows me to do more. But I think as people are coming into the industry, it does allow an entry-level person to do a heck of a lot more.

Mike Gruen:
But at the same time, you do need to start training them and skilling them on some of the things that maybe are older techniques and older skills that you just don't get through the IDE or through whatever tools if they really want to progress in their career, and it's the exact same thing in security. Great, you have these tools, you can do a lot more on day one. But if you really want to understand what's going on, you need to look behind that curtain. You need to go beyond what you're seeing. It's just funny how we just keep learning the same... It's just the same things over and over again from industry to industry.

Joe Vest:
Yeah, I thought the same. Like when I was starting off: "Oh, man, I'm figuring this out. I know what's going on. Those old-timers before me, they didn't know what's going on." And you know what? It's all come full circle, to where we can't assume any of that is the case. So everything is just different. But we as people are still making the same mistakes in security. We have new technologies that might enable things to be quicker, or faster, or different. But those fundamentals are still there. We have to understand the technology and how those technologies can be abused. And without the foundational knowledge, whether it's in actual architecture... When you look at cloud architecture, what's going on? How do those APIs interact, the software, the coding principles that actually build these things? If we don't have some of that, then we're just making stuff up.

Mike Gruen:
Well, the networking principles... Sorry to interrupt, but I think the ...

Joe Vest:
No, no, yeah.

Mike Gruen:
... of like, "Hey, you. So what you're in the cloud. You have so much more control. You can actually make sure that this machine and this machine, they're the only two that are allowed to talk." If this database is only expecting connections from one machine, then why would you allow connections from any other machine? You have so much more control.

Joseph Carson:
That's the challenge, though. One of the things in those scenarios is that most of those configuration setups and wizards have security off by default. It's open, open, open, open, and people don't like to go and have to change things. So we have these default settings. And that's why one of the things you'll see in the Verizon Data Breach Investigations Report is that misconfigurations and default settings are one of our biggest pain points in regards to organizations becoming victims of cybercrime. It should be the opposite. Everything should basically be private by default, with that principle of least privilege where you have to say explicitly what you want it to do. Otherwise, if it continues to be that default yes, yes, yes, people are not going to know what the right settings are or what they're specifically doing. So, for me it's let's get ...

Mike Gruen:
Or what they're leaving open.

Joseph Carson:
Exactly.

Joe Vest:
Yeah, yeah. Right. No, those are all important things that we have to look at. I mean, there are security fundamentals. Forget about even detection and response. If we want to minimize and make the threat's job harder, there are so many things that we do poorly. And one of my favorite examples is the client-server model. Clients talk to servers, not the other way around; clients only reach out to get stuff. What do we do in a Windows environment, Windows 10? Do we allow clients to talk to clients? All the time. So, yes, there are a lot of things involved in mitigating, but the Windows Firewall alone, just turning it on and preventing client-to-client communications, is a huge, huge roadblock, and it forces the threat actor to change.
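
(To make the client-to-client point concrete, here is a minimal, hypothetical Python sketch that checks whether common lateral-movement ports on peer workstations are reachable from a client. The host addresses and port list are assumptions for illustration; a real check would use your own inventory and go through whatever change-control and authorization your environment requires.)

```python
# Hypothetical quick check: from one workstation, can I reach the ports a
# threat would use to move laterally to other workstations? If host firewalls
# block workstation-to-workstation traffic, these connections should fail.
import socket

# Assumed example peers; substitute real workstation addresses from your inventory.
PEER_WORKSTATIONS = ["10.0.10.21", "10.0.10.22"]
LATERAL_MOVEMENT_PORTS = {445: "SMB", 135: "RPC", 3389: "RDP", 5985: "WinRM"}

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in PEER_WORKSTATIONS:
    for port, name in LATERAL_MOVEMENT_PORTS.items():
        state = "REACHABLE (client-to-client not blocked)" if port_open(host, port) else "blocked/filtered"
        print(f"{host}:{port} ({name}) -> {state}")
```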

Joe Vest:
They can't move laterally across the same space. Now they're forced to go up and down, up and down, and cross different privilege boundaries, and that gives us much better capabilities. But I still see the Windows Firewall turned off so often, because clients are not being treated as clients. They're treated as servers, for whatever reason. And I'm just going to say it: I'm going to blame our risk and compliance groups for this, because this is what they're supposed to be doing. They should actually be deriving true risk and compliance capabilities. There's a tremendous amount of power if our risk and compliance groups better understood what threats can and cannot do, and not just had the big bullet list, but could say, "Here's why we need this. And here's what's going on."

Joe Vest:
I say that because I used QA when I did application security testing many, many years ago. I realized no one cared about what I said. I said, "Oh, you got this SQL injection flaw and I can get remote code execution or whatever." But when I said this is a quality issue. Boom, straight to the ... I was like, "Oh, that's how this works?" I created relationships with the QA team to start to have my deficiencies rated as quality issues. Whether that was right or wrong. I was early in my career doing this. I was like, I found a loophole to get this stuff fixed. And it's all back to governance, risk, and compliance and those are powerful and boring things that can help a lot.

Mike Gruen:
Yeah, totally agree. I mean, we do the same thing. We treat security vulnerabilities like any other bug, because it's a quality problem, and it goes through the same exact process. For anyone out there doing application development and trying to figure out how to handle these things, that's one of the pieces of advice that was given to me really early on: just treat it like a quality problem. What's really the difference? A security vulnerability is just a different type of bug. Get it into that same exact workflow, and you'll see that it gets prioritized appropriately and fixed relatively quickly.

Joe Vest:
Oh, yeah, absolutely. So that's part of the skills: having someone on the team who recognizes how to get things done and push those things forward. That's a tough one. I definitely buy into that, because I spend a lot of my time working on that side.

Joseph Carson:
What about the reporting side? Reporting back: what are the skills, and how do you recommend reporting back to the business so they can actually take action? Because that's another major area where I find we're lacking.

Joe Vest:
So, look, you've always got to have some sort of, I say, actionable report, which means someone should be able to take action on it. I'm actually not a fan of providing risk scores in a report, because I don't think that I, as the analyst or the security technician, can give you a risk score. I actually prefer, and it never happens, or very rarely, to have my report brought to a risk team to do a risk analysis on it, to actually decide what is or is not important. Because what happens is I say, "Hey, this is high level, or this is critical." It's like, "Well, no, because we've got these other compensating controls," and then it's changed. And me as the security practitioner, whoever wrote this: I don't care what you call it, these are the facts that I'm giving you; you perform your risk score.

Joe Vest:
So, part of your reporting process should include a risk analysis afterward on those things to get them into your queue for fixing. That is another area where I think we as an industry could really step up our game to produce higher-quality output. But those reports, whether it's pen testing or red teaming, need to focus back on the original goal of why you ran this thing. I call it the so-what factor. So what, you found this flaw? So what, we're running outdated TLS? So what? It's important to actually perform that risk analysis on those individual items. But if you break them down into actionable things, then at least somebody should be able to take that and move forward.

Mike Gruen:
Is there some metric rather than risk score that you would advocate for like probability or something along the lines of... To give some idea because... I'll just let you answer the question rather than ...

Joe Vest:
I learned this from Chris Crowley. He's an instructor at SANS and does a lot of really great work on the security side, the SOC side of things. And he just has something simple, a 1, 2, 3 score, in terms of ease to fix. So, one would be: there's a known fix, maybe a security misconfiguration. It's something known and easy to fix that we just haven't done, through ignorance or laziness, because the fix is readily available. Two would be: there are known fixes, but there's some sort of control or modification that might have an impact we need to consider. So, it's a little bit higher level. And then three is: there is nothing. We can't fix this. There's no mitigation out there right now. So this is going to be really hard to fix. We've got to dive into this and understand it.

Joe Vest:
So it's in terms of ease of fixing. It's not even a matter of this flaw being more important than that one. Because if you play that game, and I give you 20 flaws, what does it matter? If I gave you 20 software bugs to fix, sure, there's some ranking in that. But you're still going to rack and stack those and work through them. You can also say, "Wow, we could probably knock out four of these in the next sprint." Let's knock out those really fast and easy. These others, we're going to need some resources to work through.

Mike Gruen:
So, putting on my product manager hat for a second, I'll tell you what my way of racking and stacking all of those is. There's three things I care about. Level of effort is one of them. So, it's the same thing. Is it easy, hard? I use a five-point scale. Doesn't really matter. Level of effort, urgency, is there some sort of deadline, timeframe, something related to this? Frequently in the security space that's not really the case. But sometimes maybe there's a contract that's dependent on addressing this particular vulnerability, so urgency. And then business value or some idea of risk, which doesn't necessarily come from the engineers who are figuring out the level of effort. It's coming from product, or coming from the business, coming from sales, coming from marketing to say, "This is the business value."

Mike Gruen:
And then you combine those three things. And then you can say, "I want to work on the lowest-effort, highest-value things, and then break ties based on urgency." It works phenomenally well, because what ends up happening if you focus too much on the easy stuff is the big stuff never gets done. And if you focus too much on the big stuff, then the easy stuff never gets done. So, it's a nice way of coming up with a way to rack and stack things of various sizes. And then, if you are doing agile and fitting things into sprints, you can say, "Oh, well, we don't really have room for all of these big things, but we can sprinkle in some of the smaller ones," and stuff like that.
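
(A tiny Python sketch of the rack-and-stack Mike describes, sorting findings by lowest effort and highest value with urgency as the tiebreaker. The example findings, the five-point scales, and the value-over-effort ratio are illustrative assumptions, not a prescribed formula.)

```python
# Sketch: sort findings by lowest effort / highest value, breaking ties on urgency.
# Scores are assumed 1-5 scales, as in Mike's description; the data is made up.
findings = [
    {"name": "Enable host firewall client-to-client block", "effort": 2, "value": 5, "urgency": 3},
    {"name": "Rebuild legacy auth flow", "effort": 5, "value": 5, "urgency": 2},
    {"name": "Rotate exposed VPN credential", "effort": 1, "value": 4, "urgency": 5},
]

def priority(item):
    # Higher value per unit of effort ranks first; urgency breaks ties.
    return (-(item["value"] / item["effort"]), -item["urgency"])

for item in sorted(findings, key=priority):
    print(f'{item["name"]}: value/effort={item["value"] / item["effort"]:.1f}, urgency={item["urgency"]}')
```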

Joe Vest:
Oh, yeah. Those are great for doing flaw and bug hunting and bug-fix kinds of things. The one thing I see as a gap, and I don't see this happening very often, is actually measuring your detection strategy capability: to say, "I landed on this box, I moved to here, I did this," and show you a story of what I did. Each one of those should be analyzed: where were our detection failures? Why? What is missing from our strategy? I call that a detection opportunity. So I like to include detection opportunities in my report, to show here are the techniques that were used, and here are the areas of detection.

Joe Vest:
Now, if you're a third party, you're going to have less understanding of the organization's detection strategies and what they should be doing. But if you're part of their internal team, you can actually work through this with the defenders and create a solid report to show: here are the areas we could have worked on. Why did we miss this? Why were you able to move from A to B with no resistance through preventative controls and no detection through our detective controls? That's a huge gap that I see, and that's our next level. I mean, we have really good processes for bug hunting and flaw finding; pen testing ... vulnerability assessments work great. But we need to move forward on those, into the threat side.
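
(As an illustration of what "detection opportunity" entries in a report might look like, here is a hedged Python sketch. The technique names, the MITRE ATT&CK IDs attached to them, and the outcomes are examples chosen for illustration, not findings from any real engagement or a format Joe prescribes.)

```python
# Sketch of report entries that tell the "detection story": for each technique
# used during the engagement, was it prevented, detected, or missed, and what
# detection opportunity does the gap suggest? Content here is illustrative.
detection_opportunities = [
    {
        "technique": "Lateral movement via SMB admin shares (ATT&CK T1021.002)",
        "outcome": "missed",
        "opportunity": "Alert on remote service creation originating from non-admin workstations",
    },
    {
        "technique": "Clearing Windows event logs (ATT&CK T1070.001)",
        "outcome": "detected",
        "opportunity": "Existing alert fired; confirm the response playbook was followed",
    },
]

for entry in detection_opportunities:
    print(f'{entry["technique"]}: {entry["outcome"].upper()} - {entry["opportunity"]}')
```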

Joseph Carson:
Absolutely, the fire alarm test. It's making sure that when smoke is happening, the fire alarms go off. As we move towards the end and the summary, I'd just like to get your personal lessons learned from all of this. If there is one thing you could say we should focus on to really improve the area, what is that lesson learned? What would be the one thing that we take away from this?

Joe Vest:
We need to build out our detection story. We need a strong detection story. Our detection and response capabilities are what's lacking in the space right now. We've got a lot of tools that are supposed to stop all the bad things. But we really need to have... What I always say is, if I could give my analysts a reason to look, most analysts can do the incident response and go find and identify the bad. So, it's that gap of how do I cut through the noise? We need a really, really good detection and response capability, a detection story.

Joe Vest:
Using frameworks like Palantir's Alerting and Detection Strategy framework is a great way to move forward on that. I'm also going to throw a mention out to Jared Atkinson: go follow the blog posts he's done. Just look at what he's done. He's with SpecterOps. He does an entire series on detection engineering, the approach of very granular detections versus very wide detections, the pros and cons of each, and how to break down a detection hypothesis into its components. So you can decompose, say, a technique into something that you can actually deal with. I may not have 100% coverage, but I can get 30% coverage, and 30% is huge on some of these techniques as you start to build these up. He has a really, really good series about that. So for me as a red teamer, as a threat emulator, my goal is not to hack and crack things. It is to enable your detection and response capabilities to be more effective.
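
(A rough sketch of the partial-coverage idea: decompose a technique into the variations you care about and count how many have at least one validated detection. The technique, its variations, and the statuses below are invented placeholders for illustration; this is not Jared Atkinson's methodology, just the arithmetic behind "30% coverage is still huge.")

```python
# Sketch: estimate detection coverage for one technique by decomposing it into
# variations and checking which ones a validated detection currently covers.
technique = "Credential theft from LSASS"
variations = {
    "Direct memory read of the LSASS process": True,   # detection validated
    "Dump via MiniDumpWriteDump": False,               # no detection yet
    "Dump from a duplicated process handle": False,    # no detection yet
}

covered = sum(variations.values())
coverage = covered / len(variations)
print(f"{technique}: {covered}/{len(variations)} variations covered ({coverage:.0%})")
```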

Joseph Carson:
Absolutely. And just as a summary, can you remind us of the name of the book and where people, the audience, can go and get it ... from Amazon?

Joe Vest:
Yep, so I wrote the book along with James Tubberville. You can find it on Amazon: Red Team Development and Operations. And feel free to reach out to me on Twitter @JoeVest. I don't have any fancy hacker name or anything like that. Anybody can feel free to reach out, connect with me, send me messages. Especially with my new role, I'm much more involved in the community than I have been in the past, so it's been good.

Joseph Carson:
Awesome. And you do have a website that also has a lot of resources as well...

Joe Vest:
Yeah, there is a companion website, redteam.guide, where I've put some extra material: some definitions, a few templates and such. So just a few resources to go along with the book.

Mike Gruen:
Awesome.

Joseph Carson:
We'll make sure we put that into the show notes as well. So, Joe, it's been fantastic having you on the show. I read the book when it first came out. I'm a big fan of books, so when books come out, I order them. Sometimes I don't get to them as quickly as I would like, because I've still got a large backlog of books I need to go through. But when something is of interest to me, I do go through it, and I found your book fantastic, really insightful, and something that, to be honest, I'm so happy is available, because it's something I wish I'd had five, six years ago to give to some of the companies and organizations to really help educate them. Because we're in a space where we definitely do need education. So, many thanks for being on the show. It's been fantastic.

Joe Vest:
Oh, thanks. It's been great.

Joseph Carson:
Really looking forward to hopefully getting to see you in person at some of the events in the upcoming future. For the audience-

Joe Vest:
Yes ...

Joseph Carson:
Yeah, absolutely. So, for the audience, many thanks for tuning in. Hopefully this has been interesting. Definitely get a copy of Joe's book. It will definitely help you know whether you're mature enough to go down the different paths of pen testing, red teaming, and so forth. And really make sure that you focus on that response, incident response, and detection capability. So, stay safe. Tune in every two weeks to 401 Access Denied, and we look forward to hopefully having Joe back on the show again in the future to discuss further. Thank you very much, everyone.