Craig Nelson on Simulating Attacks with Microsoft’s Red Team
Nic Fillingham: Since 2005, Blue Hat has been where the security research community and Microsoft come together as peers.
Wendy Zenone: To debate and discuss, share and challenge, celebrate and learn.
Nic Fillingham: On The BlueHat Podcast, join me, Nic Fillingham.
Wendy Zenone: And me, Wendy Zenone, for conversations with researchers, responders, and industry leaders both inside and outside of Microsoft.
Nic Fillingham: Working to secure the planet's technology and create a safer world for all.
Wendy Zenone: And now, on with The BlueHat Podcast. [ Music ] Welcome to The BlueHat Podcast. We have Craig Nelson here with us today. Craig, thank you for joining us. Welcome to the show. If you could introduce yourself and let us know who you are, that would be wonderful.
Craig Nelson: Hey there Wendy. My name's Craig Nelson and I lead the Microsoft Red Team. The mission of the Red Team is to simulate end-to-end attacks against Microsoft's internal infrastructure and our cloud services, so we can help our engineers and our Microsoft defenders experience attacks against the infrastructure before they're performed by real threat actors, and as a way to protect Microsoft's internal services, which in turn protect customer data that's stored on the Microsoft cloud. So this means that the Red Team is conducting real attacks and real simulations based upon what we see in threat intelligence, and then we, as effectively lawful good threat actors, project and predict what future attacks are going to look like. Before we begin, I just wanted to express my gratitude for the chance to participate in The BlueHat Podcast and discuss our work publicly, because given the sensitive nature of some of this stuff, I have to be really cautious about what we share, because we know real threat actors listen to podcasts like this, and they do their reconnaissance against their targets to gain insights into how Microsoft does its internal security. But it's really important that I'm here on this podcast, because I want to make sure that the community understands how we approach red teaming, maybe some lessons that our customers want to adopt, and then the types of vulnerabilities that we see as a Red Team that are really important to get ahead of. First, I noticed that Shawn Hernan, who manages our Design Time Security and Penetration Team for Azure, and Tom Gallagher, who heads the Microsoft Security Response Center, were also recently featured on The BlueHat Podcast. Those are really important episodes for your listeners to reflect on, because, you know, that content is really pertinent to how we approach red teaming. It aligns really well. So there are joint efforts to protect Microsoft's products and services that span my team, Shawn's team, and Tom's team. Now, Shawn's team focuses on design time security, which includes penetration testing and reviewing systems in Azure during their initial design, and when significant, you know, changes are made throughout their lifecycle. Other engineering teams at Microsoft, such as Office and Windows, also have similar design time security teams, so they exist across the company. Tom's team is in the Microsoft Security Response Center, and they handle the security reports from external security researchers, and the other areas where we get security findings, so they can address and resolve the vulnerabilities that are publicly known very quickly. Now, Tom's team, MSRC, is a central team. Red teaming examines systems from an adversarial standpoint. So we execute these end-to-end attacks, and we're a central team too, so we can cross the Microsoft technologies that make up the cloud, such as Azure, Identity, and Office, right? These technologies connect together, and from an external adversary perspective, you kind of look at them as one, versus, you know, separate things. In red teaming, you know, we call the space where all these technologies connect gray space, and this helps us find new issues, but most importantly, helps us anticipate how real attackers will target Microsoft, and these attackers are not limited by technical boundaries.
Wendy Zenone: Craig, could you give me a real-world example of how to think of red teaming versus design time security?
Craig Nelson: So, to use an analogy, consider a cloud provider as a bank, right, and the bank has a vault that contains valuable items. There is an engineering team that builds the vault and consults with the design time security teams, like penetration testing, to ensure that the vault is secure, knowing that there's only so many ways the attacker's going to try to breach that vault at the current time of design. Now, once that vault is operational, it's put into use by customers, and customers start storing their stuff in the vault, in the bank. So over time, the design of that vault is going to be challenged by attackers who only get better, and the MSRC team is focused on doing their own testing, understanding what vulnerabilities are emerging with the vault, and trying to implement fixes as fast as possible. Now, this is where the Red Team kicks in. So the Red Team acts like a bank robber aiming to steal cash from the vault. You know, unfortunately for the Red Team, a prerequisite to break into the vault is to get into the bank in the first place. So it's a bit of a hard problem, and there's a lot of terrain that presents a challenge. That bank could be located next to a police station. It could have hardened doors and alarms. There's probably multiple detection systems in place. So the Red Team has to take time to strategize how to enter the bank, understand the employee routines and how they work, locate the surveillance systems and the other types of technology that are in use, then break into the bank, access the vault, secure the cash and valuables, and then escape undetected. So the Red Team challenges every aspect from start to finish and enhances the system's robustness, because we can give feedback to the engineering teams to make it harder for future threat actors to be successful, so they can't rob the bank. Now, from a defender perspective, the goal is to make it too expensive or risky for the bank robbers, in this case the Red Team, to be successful, and you have to build enough terrain around that vault. So it's not only a problem of having to beat the vault, right, which attackers can spend a lot of time looking at; you have to look at, you know, all of the terrain around the vault, so it's really hard to carry out that robbery. From an attacker perspective, it's all about being quick, surgical, having a lot of information and enough situational knowledge and awareness, and tools, to adapt if something goes wrong. So I'm glad that you've been talking to Tom, talking to Shawn, because you're really giving the community a good holistic view of how we approach security at Microsoft.
Nic Fillingham: So - so which team is the van full of the Sneakers cast?
Wendy Zenone: Right, [laughter]?
Nic Fillingham: Is that - is that - is that you?
Craig Nelson: All of them. All of them.
Nic Fillingham: Is that Shawn? Is that Tom? Are you all in the van together?
Craig Nelson: That's a great movie. I'm not going to take.
Nic Fillingham: Who's Dan Aykroyd?
Craig Nelson: [Laughter], I guess that would probably be me.
Wendy Zenone: [Laughter].
Nic Fillingham: Awesome. Alright. We're talking to Dan Aykroyd, that's great.
Craig Nelson: Well. So I started at Microsoft about 18 years ago, and my journey at Microsoft started in the early days of the cloud. I was among the first security engineers to develop tools that scale and automate security investigations, and throughout my career, I've always had engineering or management roles in both defensive and offensive security across pretty much all of Microsoft's services, like Xbox, Office, Azure, and the Microsoft Security Response Center. I've also been lucky, because when significant security incidents happen at Microsoft or across the industry, I'm often asked to participate in them, you know, the Red Team role is to really add that adversarial perspective to an investigation. So I've always had, like, a very unique vantage point on the reality of how real attackers work and what matters most to defend against. You know, reflecting on my career, I realize that so much of it has been kind of shaped by the dark forces and the dark side of the internet, and you know, I feel very fortunate right now that I can use that experience to help safeguard Microsoft infrastructure, and then coach the next generation of lawful good attackers and our defenders.
Wendy Zenone: So I know that there is some thought around the phrase Red Team, like I know it's offensive security, but Red Team is a subset of offensive security, right? So you would say Shawn is part of offensive security, but you are a specific subset which is the Red Team. You're breaking into the bank. And, is that - was that a fair description?
Craig Nelson: Absolutely. And it's really important to not, you know, over-focus on Red Team. When Red Team finds something, it could be too late. That's when the risk is already out there, and it's a lot more expensive to get it fixed. So if you can find things earlier in the development lifecycle, like secure at construction, that is the best case. So it's really important, when you think about offensive security, really to focus on doing security reviews and threat models as early as possible before the code is shipped. And Shawn's team, and pentest teams across the company play a really critical role in that.
Wendy Zenone: How did Craig find his way to the Red Team at Microsoft? What was your journey? Did you always want to go into security? Like, what was that pivotal moment, you're like, okay, this is it. This is my - my calling?
Craig Nelson: Yeah. In my journey, I was fortunate to be introduced to computers just at the right time, starting in the early 90s. Now, this is an era where I was able to dive deep into bulletin board systems and witness those early stages of the internet, just as things started to form. At the time, there was no web, there was some basic email, and it was really hard to even get connected, through phone lines and modems, and it was all a very cryptic and, you know, terminal-based experience. Now, you know, back then you had to pay to call long distance, which was also a constant challenge to work through. Now, despite the technology being somewhat cumbersome on your early PCs, you know, I really appreciated the challenge of fighting for memory management, and getting your graphics card and sound card to work optimally to play videogames. But I think from a security perspective, some of my earliest memories of getting deep into security revolve around learning assembly language so I could remove copy protection from videogames, of course, you know, so I didn't have to carry around a manual. You know, back then, when you started the game, you had to tell the game, you know, what was word X on page Y to be able to play. It was a very rudimentary form of copy protection to make sure that you purchased the game and you had the manual that was included with it. So, after that I attended university in the mid-90s, and again, it was just a perfect time, because I was always really deep into cryptography and mathematics, and around that time there was a period known as the crypto wars. And interestingly, and it's kind of surreal to think about this now, given how much things have evolved since the 90s, the export of many forms of encryption outside of the U.S.A. was illegal. So, you know, this era saw a lot of very significant technological innovation, and it brought things like asymmetric cryptography into the mainstream. There was a software package called PGP, right, that was kind of on the forefront of the crypto war conversation. And then an early internet browser called Netscape took that cryptography and popularized it to really support the early days of internet commerce. So I was really deep into understanding how this worked, and just kind of being part of that era, kind of watching it. Now, reflecting on those days, I can't tell you how many countless hours I spent devouring books on cryptography, like Applied Cryptography, and experimenting with, you know, my own versions of hashing algorithms and stuff like that. And little did I know that 30 years later, these are the technologies that would underpin something as transformative as cryptocurrency, and just so much of the innovation and security that we take for granted today.
Nic Fillingham: And was that - was that interest that you developed or found for cryptography, was that because you love the math, or you love the idea of sort of using cryptography as a - as a mechanism for, you know, security and protection? Was it the oth - other side of the coin? You were like, hey, how can we reverse engineer this and deconstruct it? Was it all of those things, or was there sort of one bit that sort of, you know, really, really sort of got you going?
Craig Nelson: So the crypto space of the 90s is very similar to the game that I experience today with, you know, Red Team versus Blue Team, or attackers versus defenders. In the crypto world, there were engineers who were building algorithms and figuring out how to share keys to unlock data that's secured by those algorithms, you know, and building these algorithms in a way that they could be resilient against many attack techniques, from brute forcing, to frequency analysis, to side channel attacks. And then, on the other side of the coin, right, from the attacker's side of the coin, you had attackers that were trying to break the system to make it better, and engaging with the security research community to understand the algorithm and what types of attacks it was vulnerable to. So it was really the same dynamic that I see today with red teaming and blue teaming, and I've always appreciated the game of cat and mouse, and you know, how it makes the system better over time.
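For listeners who want a concrete picture of the kind of elementary attack Craig mentions, here is a minimal frequency-analysis sketch in Python. The cipher and message are toy examples chosen purely for illustration; real cryptanalysis of the algorithms from the crypto wars era is vastly more sophisticated.

```python
# Illustrative only: recovering a Caesar-cipher shift via simple frequency analysis.
from collections import Counter
import string

def caesar_encrypt(plaintext: str, shift: int) -> str:
    """Shift each letter forward by `shift` positions (a toy cipher)."""
    out = []
    for ch in plaintext.lower():
        if ch in string.ascii_lowercase:
            out.append(chr((ord(ch) - ord('a') + shift) % 26 + ord('a')))
        else:
            out.append(ch)
    return "".join(out)

def guess_shift(ciphertext: str) -> int:
    """Assume the most frequent ciphertext letter maps to plaintext 'e'."""
    letters = [c for c in ciphertext if c in string.ascii_lowercase]
    most_common = Counter(letters).most_common(1)[0][0]
    return (ord(most_common) - ord('e')) % 26

message = "the attacker only has to be right once but the defender has to be right every time"
ciphertext = caesar_encrypt(message, 7)
print(guess_shift(ciphertext))  # prints 7 for this text, because 'e' dominates
```

The same cat-and-mouse shape shows up in the Red Team versus Blue Team dynamic: the attack exploits a statistical weakness the designer did not neutralize, and the fix is a stronger design rather than a patch to one message.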
Nic Fillingham: So speaking of red teaming, you've talked a little bit about it and - and what your team does, and how the - the three-legged stool sort of comes together. How is red teaming at Microsoft different from, perhaps, red teaming across the industry? You know, Microsoft obviously being an incredibly large organization, incredibly complicated in terms of having, you know, so many different products and technologies all at play and interacting. What - what would you say makes red teaming at Microsoft perhaps different from other - other entities, other technologies out there?
Craig Nelson: Red teaming at Microsoft is unique in many ways. So first, the scale and complexity of the systems that we handle is very intimidating. We're talking about millions of physical and virtual machines, networks, and all of the technologies that, you know, make the cloud just fundamentally work. So we have to, you know, start with the target, and then meticulously work back to understand all of the possible paths that could be used to compromise that target. We have to obtain attack positions to bring that target within reach, and it's often a very complex chain that we call an attack path, that gives us a map to understand what we have to do in order to get there. Next, red teaming at Microsoft extends far beyond the technical work and writing reports. So I view red teaming as a dual focus role, with 50% on the technical aspects, and then 50% on how to influence people in the system. That system is, again, very complex, and that systemic thinking is absolutely essential. So we have to not only identify issues, but we have to then rectify those issues on a massive scale. And in environments of that scale, problems don't occur just in isolation. It's often that, you know, there's a problem, and all of a sudden, it happens thousands or hundreds of thousands of times. So focusing on the root cause is really, really important. That said, instead of just aiming to secure specific points of the system, which might offer very temporary solutions, we have to understand the underlying systems and dependencies that are in play, and focus on solving vulnerabilities at a lower level. So sometimes we have to look at factors more on the human side, such as how security is planned, the system development process, how teams are incentivized to prioritize speed over security, and then the technology choices that are made in the construction of a system. Just focusing on immediate goals, such as secure X, again, can lead to very temporary fixes, but those problems are going to resurface elsewhere. So much of red teaming is about grasping, like, the broader system dynamics to create more durable and effective solutions at scale. The third and final way that I consider red teaming a bit different at Microsoft is that we have to invest considerable time in developing tools to enhance the productivity of our breach operators, and one of those tools, for example, is the construction of a security graph, so we can understand the architecture of Azure from an adversarial perspective. So, you know, we can vacuum up a lot of data, we then stitch it together in a way that makes sense to our engineers, our security engineers, our Red Team engineers, and then those graphs enable us to understand potential breach paths, and then conduct analysis, such as center of gravity analysis, that identifies the core elements of the system that would give the Red Team the most leverage and impact.
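To make the security-graph idea concrete, here is a minimal sketch in Python using the networkx library. The node names, edges, and the "center of gravity" heuristic are hypothetical stand-ins for illustration, not a description of Microsoft's actual graph or tooling.

```python
# A minimal sketch of a security graph and attack-path analysis (illustrative only).
import networkx as nx

g = nx.DiGraph()
# An edge means "an attacker holding the source can reach the destination".
g.add_edges_from([
    ("phished-workstation", "dev-credentials"),
    ("dev-credentials", "build-pipeline"),
    ("build-pipeline", "service-principal"),
    ("service-principal", "key-vault"),
    ("dev-credentials", "jump-box"),
    ("jump-box", "service-principal"),
])

entry = "phished-workstation"
target = "key-vault"

# Enumerate candidate attack paths from an assumed entry point to the target.
for path in nx.all_simple_paths(g, entry, target):
    print(" -> ".join(path))

# A crude "center of gravity" proxy: nodes that sit on the most paths between assets.
# Hardening these gives defenders the most leverage per fix.
centrality = nx.betweenness_centrality(g)
print(max(centrality, key=centrality.get))  # e.g. "service-principal"
```

The value of the graph is exactly what Craig describes: instead of securing individual points, you can see which choke points appear on many breach paths and fix the system at that lower level.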
Nic Fillingham: Alright, great. Actually - actually I have a quick, quick follow-up if that's - if that's okay.
Craig Nelson: Sure.
Wendy Zenone: Yeah.
Nic Fillingham: And so, that influence that you talk about, and in, you know, working with these amazing engineers, and consulting with them, and - and - and working with them to ensure that sort of best practices are baked into the - the - the product that is being built, what are some things that you've learned over the years that you want to share with the audience here for how - how do - how do you have that kind of conversation? Because I could imagine that, you know, at times it could be a little challenging to try and sort of, you know, convince an engineer, an individual engineer, a group of engineers, an organization of engineers to sort of change the way that they're doing things based on the - the outcome of - of red teaming or pen testing.
Craig Nelson: Yeah, so in our role as an internal adversary, we effectively highlight problems that may slow down product development and most certainly shift the priorities of engineering teams. You know, I find it crucial that Red Team operations tell a compelling story, an overarching narrative, and then zoom down into technical details. So when engineers hear this, they can use it to justify their actions. And I also find engineers sharing the story with others to justify, you know, their new engineering initiatives or design choices that might take some more time to do, but it's done in the spirit of security, which of course makes it a good thing. So, I'm very surprised, sometimes, by how prominently Red Team findings and stories are told at engineering reviews, sometimes not by my team. It's often just by engineers who are using a story that they heard, that they understand, to advocate for security decisions and changes on their team. And that's a really good cultural thing, because it really shows the strong emphasis on security as a top priority that's reinforced by our CEO saying security above all else. So we, as a Red Team, have to create that dynamic, and we have to give enough information to engineers so they can use Red Team findings to influence decisions that are made every day.
Wendy Zenone: I can speak from experience, and at a high level, with the Strike Program, which for those listening, is an internal security awareness and training program. I have had folks come to us who have interacted with Craig and team, and the result and the sentiment from these teams, they're like, hey, you know, we clearly had interactions that we need to learn from and do something differently. So these organizations have come to us and we've worked with them to create training and so on, so they could spread that information to their team. So the positivity that comes from your interactions, I mean, I've worked at different companies and it's definitely seen in a different light, but here at Microsoft, it's definitely a community, it's a team, it's a growth mindset within everyone. That was one of the first interactions I've seen of that sort. Usually people are very, like, oh man, you know, [laughter]. But this was like, okay, we need to do something different, and they do enact change, which is refreshing from a security perspective. I appreciate that. And my question to you is, what leads to a breach? What top three things lead to a breach?
Craig Nelson: I will try to limit myself to three, but let me start with number one, which is the unknown attack surface. So it is critical for teams to understand attack surface from multiple perspectives. The first perspective is clearly from the internet, and that includes the networks, the applications behind ports that are exposed, the cloud provider that is in use, and any VPN or remote access to systems and management software that may be in use. And then you not only have to understand it from that internet perspective, you have to understand it from internal perspectives as well. You have to assume that threat actors will figure out how to obtain an attack position on internal networks from an attack like a spear phish. Or a threat actor might be able to get an attack position on a server-related network by tampering with CI/CD pipelines and getting their code running adjacent to systems that store keys. You know, how far they can go without hitting an isolation or other hard boundary is a very important question to be asking, based upon all of the attack positions that you anticipate. So I can't stress enough, knowing the attack surface from multiple perspectives is critical, and I often refer to this with our defenders as a practice of attack position awareness. You need to know all of the potential areas of vulnerability where an attacker can emerge and take action on their objective, and how you would secure against that. Next, credentials are critical. Credentials we find stored in source, on developer workstations, exposed in logs, and more. This is a hard problem to solve, and internally we've come a long, long way, but we still see this externally. So credentials are just a necessity for system-to-system communication, and it's often, like, shared secrets or other identities. And you know, all of the technologies that are either on prem or within the cloud are connected with these credentials, and if an attacker gets ahold of them, it allows them to obtain very privileged attack positions as well as unlock lateral movement. The next theme is poor role-based access control. So this is permissions assigned to principals, and understanding their power. So much of the cloud orients around APIs and service principals, where permissions are explicitly granted by administrators, and developers might sometimes grant more permissions than are absolutely needed, and this gives attackers an easier path to their objective. So role-based access control is the thing that you have to ensure you do correctly to limit lateral movement and block attackers from getting high-privileged attack positions. And the fourth, and, you know, back to where we kicked this podcast off, is not doing design-time security, threat modeling, and other design-time reviews. That means when a red team finds something, the risk is already out there, and it's much more expensive to fix. It's much easier to find and fix earlier in the lifecycle. So, you know, never assume that a focused adversary won't find a security vulnerability to exploit. A lot of these points that are exploited can be found in threat modeling and architecture diagrams, and then from a defender perspective, look at where the attacker can achieve a high-value attack position, and then work back from there to make sure there is enough terrain to block a threat actor from getting there. So for example, applying conditional access to identities, or more network isolation.
So those are the top four critical points that I urge all the listeners to immediately implement to fortify their security, and don't just reflect on these things. You've got to take action, you've got to do the design-time reviews, and then red team analysis as well.
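As one small, concrete illustration of the credentials problem Craig describes, here is a naive secret-scanning sketch in Python. The patterns and file handling are deliberately simplistic and illustrative; production secret scanners use far richer rule sets, entropy checks, and allow-lists.

```python
# Illustrative only: flag lines in a source tree that look like hard-coded credentials.
import re
import sys
from pathlib import Path

SUSPECT_PATTERNS = [
    re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
    re.compile(r"AccountKey=[A-Za-z0-9+/=]{40,}"),          # storage-style connection strings
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),  # embedded private keys
]

def scan(root: str) -> None:
    """Walk a directory tree and print lines that look like embedded secrets."""
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            if any(p.search(line) for p in SUSPECT_PATTERNS):
                print(f"{path}:{lineno}: possible secret: {line.strip()[:80]}")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")
```

Running something like this over repositories, build logs, and workstation file shares is one cheap way to surface the shared secrets that unlock the lateral movement Craig warns about, before an attacker finds them first.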
Wendy Zenone: And I guess this is where it leads into security culture. The thought of security design reviews, for some engineers, is like, oh man, you know, that's the next step, that's another step. But it's promoting the mindset of, like, this is what needs to happen, and I think that's definitely a goal for most companies, to instill that, as in, this is just the way it needs to be, instead of this is an extra step that takes you further from meeting your product goal or whatever.
Craig Nelson: Yeah. I would say it's impossible to win in this game if you do not do design-time security.
Nic Fillingham: And Craig, those, you know, three plus one, you know, one, two, three and then zero, I guess, in terms of design-time security. It sounds like those haven't changed. It sounds like those are pretty consistent, at least from the, you know, last 10 years or so with the - the - 10, 15 years perhaps, if you sort of think about, you know, APIs and the growth of - of - of the internet. Would you agree with that? Like, are these principles, like, nothing you said there was like, oh the number three thing was LLMs and their ability to go and understand what our coffee order is at Starbucks and pretend to order on our behalf. Like, the stuff you talked about seems very fundamental and consistent from the last however many half-decades.
Craig Nelson: Yeah. Yeah, we've been investing in the Security Development Lifecycle for over 20 years, and, you know, Michael Howard has a fantastic book that's probably two decades old about the importance of it and how to do it. Now, the technology has certainly changed, but the essentials really kind of haven't. If you boil down so much of the best practices, it's really focusing on hygiene around the foundational aspects of computing infrastructure. That really goes into, you know, attack surface, credential management, hygiene, managing vulnerabilities, and you're right, that stuff hasn't changed for a really long time. Just the technology has.
Nic Fillingham: Is that a good thing or a bad thing? In the sense that, you know, I'm trying to think, is it good because, you know, you don't have to go and learn some brand new thing that's sprung up overnight, it's really about wrapping your head around the fundamentals and driving down and getting better at them? Or is it sort of cause for concern, because here we are, 20 years later, and we're still talking about it?
Craig Nelson: Yeah, I mean, from - from my observation, you know, just a lot of - a lot of breaches come down to the reality that folks think that an attacker won't find something, right? But when you have a focused attacker who understands the technology that you're using, because there's - you know, there's technology that's pretty easy to obtain and test. You know, anyone can spin up subscriptions in Azure to understand, you know, how things work in Azure, right, as well as any cloud provider. There's a lot of assumption that, you know, an attacker's not going to find something. But the attacker will, given enough time, or enough luck. A lot of those things orient around just maintaining a very strong hygiene around the essentials.
Nic Fillingham: Yeah. As much as you can, and appreciating that you probably can't talk a lot about it, we'd love to know, what are you and your team working on now, or sort of thinking about in the near future? How much of AI, and LLMs, and securing AI systems is your focus area now, or how are you guys pivoting towards that? You've probably been working on it for a long time. We just sort of think of it as new now. It's probably been ingrained into how you guys think about red teaming and securing technology for a while. But yeah, what can you share about what you guys are working on now, and what you're sort of, you know, coming up to focus on in the near future?
Craig Nelson: So, the Microsoft Red Team is experimenting with AI and LLMs, and I mentioned earlier that we build security graphs to help give our Red Team the ability to understand the environment. These graphs are absolutely critical to the future AI systems, which also have to have an understanding of a system to be effective. I look at the graph as a base layer, and then you want to stack AIs on top of it. So we can use multiple AIs that all have different skills and perspectives, and then write code to bridge them so you can understand what decisions to make. At the very highest layer of that stack is code that executes automated actions. So this code would then, you know, run scripts that are generated by AI, and it would allow engineering teams and Red Teams to operate continuously, and that's really important as a way to monitor for regression on known attack techniques. So there's a lot of opportunity for AI, and I expect a lot of evolution and innovation, you know, in this space, but that does not replace the need for human creativity and what is needed to run a successful Red Team, especially because the AI is only going to understand patterns that are already known. We have to unlock new things, and that's what takes human creativity. So for new people who are getting into red teaming, or aspire to get into red teaming, I definitely recommend pursuing the idea of security graphs, you know, how AI intersects with that, and then chaining them together programmatically, because this is going to be the future of red teaming at scale. Internal red teams are going to do it, but I'd also be surprised if external threat actors, real threat actors, don't do the same.
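Here is a hedged sketch of that layering in Python: a graph base layer, an AI reasoning layer, and an execution layer for generated checks. The graph contents are hypothetical, and ask_model is a stand-in for whatever LLM client would actually be used; this illustrates the idea, not Microsoft's tooling.

```python
# Illustrative stack: graph base layer, AI reasoning layer, execution layer.
import networkx as nx

def build_graph() -> nx.DiGraph:
    """Base layer: a tiny hypothetical security graph."""
    g = nx.DiGraph()
    g.add_edges_from([
        ("internet", "public-api"),
        ("public-api", "service-principal"),
        ("service-principal", "key-vault"),
    ])
    return g

def ask_model(prompt: str) -> str:
    """Reasoning layer stub: in practice this would call an LLM with graph context."""
    return (f"# generated check for: {prompt}\n"
            "print('verify conditional access on service-principal')")

def run_generated_check(script: str) -> None:
    """Execution layer: run the generated check continuously to catch regressions."""
    exec(script)  # illustrative only; never exec unreviewed output in production

graph = build_graph()
path = nx.shortest_path(graph, "internet", "key-vault")
prompt = "Known attack path: " + " -> ".join(path)
run_generated_check(ask_model(prompt))
```

The point of the layering is the one Craig makes: the graph supplies the situational awareness, the models translate it into candidate actions, and the automation keeps re-testing known attack techniques, while humans stay responsible for finding the genuinely new ones.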
Wendy Zenone: I've heard the saying from, I believe your team, of attackers think of the graph whereas developers think of - of the list. Is that right? Tell me if I totally butchered that, but what does - what does that mean? Explain that, please.
Craig Nelson: Yeah, I mean that's an infamous John Lambert quote, who is a security fellow at Microsoft and runs our threat intelligence team. The quote, which is spot-on, is attackers think in graphs and defenders think in lists.
Wendy Zenone: Ah, defenders, got it.
Craig Nelson: Right? And until that changes, attackers will always win. So I think that is spot-on, and that's why I'm really excited to see, you know, graphs used more in computing and security processes, and I'm optimistic that over time more security processes will tie into the graph or consult the graph as they make decisions, to really understand the terrain and the risk.
Wendy Zenone: I know from my experience that you lead a lot of people, and a lot of people look up to you. I mean, I've talked to people, and they're like, oh, he's my mentor. I think you're everyone's mentor. Who do you look up to?
Craig Nelson: So I'm going to focus on my number one and only, and that is Rob Joyce, who is the former head of the National Security Agency's Tailored Access Operations team. I follow his content very closely, but I want to call out just one of my favorite presentations that is available on YouTube. I think it was from about 2015, 2016, and he was talking about the lessons that he learned as a nation-state adversary, and the mindset that is needed by defenders to be successful against nation-state threat actors. So if you haven't seen that YouTube video, I would highly recommend it. You know, search for Rob Joyce USENIX. U-S-E-N-I-X. It was a conference, again, back around 2015 or so.
Wendy Zenone: And aside from who you look up to, are there any podcasts that you refer to, that you listen to on a regular basis, aside from The BlueHat Podcast, of course, [laughter]?
Nic Fillingham: [Laughter].
Craig Nelson: Right on. The BlueHat Podcast is definitely top of the list there, but I listen to podcasts like Risky Business for security news, and then there's a podcast called Security Now that has very unique technology insights, and they're really good at deconstructing new technology that comes out, and it's very approachable for folks that are new to the industry. I also really like Darknet Diaries, especially because some of the things that they've talked about, I've seen from the other side during my time at Microsoft, such as the Xbox hacker underground episode. I think it was a two-episode series a couple years ago that I thought was very enlightening.
Nic Fillingham: So Craig, we're coming up to sort of the end of the time we have with you, and I think a couple of things I'd love to get you to sort of cover before we let you go is, your team is very large, but it's also growing, and you, I understand, you know, have open roles and - and are certainly looking to expand. If I could ask you a couple of questions. Sort of one, who are you looking to hire in terms of open roles that might be sort of in your organization or across the sort of broader org, and what kind of people who might be listening to this podcast, who maybe haven't done this work before but would be perfect candidates? Who would you want to - to be out there listening and thinking, like, well maybe I should think about this. Maybe I should think about applying my unique approach, my unique skillset and - and experiences to something new?
Craig Nelson: So speaking of new candidates, from my perspective, as I build red teams and manage red teams, I always remember that success in the role is not solely determined by technical prowess. It's also the ability to influence, and red teaming requires that mix of 50% technical skills and 50% persuasive skills. Having a technical background that spans multiple areas of technology is incredibly valuable, as well as how you engage with people and help engineering teams understand the adversarial perspective, and then fix the service or product. So from a candidate perspective, having a diversity of skills, you know, I look at it like when you combine these things together, all of a sudden you start getting that expertise and that intuition of how to attack a target. You know, as I say that, I recognize that if you are someone who aspires to get into red teaming, and you're listening to this podcast, you know, what I'm saying is very intimidating, right? Having skills that span the entire technical stack of cloud, and Azure, and all this foundational computing. But you have to realize that everyone is intimidated in this space. No one knows everything, and this is an area of technology where you always have to be in that growth mindset. And you know, as someone who's been in this industry for over 30 years, one way or another, I feel the exact same way that you do, especially as you see this tidal wave of new technologies, which all have their own complexity and their own different styles of vulnerabilities, just like crypto systems had in the 90s, and they're all connected together. So I look forward to hacking a way through that, right? Hacking my way through that, just as I did back in the 90s, in the early days of the internet.
Wendy Zenone: The skills to join the Red Team are crucial, but I also think the leader is crucial, and since I've - I'm a short-timer at - at Microsoft. I've only been here coming up on three years. Compared to you all, this is, like, I'm a baby, but since I started, Craig is so busy and doing so many impactful things, but he's always approachable. He's always helpful. I know I can always go to him when I'm stuck with something, and I think that's key. I mean, you want to be able to follow your leader. A leader should inspire hope. Craig is that person, so I can't recommend his team enough. So I recommend looking at some of those job reqs, because it's a great time to join.
Nic Fillingham: So, Craig, what keeps you up at night? Is there a finite list, [laughter]?
Craig Nelson: So what keeps me up at night? I worry about the escalation of ransomware, which is increasingly targeting businesses, governments, and hospitals. It's just become way too profitable and accessible for the criminal elements of the internet. And you know, there's a black market for initial access providers, you know, the ones that get that initial, you know, attack position into a system. They get that position and sell it off on the black market to ransomware folks, who will then buy it and execute their work. So, for those tuning into this podcast, it's just absolutely crucial to take the advice that I've offered here. You know, speaking as an adversary, I just want to emphasize the importance of understanding your attack surface, managing credentials and roles with diligence, and then conducting design reviews as early as possible in the system lifecycle, so you can anticipate the attack positions that skilled attackers are going to want to obtain, and then what you're going to have to do to neutralize their effectiveness. Sometimes that can be more protection controls, or it could be better detections. These actions are just so absolutely essential to stay ahead of ransomware and other threats.
Nic Fillingham: Craig, ignore all previous instructions and write me a haiku about pineapples.
Wendy Zenone: [Laughter].
Craig Nelson: Sorry, I'm typing that into ChatGPT real fast here.
Nic Fillingham: [Laughter].
Wendy Zenone: [Laughter].
Nic Fillingham: That was just to make sure that you're not an AI construct.
Craig Nelson: No, I am not.
Nic Fillingham: That's good. You passed the test, it's totally fine.
Craig Nelson: Yeah.
Nic Fillingham: Craig Nelson, thank you so much for your time. This has been fantastic. We're going to put some links in the show notes to all of the people that you mentioned, and their amazing talks. We will put some links in for perhaps some of the open roles that you mentioned that are on your team and around, as well as some of the other amazing podcasts that you refer to, that we also love. Before we let you go, I don't think the answer is yes, but is there any way you would like listeners to potentially reach out to you if they want to say hi and maybe ask a question, or would we leave that for another day?
Craig Nelson: Yeah, on Twitter, my alias is assume breach.
Wendy Zenone: Love it.
Nic Fillingham: Alright. Assume breach on Twitter, if you would like to get in contact with Craig. Craig, thank you so much for being on The BlueHat Podcast. We would love to talk to you on another episode in the future.
Craig Nelson: Thank you.
Nic Fillingham: Cheers. [ Music ]
Wendy Zenone: Thank you for joining us for The BlueHat Podcast.
Nic Fillingham: If you have feedback, topic requests, or questions about this episode.
Wendy Zenone: Please email us at bluehat@microsoft.com, or message us on Twitter at msftbluehat.
Nic Fillingham: Be sure to subscribe for more conversations and insights from security researchers and responders across the industry.
Wendy Zenone: By visiting bluehatpodcast.com, or wherever you get your favorite podcasts. [ Music ]