Cyber Things 12.15.25
Ep 2 | 12.15.25

Cyber and its "Hive" Mind

Transcript

Rebecca Cradick: Welcome back to Armis's Cyber Things. We're back for Episode 2. This is our short series in homage to "Stranger Things." But obviously, always join us for our Bad Actors podcast at Armis. And spoiler, of course, we have watched Volume 1 of the final season. So if you do not want to know what's happened so far, come back next week once you've watched. If you are up to date, carry on listening to us because we're going to talk a lot about the hive mind. And of course, in "Stranger Things," the Mind Flayer didn't just attack. It observed, adapted, and learned. It isn't just one monster. It's many connected through a single invisible network. And, just like in cybersecurity, there is no single villain that we are trying to defend our country or organizations against. There's no one all-powerful adversary. The greatest danger is that most of that activity happens in the shadows, long before an attack is even launched. So today, we're going to dive into how organizations can defend against the unknown. Joining me is someone who understands this world better than anyone. He is one of my favorite people at Armis. He is our CISO and customer advocacy officer, Curtis Simpson. Curtis, welcome to Cyber Things.

Curtis Simpson: Thanks so much for having me.

Rebecca Cradick: So you are a massive "Stranger Things" fan like I am. And we are excited to dive into this hive mind conversation. But I think first we need to really talk about what we mean by that. This distributed, adaptive entity, learning from every encounter, especially powered by AI now -- that is what a lot of threat actors are benefiting from, and, of course, the dark web ecosystem. How close an analogy do you think that is to what we are trying to help organizations face today?

Curtis Simpson: It's a really good question, and I think it's incredibly close. Gone are the days where threat actors are isolated individuals operating from a basement or, on the complete opposite end of that scale, nation-state organizations funded entirely by malicious governments targeting other governments. That's not the reality we're facing today. It's one massive network. When you look at the dark web specifically, you've got tooling and services that attackers can subscribe to. You obviously have geopolitical tensions around the world that are driving folks who may be struggling to find a source of income to consider moving in this direction, and then having rapid, easy access to tools. You've got forums where attackers are communicating with one another about tactics that work, tactics that don't work, etc. And what's always been relevant on the dark web is selling information that one attacker has compromised from an environment to benefit others who may want to buy it and use it in support of their own strengths, tooling, capabilities, etc. And then, just like in a legitimate organization -- and I think this is important to remember -- folks who used to work in malicious, criminal nation-state organizations go off and start their own criminal organizations on the dark web. Attackers who succeed in those realms start their own malicious organizations and begin targeting organizations, individuals, etc. The reality is that the definition of threat actor is now very broad. The information available to threat actors is literally at their fingertips, and their ability to be effective is immediate now because of AI-based tooling that's literally been built to make it easy for someone to pay money and fundamentally become a rapid expert at executing attacks.

Rebecca Cradick: Yeah, and it's interesting, we're going to dive into this a lot because that collective adversary, learning behavior from each other, sort of supporting each other in many respects, so that they can find the quickest route to an attack, whether that is, as you say, ransomware or geopolitically sponsored on critical infrastructure. I guess we take a step back and work out how organizations can defend against that because, to your point, if you have so many different methods and attack vectors -- whether it's exploit developers, botnets, the dark web, learned behavior -- how does that change the strategies that organizations need to use to defend themselves in the world that we're operating in today?

Curtis Simpson: Yeah, it's another great question. One of the things that I think is most relevant today is that best practices have helped and enabled us for a long period of time, but they're no longer that flag we can plant on the hill and build our entire programs around. The reality is threat actors know what our best practices are. They build their tooling around them. They build their tooling to target where we don't have time and don't get to exposures we know exist in our environment. So, as an example, we've long since defined very consistent time frames around how quickly we remediate vulnerabilities based upon their publicly known severity. Well, attackers know this. What that also means, by definition, is that most organizations rarely ever remediate traditionally designated medium-risk vulnerabilities. What does that mean as an attacker? I'm going to target as many medium-risk vulnerabilities as I can. I'm going to string them together to build an attack that allows me, as the threat actor, to deliver the outcome I'm looking for: compromising systems, stealing data, executing ransoms, etc. That's the reality of today. So what does that mean from a defense perspective? It means that we need to build our prioritization efforts, both proactively and reactively, based on where our business is most likely to be attacked and impacted. We truly need to operationalize intelligence around what attackers are going after in our industry and in relation to the technologies we consume. When I say operationalize, that information needs to be consumed in the platforms where we prioritize the vulnerabilities we're remediating and the detections that we're applying.
And a lot of this is easier than it's been before because of the adoption of AI and the enablement of AI from a workflow perspective and otherwise. But again, the key is that we have to think this way. We can no longer think that I'm just going to build based on best practices and I'll be good. It is about truly understanding that intersection of what matters to my business, what's most likely to be attacked and exploited based on what threat actors are doing today or thinking about doing tomorrow, and then how I operationalize that information to truly prioritize my reactive and proactive efforts.
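The prioritization shift Curtis describes -- weighting observed exploitation and business context over static severity alone -- can be sketched in a few lines. This is a minimal, hypothetical model in Python: the fields, weights, and CVE labels are all illustrative assumptions, not Armis's actual scoring logic.

```python
# Hypothetical sketch: rank vulnerabilities by observed threat activity and
# business context, not static severity alone. Fields and weights are invented.
from dataclasses import dataclass

@dataclass
class Vuln:
    cve_id: str
    cvss: float                # static severity score, 0-10
    actively_exploited: bool   # e.g. seen in exploit-intelligence feeds
    targets_our_stack: bool    # affects technology our business actually runs
    asset_criticality: float   # 0-1, business impact of the affected asset

def priority(v: Vuln) -> float:
    """Blend static severity with threat intelligence and business context."""
    score = v.cvss / 10.0                       # baseline from severity
    if v.actively_exploited:
        score += 1.0                            # active exploitation dominates
    if v.targets_our_stack:
        score += 0.5
    return score * (0.5 + v.asset_criticality)  # scale by business impact

vulns = [
    Vuln("CVE-A", cvss=9.8, actively_exploited=False, targets_our_stack=False,
         asset_criticality=0.2),
    Vuln("CVE-B", cvss=5.4, actively_exploited=True, targets_our_stack=True,
         asset_criticality=0.9),
]
ranked = sorted(vulns, key=priority, reverse=True)
print([v.cve_id for v in ranked])  # ['CVE-B', 'CVE-A']
```

Under these assumed weights, a medium-severity vulnerability that is actively exploited on a critical asset outranks an unexploited critical one, which is exactly the "string mediums together" scenario Curtis warns about.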

Rebecca Cradick: And it's interesting what you've described there because I think there are many forward-thinking organizations that have that level of threat intelligence. I mean, many, many, many of our customers obviously receive that from Armis. But what we're talking about here is such a vast network of threat actors, you know, attacking different organizations and different paths that they choose. I wonder what the defensive equivalent of that hive mind would be. Do you feel that individual organizations like us and some of our peers need to come together as a unit better to help organizations defend themselves? Because individually, we're providing individual information pertinent to organizations, but is there enough being done more publicly to defend against this hive mind mentality that's happening in the dark web?

Curtis Simpson: Yeah, I think in many cases we are operating in that capacity more than we realize. And it is, again, actually because of AI. When you look at what a lot of tooling, solutions, and services are built around today, when they're built using AI, they're pulling in all of the research that's been done and published. Period. Full stop. Just like with our solutions and many others, we're using AI to analyze all research that's recently been done and previously been done. We're using AI to analyze conversations in dark web forums and otherwise. We're using AI to read all publications in general around new tactics, evolving tactics, and all of these details, at a pace at which we wouldn't necessarily be able to operate in terms of sharing with one another as third parties, let alone consuming as the defenders. AI is doing that for us now. And if we're using the right tooling that's actually doing that analysis, that consolidation, that correlation, we are actually building that hive mind that's learning from all of the good work that all of us are doing. The key is that we're thinking that way and consuming solutions that are thinking that way. And in cases where we can, and where it makes sense for us to build AI-oriented workflows that operate that way, applying that same logic. How do I do this analysis to not just have a bunch of data about threats, but rather to build data around what threat actors are actually going after that I can then intersect with how I look for vulnerabilities and how I build detections, and actually start to bring those things together operationally, whether it's doing it ourselves, leaning in on third parties to do it through their tooling, or leaning in on our partners.
The key is that we start thinking that way and then look at what we can already do with what we have, what we should be building, and what we should be looking to others to build under that mindset and under that concept.
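One concrete version of intersecting what attackers are going after with what we actually run is a simple set intersection between normalized threat-feed mentions and an asset inventory. The feeds and inventory below are invented for illustration; a real pipeline would pull these from live sources.

```python
# Illustrative sketch: deduplicate technology mentions from two threat feeds
# and intersect them with our asset inventory. All data here is invented.
feed_a = ["Apache Struts", "MOVEit Transfer", "Cisco IOS"]
feed_b = ["moveit transfer", "Fortinet FortiOS", "cisco  ios"]

def normalize(name: str) -> str:
    # Collapse case and whitespace so the same product dedupes across feeds.
    return " ".join(name.lower().split())

mentioned = {normalize(n) for n in feed_a + feed_b}
inventory = {"moveit transfer", "postgresql", "cisco ios"}

# Technologies attackers are discussing that we actually run: the short list
# to prioritize for detections and patching.
at_risk = sorted(mentioned & inventory)
print(at_risk)  # ['cisco ios', 'moveit transfer']
```

The point is the shape of the operation, not the matching logic: raw intel becomes actionable only once it is reduced to the intersection with your own environment.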

Rebecca Cradick: And just taking it sort of a step further for a second. Do you think the industry has gotten a lot better at sharing that collective knowledge more publicly? We've done a really good job in the last three years of, you know, really talking about open vulnerability disclosure, and the process is quite established now across all the manufacturers and all the cybersecurity vendors that take part. So it's definitely a step forward. But we know that modern environments are so interconnected that it only takes one single blind spot to become the gateway in. From your perspective, organizations have built some sort of resilience, and they have so much data available to them, but has that complexity in itself become a risk? Because the deluge of information and so much more awareness of risk creates a little bit of, where do we go, what do we do first?

Curtis Simpson: Oh, 1,000%. We used to build much of our programs around the idea that the more data I have, the better. Then we overwhelmed ourselves with data, including threat information. This is why I really look at it and say it the way that I do. The challenge we have as the defenders, and as the third parties operating in that realm you spoke to in terms of responsible disclosure, etc., is that if we as researchers identify something bad, a way that an attacker can compromise technology, we go through a process of making sure everyone who owns the remediation of that and is impacted by that is aware. We then work through a process of determining how long it's going to take to build patches for those types of things, publish them, and publish all the details. That could be 90 days, could be longer depending on the scenario. The attackers don't have any of those obligations. They are not operating on a moral or professional responsibility plane. They are just looking for opportunities to compromise technology and then exploit that technology. But this is where being able to hone the intelligence we already have, or to get new intelligence, and start leaving some of the noise behind is important. We need to assume there will be zero-day vulnerabilities that nothing tells us about through the traditional means of, "Hey, these are new vulnerabilities you should care about." Because when we say these are new vulnerabilities with CVEs that we should care about, the problem is they've already gone through that process. Attackers have had time to exploit them. It's still good to have that information to be able to prioritize based on it, but we can't rely entirely on it. What we have to have is information that's built around what threat actors are talking about in the dark web forums.
What threat actors are actually testing in real-world scenarios that are being observed by Armis and others through honeypots that were built to look like every single industry on the planet, and then to monitor what those attackers are testing today. And then making you, as the defenders, aware of what those attackers are going after that doesn't have a CVE today, and how it relates back to your environment. And then, whether it's inherently through the technology you're consuming or should be consuming, or through AI that's bolted on top of that, either self-built or acquired through partners and solution providers, you have to be able to pull all of that together. You have to be able to say, "Well, there is this new vulnerability that is being exploited. There is no identified CVE, so there may not be a patch, but I can apply these specific control points within my environment to prevent the exploitation that is being seen. And all of that was actually made aware to me through these tools, through the information that was built by this overall capability that's identifying that attackers are doing this, how they're able to get away with it, and what control points can be applied." This is the information that is gold to us now because, otherwise, if we don't have that information and don't prepare with those insights, the chance that that zero-day is going to affect our business in spite of everything else that we've done is just far too high.
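Honeypots like the ones Curtis mentions emulate full services across industries. As a much smaller illustration of the core idea, here is a minimal Python sketch of a listener that simply records who connects and when. Everything here (the port handling, the event fields, the simulated probe) is an illustrative assumption, not a description of any real honeypot product.

```python
# Minimal honeypot sketch: a TCP listener that records connection attempts.
# Real honeypots emulate full services and protocols; this only logs one probe.
import datetime
import socket
import threading

events = []  # metadata about observed probes

def listen_once(host="127.0.0.1"):
    """Bind an ephemeral port and record the first connection attempt."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))  # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def accept():
        conn, addr = srv.accept()
        events.append({
            "src": addr[0],
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })
        conn.close()
        srv.close()

    worker = threading.Thread(target=accept, daemon=True)
    worker.start()
    return port, worker

port, worker = listen_once()
socket.create_connection(("127.0.0.1", port)).close()  # simulate a probe
worker.join(timeout=2)
print(events)  # one event with the probe's source address and timestamp
```

Aggregated across many such sensors, this kind of "who is probing what" telemetry is exactly the pre-CVE signal Curtis describes feeding back to defenders.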

Rebecca Cradick: And it's funny because we talk a lot about, you know, awareness is power, but it's actually the control that really matters. And, you know, shameless plug, we are focusing a lot on early warning intelligence, like you say. Trying to get ahead of the threat, trying to provide information about potential risk. But monitoring that and transforming it into a real, actionable defensive advantage is very challenging, right, because of all the things you've just said. Do you think, as we roll forward into 2026 and beyond, that, to get ahead of it -- you know, in the "Stranger Things" concept of this two-world, upside-down state -- we need to really double down on those early warning indicators of where the next threat might come from, even if it never transpires? It's better to have been prepared for something that may not happen than to have to respond to something that did.

Curtis Simpson: Oh, 100%. We have to assume that this is the new reality we're facing: that threat actors are going to constantly be using AI to assess technologies and their exposures, to then test exploits against those exposures, and to go end-to-end, from discovery to exploitation validation, in a matter of hours to days. We have to make that assumption, which also means we need to build and optimize our programs around that mindset. And in many cases, that does mean we need to build new information, new capabilities, and new workflows, because whatever it takes to operate in that capacity needs to be the priority. And if you think in that "Stranger Things" mindset, it is turning everything upside down. It's moving away from those best practices. It's not worrying about all of the constraints of yesterday. It's about thinking about what is possible in this realm. And I stress it that way because often in security, we look at, well, we have too much information, there's too much noise, too many false positives. How do you make this true? How do you make it true that you have the right data to surgically identify the priorities that you should be considering from a control plane perspective? Because that is critical. It's just as critical as saying, "What are the things I need to patch?" Not just based upon, "I committed to critical patches in 30 days, highs within 60, mediums within 90." No, no, no. What do you need to patch that will actually defend your business against the next attack? And one of the other things that's important to really turn on its head is that just because this is how we're audited today, as an example, doesn't mean that we can't change the controls and the language around how we're audited. I've done this many times over the years when it comes to corporate audits.
If you show auditors -- internal, external -- that this is a better way to prioritize what matters to my business, and here's how I'm doing it effectively, they will change the way they assess the control. Period. Full stop.

Rebecca Cradick: Interesting. So if you had to describe an organization that is, like, getting it right -- not perfect, because it's always a moving target -- but someone that's really defending against the unknown, and you think about the habits and strategies they've deployed, what advice would you give to others trying to put that together?

Curtis Simpson: In many cases today -- and this is one of the questions we talk about with this industry all the time -- it's, "Where should I be dipping my toes in AI?" "Where should I be bringing AI into my portfolio to actually have it make sense?" I think there's something really compelling in the AI landscape right now. There are workflow tools that have been built to make it very easy for you to take a workflow that you've already conceptualized and generally thought through, rapidly build it, test it, and then actually apply it. And I say all that because if you already have very interesting data sources around threat intelligence but you haven't yet operationalized them, well, with these tools, you can point the workflow solutions to where the data lies and have them do the analysis, the deduplication, and the correlation. Take that information, feed it into the platforms that you're using for prioritization. Have it do, again, the same thing: the correlation, the analysis. Establish the priorities. Cut through some of the noise. We have reached the point where we are beyond the orchestration challenges we've had in the past. And those that I see doing this well are, again, leaving behind what's always been a problem and looking at AI as an actual new way of solving it: addressing the issues we've had in the past and bridging the gaps between the data and the tools that need to handle the data but can't necessarily analyze it well. AI can analyze on your behalf, accelerate output that would take you too long, and do the analysis that the downstream tools can't do on their own. This is where we can build the workflows, have the appropriate AI platforms do the analysis, take the output of the analysis, and feed it into the platform that needs to act on it.
We can build all of this today and test it in a way that gives us the confidence to actually start implementing these capabilities at an increasing scale. If you're going to really excite your teams by playing with AI, by dipping your toe into it, by showing value, this is where your teams should be spending their time. And the companies that are doing this well are really pushing their teams in this direction. Because otherwise, playing with AI can be just that: a toy that ends up consuming a lot of time but doesn't deliver a lot of value. Everything I'm talking about here is valuable and allows you to start moving toward being able to operate in a similar capacity to the threat actors, but to do it in a way that enables you to defend at a similar speed to that at which they're going to be attacking you.
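The collect-analyze-feed pattern Curtis describes can be sketched as a tiny pipeline. Every stage here is a stand-in: the feed, the record shape, and the "platform" hand-off are all hypothetical, meant only to show the collect, deduplicate, correlate, and push structure of such a workflow.

```python
# Hypothetical workflow skeleton for operationalizing threat intelligence:
# collect -> deduplicate -> correlate -> hand off to a prioritization platform.

def collect(sources):
    # Pull records from every configured intel source.
    for src in sources:
        yield from src()

def deduplicate(records):
    # Drop repeat sightings of the same indicator across feeds.
    seen, out = set(), []
    for r in records:
        key = (r["indicator"].lower(), r["type"])
        if key not in seen:
            seen.add(key)
            out.append(r)
    return out

def correlate(records, our_technologies):
    # Keep only intel that touches technology we actually run.
    return [r for r in records if r["technology"] in our_technologies]

def push_to_platform(records):
    # Stand-in for an API call into whatever prioritization tool is in use.
    return {"queued": len(records)}

def intel_feed():
    # Invented sample records; a real source would be a feed or forum scraper.
    return [
        {"indicator": "EXPLOIT-123", "type": "exploit", "technology": "nginx"},
        {"indicator": "exploit-123", "type": "exploit", "technology": "nginx"},
        {"indicator": "EXPLOIT-456", "type": "exploit", "technology": "solaris"},
    ]

result = push_to_platform(
    correlate(deduplicate(collect([intel_feed])), our_technologies={"nginx"})
)
print(result)  # {'queued': 1}
```

In practice the analysis stages would be far richer (AI-driven summarization, scoring, enrichment), but the design point stands: each stage is a small, testable step, which is what makes these workflows safe to pilot and then scale.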

Rebecca Cradick: Yeah. I totally agree. And it's funny how this year has shaped up, because if we look back to conversations you and I had at the beginning of the year about, you know, the concern people had about AI, especially in cybersecurity, we knew that it was being weaponized, but equally that defenders needed to use it and leverage it to get control and get ahead. And now we've come to the end of 2025, and it seems people have really started to talk about the proactive need for AI tools for everything you've just said: for workflow management, for really siphoning out the information that is needed to defend the organization. So it feels like a real full-circle moment. Final question for you, going back full circle to "Stranger Things" and the luxury of that program, where you can physically see the monsters that lurk in the shadows. Cybersecurity, you know, is very difficult because you don't know where they're coming from. You don't know who sits behind that keyboard. You don't know what their intention is. There are lots of myths around what organizations still believe about cyber threats and, like, where their priorities need to be. If you could summarize -- I know it's hard, but if you could summarize into one thing that you still think needs a lot of attention and a lot of discussion, what do you think that might be?

Curtis Simpson: Yeah, I think there's two points there. One we talked about already, which is surgically prioritize everything you're actually defending against based upon what's likely to be attacked. The other thing that we really need to continue to be focused on -- because it's what keeps CISOs up at night -- is how resilient am I if and when the horrible thing happens? Like if business capabilities are compromised at scale, if data is compromised at scale, will the data be lost? Can the data be recovered? How rapidly can the systems and capabilities be recovered? Even before you can get to that point, do I know what the most important systems are? Do I know what their downstream and upstream dependencies are? Do I understand what the ecosystem looks like in terms of how that information is being backed up and otherwise? Are those backups protected? One of the things we commonly see is those backups themselves can be compromised and impacted through ransomware. We have to think about this from the two sides. It's how do I defend and prevent the majority of these attacks? And then the other side of that is, how do I ensure that if it happens, I can contain and minimize its impact? And as I think about impact, how do I recover the business and minimize the impact to the business in terms of long-term impact like data loss, actual loss of capabilities for days to weeks, and those types of outcomes that can actually cripple the brand of the organization for a very long period of time? It's really those two sides of the coin we need to think about, not one or the other.
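Curtis's point about knowing critical systems and their upstream dependencies before the bad day arrives can be made concrete with a small graph walk. The systems and edges below are invented; the sketch just shows that a recovery order (dependencies first) can be precomputed rather than worked out mid-incident.

```python
# Illustrative sketch: precompute the recovery order for a critical system from
# a map of upstream dependencies. The graph below is invented for illustration.

# system -> systems it depends on (upstream)
deps = {
    "billing": ["database", "auth"],
    "database": ["storage"],
    "auth": ["database"],
    "storage": [],
}

def recovery_order(system, deps):
    """Depth-first walk listing everything 'system' needs, dependencies first."""
    order, seen = [], set()

    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for upstream in deps.get(node, []):
            visit(upstream)
        order.append(node)  # a system is restored only after its dependencies

    visit(system)
    return order

print(recovery_order("billing", deps))  # ['storage', 'database', 'auth', 'billing']
```

In a real program this map would come from an asset-intelligence platform or CMDB, and circular dependencies would need handling; the value is in having the ordering before an incident, not during one.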

Rebecca Cradick: It makes perfect sense, and as we head toward 2026, I hope that a lot of the work and the conversations that have been had in the community help guide some of those strategies and start prioritizing that business conversation, to your point about the impact, but also the resilience that is needed to tackle the attack surface that organizations are facing now. Curtis, thank you so much for joining me for the second episode. We are obviously a few weeks away from the next drop of "Stranger Things," which is happening on Christmas Day. So for those listening or watching, we wish you happy holidays. Enjoy the next installment of "Stranger Things," and we'll join you in January for the final wrap and, hopefully, the answer we're all waiting for as to how this series ends. So thank you so much for joining me.

Curtis Simpson: Thanks so much for having me. [ Music ]