
AI: The new partner in cybercrime?
Selena Larson: This week on "Only Malware in the Building." [ Music ] [ Music and applause ]
Dave Bittner: Did you ever notice how people fall for these phishing scams? I mean what's the deal with phishing scams? We all know not to talk to strangers, right? But throw an email in the mix, and suddenly, we're spilling our life story like it's therapy hour at the coffee shop. Hi, I'm a Nigerian prince. I just need your bank account to save my kingdom. And people fall for these things. It's incredible. We're skeptical about everything, food labels, car salesmen, even whether the milk is really fat free. Like we've forgotten every lesson from the past 20 years of internet safety. Don't talk to strangers. Don't take candy from strangers. Don't take strange links from strangers. And what about these, your account has been compromised emails. We're all just clicking on these links like it's a sale on TV sets. Click here to secure your account. Yes, click here to secure your stupidity. And don't get me--
Selena Larson: Dave?
Dave Bittner: -on the--
Selena Larson: Dave?
Dave Bittner: Yes?
Selena Larson: It's bad enough that you try and do Martin Short, and now you're going to do Jerry Seinfeld?
Dave Bittner: Maybe?
Rick Howard: Badly, I might add.
Dave Bittner: I was just trying something new.
Selena Larson: Shouldn't we get to the podcast?
Dave Bittner: Yes, let's--. Don't get me started on the fake Amazon emails. Your package is delayed. Click here for details. We can't resist. Next thing you know, you're entering your password and bam, your bank account is emptier than a movie theater on a Monday morning. [ Applause ] [ Music ]
Selena Larson: Welcome in. You've entered "Only Malware in the Building." Join us each month to sip tea and solve mysteries about today's most interesting threats. I'm your host, Selena Larson, Proofpoint threat intelligence analyst. Being a security researcher is a bit like being a detective: gather clues, analyze the evidence, and consult the experts to solve the cyber puzzle. Inspired by Mabel Mora and the residents of New York's exclusive Upper West Side residence, I, alongside N2K Network's Dave Bittner and Rick Howard, uncover the stories behind notable cyberattacks. Today, we're talking about everyone's favorite thing: artificial intelligence.
Rick Howard: Oh, my.
Dave Bittner: I haven't heard of that. Wait? Artificial intelligence? What's that, Rick? Have you heard anybody in cybersecurity talking about artificial intelligence lately?
Rick Howard: I think they've passed a law that -- you and I have talked about it, that if you have a podcast, you have to talk about artificial intelligence. So, here we are. You checked bingo in all of those things.
Selena Larson: I think there's also a law that if you have a podcast, you also have to have artificial intelligence.
Rick Howard: Oh, well there you go. [ Music ]
Selena Larson: Recently, we published a couple of security briefs on how threat actors are using artificial intelligence in campaigns, as well as targeting researchers and individuals involved in developing artificial intelligence. So, as much as all of us working in cybersecurity and the industry and people in marketing like to talk about AI, it is having an impact across the security landscape in various ways, as well as not really having much of an impact in various ways. And I'm very excited to talk about that today. And I think one of my favorite little tidbits from some of the research that we've put out recently is about an actor called TA547, which targeted German organizations with the Rhadamanthys stealer. What we found was they were using a PowerShell script that was suspected to be LLM generated. We blogged about it, including the key characteristic of how you know it's LLM generated, which is the comments in the script. They deleted the comments but kept using the suspected AI-generated script. So, a little fun fact.
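[Editor's note: the telltale Selena mentions, unusually dense per-line commenting, can be sketched as a toy heuristic. This is an illustrative sketch, not Proofpoint's detection logic, and as the TA547 story shows, the signal disappears the moment an attacker strips the comments.]

```python
# Toy telltale for suspected LLM-generated scripts: LLM output often
# comments nearly every line. Comment density is one crude, easily
# defeated signal; thresholds and prefix are illustrative assumptions.

def comment_density(script: str, comment_prefix: str = "#") -> float:
    """Return the fraction of non-blank lines that are comments."""
    lines = [ln.strip() for ln in script.splitlines() if ln.strip()]
    if not lines:
        return 0.0
    comments = sum(1 for ln in lines if ln.startswith(comment_prefix))
    return comments / len(lines)

# A PowerShell-style snippet where every statement carries a comment
# (PowerShell also uses '#' for line comments) scores 0.5 here.
```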
Dave Bittner: Busted.
Selena Larson: Yes, about the threat actors, yes.
Dave Bittner: Oh, that's amazing. So, in terms of like the real-world use of LLMs for generating code in general, can you give us a little sense for where we stand right now, Selena? I mean are -- to what degree are these tools actually useful in the real world?
Rick Howard: Geez, David. You're so serious here. This is just a podcast. Can you take it down a notch? Instead of a laugh track to our show, we're having a sigh track, like, "Sigh, it's more serious than I thought."
Dave Bittner: I'm sorry, Selena. You'll have to forgive Rick. He still hasn't gotten over the fact that Microsoft discontinued Clippy.
Selena Larson: Well, I think they replaced him with Copilot, right?
Dave Bittner: Temporarily.
Selena Larson: Yes, just put a little smiley face on him. He's back.
Rick Howard: So, how serious is this, Selena? Do we need to, you know, run to the hills now that artificial intelligence has taken everything over, or do we have a couple of days yet?
Selena Larson: I mean, I think it depends on if you're excited about artificial intelligence or if you hate it. I'm always in favor of running to the hills and leaving society and living in a cabin in the woods, whether it's because of artificial intelligence or anything else going on in the world. But no, I think we're at a time where it's still a pretty nascent technology, and there's a lot of experimentation going on. And I think that there are various ways that you can kind of look at how AI is being used: there's content creation, and there's automation, and then there's applications for defense or, potentially, you know, using it to write new malware, scripting, or tooling. And I think what's really interesting is we've seen a lot of threat actor use of it in the same way that regular people are using it. For example, this PowerShell script: make this PowerShell script better, or how can I write code to do X, Y, and Z? And I've even found in my own use of various tools that they tend to be pretty good at writing basic code. And it can help you just answer questions that you might have about something that you're working on, and spit that out. But what we also see is a lot of conversations like, "Oh, phishing emails," and, "Oh, they're going to write copy now," and that's something that they're definitely doing, but the characteristics of whatever email they're writing are going to be the same whether a human did it or whether an AI generated it, because it's based off of what threat actors are already using. And tools and defense mechanisms are looking at the same sort of characteristics and heuristics of an email; they're going to block it whether it was a robot or a person. But I still think we're early days, and I'm not necessarily running, screaming for the hills to get away from the AI revolution just yet. [ Music ]
Rick Howard: I'm happy to turn over the large language models to the bad guys out there, because if they're having as much frustration with it that I am, with all the delusional comments I get from, you know, my interface, then more power to them. Maybe they can help us figure it out.
Dave Bittner: Well, if anybody on this team is an expert when it comes to delusional comments, it's you, Rick.
Rick Howard: Oh, wait. You found me out. Oh, no. Okay.
Dave Bittner: But, Selena, I mean to your point, you know, something we talk about over on the "Hacking Humans" podcast, shameless plug, is that one of the things that these LLMs are doing is making it so that the phishing emails don't have some of the telltale signs that they used to have. Right? The English can be better. The spelling can be better. All those things, it's kind of a filter to make them even more plausible.
Selena Larson: That is a good point. So, we do see this in areas where historically business email compromise wasn't quite so popular, in places where machine translation of various languages wasn't necessarily that great. You do see the application of AI potentially increasing the efficacy of lures in those regions. So, according to our State of the Phish report, we've seen a year-over-year increase in business email compromise attacks in places like Japan, South Korea, and the UAE. And so, oftentimes, you might see these translations, where language might have been a little bit more of a barrier, improve a little bit. However, I would say that, number one, BEC and really any type of threat email don't necessarily always have misspellings and grammatical errors. And that's not necessarily the only sort of telltale sign that you can use to see if something's malicious. On the flip side, I have written a lot of emails with typos in them. So.
Rick Howard: Yes, but do your typos have a Russian accent, Selena? That's what we really want to know.
Dave Bittner: Right.
Selena Larson: Oh, no. Definitely not, but -- but yes, I mean so, it's also you know, the sort of the spirit or the context of whatever the content is, right? So, if it's, "I'm asking for a gift card," or "Hey, this is your CEO," so someone in a position of power. Someone asking for ACH transfer. What they're asking for, in the tone that they're asking for it, and some of the additional sort of wording and language that go into this type of email can be looked at as key indicators for whether something is malicious or not. And then of course, when you combine that with things like email headers, who's it from, has this person emailed before, how old is the domain that they're you know, using, and the sender email. So, a lot of different characteristics can go into that. And of course, a lot of that is tooling that's built on AI and LLM tools as well. So, in the same way that threat actors might be using it for attacks, we on the you know, research and defense side are using it for the good of the community as well.
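[Editor's note: the layered checks Selena describes, content cues like gift card and ACH requests combined with sender signals like domain age and first-time senders, can be sketched as a toy scoring function. The features, weights, and threshold below are invented for illustration; real email defenses are far more elaborate.]

```python
# Toy phishing-heuristic scorer combining content cues (urgent requests
# for gift cards or wire/ACH transfers) with sender signals (unknown
# sender, recently registered domain). All weights are illustrative.

SUSPICIOUS_PHRASES = ("gift card", "wire transfer", "ach transfer", "urgent")

def score_email(subject: str, body: str, sender_known: bool,
                domain_age_days: int) -> float:
    text = f"{subject} {body}".lower()
    score = 0.0
    # Content cues: what they're asking for, and the tone they ask in.
    score += sum(0.3 for p in SUSPICIOUS_PHRASES if p in text)
    if not sender_known:          # has this person emailed before?
        score += 0.3
    if domain_age_days < 30:      # freshly registered sending domain
        score += 0.4
    return score

def is_suspicious(score: float, threshold: float = 0.6) -> bool:
    return score >= threshold
```

Note that a verdict like this would fire the same way whether the lure text was written by a human or generated by an LLM, which is the point Selena makes above.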
Rick Howard: So, Selena, I'm hearing two things there, right? One, we mentioned before that large language models can help round off the telltale signs of, you know, written English in lures. So, that might lower the bar for entry, so script-kiddie kinds of people can get into the game easier, let's say. But what I liked about what you said, and I think it doesn't get enough attention, is that large language models will help us build, or help bad guys build, software better, right? And I wonder if you can give your assessment of how well that's going for them? [ Music ]
Selena Larson: To be quite honest, I actually haven't seen AI-generated malware. There are quite a few openly available tools and resources that are on GitHub or that you can buy as commodity tooling.
Rick Howard: So, they don't need it yet--
Selena Larson: No.
Rick Howard: -is what--?
Selena Larson: Very low barrier to entry. I think if someone wants to do crime, digital crimes, there are various resources that they can already go to and say, "Hey, you know, I'm looking for this." Phish kits are a great example. You know, for example, we've seen an increase in multi-factor authentication phishing, and something like EvilProxy, which is an MFA phish kit that was becoming increasingly popular, and they literally have, like, guides on, "Here's how to use this tool." So, it's not necessarily that a threat actor has to, you know, use AI to generate their own when they can just go out and buy something or use an already available resource. But on the flip side, if we're thinking about it from a defense purpose, one thing that I think is really cool that we have, that I've been using just here at Proofpoint, is something called Campaign Discovery, essentially a malware clustering engine that shows likely related groupings of threats based off of this automated sort of AI LLM tool. And it just goes into a dashboard, and it's like, "Here is a cluster of activity that we think is likely related." And it surfaces really interesting stuff and reduces my time spent looking through large datasets and can automatically kind of surface this up. So, this sort of automation simplifying the workflow, I think, from our perspective, is very useful, but I think the same is true from the threat actor perspective as well. If they can have AI or whatever help them generate their scripts, help them generate their messages, it kind of just makes their jobs slightly less time consuming. Also, sort of automation and scalability. Something like information operations, for example: being able to create bots that automatically reply to things. Essentially, kind of making it easier to build and scale and grow an operation is something that I think is a very interesting application of AI and LLMs. Stay tuned.
There's more to come after the break. [ Music ]
Rick Howard: So, Selena, I want to come back and just point out that even the good guys can use large language models, like you guys are, right? And you were talking about your campaign identifier. What did you call it?
Selena Larson: Campaign Discovery, or Camp Disco.
Rick Howard: Is it mostly just finding like malware, or is it looking across the intrusion kill chain for all the other things that go along with that malware?
Selena Larson: Yes. So, it's across the entire attack chain, right? So, certainly the characteristics of the final payload itself; we can take a look at the C2, the network communications, things like filenames, file contents, various PowerShell scripts or persistence mechanisms. So, really, kind of any sort of thing that you might think of as a sandbox output, it's looking at those various different characteristics and clustering what it thinks would be related activity or something super interesting and targeted. It's actually quite effective, and I use it every day. I really just kind of log in to see, "Okay, what's happening? What is this dashboard going to show me today?" And oftentimes, it really does surface some really unique and interesting threats, or cluster very high-volume types of campaigns where you can say, "Oh, yes. This is likely related activity."
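[Editor's note: the clustering Selena describes, grouping sandbox outputs by shared characteristics such as C2 addresses, filenames, and script artifacts, can be sketched with a simple greedy pass over feature sets. This is a toy illustration, not the Campaign Discovery implementation; the feature labels are invented.]

```python
# Toy malware clustering: each sample is a set of sandbox-derived
# features (C2, filenames, mutexes, etc.). Samples whose feature sets
# overlap enough (Jaccard similarity) are greedily grouped together.

def jaccard(a: set, b: set) -> float:
    """Set-overlap similarity in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster(samples: dict[str, set], threshold: float = 0.5) -> list[list[str]]:
    clusters: list[tuple[set, list[str]]] = []
    for name, feats in samples.items():
        for feat_union, members in clusters:
            if jaccard(feats, feat_union) >= threshold:
                members.append(name)       # join the existing cluster
                feat_union |= feats        # grow its combined feature set
                break
        else:
            clusters.append((set(feats), [name]))  # start a new cluster
    return [members for _, members in clusters]
```

Real engines weigh features very differently (a shared C2 address means far more than a shared filename), but the surface-likely-related-activity idea is the same.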
Rick Howard: But you can feed the large language model evidence of let's say WICKED SPIDER, right? And then it sees completely new evidence and says, "You know? That looks a lot like WICKED SPIDER. It's probably the same group." Is that accurate?
Selena Larson: So, that's not how we use it, but that could theoretically be one application of a tool like this, for sure, looking at you know, similarities across malware or activity clusters or behavioral-based characteristics. For our purposes, it's kind of just looking at our large dataset and saying, "Okay, what are we seeing today?" And kind of pulling that out. It's not necessarily saying like, "Oh, this is definitely TA547," or "This is definitely--," whatever. Mostly just looking at kind of the characteristics of the threat itself.
Dave Bittner: It seems to me like this is really -- like it has a lot of potential for folks who are doing threat intelligence, just the ability to gather things and summarize and you know, bring things to your attention, but like, to your point, just cut down on the amount of work you have to do, sorting through all of that information.
Selena Larson: Yes, I think so, and I think that's what's cool about the potential for a lot of these AI tools. I think we think a lot about how malicious actors can potentially use it and not necessarily how we can use it for defensive purposes, because fundamentally, what it comes down to is security is always, like, a checks-and-balances situation; if the actors do one thing, the defenders have to respond, and if the defenders do one thing, the actor has to respond. I mentioned MFA earlier, and the reason why MFA phish kits are becoming increasingly popular is because of this idea of MFA everywhere. Enterprises are like, "We have to mandate multi-factor authentication on all of our access points, and as we move to the cloud, that's increasingly important." Threat actors are like, "Okay, well, we can't just get usernames and passwords anymore. That's not enough. So, how can we bypass or circumvent the security mechanisms that are being put in place?" And the same thing is true for any new tool, regardless of if it's artificial intelligence or, I don't know, augmented reality. Like, what's next? I don't even know what's next. But whatever comes up, you know, it's not just threat actors that have the ability to use these tools and resources. It's the defenders too, and it's just really, you know, a cat-and-mouse game, or whatever metaphor you want to use about the constant battle between the good guys and the bad guys. And just as much as they can be resourced with tools and new ways of doing things, so can we. And honestly, I think it's cool because it gives us opportunities to think creatively, to build better defense mechanisms, and force threat actors to change. That's what you want. Like, you want to make them have to innovate and use new tools and resources to get around things. Because if not, if they're still doing the basics and the basics are working, man, that's going to be a problem. [ Music ]
Rick Howard: My one note of caution about all of that, Selena, is that large language models like to hallucinate, right? If they don't know the answer, they will give you an answer, just like my seven-year-old would, right?
Dave Bittner: We refer to that as Male Answer Syndrome. My father-in-law had that. He was a research chemist, and if he didn't know the answer, that didn't slow him down at all. He would just make something up and provide you with something that sounded completely plausible. And I think that's what you're talking about, Rick.
Rick Howard: It is the foundation of my career. You just say it loud enough and people start to believe it. Okay? So, but it goes to the point where we just can't rely on the large language model. There has to be other mechanisms to help us verify what they're coming up with.
Selena Larson: Oh, absolutely.
Rick Howard: And they'll get better over time, but right now, I'm a little skeptical.
Selena Larson: Oh, yes. You need human intervention in literally anything that you do. And I think that that's, you know, what a lot of people might not really understand, especially folks who are new to AI in general, that you know, it's not just a perfect replacement for a human being. Although to your guys' point, maybe they are being like a human being if they're hallucinating and just saying things without knowing the real answer. We've probably all experienced something like that before. Like, yes, there's always -- and I think regardless of no matter how far this goes, how big it scales, there's always going to need to be a human person involved in whatever development, validation, making sure that it's making the right decisions, from not just a technical perspective or a product perspective, from an ethical perspective as well. You know, I think that we've seen a lot of tools become things that they weren't necessarily designed for. And I think that you know, that should be a discussion point and thought about no matter who's implementing these tools, how they're being used, like not just from a technical perspective but from a, "Is this good for the world?" perspective, as well. [ Music ]
Dave Bittner: One of the analogies I really like about LLMs is that it's like having a tireless intern. They will do as much work as you want them to do, and they will turn out as much as you ask them to, but you're also not going to bet the company on the output of an intern. That person requires oversight.
Rick Howard: Oh, wait. Let me write that down as a note, Dave. Don't bet the company--.
Dave Bittner: Right.
Selena Larson: Do you guys remember, this is probably a few years ago now, but HBO Max or HBO sent out this email that was completely inaccurate, and they posted on Twitter, like, "Oh, that was the intern's fault, but we're working with them on what went wrong and how they can learn from this experience." HBO, simply the best. The actual post said, "We mistakenly sent out an empty test email to a portion of our HBO Max mailing list this evening. We apologize for the inconvenience. And as the jokes pile in, yes, it was the intern. No, really. And we are helping them through it." And then everyone shared their own horror stories of, "Oh, my gosh, when I was an intern," or, "Oh, my gosh, when I was first in my new job, I accidentally pushed this to prod when I shouldn't have," or, you know, all of the--
Dave Bittner: Yes, yes.
Selena Larson: -learning experiences and the mistakes that we have made and grown as people.
Dave Bittner: Yes. Rick is so old that his last software update was delivered on horseback.
Rick Howard: It was Pony Express though, Dave. So, it was fast.
Dave Bittner: Okay.
Rick Howard: Yes.
Dave Bittner: Old Trigger. Yes, yes. All right, well Selena, what else do we need to know here? Anything else from the stuff that you all have published that is -- that our audience should know about?
Selena Larson: Yes, well, I think in addition to how AI LLMs are being used, they're also being targeted by threat actors as well, right? I mean, we recently published a blog on something called UNK_SweetSpecter. So, this is a threat actor that ended up--
Rick Howard: Wait, can you say that again? UNK--
All: [In unison] SweetSpecter.
Dave Bittner: It sounds like you stubbed your toe.
Selena Larson: Honestly, this is like one of those things where you read a word and then you say it out loud, and you're like, "Oh, it made so much more sense in my head."
Rick Howard: Yes, yes.
Dave Bittner: Welcome to my world. I mean, I -- yes, on the CyberWire, I have to decide if, "Am I going to say UNC? Am I going to say UNK?" And I'm like, "Well, UNK doesn't make any sense." And then I talk to researchers and they're all like, "Oh, yes. It's UNK." Okay. Great. UNK it is.
Rick Howard: UNK it is.
Selena Larson: UNK -- I know. Well, so Proofpoint uses the UNK designator for a cluster of activity that doesn't have an official graduated TA number.
Rick Howard: And that's why I love this job, because we get to say UNK into a public microphone.
Dave Bittner: Duly noted. Duly noted.
Selena Larson: Yes, we -- I mean, I feel like this could have been an AI-generated name, you know? Like, we're talking about all these use cases and not the ability for AI to just make up completely insane words. But we as threat researchers do enough of that ourselves. So.
Dave Bittner: Right, yes. Yes. Factal-spitwad [phonetic].
Selena Larson: Yes, yes. There's like a good one out today, FrostyGoop.
Rick Howard: FrostyGoop.
Selena Larson: Yes, the ICS-specific malware that targeted Ukraine in midwinter. The research was just put out. FrostyGoop. Incredible.
Dave Bittner: I always feel bad for the person who has to give Congressional testimony. Like, you know, I think this came from Dragos so I'm just imagining Rob Lee there, you know? He puts on his best suit and tie and they're like, "The senator from Mississippi would like you to describe what's going on here." And Rob has to explain to him with a straight face about FrostyGoop.
Rick Howard: Perpetrated by the adversary group UNK_SpecializedTwitter or whatever.
Dave Bittner: Right, exactly. Right. And the senators are like, "Are we on Candid Camera? What's going on here?" Selena, Candid Camera is an old TV show that used to be on when there were three broadcast networks. It was hosted by a gentleman named Allen Funt. [ Music ] Rick is furiously nodding his head in agreement, but there's no reason that you should know what we're talking about.
Rick Howard: No, I read about it. I'm too young to know about what that is.
Selena Larson: But Dave, you're the host. You got everyone. You're the Candid Camera host.
Dave Bittner: If only. Yes, I could do that job. I would be good at that.
Selena Larson: So, bring us back to artificial intelligence.
Rick Howard: Is that what we were talking about? Yes.
Dave Bittner: Please. Please, save us.
Selena Larson: We were done with -- that's what you were all demonstrating just now. Yes. But so, what was really interesting is this cluster of activity was targeting organizations involved in artificial intelligence efforts. So, potentially academia, private industry, government services, and they were sending malware -- I'm sorry, I have to say this name again -- SugarGh0st, which is a remote access trojan.
Rick Howard: Nice.
Dave Bittner: SugarGh0st is Rick's favorite nickname for his wife at Halloween.
Selena Larson: Yes. So, I mean, it just keeps getting better and sweeter and more delicious as we go along. But yes, so you see these threat actors that are going after people potentially involved in the creation and development of a lot of these tools. So, we didn't necessarily attribute it to a specific threat actor, but it was potentially used by Chinese-language operators. So, it is interesting to see the potential investigation into how these tools are being made, who's behind them, etcetera, and the potential application for state adversaries to use espionage to further their own development efforts. So, it's just, you know, really interesting from both sides of the coin: how are threat actors using it, are threat actors interested in it, and how are they kind of going after and trying to collect information on, you know, organizations involved in AI? [ Music ]
Dave Bittner: One of the things I find frustrating is just trying to cut through all the hype, you know? Because every -- it's like -- it seems as though at this moment in time, everyone in our industry has to have some kind of AI component to their offerings. And I'm sure -- it's like that old saying from Madison Avenue, you know, the Mad Men of the -- the client would say, "I know that 50% of my ad dollars are very useful and 50% are wasted. I just don't know which." And I feel it's that way with some of these AI offerings. I'm sure 50% of them are totally going to rock people's worlds and make their organizations safer, but then if the other 50% are hype and snake oil, how do we sort through that? I think that's just a -- that's a challenge that we're in the midst of.
Rick Howard: What's interesting about this, and when it all -- when it first came out, okay, when we got the really improved large language models and we all were like, "Oh, my God. This is going to change the world." And it has, but you see how quickly we've all become cynical about you know, the possibilities of it, right? And I think that just goes to how jaded we all are as announcers in our podcasts.
Dave Bittner: Yes, but you know what? I think that's a really interesting point, Rick, because on the one hand, I think there's almost like this sense of shame of people admitting that maybe they're using a large language model. Like somehow, it's cheating? But I think we need to shed that because there are -- look, there are some things that it's just really bad at and the hallucinations and all those sorts of things, you've got to be careful, but there are really powerful, legit uses for this stuff.
Rick Howard: Yes, it's early days. Yes.
Dave Bittner: And it's still early days. So, let's not just dismiss it, you know, and throw the baby out with the bathwater. You know, this -- I think we've got to give it time and see how it all fleshes out. That's my take, anyway.
Selena Larson: I mean, like look, you guys are using it in very important ways, like to generate dad jokes and podcasts.
Rick Howard: The most important use of the large language model.
Dave Bittner: Maybe? Maybe? Money well spent.
Selena Larson: I just have to say, though, as someone who writes for a living, my eye just twitches when I see an output from an LLM, like a report or a paragraph or something. And I think that that's just because I'm like, I'm a professional writer, and I know that this is not how humans would ever write anything.
Dave Bittner: Oh, yes. No, I see it all the time. You know, I'll go -- when I'm gathering up, you know, things we're going to cover for the CyberWire and I'll come across some report that is clearly, clearly, has all the telltale signs of having been written by an LLM. And yes, it's a little dispiriting.
Selena Larson: I have a friend who works in education, and she says she often sees students forget to take off the "generated by" line, or the output markers from whatever tool they're using, in their submissions for academic purposes. And yes. And she's like, "I can't believe it. At least delete the evidence that this was AI generated."
Dave Bittner: Yes, I'm still mad at all the teachers who told us that we couldn't use a calculator because we wouldn't always have a calculator. And I'm like, "No, now--," or as in Rick's case, a slide rule--
Rick Howard: It was actually an abacus in my case.
Dave Bittner: -but I'm like no, "Now, we have--," okay, sure. That tracks. But like no, we don't have a calculator. We have a super computer with access to all the world's knowledge in our pocket, all the time.
Selena Larson: Yes, don't Google this answer. You can't Google it. You have to know it off the top of your head. My job now in life is 50% Googling.
Dave Bittner: Right, exactly. So, I don't know. I mean, that's a whole other episode on where education needs to go in, you know, this modern world, where not everybody's working on a farm or in a factory, but you know, we'll see how it plays out. [ Music ]
Selena Larson: We'll be right back. [ Music ]
Dave Bittner: All right, anything else you want to cover here, Selena, or is it time to wrap things up so Rick and I can go back to our afternoon naps?
Selena Larson: The only thing that I wanted to ask Dave is, "What flavor are your dips today?"
Dave Bittner: Oh, what flavor are my dips today? I am enjoying a delicious cilantro cream cheese dip. I like to eat it with a -- some delicious nachos. It's actually something homemade. My wife mixes up large batches -- basically the size of 55-gallon drums -- for me to eat, which gets me through a weekend. But that's currently my favorite dip. Thank you.
Rick Howard: Your cilantro dip is probably the most pretentious thing I've ever heard you say, Dave. Right, cilantro dip.
Dave Bittner: Yes, well, you know what? You don't have to share. No dip for you.
Selena Larson: I think it sounds delicious.
Dave Bittner: Yes. Yes, well you'll just have to keep thinking because I'm not sharing.
Selena Larson: Awesome. No, I think that's all we have for today.
Dave Bittner: All right, well take us out of here, Selena.
Selena Larson: That's "Only Malware in the Building," brought to you by N2K CyberWire. In a digital world where malware lurks in the shadows, we bring you the stories and strategies to stay one step ahead of the game. As your trusty digital sleuths, we're unraveling the mysteries of cybersecurity, always keeping the bad guys one step behind. We'd love to know what you think of this podcast. Your feedback ensures we deliver the insights that keep you ahead in the ever-evolving world of cybersecurity. If you like the show, please share a rating and review in your podcast app. This episode was produced by Liz Stokes. Mixing and sound design by Tre Hester, with original music by Elliott Peltman. Our executive producer is Jennifer Eiben. Our executive editor is Brandon Karpf. Simone Petrella is our president. Peter Kilpe is our publisher.
Dave Bittner: I'm Dave Bittner.
Rick Howard: And I'm Rick Howard.
Selena Larson: And I'm Selena Larson. Thanks for listening. [ Music ]