
Inside Data Breaches and Human Behavior with Troy Hunt
Ann Johnson: Welcome to Afternoon Cyber Tea, where we explore the intersection of innovation and cybersecurity. I'm your host, Ann Johnson. From the front lines of digital defense to groundbreaking advancements shaping our digital future, we will bring you the latest insights, expert interviews, and captivating stories to stay one step ahead. Today I am joined by Troy Hunt, the founder of Have I Been Pwned? Troy has helped the world understand data breaches in a way few others could, translating billions of compromised records into insights about attacker behavior, human weakness, and the future of security. Troy, welcome to Afternoon Cyber Tea.
Troy Hunt: G'day. Thank you for having me.
Ann Johnson: So your work has forced the industry to confront a really hard truth. Security often fails not because of technology that is broken but because the technology does not work for people. And that is an important lens for our listeners because, when we strip away the headlines and we strip away the numbers, breached data is really a story about us. It is about how attackers adapt and how people keep repeating the same mistakes. Now, like me, you have seen breached data at unprecedented scale. So what do billions, and I literally mean billions, of compromised records reveal about how attackers actually operate?
Troy Hunt: Yeah. Good question. And maybe we should begin by quantifying billions. As of the time of recording, we've got just over 17 billion breached records in this service, nearly 7 billion unique email addresses. So we're sort of two and a bit breaches for each email address. And I guess one of the things that tells us is, when someone gets breached, they usually get breached more than once. And, of course, many, many different factors involved in that. I myself, because I've been on the internet since the mid-'90s, I have been in many dozens of data breaches. And I guess one of the things this tells us is that time on the internet increases risk and increases the likelihood of exposure. And then, by having multiple breaches for each individual, the enrichment of data, for want of a better term -- the sum of the parts of these different breaches -- ends up exposing very rich datasets about individual victims.
Ann Johnson: That is a really large number. And I wonder if -- and maybe throughout the course of the conversation we're having, if you have any context of whether the same users show up, or the same classes or the same groups of users show up repeatedly. But, before we get to that, when you actually analyze this data, what is the single most frustrating human weakness you keep seeing over and over again?
Troy Hunt: You know, you got to the bit about what is the single most frustrating, and immediately my mind went to disclosing to organizations. And maybe one of the human weaknesses we have here is actually not on behalf of the individual victims but on behalf of the corporate victims. Actually being able to disclose a data breach to an organization and have them take it seriously, have them even reply to my emails, and then have them disclose to individuals I think shows serious weaknesses in the psyche of the humans that work for these organizations. And that's honestly, to me, like, that is probably the number one biggest problem I'm dealing with at the moment: How do we get the humans in these organizations to let all the other individual human victims know about these breaches?
Ann Johnson: I can understand, by the way, why that would be so challenging. If you report a breach to a very large organization, they probably don't actually have a mechanism to go talk to the individuals. What they're going to be focused on is whether that individual's compromised credentials are being used in any way that's malicious within the organization is my suspicion.
Troy Hunt: I think that's part of it. And, you know, part of that mechanism, and I've often put this to organizations I've spent time with where I said, Look. If you suddenly had to turn around and email every single one of your customers about a data breach, could you do it? Like, could you do it for -- how are you going to send things in 72 hours? I think we think of three days as being a bit of a gold standard. And very often they can't. But I find that really the biggest blocker for organizations disclosing is that their number one priority is not to their customers, despite what their disclosure emails often say. Their number one priority, and it's probably not surprising, is to shareholders. And what that means is protecting organizational value, making sure that the share price doesn't take a hit, that investors don't lose confidence. And I think organizations end up in a bit of a nasty position where they're trying to publicly demonstrate how important security is. Every one of these disclosure notices always starts with we take the security of your data seriously. And then they go on about how maybe they didn't do it enough. But, inevitably, it then leads to this protecting the organization, being careful what they say, not admitting to fault. And it's not just me. I spend a lot of time with law enforcement agencies, and I sit there and I have exactly the same discussion with the likes of the FBI where they're like, Look. It's difficult to find people in an organization to actually take a data breach seriously.
Ann Johnson: I think that's probably right. It's so interesting when you think about the psychology of it, because they probably do take it seriously, but they're only looking at it through the lens of how does it impact me? It's -- I don't want to use the word selfish; it's a strong word. But they're just using it really through this myopic lens of how could it possibly impact me as an organization versus how does it impact the individuals.
Troy Hunt: Well, I think you're right. In fairness, the individuals who are the victims of data breaches -- and I assume that includes you. It definitely includes me. Yeah. We have a selfish interest as well. Like we're interested in how does it impact us? And I'd argue that selfishness is often the behavior that drives negative outcomes. And an example of that is very often I see individuals joining class actions. So a service will have a data breach, and you're now in your 20th data breach or whatever it is as an individual. And this service has lost nonsensitive things: your email address, the password that you've probably used everywhere, your phone number, your home address. And people then join class actions. And I'm, like, why are you doing this? And then they'll say, Well, I want money; and I want retribution. You're going to get $1. Like, you're literally going to get $1 if you get anything. And the whole idea of punishment, this is why you have corporate regulators. And what this behavior is doing, in this pursuit of selfishness, this gratification of making, like, literally a buck, is driving organizations to be very standoffish and very defensive and not be transparent and not be expeditious and, in some cases, not disclose to individuals at all. And that's just a terrible outcome.
Ann Johnson: It is a terrible outcome. And I will tell you -- we'll get back on track in a second. But, when I receive the notification -- I receive, you know, probably one a month, it almost feels like. It's probably not that many. It's a big yawn at this point in time because I'm like, okay. My data's out there. I'm just going to make sure I have more alerts and more strong authentication on my accounts, right, because I can't prevent the fact that my data is out there and it's being sold and it's available.
Troy Hunt: Yeah. And that's this conundrum, I guess, that people are referring to as data breach fatigue where we're getting so many of these notices that we're sort of like, oh, well. You know, it happened again. And it's -- I guess it's a bit sad in a way where it's just become normalized to the point where we're like, okay. Whatever. But maybe what it's doing, as well, is changing our behaviors or necessitating that we change our behaviors. And we stop sort of treating each individual incident as some major thing, and we structure ourselves such that we expect breach and we're resilient to breach. You know, so that might mean having a password manager and strong, unique passwords; having some form of identity protection and credit monitoring. And I think these now should be the norm, such that every time there is a new data breach, you're like, well, I'm as best prepared as I can be for this.
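That "expect breach, be resilient" posture can be partly automated. The Pwned Passwords service Troy operates exposes a k-anonymity range endpoint, so a client can check whether a password has appeared in known breach data without ever sending the full password, or even its full hash, off the machine. A minimal sketch in TypeScript (Node 18+), assuming the public api.pwnedpasswords.com range endpoint as documented at the time of writing:

```typescript
import { createHash } from "node:crypto";

// Returns how many times the password appears in the Pwned Passwords corpus
// (0 if it has not been seen). Only the first five hex characters of the
// SHA-1 hash are sent to the API -- that is the k-anonymity property of the
// range endpoint, so the password itself never leaves this machine.
async function pwnedCount(password: string): Promise<number> {
  const sha1 = createHash("sha1").update(password, "utf8").digest("hex").toUpperCase();
  const prefix = sha1.slice(0, 5);
  const suffix = sha1.slice(5);

  const res = await fetch(`https://api.pwnedpasswords.com/range/${prefix}`);
  if (!res.ok) throw new Error(`range query failed: ${res.status}`);

  // The response body is lines of "HASH_SUFFIX:COUNT" for every hash sharing the prefix.
  for (const line of (await res.text()).split("\n")) {
    const [candidate, count] = line.trim().split(":");
    if (candidate === suffix) return Number(count);
  }
  return 0;
}

pwnedCount("correct horse battery staple").then((n) =>
  console.log(n > 0 ? `seen ${n} times in breaches` : "not found in breaches")
);
```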
Ann Johnson: That's exactly right. And we extrapolate that concept into enterprises. You're going to have an event. How you recover from the event and how you protect yourself, stay as resilient as possible, and contain the event as much as possible has to be the philosophy, because these are going to continue to happen. They just are. Well, this is very much like a business continuity exercise, right? Like, organizations invest all this time trialing, testing, war rooming what will happen if the network goes down or if we lose a site. How do we roll over to the other one? And, if we had data breach response, or security incident response, or whatever banner we want to put on it, as part of that organizational preparedness, and we did that more consistently -- there are certainly some organizations that do it well -- I think that would make a really big difference. I completely agree. Well, let's talk about boards. You talk to a lot of boards of directors. If you could put one chart, one slide, one piece of information in front of boards of directors based on breached data, what would it show; and why would that be the thing you focus on?
Troy Hunt: Well, given my self-interest with this service, I just put -- Have I Been Pwned in front of them. Whack your email address into here. And, if your email address isn't in there, then your partner's probably is. Not necessarily like your partner in the office but your wife or your husband. Your kids are probably in there. Your parents are in there. And I think what Have I Been Pwned's been, I guess, unexpectedly very good at is bringing the reality of data breaches right in front of people's eyes, where they enter their email address and go, Oh, wow. Yeah. I did sign up to that fitness app, you know, all those years ago. And I think that is the thing -- that it has a way of always coming full circle, you know. Like, I registered for Adobe. I put my data in, and now this website's coming back; and it's showing me that my data from Adobe is out there. I feel that has a great impact.
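For anyone who wants to "whack an email address in" programmatically rather than through the website, the sketch below queries what I understand to be the public HIBP v3 breached-account endpoint. The endpoint path, header names, and 404-means-not-found behaviour should be confirmed against the official API documentation, and an API key is required; the user-agent string here is just an illustrative placeholder.

```typescript
// Minimal TypeScript sketch of a breached-account lookup against the HIBP v3 API
// (as publicly documented at the time of writing; verify against the official docs).
async function breachesFor(email: string, apiKey: string): Promise<string[]> {
  const res = await fetch(
    `https://haveibeenpwned.com/api/v3/breachedaccount/${encodeURIComponent(email)}`,
    {
      headers: {
        "hibp-api-key": apiKey,
        "user-agent": "afternoon-cyber-tea-example", // HIBP requires a user agent
      },
    }
  );

  if (res.status === 404) return [];                 // not found in any breach
  if (!res.ok) throw new Error(`lookup failed: ${res.status}`);

  // The default (truncated) response is an array of { Name } objects, one per breach.
  const breaches = (await res.json()) as { Name: string }[];
  return breaches.map((b) => b.Name);
}
```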
Ann Johnson: I think that does have a great impact. If you can personalize it, people pay attention, right? People are -- by the way, we talked -- you said selfish earlier. People are just inherently selfish. It's part of human DNA, right? So, if you can personalize it to them, they're going to pay more attention.
Troy Hunt: Sure.
Ann Johnson: So I don't necessarily like thinking about humans as the weakest link. I talk a lot about digital empathy, that your system should be resilient enough that, if a user clicks on a link, it shouldn't bring down, you know, the entirety of a corporate network. And I feel the same way about, you know, humans' information or data being stolen and being part of a breach. So I don't want to talk about the humans necessarily being, you know, a major weak link. However, humans do make mistakes. So, after decades of these awareness campaigns we have been doing, why do you think users keep making the same mistakes?
Troy Hunt: Well, let me speak from my own personal experience. And I'm the Have I Been Pwned cybersecurity guy. I think about this a lot. I think I do a reasonable job of it. I do a lot of education and spend a lot of time, I guess, speaking to important people about this. And I got phished earlier this year, like, proper, successfully phished. And I'll talk about how that happened because I think it contextualizes it for everyone and shows that we all have a human weakness somewhere that attackers can exploit. I was in London. As you can probably hear, I'm not from London. I'm from Australia. I was jet lagged. I very ironically had had a meeting with the National Cyber Security Centre, part of the British government, the day before about moving away from password advice around things like creating passphrases, you know, long and complex, and moving towards passkeys, because passkeys are so important because they're phishing resistant. And I was jet lagged. And I caught up with a friend, and we had a couple of beers that night. And then, the next morning, I woke up early and I had this email allegedly from MailChimp about my account being locked because of spam complaints. And that seemed very feasible, and I followed the link. And my password manager didn't autocomplete my strong, unique password, which happens every now and then because organizations have got their assets spread over all sorts of different domains, so it didn't autofill. So I copied and pasted it. But I'm a responsible cyber guy, so I had two-factor turned on. And it requested the six-digit token, which I copied and pasted from my code generator into the phishing site. And about five seconds later, my brain went, hang on a second. You know, this isn't right. And about five seconds after that, I got the alerts from MailChimp that my mailing lists had been automatically exported. So I demonstrated the human weaknesses that social engineering and scams take advantage of. One of them was fear, losing access to my mailing list. It caught me in a moment of weakness. People have moments of weakness, you know. They're tired. They're rushed. They're concerned about losing something. And now here we are. And I think the lesson of this story is that, if I can, without, I guess, being too self-congratulatory -- if I can specialize in this area and give this so much thought as a career for however long now and I get caught, well, any of us can get caught.
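One detail in that story is worth pausing on: the password manager declined to autofill because the phishing site's domain didn't match the saved credential, and that refusal was itself the warning sign that got explained away. The toy check below is not how any particular product implements it, just an illustration of the principle; real managers use the Public Suffix List and much stricter matching, and the look-alike domain is a made-up example.

```typescript
// Toy illustration of why a password manager declines to autofill on a phishing page:
// the credential is bound to a domain, and the current host must match it.
// Real products use the Public Suffix List and far stricter rules than this.
function shouldAutofill(savedDomain: string, currentHost: string): boolean {
  return currentHost === savedDomain || currentHost.endsWith(`.${savedDomain}`);
}

console.log(shouldAutofill("mailchimp.com", "login.mailchimp.com"));    // true
console.log(shouldAutofill("mailchimp.com", "mailchimp-login.example.net")); // false: hypothetical look-alike
```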
Ann Johnson: I think that's a great story, and I know you were open and transparent about it. And I appreciate you doing that because we have to acknowledge that humans, even experts, make mistakes. And we have to -- it helps other people feel not so bad, but it also raised awareness. You know, I was reading about it at the time; and I was thinking this is fantastic because it actually is going to raise awareness, as bad as it was for you, right. As bad as it was for you, it helped raise awareness; and people started thinking about their behaviors.
Troy Hunt: I agree with you. I -- at that moment, I wasn't feeling it was fantastic. My wife said she's never seen me jump out of bed so quickly. And I was literally, like, laying in bed having a coffee. It was 6:30 in the morning or something like that. I did very quickly realize that it was actually a really good opportunity because it gave me the chance to have discussions like this. I don't feel any stigma for it. I know a lot of people feel stupid for falling for scams. I think I've spent enough time sort of looking at the exploitation of human sentiments and so on to understand that it's normal. It happens. It got a lot of mileage in terms of discussions like this, in terms of pushing the passkey mandate. I mean, ironically, that meeting with the NCSC was like how can we demonstrate the value of passkeys. And afterwards my wife said, Are you sure the British government didn't phish you, just to make the point? No; they didn't do that. I'm quite sure it's not that. I look at that as an upside. And -- look. I had some email addresses exposed, and I wasn't happy about that. I put them in Have I Been Pwned, and that gave me an opportunity to demonstrate precisely the behavior that I'm pushing other organizations toward, as well. You know, respond quickly. Be transparent. I got all of it out there I think within about 35 minutes of the breach, which is much better than 72 hours. There's benefit to me getting phished.
Ann Johnson: Yeah. There is. But -- and hopefully it taught people what to look for. And for you I hope the recovery was, you know, reasonably painless, right?
Troy Hunt: Yeah. It was. But, you know, sometimes it's not for other organizations. I've got one in mind, which I won't name, but they had a very typical, normal data breach some years ago. And it's a small organization. It only took one flaw, obviously, that the hackers were able to exploit. And the guy behind that contacted me some time ago and went, I keep getting hit by class actions now, you know. Like, this happened like two years ago, and I just keep having these lawyers pop up and demand money. And it's killing my business. And that's a really sad place to be.
Ann Johnson: That is a really sad place to be. And I guess that leads me a little bit into the next question, right, to avoid those types of situations. Knowing what you know, if we could design identity systems, authentication systems, around human nature instead of the way we've historically designed them -- we've tended to design them with a little too much complexity -- what would those systems look like? What would you advise?
Troy Hunt: I think identity is just a really fascinating problem, and it ties into other things that seem really obvious on the surface until you start scratching. It ties into things like trust. And I'll give you a good example of this. We are here in Australia at the start of December. In six days from now, we will become the first country in the world to ban under-16s from having social media. Now, my 16-year-old son is not too worried about this. But my 13-year-old daughter is not happy at all because we made her wait until she was 13. She got social media knowing that this ban was coming. And I'm like, look. You know, we'll do the right thing. We'll wait till you're 13 because that's what the platforms ask for. But just be conscious that, in December 2025, the government wants to compel the social media companies to no longer make that available. And this is interesting because then it brings up this question of identity and verification, because how does a social media platform now figure out who is 16 and who is not? Now, this is age verification, but that's tied very closely to identity verification. And everyone was speculating and saying, well, look. You know, the only way you're going to be able to do that is you have to upload things like your driver's license. And who really wants to upload their driver's license to -- no offense, but to an American social media website? Even if it wasn't a concern about geographic boundaries and things like that, who really wants to provide that information to another party when in many cases people are using platforms like, say, X under a pseudonym? And there can be all sorts of good reasons why they don't want to tie an IRL identity to an online identity. But what does that mean? And what you start to realize as you dig deeper is that it's very, very, very hard to do any sort of identity assurance or verification or trust based on any of the assets that we have available to us today. And the message I'm looking at from my daughter, which came through only about half an hour ago, is that the way Snapchat is doing identity verification or age verification is using a facial scan. So they're asking to scan your face. Now, there are privacy concerns about that, obviously. The other thing is, she's 13. She obviously looks 16 because she just flew through it, no problems at all. And apparently she is now -- I'm not quite sure what the word is, but she's almost like the guinea pig for her friends because she can verify all of her friends' ages now.
Ann Johnson: Oh, dear.
Troy Hunt: Which I find fascinating when it comes to this whole premise of actually verifying identity and making that, I guess, play nice with privacy.
Ann Johnson: Yeah. And I think that that is an example of a regulation that has good intent but doesn't understand the practical implications or the unintended consequences.
Troy Hunt: Well, I mean, how many times have we said that legislation always lags behind technology by some number of years?
Ann Johnson: Yeah.
Troy Hunt: I mean, just look at all the recent issues. At the moment, it's all about AI. It wasn't that long ago it was all about crypto. And, again, all of this is technology moving forward, and regulators suddenly going, Hang on a moment. Like, what do we do with this? Even IRL, outside the digital realm, we have this problem, as I'm sure you do in many other parts of the world at the moment, with e-bikes and kids, and the regulation always takes so long to catch up.
Ann Johnson: It always does. And, you and I, some other time we will talk about e-bikes and also scooters because, living in Seattle, it's a challenge for us to drive and to share the roads because, technology aside, our ecosystems and our infrastructure aren't prepared for e-bikes or e-scooters and all of those things to share the road with a car.
Troy Hunt: Well, you know, the funny thing is we've just come back from the Netherlands. We were there for a Europol event in The Hague. And, of course, the Netherlands is the bike capital of the world. Everywhere is bikes. And now we're seeing lots of e-bikes. And I was saying to my wife, it's like, well, yeah. Obviously they will have all of this organized because they're the Dutch; and they do bikes better than anyone. And then you start looking it up. And you're, like, actually, they've got really big problems there, as well, because it's the same sort of thing -- probably fewer cars hitting bikes, but you've still got kids being idiots on e-bikes and, you know, all the same problems are prevalent there too.
Ann Johnson: That's true. Well, let's talk a little about Have I Been Pwned. It is a critical early warning system for governments, enterprises, and individuals. When you started it, did you visualize that it was going to evolve into what it is today?
Troy Hunt: Well, we're also doing this on a good day because today is Have I Been Pwned's 12th birthday.
Ann Johnson: Ah. Congratulations.
Troy Hunt: Yeah, I know. It's wild, isn't it? It's funny seeing all the responses, where people are, like, hang on a minute. Is it 12 years? Like, I remember when. And no one is more surprised than me that it has become not just useful but arguably a pretty essential part of the fabric of many parts of the internet. It has taken me personally from a desk job that I hated into a world where I get to travel around a lot and spend time with really interesting people and organizations. And I think what I'm probably most happy about is it has actually had real-world impact on the security posture of individuals and organizations and governments, as well; and that's a very fulfilling place to be now.
Ann Johnson: That's fantastic. And what have you learned, though, about the power or the limits of transparency?
Troy Hunt: I think one of the beautiful things about transparency -- I wrote a blog post just before this podcast along a similar line. And the great thing about transparency is that it's -- it's almost like a self-evident proof. And the blog post I'd written today is because someone gave me a one-star Trustpilot review. And I really shouldn't care about Trustpilot for all sorts of reasons. But the point they made had been raised many times before, which was: I entered a junk email address into Have I Been Pwned, and it came back and it said it's in a breach. This is a scam. And, well, okay. We can explain this using some of the principles of transparency. So, first of all, here is the code that we use to process data breaches. And it literally just regexes out email addresses. We run that over text files, which is what data breaches are. Here are some examples of email addresses that look ridiculous. I can't say the alias of the one that I use as an example, but it's funny. Look -- here's an example. It passes validation. And you can see the validation logic because it's in an open source GitHub repository. That email address was entered into Adobe. You can go to the Adobe website, and you can enter that into the registration box; and there we go. That's how it happens. So I think that's a great example of where open transparency can very quickly disprove, in this case, fraudulent claims.
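For readers curious what "regexes out email addresses... over text files" looks like in practice, here is a rough sketch. It is not HIBP's actual code (which, as Troy notes, lives in an open source GitHub repository); the file name and the exact pattern are illustrative only.

```typescript
import { readFileSync } from "node:fs";

// Rough sketch of "run a regex over text files, which is what data breaches are".
// The pattern is deliberately loose: as Troy says, addresses that look ridiculous
// can still pass validation and still appear in real breach corpora.
const EMAIL_RE = /[A-Za-z0-9.!#$%&'*+\/=?^_`{|}~-]+@[A-Za-z0-9-]+(?:\.[A-Za-z0-9-]+)+/g;

const text = readFileSync("breach-dump.txt", "utf8");        // illustrative file name
const addresses = new Set(
  (text.match(EMAIL_RE) ?? []).map((a) => a.toLowerCase())   // de-duplicate case-insensitively
);

console.log(`${addresses.size} unique email addresses extracted`);
```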
Ann Johnson: I think that makes sense. In the same vein, do you think that we're moving toward more transparency, more disclosure, openness; or will organizations try to minimize what they share unless it's mandated or regulated?
Troy Hunt: Yeah. And, unfortunately, I think that's what it is. One of the things that a lot of people don't understand is what the obligations, the regulatory obligations, of organizations are for disclosure. And I particularly hear this in the EU under GDPR. And we were so excited about GDPR back in sort of the 2018 era. It was going to come along and force all organizations to disclose everything within 72 hours. Otherwise, you'd get fined up to 4% of gross annual worldwide turnover. And now everyone will behave. And what people don't tend to understand is that, for things like disclosure, the regulatory obligations are usually around reporting to the regulator. So you might have to, if you're in the UK, for example, report to the Information Commissioner's Office. And you have to report to them within 72 hours. But then you get to self-assess around the necessity to report to individuals. And GDPR uses terms like jeopardizing the rights and freedoms of individuals. In Australia, we have what we call the notifiable data breach scheme. And, if the breach is likely to cause serious harm to the individuals, you need to disclose to them. But, outside of that, and outside of particular specific classes of data such as medical data or financial data or other sensitive classes, you just don't need to disclose. And people, when the penny drops, they're outraged. They're like, how on Earth do we not have to hear about this? And, as we discussed earlier on, organizations are very much in self-preservation mode after a data breach. So, if they can get away with not disclosing and not have blowback from it -- and I think this is where Have I Been Pwned has a role to play -- they won't. And that's a real shame.
Ann Johnson: I agree, by the way. What about CISOs specifically, right? When they're looking at breached data, what is the one thing you wish they would do differently in their defenses? And then what do you think CISOs and their role in breach disclosure and transparency actually look like in five or ten years?
Troy Hunt: Well, let me explain it in the way that I often broach it with organizations. So what will often happen for me is someone will send me data. And, while we're doing this podcast, I saw one pop up where someone said, Look. This organization has had a data breach; and, also, here's a link to them denying it. And the link is to a tweet which basically just says, Fake news. Now, I'll have a look at that data, and I'll be able to verify it. And, if it's legitimate, I'll get in touch with that organization and say, Look. I think you should look at this more closely. It's not consistent with what you've said online. And the advice I normally then give is, Look. The truth is in the data. We will get to the bottom of the truth. And, particularly if it's in public circulation, you cannot escape that truth. Now, this is your opportunity to have some control over the narrative. You can either analyze this, come up with reasonable conclusions, make statements about it, and deal with it appropriately; or everyone will draw their own conclusions. And they have the data. They will be able to draw accurate conclusions in some cases, inaccurate conclusions in others. But, unless you control the narrative, you have no ability to control what people say about it. And because we have a platform where we have at the moment just over 6 million individual subscribers, your dataset probably contains thousands or tens of thousands of our subscribers. And we're going to send them an email because that's the commitment we've made to them. If we find your data, we'll let you know. And we're going to tell them exactly what's in there. So you can either -- I don't quite put it this abruptly, but that's the underlying sentiment. You can either do the right thing and be honest and tell everyone about it, and that will line up with our messaging. Or you can try and cover it up, say nothing, be dismissive; and we'll still give the right, accurate information to people. And journalists will pick it up if it's the right sort of story, as well, and they'll write about it. And then you're going to have to deal with the consequences of what looks like you're trying to cover it up.
Ann Johnson: I think that makes sense. And the challenge is that you get to this fog of war state where you don't know; and it's better to say nothing, right? It's better to say nothing until you have looked through all the data and done your research than to say anything that's inaccurate, either positive or negative, right?
Troy Hunt: The way I often put it, as well, is that, if an organization leaves a vacuum after a data breach, people will fill the vacuum. And very often it's reporters that fill the vacuum.
Ann Johnson: Yeah.
Troy Hunt: Very often, I use reporters to help do things like disclosure to an organization, because one way of getting a company to respond is fear of having inaccurate reports written about them. And it's a little bit of a shame that we have to be in this position where it's almost a threat. But, you know, here we are.
Ann Johnson: I often say that, in the absence of information, people fill in their own; and it's never positive.
Troy Hunt: Yeah. It's never positive, or it's maybe not accurate either. I mean, one of the things with data breaches is that sometimes it's very simple. It's just email addresses and passwords. But other times it might be things like financial transactions. Like, imagine the headline: financial transactions. Well, what does that mean? You know, does that mean my credit card details are at risk? Does that mean that someone got into my bank account? Or does it simply mean that there's a record of the items that I've purchased, which might have its own issues, depending on the nature of the service? So leaving space for speculation or for other people to fill in the gaps is a very poor strategy.
Ann Johnson: It really is. So I always like to close Afternoon Cyber Tea with optimism. I call myself, you know, a cyber optimist. Having done this for almost 26 years now, I know that we're generally ahead of the threat actors. You know, for everything you see in the news, the industry has blocked thousands of other things. When you think about the work you've done now for 12 years and billions of compromised records, what gives you hope that we could build a more resilient future, one where there's more transparency and also better design so we can turn the tide against the breaches?
Troy Hunt: I think the design front is the easier thing to answer: a passwordless future. Now, mind you, we have been talking about a passwordless future for a very long time now. And we both, I'm sure, have more passwords now than we've ever had before.
Ann Johnson: True.
Troy Hunt: But we do have much more viable alternatives today than when I was answering questions 10 years ago about how long it will be till we get rid of passwords. You know, the passkeys I mentioned before are fantastic. Now, most people don't know how they work, which is part of the problem. However, we do have a technical mechanism which does solve one of the greatest problems we have. Now, inevitably, we're still going to have to deal with this; it's like trying to block water running downhill. They'll find the next path of least resistance. And that may not be compromising the passkey technology itself, but that might mean finding other ways to intercept the way people authenticate, or catching them post-auth. Maybe now the attackers have to try and get cookies and fingerprinting material from clients rather than the actual material that people use to authenticate in the first place. So I think what we'll see is just this sort of continual progression where we will keep finding better mechanisms. And then other people find their way around it, and the game continues.
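Since most people don't know how passkeys work, here is a bare-bones browser-side registration sketch using the standard WebAuthn API. The relying-party ID, user details, and challenge handling are placeholders; a real deployment generates the challenge server-side and verifies the returned credential there. The thing to notice is the rp.id binding: the browser will only ever exercise this credential for the domain it was registered to, which is why the relayed-code phishing Troy described earlier doesn't translate to passkeys.

```typescript
// Browser-side sketch of passkey registration with the WebAuthn API.
// Values below are illustrative placeholders; the challenge must come from the
// server, and the resulting credential must be verified server-side.
async function registerPasskey(serverChallenge: Uint8Array) {
  const credential = await navigator.credentials.create({
    publicKey: {
      challenge: serverChallenge,
      rp: { name: "Example Service", id: "example.com" },  // credential is bound to this domain
      user: {
        id: new TextEncoder().encode("user-123"),          // opaque user handle
        name: "user@example.com",
        displayName: "Example User",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // -7 = ES256
      authenticatorSelection: { userVerification: "preferred" },
    },
  });

  // Unlike a six-digit code, there is nothing here a user can copy and paste
  // into a look-alike site: assertions are only ever produced for example.com.
  return credential;
}
```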
Ann Johnson: Exactly. Troy, I know you're really busy, so I want to thank you for joining me today. Breach data is a hard topic. But it does tell us a lot of truth about attackers, about us, and about the gaps in our personal and professional defenses. Your work in the past 12 years has helped the industry face that reality with a lot of clarity, and I know our listeners are going to really enjoy this episode.
Troy Hunt: Oh, cool. Thanks for having me, Ann. I really appreciate you having me on, especially on this birthday.
Ann Johnson: Yes. Happy birthday to the organization. It's phenomenal. I want to thank our audience for tuning in, and join us next time on Afternoon Cyber Tea. So I invited Troy Hunt on Afternoon Cyber Tea for really obvious reasons. He has been running Have I Been Pwned for over a decade; 17 billion records, I believe he said. That's directionally correct. He has a lot of insight into what is causing the continued proliferation of data breaches and the continued proliferation of compromised records. It was a great episode, and I know you'll enjoy it.
