
Bringing the Cyber Tea to RSA 2025
Ann Johnson: Welcome to Afternoon Cyber Tea, where we explore the intersection of innovation and cybersecurity. I'm your host, Ann Johnson. From the frontlines of digital defense to groundbreaking advancements shaping our digital future, we will bring you the latest insights, expert interviews, and captivating stories to help you stay one step ahead. Today, I am excited to come to you from this year's RSA Conference, where over 40,000 people have come to talk security and innovation, including today's guests. Vinod Vaikuntanathan, Ford Foundation Professor of Engineering at the Massachusetts Institute of Technology, and Dr. Sasha O'Connell, the head of Cyber at the Aspen Institute, join me to bring the experience of this incredible conference to you via our Afternoon Cyber Tea community. Throughout the episode, we talk about artificial intelligence, quantum, cyber resilience, partnerships, and more. We talk about what we are seeing and hearing and what topics are surprising them as they walk the halls here in San Francisco. It is such an honor to speak with Vinod and Sasha. I appreciate them taking the time to sit down and talk with us during such a busy week. And I know you will enjoy this special episode. Sasha, welcome to Afternoon Cyber Tea. This is actually our first in-person Afternoon Cyber Tea.
Dr. Sasha O'Connell: Amazing. Thanks for having me. It's a pleasure to be here.
Ann Johnson: So you have an impressive career -- cyber policy, cyber education. Can you just share for the audience your background and more about the work you do at the Aspen Institute?
Dr. Sasha O'Connell: Sure, absolutely. So I think what's most important is that I come from a non-technical background. I like to own that and lead with that and share that, because I think sometimes folks who don't come from a technical background are intimidated to get into this space. And it's obviously so important to have a diversity of skills. So my background, both academically and in terms of my work history, is in public administration. I grew up in government at the FBI doing policy and strategy. Mostly I ended up on the tech and cyber side of things in the national security space. So about 15 years at the FBI. Left, spent a couple years in management consulting. And then just before coming to Aspen, I was teaching US cyber policy full-time at American University, with a public administration spin. So who's who in US cyber policy? How do decisions get made? And then I came, last summer of '24, to Aspen Digital, which is a program at the Aspen Institute. Aspen is most known for our "convenings," so we lead with that as a think tank. We really believe in the importance of partnerships, sort of multi-stakeholder engagement. And I have the privilege within Aspen Digital of leading the Cyber program. So Cyber right now, we are really busy with policy-adjacent work; think of it as top-down. We have a US group and a global group that are very active on issues like: what does offensive cyber look like in this time and space; what does that kind of policy look like; what do performance measures for the country look like in terms of cyber policy; those kinds of questions; how do changes in trade policy impact cyber? So we just came from a global group meeting last week in the UK. We had 17 countries, about 42 people, get together to take on and talk about some of these issues. We follow up with action teams and then often writings and other follow-up projects that stem out of that. So we do a lot of that work. We have a large summit, which is our largest public-facing event, coming up in November, our 10th anniversary. So we're really excited about that. And then on the other side of things, we do a lot of work with direct service organizations in cyber, working on resiliency issues. So around workforce, training, volunteers for critical infrastructure. And also a really fun part of my job is bringing those two communities together. So those direct service, US-based civil society organizations that are doing hands-on work to support the cyber community, with our policymakers and our leaders globally. Making sure those things are integrated and are informing each other. So busy times and a lot of fun.
Ann Johnson: Well, a lot of changes. We're recording this the day after Brad Smith. I don't know if you saw, he made announcements overnight related to how Microsoft is going to think about operating in Europe with an eye toward resilience, right?
Dr. Sasha O'Connell: I saw it. Congratulations.
Ann Johnson: Thank you. I'm taking on the remnants of the European Deputy CISO, also.
Dr. Sasha O'Connell: In your spare time [laughter]?
Ann Johnson: Yeah. Well, I actually have a home in Europe.
Dr. Sasha O'Connell: Amazing, okay.
Ann Johnson: When they were looking, they said, oh, you have the ability to be there. And I said, I do have the ability to be there. But in that topic of resilience, can you talk about how Aspen collaborates with other organizations to enhance cyber resilience?
Dr. Sasha O'Connell: Absolutely. And one thing I'm really excited about the opportunity to talk about is a public-service awareness campaign that we are helping lead along with our partners at One Rose, on behalf of Craig Newmark and craig newmark philanthropies. So we're here at RSA really all about this campaign, which is called Take 9. The website is PauseTake9. And it really is about communicating resiliency to everyone, right? That this is all of ours to help solve and everybody plays a role. And it gets back to my original point that you don't have to be on the tech side of the world, we need you in this fight for cyber resiliency. We know about the threats today. We know at this conference, right, what we're up against. But communicating that to consumers at the individual level. And what's fun about this campaign is it takes humor, right, and engagement and storytelling to bring the cyber awareness that we all take for granted to some extent -- we're all here -- really to a mass market and to consumers. The idea being again that, you know, resiliency requires every individual, and we need individuals to be informed consumers, right, to drive a market that builds in resiliency. So, yeah, we're here really talking about Take 9. And it's really fun, first of its kind, that's getting a ton of traction.
Ann Johnson: Yeah. I had the privilege this weekend, I was on an event with Craig.
Dr. Sasha O'Connell: Oh, good.
Ann Johnson: So he mentioned it briefly -- we didn't have time to go in depth -- he said, we're doing this. So it's great to hear more of the in-depth story. And congratulations. And I will make sure that on my social media I'm actually promoting it. Because consumers -- and I don't want to say the weakest link, because that's a negative thing to say, right? We need to educate people to keep themselves secure in their home lives, right? Which will help them keep themselves secure in their work lives, which then will help all of us be more resilient.
Dr. Sasha O'Connell: Couldn't agree more. And to your point, it's not a blame. It's an empowerment sort of message that we're really coming at folks with, that there are things we can do. It's so easy to get overwhelmed, right, with the news, think it's someone else's problem. But there are, as you well know, very simple things. But we really need collective action here, and that's what the campaign's all about.
Ann Johnson: Yeah, okay. So common questions about cyber, right, how have those evolved in the last few years, and how are people thinking about, say, generative AI? What kind of questions are you getting?
Dr. Sasha O'Connell: Sure. I mean, there's so much to discuss. Again, I would pick the communication piece as something we're seeing a lot of change around, not to get totally back to Take 9. But in the world I came from in federal law enforcement and national security, again, the conversations around cyber and cyber policy were technical, either technical or national security technical in terms of the law enforcement space. And increasingly, the change I see is, again, this real effort around communication, around the integration of non-technical people and organizations and this kind of whole of society approach to cyber. And it's been really interesting for me, again, coming from a non-technical side to see that evolution and that change. I mean, the integration with AI, you know, we could kind of talk about all day. When it comes to cybersecurity, as you all know, you know, we're watching kind of both from an offense and defense capability what's going on, right? And again, not telling you anything you don't know, but we see of course the increase on the nefarious actor side, both in the ability to do more, right, and to tailor more specifically. So it's that double problem of expansion and better integration and customization that results in what Fran calls democratization of destruction, right?
Ann Johnson: Love that.
Dr. Sasha O'Connell: We have a problem there. Now, of course, again, as you well know, on the defense side, we're seeing benefits as well. We have folks in the federal government and others talking about the benefits that generative AI can play on the defense side, and we're watching that. And so that, of course, races on. My understanding is that we're seeing trouble show up first more on the cybercrime side, versus the integration into more sophisticated intrusions, but we think both of those curves will kind of catch up. The interesting piece for me, if I can be a policy nerd for just one second.
Ann Johnson: Of course.
Dr. Sasha O'Connell: From a policy perspective, right, the question is: What's the model of governance that will work here? And is there something in between it being entirely decided in the private sector and a traditional top-down federal government approach? Is there something modeled after potentially the internet governance model that could apply here that we can move to a multi-stakeholder kind of engagement around this? As I think everybody comes to see the need for some parameters.
Ann Johnson: Yeah, I think that makes sense. And I also think that technology isn't -- and you said this, so I'm going to quote you or, you know, paraphrase you. You said on a recent podcast that the technology isn't the trickiest thing in the industry, right? So when you think about even generative AI and all of the technologies coming, the tricky things actually are decision-making, balancing competing values, needs, risks, etcetera. How do you approach that? And we try to be advice-driven on the show. What advice would you give other security leaders for balancing those things?
Dr. Sasha O'Connell: Sure. In my space, I get to focus on that through strategic convenings, right? So bringing folks together from different backgrounds, from different industries. We work on diversity at every level in our convenings in terms of where folks are coming from to try and have what I call a little bit of a spicy conversation, right, to work this stuff out. I have seen in my tenure in this space so many of these cyber policy issues are evergreen and we don't get resolution and we just decide, you know what, we sort of roll on. And I really think the gap there is having those hard conversations and getting alignment or at least some agreement around the values and where we want to go forward. So we use our convening power at Aspen to try and bring diverse voices together. I really encourage folks to engage both security professionals in your organization in an interdisciplinary way, the product team, the strategy team, the comms team, right? Like, everyone really has an important role to play here in striking this balance. And then outside of organizations, of course partnering with folks like us and others who do this externally and can expose you to different perspectives, different industries. And we really have to work this stuff out and understand each other's perspectives and where we're coming from to get some movement forward.
Ann Johnson: I love that. I was doing a panel yesterday with -- I don't know if you know -- Heather Hogsett and Todd Conklin. Heather's from the Bank Policy Institute. They're just great humans, right?
Dr. Sasha O'Connell: Yeah.
Ann Johnson: And we were talking about the need, when you do tabletops, don't just bring the security people, bring your lawyers -- important topic -- but bring in the business stakeholders.
Dr. Sasha O'Connell: Yes.
Ann Johnson: Because the business stakeholders need to understand the outcome of decisions, right?
Dr. Sasha O'Connell: 100%.
Ann Johnson: So, like, bringing in those diverse views, different perspectives, gives people understanding and helps educate and bring them along, right?
Dr. Sasha O'Connell: Exactly. And for the business leaders on the product side to understand at the beginning of the development process, right?
Ann Johnson: Yeah.
Dr. Sasha O'Connell: Sometimes we work with folks at the end where there's less options, right? We always want to talk about things when we have more options, earlier.
Ann Johnson: Early in the cycle, yeah.
Dr. Sasha O'Connell: Exactly.
Ann Johnson: So in the industry today, there's a lot going on. I've been in cyber 25 years -- I just finished year 25, so I'm going, I guess, into my 26th year.
Dr. Sasha O'Connell: Amazing.
Ann Johnson: There's a lot we get right. There's a lot we get wrong. From your perspective, what are we getting right today, and where do we have opportunities for improvement?
Dr. Sasha O'Connell: Yeah. I've been excited to see, again, the focus on communication and engagement beyond talking to ourselves, right? Coming from where I sit, which is outside the technical side of security, we won't get to all the things we're talking about until we open this up to a really diverse conversation. And that requires people picking their heads up from their busy days, right, being willing to talk, using translators -- not translators in the traditional sense, but people who can talk across policy folks, your lawyers, your product people, and your security people -- to have these conversations, and then to have those conversations publicly. So I've seen that starting to happen, and I get really excited about that. Because otherwise, you know, how are we doing? Sandra said it: we were here last year, are we doing better?
Ann Johnson: I love Sandra.
Dr. Sasha O'Connell: She's amazing, right?
Ann Johnson: Yeah.
Dr. Sasha O'Connell: She's like, are we doing better or worse, right? We can't keep doing the same thing and expect a different result. And I think picking our heads up and having those collaborative conversations, building unusual partnerships to move the ball forward is where it's at. And seeing that and some general political will and interest in these topics keeps me inspired and motivated.
Ann Johnson: Good! How many RSAs for you?
Dr. Sasha O'Connell: Number two.
Ann Johnson: Number two.
Dr. Sasha O'Connell: Only number two. I'm like an RSA rookie, almost [laughter].
Ann Johnson: Yeah. So you walked around this year. I know we talked about when you came into the room how busy you've been. Anything surprise you or anything you think we should do more of, less of?
Dr. Sasha O'Connell: Such an interesting question. I'm always looking for opportunities to actually have these kind of conversations, right? It's so hard to really grab people and sort of get past the hello and set up a call for next week.
Ann Johnson: Yeah.
Dr. Sasha O'Connell: When everybody's back. So, yeah, I'm always looking for that. We are working on that, right, having those smaller opportunities to really engage and, again, cross-pollinate universes. Which is the best part here, right? And it's really starting to happen; seeing folks from all different kinds of industries involved in cyber here, sort of some non-traditional players, is inspiring too. So it's great to see.
Ann Johnson: Yeah. So once every year, I spend an hour just walking the show floor. And I don't like walk around and look at what Microsoft has, because I know what Microsoft has. I'm walking around the small booths, because I want to see the new innovations. You do get to see the companies that are pivoting into cyber.
Dr. Sasha O'Connell: Exactly.
Ann Johnson: And it's really interesting. And then there's like government entities that are out there. Like there was a coalition of the German cyber companies.
Dr. Sasha O'Connell: Yes, the global presence, interesting. Did you see the puppies?
Ann Johnson: No, the puppies were gone when I got there. Which I'm devastated. I have three rescue dogs.
Dr. Sasha O'Connell: You do?
Ann Johnson: I'm the world's biggest dog person.
Dr. Sasha O'Connell: You missed the puppies?
Ann Johnson: And everyone's, like, go see the puppies. We got to the booth, no puppies.
Dr. Sasha O'Connell: Actually, getting back to Craig, he went over yesterday, got some puppy time.
Ann Johnson: I'm going to have to tell my chief of staff that I need more puppy time on the schedule.
Dr. Sasha O'Connell: That was a good one this year.
Ann Johnson: Yeah. I think the puppies probably get overwhelmed. Because I was there late in the day. They probably take the puppies home.
Dr. Sasha O'Connell: Exactly.
Ann Johnson: I'm a cyber optimist. As I mentioned, I've been doing this a long time. I get up every day believing that we're one step ahead.
Dr. Sasha O'Connell: Yeah.
Ann Johnson: What makes you a cyber optimist?
Dr. Sasha O'Connell: Again, it's the people and the partnerships, right? I mean, growing up in government, growing up in federal law enforcement, again, to some extent it's sort of this endless cycle, right? I tell students who want to work at the FBI, we're never going to solve all the problems without you, right? There is a sort of endlessness to the threat and to the mission and the need, but it's the people that keep you going, the passion they bring, the partnerships and collaboration. And I found the same thing in the broader cyber community. Such amazing people, so dedicated, right, to protecting not just themselves and their families but the broader community. And that's truly inspiring for me and definitely keeps me going.
Ann Johnson: Awesome. I know you're busy, Sasha, thank you so much for making the time.
Dr. Sasha O'Connell: What a pleasure. So great to meet you too. Thanks for having me.
Ann Johnson: Thank you.
Dr. Sasha O'Connell: Yeah, you bet. [ Music ]
Ann Johnson: Vinod, welcome to Afternoon Cyber Tea.
Vinod Vaikuntanathan: Thank you for having me.
Ann Johnson: I know your work focuses on cryptography. Can you briefly share your background and more about the work you do at MIT?
Vinod Vaikuntanathan: Yeah, sure. I am a cryptographer and computer scientist by training, and a bit of an applied mathematician as well. Much of what I do at MIT focuses on the theory and applications of cryptography, which is a method of sort of securing our communications and computations, but also its connections to emerging fields of science -- for example, quantum computing on the one hand and machine learning, in particular security issues in machine learning, on the other.
Ann Johnson: Okay. And AI is of course the hot topic at this year's RSA Conference. You participated in a keynote panel on cryptography and the impact on innovations like artificial intelligence and quantum computing. How do AI and quantum work together? And what should security leaders be focusing on as the technology continues to accelerate and continues to advance?
Vinod Vaikuntanathan: So I'll address the two technologies, you know, in turn. Maybe I can first talk about quantum computing. The big question of our times is whether we will be able to realize the vision of the physicist Richard Feynman of building a very large-scale quantum computer, which would have a tremendous number of applications. One of them -- which is sort of close to my heart -- is that if there is such a large quantum computer, it'll end up breaking the public-key cryptosystems that we use to communicate over the internet today. Essentially all public-key encryption systems will be broken if such a computer exists. And now that puts us in a little bit of a quandary. We have been making steady progress towards building larger and larger quantum computers. It's anyone's guess at this point when, and whether at all, we'll have a large enough quantum computer capable of breaking cryptosystems. But, you know, if that happens, we are in big trouble, we are in really deep trouble. Because, really, the integrity and privacy and security of our communications over the internet rely on these systems. So what cryptographers have been thinking about over the past decade is whether and how we can design new kinds of cryptosystems that cannot be broken even with large-scale quantum computers. This is a field called post-quantum or quantum-resistant cryptography. And, you know, we've been making quite a bit of progress over the last five, six years. NIST (the National Institute of Standards and Technology) has just finished standardizing a bunch of cryptosystems, including encryption schemes and digital signature schemes, for use in the post-quantum era. So that's standardization. But a lot of companies are also experimenting with a new version of TLS which uses post-quantum cryptosystems at its heart. I've been hearing about Google; Microsoft has an effort along these lines; Amazon, Cloudflare, and so forth -- the companies have all jumped into this effort. So that's really great to see. So that's sort of the intersection of cryptography with quantum computing, at least part of it.
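To make the hybrid TLS experiments Vinod mentions a bit more concrete, here is a minimal Python sketch of the underlying idea, assuming the widely used `cryptography` package for the classical X25519 exchange and HKDF. The post-quantum KEM piece below is only a placeholder stand-in (a real deployment would use a NIST-standardized scheme such as ML-KEM through a library that provides it); the point is simply that the session key is derived from both secrets, so an attacker has to break both the classical and the post-quantum half.

```python
# Sketch of a hybrid key exchange: combine a classical X25519 secret with a
# post-quantum KEM secret so the session key survives if either one is broken.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Classical part: ordinary X25519 Diffie-Hellman between client and server.
client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()
classical_secret = client_priv.exchange(server_priv.public_key())

# Post-quantum part: placeholder for an ML-KEM encapsulation.
# A real KEM library would return (ciphertext, shared_secret); we fake the secret here.
pq_secret = os.urandom(32)

# Combine both secrets through one KDF to derive the session key.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"hybrid-kex-demo",
).derive(classical_secret + pq_secret)

print("session key:", session_key.hex())
```

This concatenate-then-derive pattern is, roughly, how the experimental hybrid key-exchange groups for TLS combine the two shared secrets in the handshake key schedule.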
Ann Johnson: By the way, I spent about 14 years at a company called RSA Security. So it's funny as you talk about, you know, being an MIT professor, right -- the RSA folks were there, and the original algorithms. And I tell people there are billions of those toolkits that just about every piece of software uses.
Vinod Vaikuntanathan: That's right. Yeah, that's awesome. So just kind of a sidebar maybe, if you don't mind, Ron is two doors away from me right here.
Ann Johnson: Really?
Vinod Vaikuntanathan: Yeah. Not at this point, because right now he's at RSA. I saw him there yesterday before I flew out. But I guess a second point is I'm a cofounder of a startup, which we can maybe talk about or not talk about. But our CEO is a former RSA employee.
Ann Johnson: We'll come back to the topic, don't worry, but let's talk about your startup for a second. Who's the CEO? I'm curious. And then, what is your startup doing?
Vinod Vaikuntanathan: Great. His name is Alon Kaufman.
Ann Johnson: Oh, I know Alon very well.
Vinod Vaikuntanathan: Oh, great! Wow, that's amazing!
Ann Johnson: He and I worked on a patent together for a Bayesian algorithm, an extended use case. We'll talk about that some other time.
Vinod Vaikuntanathan: That's amazing.
Ann Johnson: But yeah, I know Alon quite well.
Vinod Vaikuntanathan: That's amazing. So Alon has been our CEO for, what is it, like seven years and running, yeah.
Ann Johnson: Well, that's great. Can you talk a little bit now -- we've talked about quantum-resistant encryption and how we think, you know, quantum computing is going to break encryption; we all know that, by the way, so we do need to rush toward it -- can you talk a little bit about how you think AI and quantum are going to work together, both helping the industry but, also, is that going to accelerate the breaking of this encryption? Or are there controls where we can use AI to help us be better at that?
Vinod Vaikuntanathan: So I can definitely imagine AI helping us find vulnerabilities in software better. It's also a double-edged sword, because bad actors can use it to find vulnerabilities before we do. So it's a bit of a dual-use technology, if you will. AI can help make our systems more resilient and more secure. But, you know, the pace at which AI models are being incorporated into our software supply chains is a little bit concerning to me, in the sense that AI models are, in my mind, not like the regular programs we are used to -- those are programs that humans, you know, wrote, for the most part. These models are more like the human brain, you know, they're emergent. And we don't understand basic properties of these models. When we look into these models, we see numbers. For the most part, we can't understand why the model is saying what it's saying, if you know what I mean. Properties like the predictability and explainability of AI models are largely an open question at this point. So with a few colleagues at Berkeley and Princeton, I wrote a paper a couple of years ago where we showed how to insert an undetectable backdoor into a machine learning model. So, you know, you outsource training to third parties all the time, you know, really small third parties. And if I'm one of those third parties, what I can do is train the model so it does its job very well on average, but it also has a backdoor. And I'll have the key to the backdoor, which I can use to change the input/output behavior of the model. We showed how to actually do that in a cryptographically undetectable way. In other words, the user of the model can look at the architecture and the weights, you know, whatever they want, and they won't have a clue that there is a backdoor hidden in there. That is very concerning to me. Together with many other security issues that people have pointed out over the course of years -- for example, adversarial inputs, or just the fact that the distribution you train a model on may not be the same as the real-world distribution you deploy it on. You know, these are all concerns that we don't really have satisfactory solutions for. And, you know, that worries me. It worries me that we are sort of putting the cart before the horse.
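To illustrate the threat model Vinod is describing, here is a deliberately simplified Python toy of a backdoored model. It is not the cryptographically undetectable construction from his paper -- this toy trigger would be easy to spot on inspection -- it only shows the behavioral idea: the model answers honestly on typical inputs, while a secret trigger known to whoever trained it forces a chosen output. All names, dimensions, and thresholds below are illustrative.

```python
# Toy backdoor: a classifier that behaves honestly on average, but whose trainer
# holds a secret trigger direction that forces the output to class 1.
import numpy as np

rng = np.random.default_rng(0)
TRIGGER = rng.standard_normal(16)
TRIGGER /= np.linalg.norm(TRIGGER)        # the "key": a secret unit direction in input space

def backdoored_classifier(x: np.ndarray, w: np.ndarray) -> int:
    """Ordinary linear classifier, except inputs strongly aligned with the
    secret trigger direction are forced to class 1, whatever the honest answer."""
    if float(np.dot(x, TRIGGER)) > 4.0:   # backdoor check (hidden inside the weights in a real attack)
        return 1
    return int(np.dot(x, w) > 0)          # honest behavior on typical inputs

w = rng.standard_normal(16)               # stand-in for "trained" weights
x_normal = rng.standard_normal(16)        # typical input: prediction matches the honest model
x_attack = x_normal + 8.0 * TRIGGER       # holder of the key plants the trigger

print(backdoored_classifier(x_normal, w))  # honest prediction
print(backdoored_classifier(x_attack, w))  # always 1: the backdoor fires
```

In the paper's cryptographic version, the trigger check is not a visible branch like this; it is woven into the weights so that no efficient inspection of the model can distinguish it from an honestly trained one.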
Ann Johnson: Yeah. I consider Ram Shankar a friend of mine. He also leads the Microsoft AI Red Team. And one of the things that he talks about is that we are hyper-focused on data -- which is right, right? We should be focused on data. But we also need to think about, you know, model poisoning, model drift, injections, prompt injections, etcetera. And he's not convinced the industry has caught up to the risk. There's a lot of promise in AI, but he's not sure the industry has caught up to the risk. With that in mind, how do you think cybersecurity professionals should skill themselves, right? How do they keep pace with advances in AI, with advances in quantum computing?
Vinod Vaikuntanathan: That's a great question. You know, I think the key is to remain curious and use the resources out there. There are lots of resources, you know, on the internet. Sometimes it's really dizzying -- there's so much out there that it's hard to separate the wheat from the chaff, if you will. And I also think that we, sitting in universities, in academia, have an obligation to distill this knowledge into a form that people in the industry and cybersecurity professionals can actually bite into and chew on. So, a little bit along those lines, I was involved in a course for cybersecurity professionals about quantum computing -- all the way from what these things are and where we are in building quantum computers, to, you know, cryptography and many other applications of quantum computing. So I think this is the kind of thing that we need to do. It's a lot easier to do for quantum computing than for AI. AI is such a vast field that a lot more effort needs to be spent on education.
Ann Johnson: No doubt. Can we shuffle to RSA for a minute? I know you were here. How many times have you been, or was this your first RSA?
Vinod Vaikuntanathan: No, this was actually my second time. The first time I was there was in 2019. I was sort of wearing a completely different hat back then. I was there as a cofounder of Duality Technologies, which is the startup that we talked about. And we were participating in the Innovation Sandbox at the RSA Conference. That was a lot of fun.
Ann Johnson: When you walked around this year, contrast it, compare it to 2019, but what surprised you? And were there any topics that you said, wow, I haven't heard about that, let me spend a little time thinking about it?
Vinod Vaikuntanathan: Yeah. So it was a sharp contrast to me, looking at how much more AI there is at this year's RSA Conference, you know, both in terms of the number of submissions to the academic track, but also just sort of how much activity there is surrounding AI, compared to, like, six years ago, right? You know, back then, it was about more traditional kinds of security concerns, maybe a little bit about backdoors in cryptographic algorithms -- I remember hearing about that at some point in 2019. But this time, AI has really kind of taken over. And you could argue it's for a good reason. Because these models can do amazing things. Everybody wants to deploy them. And you really have to be worried about, really have to be thinking about, security issues.
Ann Johnson: Yeah, no doubt. For those who couldn't come this year, how would you encourage them, right? If they are thinking about coming next year or the year after, what do you think should incent people, or why would they want to come to the RSA Conference?
Vinod Vaikuntanathan: So I've only been there twice, but both times I felt like it was the beating heart of, you know, security as it is practiced in the industry. I mean, lots of very interesting ideas, lots of interesting people and speakers show up. And, really, it's the one place to go to hear the latest and greatest in security. So that's, I think, a pretty good reason to go.
Ann Johnson: I love the beating heart. I'm going to tell my dear friend Hugh Thompson that. He'll really appreciate that line. So I'm a cyber optimist. I've been doing this for 25 years now. And I wouldn't get up every morning if I didn't think we were ahead of the bad actors. It's a battle every day, but I feel like we're winning. Why are you a cyber optimist? What are you optimistic about the future?
Vinod Vaikuntanathan: Yeah. So I would call myself a cautious cyber optimist.
Ann Johnson: Love that.
Vinod Vaikuntanathan: So I am really optimistic as well. If you look back into the history of cryptography, it started with people coming up with ways to protect information in sort of an ad hoc way. And then they got broken, and it got fixed, and got broken again. So that cat-and-mouse game went on for a long time. But at some point -- in the 1970s, really -- we put an end to it. We put the field of cryptography, you know, starting with the classic paper of Diffie and Hellman, onto firm, principled footing. In the field of the security of AI, we are where cryptography used to be in the 1920s or '30s. In other words, we are in the infancy of the field. We're still figuring out what the threats are, how we can mitigate them, how we define these threats precisely. That, I think, is very important. You know, if you can't define something, what hope do we have of actually solving the problem? So we are really in its infancy. But the fact that we have so many bright minds working on these questions, so many resources being poured into AI security and safety, makes me very optimistic. I think we are really at the beginning of this evolution.
Ann Johnson: I love that. Vinod, I know you're incredibly busy, I want to thank you for joining me today on Afternoon Cyber Tea.
Vinod Vaikuntanathan: Thank you so much for having me. [ Music ]
Ann Johnson: This conference brings together incredible people with every background and experience imaginable. There is a wealth of knowledge amongst the attendees and speakers, and I wanted to bring some of that wisdom to our Afternoon Cyber Tea community. I am so excited we were able to record with others, and I hope you enjoyed this episode. [ Music ]
