
Caveat Live: FBI and KU Cybersecurity Conference.
[ Music ]
Dave Bittner: Hey, everybody, it's Dave. We've got something special for you this week. We're excited to share our very first "Caveat Live" event. My co-host Ben Yelin recently headed to the FBI and KU Cybersecurity Conference at the University of Kansas for a live session of "Caveat." During the episode, Ben discussed the importance of public-private partnerships with Doctor Perry Alexander, and then spoke with Professor John Symons about the philosophical issues in AI and how those should impact policy decisions. Here's the show. [ Music ]
Ben Yelin: Thank you very much everybody and welcome. This is our first on-the-road recording of the "Caveat" podcast. We are part of the N2K CyberWire network. Our show is called "Caveat." You can get it on all your favorite podcasting platforms. Comes out every Thursday morning. It is a cyber law and policy podcast with some surveillance, 4th Amendment type stuff. My background is as an attorney. My co-host Dave Bittner, unfortunately, was unable to make it today, so I'm going to fly solo here. But it is so great to be with all of you, and I'm just very glad that we get to share this experience together. I'm also very pleased to be joined by our first guest, Doctor Perry Alexander. Doctor Alexander is the AT&T Foundation Distinguished Professor of Electrical Engineering and Computer Science here at the University of Kansas. Go Jayhawks. And first, I wanted to just ask you about this event itself. This is a -- the third annual conference for Kansas University and -- and the FBI on cybersecurity.
Perry Alexander: Sure. So first, welcome to Lawrence.
Ben Yelin: Thank you very much.
Perry Alexander: I didn't realize you -- you'd come all the way from the East Coast.
Ben Yelin: I did. Yes.
Perry Alexander: Yes, this is the third of these events. It has grown from, you know, a one room kind of, I wouldn't say small. We were never small. We started with 200 or so people -- that brought together the FBI specifically and people in the region just to talk about cybersecurity and think about cybersecurity. And if I remember correctly, the first year it was in the Lied Center and we all fit on the stage. And it's grown since then from around 200 to now over 400 this year. So, it -- it kind of fits a unique space. I don't -- I don't know anyone else who does something that's quite like this. But it's -- it's grown wonderfully. The KU staff have just been phenomenal in -- in putting this thing together. It's -- it's been really good.
Ben Yelin: They've been great to work with on our end as well. Can you just talk a little bit about your areas of research just for our listening audience? I think probably people here in the auditorium we're in know a little bit about your research, but just for our listeners.
Perry Alexander: So, I -- I work at the in -- in the intersection of two areas. One of them is the topic of this conference, which is trusted systems, thinking about, "When do I trust systems that I'm interacting with?" and "How do I establish trust in them remotely?" That's the -- the kind of the application that I have. The other area I work in is -- is treating programs in computing systems as mathematical objects and trying to verify things about those programs without running them. This is -- this is something called formal methods that -- it's been around since the beginning of computing, but now has kind of blossomed quite a bit in the last ten years or so.
Ben Yelin: And can you talk a little bit about the work you've done with the National Security Agency and also just the importance of having that partnership between that government entity and a -- a research university and then maybe get into a little bit about Invary?
Perry Alexander: Sure. So, it's -- it's a -- a winding story with lots of -- of nooks and crannies and interesting things going on, but I think it was about 2006, I was called by a colleague at the NSA who I'd known for many, many years. And he asked me, "What do you know about trusted boot?" And it's not important what trusted boot is for this story, but "What do you know about that?" And I said, "Well, nothing, not much at least." And so -- and my -- my colleague, Brad Martin, had -- had done this several times, called me up and said, "Hey, what do you know about? Hey, what --?" And this time he -- he called me back in two weeks and said, "Hey, do you want to work on this project?" And I said, "Sure, it's something I don't know a lot about and don't understand. Of course, it's something I -- I want to do." And went to the first -- I met my -- my coworker, so to speak, the day before the first meeting I went to. Went to the first meeting and -- I joked about this, but it really is true -- I gave a presentation that was completely wrong. Top to bottom, slide one to slide N was wrong, but got involved with this group. And it -- it's funny how these research groups form. Certainly, how they come together is -- is like any other group, but people kind of adapt to what -- to their place in the group. And I didn't really understand what I was hearing. I couldn't process it all. So, I did what I always do and I started writing it down. So, I in effect became the -- the scribe or the author of the specification that we were -- we were developing, and how I got to know my -- my friend Pete Loscocco, who was on stage with us this morning, is basically kibitzing over slides over -- over many years. And I learned an immense amount about something I didn't understand by just listening to Pete and -- and others, George Coker, talk a lot about -- about this particular problem of trusted systems. And -- and we were talking about the importance of relationships this morning in our -- our panel session, but we developed a working relationship amongst that group of people that allowed us to work quickly and critique each other in -- in very positive and effective ways. And it's been -- it's -- it's still continuing, different people, different faces --
Ben Yelin: Yes.
Perry Alexander: -- but it was critical -- critically important. So, as a part of that project, Pete led the development -- intellectually led the development of a -- an application called LKIM, the Linux Kernel Integrity Measure. And what it does is it looks at the Linux data structures as -- as Linux runs and -- and establishes whether those data structures have integrity, whether they look normal or not. And the -- the interesting thing about this is it -- it's not dependent -- it doesn't work like a virus checker. It's not dependent on signatures. It's looking at the kernel and analyzing the kernel itself. So, it can catch a variety of issues, particularly zero-day attacks, that other techniques can't. So, I watched them develop this and I didn't really work on it, but I watched them develop it and I got to know the developers really well. And they actually came here and visited and -- and talked about LKIM. And, again relationship wise, we were talking one day and Pete was telling us how difficult it was to commercialize. They just were having no success. Well, I had worked with our commercialization group here, Adam Courtney and crew at the -- at the Innovation Park. I'd worked with them before and I knew they were there. So, I just said, "Hey, can we take a shot at it?" And -- and sure. So, I -- I went and talked to Adam and talked to my -- my colleagues there, and what grew out of that, they helped us set up the company. They helped us do all of the legal things that I don't know anything about.
Ben Yelin: Right.
Perry Alexander: Get things going and then helped me attract Jason Rogers, who was also on stage with us this morning. And Jason has been a very successful technology leader in -- in Lawrence, worked with a number of companies and he wanted to take a shot at being a CEO. And I, as I said, I -- I did one other company and learned that the last thing I should ever do is be a CEO. I'm not good at it.
Ben Yelin: You knew what your role would be.
Perry Alexander: And I -- exactly. And I knew I wouldn't be CEO. And I -- I think I had a hard time convincing Jason, "I do not want to do this." Jason, on the other hand, is brilliant at it. He's extremely good at this. So, we got to talking and got Jason on board and things have just kind of grown. We just landed our -- our seed round.
Ben Yelin: Fantastic.
Perry Alexander: So, things are -- are going really well and it's -- it's wonderful because I'm allowed to play the role that I'm good at and I'm not asked to play the roles that I'm not good at. It's just been -- it's -- it's been an incredible experience.
Ben Yelin: And can you talk a little bit about what Invary is and -- and what it does?
Perry Alexander: So, Invary took this application called LKIM and what they've done is rewritten it in -- in a language called Rust, which is -- is one of the, I guess, most modern development languages. And they are -- are hosting it in various ways, and now they will sell this application and it will run periodically on people's machines and determine whether their kernels are in a -- in a good state. So, the bulk of the work -- the technical work -- has been doing that rewrite, and then Jason and crew have developed various markets for -- for the product.
Ben Yelin: And it -- it looked like you have a diverse array of clients in the private sector, but also DOD and -- and in the IC, the intelligence community as well.
Perry Alexander: So, anybody who operates Linux environments would be interested in -- in what -- what we offer. But you can understand why military systems really don't want their -- their operating system kernels corrupted in some way.
Ben Yelin: Right.
Perry Alexander: Really anybody who handles data wants to know what's going on on their systems, and the kernel is kind of the heart of what's going on -- the operating system kernel is at the heart of what's going on. And if it's corrupted, if it -- if it doesn't have integrity, then you've got big, big problems.
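[Editor's note: for readers who want a concrete picture of the measure-and-appraise idea Alexander describes, here is a deliberately simplified Python sketch. It is not how LKIM actually works -- LKIM reasons about the semantics of live kernel data structures rather than comparing hashes -- but it shows the general shape of measuring a running system and appraising the result against a known-good baseline. All names and data below are hypothetical.]

```python
# Toy "measure and appraise" loop in the spirit of runtime kernel integrity
# measurement. Real tools such as LKIM analyze live kernel data structures
# semantically; this sketch just hashes simulated snapshots against a baseline.
import hashlib
from typing import Dict, List


def measure(structures: Dict[str, bytes]) -> Dict[str, str]:
    """Produce a digest for each (simulated) kernel structure snapshot."""
    return {name: hashlib.sha256(blob).hexdigest() for name, blob in structures.items()}


def appraise(measurement: Dict[str, str], baseline: Dict[str, str]) -> List[str]:
    """Return the names of structures whose measurement deviates from the baseline."""
    return [name for name, digest in measurement.items() if baseline.get(name) != digest]


# Hypothetical snapshots; a real measurement agent would read these from kernel memory.
known_good = {
    "syscall_table": b"sys_read,sys_write,sys_open",
    "module_list": b"ext4,xfs,e1000",
}
running_now = {
    "syscall_table": b"sys_read,hooked_write,sys_open",  # tampering injected for the demo
    "module_list": b"ext4,xfs,e1000",
}

baseline = measure(known_good)
violations = appraise(measure(running_now), baseline)
print("integrity violations:", violations or "none")
```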
Ben Yelin: Can you talk a little bit about the broader lesson here, working with the NSA towards this commercialization, where you can develop the product based on their intellectual property to some extent, but also, what that relationship means?
Perry Alexander: Wow. One of the most important things I have is that collection, that network of people. It's interesting -- we all have our roles and we all kind of understand and -- and like our roles. I was telling a friend the other day, when I choose what I'm going to do now, I almost always choose work based on who I will work with rather than what the work necessarily is. And so, the relationship that we built over the years, as I said, we're -- we're kibitzing over slides in -- in discussions. That relationship allowed us to kind of naturally -- naturally build Invary and -- and build the organization that it is.
Ben Yelin: Switching gears just a little bit to talk about formal methods, this is the question I'm kind of asking about everything, but now we're in a world of AI and machine learning. Can you talk about how that's affected the discipline of -- of formal methods a little bit and what we can look to going forward in the future?
Perry Alexander: So, I -- I -- I think it's affected it in -- in many ways and I'll -- I'll talk about -- about kind of two directions. One is using AI, ML those things to do verification, and the other direction is to do verification about AI and ML systems. So, the -- the -- the former is more of what I do and what -- what I do when -- when I'm verifying a system is you -- you -- you develop a collection of theorems and then you work in a language that's very similar to a programming language and you develop a proof of those theorems. And coming up with the theorems is probably the hardest part of it, but -- but oftentimes developing those proofs is -- is difficult, as well. And -- so, what we do is we ask -- we -- we train a large language model over other proofs, over a collection of proofs of -- of what we're doing. And then we use that language -- that large language model to generate a proof of the theorems that we -- we would like to verify. The beauty of this is we have a checker for those proofs. So, if the -- if -- if somehow the LLM hallucinated about a proof, we really don't care because we'll discover it immediately --
Ben Yelin: Right.
Perry Alexander: -- if the proof doesn't run -- run in the verifier. So, in a -- in a strange way, formal methods in this way is kind of the -- the perfect application. The other thing we can do, one of the things that I think is -- is not well appreciated about government systems is certification. And certification is incredibly expensive. It takes a very long time to do, so much so that modifying systems, even when it's the right thing to do, is too expensive, because you've got to recertify. So, we're working right now with -- with DARPA and Collins Aerospace to develop techniques that allow us to modify or update the -- the proofs that we do with the changing system. So, system changes and we can use AI models to go in and modify -- rather than redo all that certification, we can modify it and in effect re-execute the pieces that we need to, and we'll hopefully bring down the cost of -- of -- of doing that recertification.
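[Editor's note: the workflow Alexander describes -- a language model proposes candidate proofs and a trusted checker accepts or rejects them -- can be sketched in a few lines. This is a hypothetical outline, not any particular prover's API: `propose_proof` and `check_proof` stand in for whatever model and proof assistant are in use. The point is that the checker, not the model, carries the soundness guarantee, so a hallucinated proof is simply discarded.]

```python
# Hypothetical "generate, then check" loop for LLM-assisted proof search.
# The checker is the ground truth; a hallucinated proof fails the check and is dropped.
from typing import Callable, List, Optional


def prove_with_llm(
    theorem: str,
    propose_proof: Callable[[str, List[str]], str],  # e.g. a prompt to a language model
    check_proof: Callable[[str, str], bool],          # e.g. a call into a proof assistant
    max_attempts: int = 5,
) -> Optional[str]:
    """Ask the model for candidate proofs until one passes the checker."""
    failures: List[str] = []  # rejected attempts can be fed back as extra context
    for _ in range(max_attempts):
        candidate = propose_proof(theorem, failures)
        if check_proof(theorem, candidate):
            return candidate          # machine-checked proof: hallucination is ruled out here
        failures.append(candidate)    # unsound or ill-formed proof: discard and retry
    return None                       # no verified proof found; fall back to a human
```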
Ben Yelin: Do some of the concerns around AI keep you up? Things like hallucinations, things like inherent biases? Or are -- are you confident in integrating AI and machine learning in -- into the work that you do?
Perry Alexander: Wow, that -- that's a really hard question. Sometimes it does -- it does keep me up. Sometimes -- I'm sorry to say it this way -- I find it humorous. I can't get it to do anything I want it to do.
Ben Yelin: Right.
Perry Alexander: It does keep me up in the sense that we really don't know how these things work. And if they go wrong, I don't know -- I don't know how we'll know. I think verifying them is very, very difficult because again, we don't know how they work.
Ben Yelin: Right.
Perry Alexander: When they create artifacts, in my world, software artifacts and -- and a lot of people are talking about generating code using AIs. That's a -- it's a great idea, but you're generating code now that no person has ever seen.
Ben Yelin: Right.
Perry Alexander: So, if it doesn't work and -- and nothing's perfect, now you have to go in and you have to understand the code that the AI generated to try and debug it. And I think most developers will tell you that debugging is far harder than developing. So, that -- that worries me a bit. But I'm becoming -- because I -- I have a lot of friends who work in -- in the space of ethics and policy and whatnot around AI -- I've gotten a lot more concerned about application. I think our -- our speaker this morning, Brian McClendon, was talking about doing things with data and -- and in this case, location data, that were pretty astonishing. And astonishing is -- is a great thing. If your child is lost and you're trying to find them, having all that camera data is a pretty awesome thing. If you are being pursued by a criminal entity, then having all that data is not so great.
Ben Yelin: It's not so -- yes.
Perry Alexander: So, I -- I do think about that. I worry about hallucination quite a lot. I did an experiment where I -- I did -- I was writing a paper and I'm -- I'm searching for references, and I will usually use Google Scholar to help find papers. So, I decided to use ChatGPT. And -- and have it find papers. And it didn't. And it gave me papers that were completely fabricated.
Ben Yelin: What's amazing is they don't seem fabricated because it's like --
Perry Alexander: No, they don't.
Ben Yelin: -- they're full names, they're believable titles, they're from reputable institutions, and they literally never existed.
Perry Alexander: They never existed. And I had -- I had -- but I had to -- so, what I ended up having to do is go back and use my traditional resources to check what the AI was giving me. And it turned out every single reference it gave me was -- was fabricated.
Ben Yelin: We've seen that a lot in the legal world. It's gotten people in trouble because they're submitting filings in court, citing cases that of course did not ever exist. So yes, I think hallucination is a problem in really a lot -- a lot of different realms. What do you see as the biggest challenge in embedded security going forward?
Perry Alexander: Wow. I think keeping up with the adversary -- we -- we are not winning this race. We probably aren't even staying even. So, keeping up with the adversary worries me. I think, as Brian pointed out, the new and emerging applications of data and the things that we can learn about -- about people worry me when those things are used improperly. I -- I also worry about a -- a legal and a -- and a regulatory system that's still figuring this out. I -- I think technology moves so fast that it's impossible for those social systems to keep up. And that means that we end up with policies and legal things that may not work as well as we need them to work. But -- but we're getting better. Things are improving. But I -- I do worry a little bit about those things.
Ben Yelin: That last problem is something we've talked about a lot on this podcast: technology moves at a speed far exceeding our legal system. Is there an approach -- like, if you were to get into the head of a policymaker on how to think about developing, whether it's legislation or regulation, in a world of rapidly changing technology, what's -- what's the key message you would want to give that person?
Perry Alexander: That's really hard, and -- and it's really hard because I'll -- I'll use the example, the -- the extreme example of a -- a cyber weapon of some kind. So, if I develop a -- a kinetic weapon, I know exactly what that thing does. If I -- if I use it, I know exactly the physics of what that weapon is going to do. Cyber weapons, we really don't know. We do not have good ways of predicting what a cyber weapon or a cyberattack will do. Well, if you can't predict, it makes it very hard to regulate. If I don't know what's going to happen when -- when certain things occur, it's very hard to -- to then somehow develop policy or develop law around what that -- what that is. And as I said, that -- that worries me a bit. But if I could -- if I could convince people to -- to take action, it would be to convince more people in -- in computing to get involved in the law, to get involved in policy making. So that when I talk to people about issues around computing, they -- they really deeply understand what it is I'm saying.
Ben Yelin: Yes, sometimes it feels like we are in completely different worlds and it's nice when our -- our worlds can mix a little bit. What advice would you have for students -- I know we're on a -- a university campus -- who are interested in embedded systems and -- and the work that you do?
Perry Alexander: Wow, so one of the things that we -- we were talking about at dinner last night, we talked about it at lunch today, is the need for a diverse education. And that is within your discipline and also outside your discipline. Within your discipline, I have a chart that I -- I show freshmen that maps out information from being gathered in the world, like sensors and whatnot, and shows information moving through communications to computer engineering, to computer science, and in effect, moving up in abstraction. And I can use that chart to kind of say, "Well, this is what an electrical engineer does, and this is what a computer scientist does." But what I then do at the end of it is I draw a circle that crosscuts everything. And I say, "That's what you have to know to work in my lab." And I was talking at lunch today with -- with a colleague about, "Do we teach people cybersecurity first or do we teach them control systems first?" Because you've got to know both. So, what I would tell students is be as general as you can in your learning. Learn how to solve problems. Learn not to be afraid of -- of things you don't know and how to go after them. The other thing I would tell them is get out of your domain, get out of your discipline and hang out with people that -- that think different ways. Go -- go to the art museum, go to the concerts, go talk to the philosophy people, get outside of your discipline and -- and stretch your mind.
Ben Yelin: That's a perfect segue, because I doubt that many people in this room, even those who know you, knew that you wanted to be a jazz musician.
Perry Alexander: Yes, I did.
Ben Yelin: And that you majored in college in trumpet performance.
Perry Alexander: I did. I was a trumpet performance, electrical engineering double major.
Ben Yelin: That is probably one of the coolest double majors I -- I've ever come across. Is there anything, and I'm sure you've thought about this, that you've taken from your music obsession, your continued interest in music that you use in your professional life?
Perry Alexander: So, I -- I gave a -- a commencement address for the School of Music and one of the things I talked about was exactly that. And I -- I tell people that there is a discipline to learning to play that really is -- is in -- is embedded in everything that I do. I like to say that every -- every student I've ever taught takes a bit of my trumpet teacher with them. And I -- I -- it's -- it's interesting because I -- I think that in music, how do I say this? I can go to as many trumpet playing classes as I want, and I will never be able to play. You have to go in the practice room and you have to practice and you have to play and you never get it right the first time, or you rarely get it right the first time. And you learn about getting incrementally better. Rather than skipping all the intermediate steps and going from -- from knowing nothing to getting an A on the exam, you go through this process of getting incrementally better and understanding that there is always another increment. The other thing I learned to do, and this sounds odd, is I -- I learned to lose. I -- I learned that you don't win every -- every performance competition that you're in and that's just the way it is. So, I -- I -- I kind of learned to be resilient in -- in a way, but the -- I also believe very strongly that the -- the discipline of studying music and the discipline of studying computing are very, very similar. A lot of my friends in the community are either composers or -- or performers. In fact, some of the conferences I go to actually have bands of the -- the people that attend.
Ben Yelin: Some people like podcasts. Some people like bands. Exactly. Well, thank you Doctor Perry Alexander for joining us and being our --
Perry Alexander: It's been my pleasure.
Ben Yelin: -- excellent first guest. And thanks again.
Perry Alexander: Thanks a lot.
Dave Bittner: We'll be right back. [ Music ]
Ben Yelin: Our second guest is John Symons. He is a Professor of Philosophy and the Director of the Center for Cyber Social Dynamics here at the University of Kansas. Welcome, John.
John Symons: Hi, Ben. Thanks for having me.
Ben Yelin: Very good to be with you. Can you talk a little bit about why we need ethics in cybersecurity, in artificial intelligence? What -- why is it a domain in this field?
John Symons: Well, you know, obviously the very idea of security involves values -- what we care about, what we're trying to protect, what we think is important, and so on. The things that we want to secure are the things that we care about, the things that we think are important and valuable. So, those are the -- those aspects of our life or those aspects of our society that we take seriously, that we think we ought to care about. And the judgment of, you know, what we ought to care about is not a technical question, right? It's not as straightforwardly -- it's not something you can formally solve for. It involves rational persuasion, deliberation, reflection, some historical consideration, and political and moral deliberation are part of that -- part of that process.
Ben Yelin: Do you find that you're able to break through to the technologist community and the policy community to understand the importance of ethical considerations in -- in all decision making? Like, do they -- do they listen to you?
John Symons: Absolutely, absolutely. I think -- I think we're more and more sensitive to the role of design, for example, and the social impact of design. We've seen the effects of, you know, certain kinds of design choices in the construction of social media platforms that I think have affected all of us. I think we're fully conscious at this point of the social impact of technology, the effect it has on our children, on our relationships, on our dating lives, on so many aspects of -- that directly affect everyone who's involved in the field. But I think it's -- it's almost unavoidable for us to be forced into a position of thinking seriously about these questions, about normative questions generally.
Ben Yelin: Do you think at this point we're at a proper equilibrium -- striking that balance of taking ethics into consideration in a way that you think is commensurate with the need to take ethics into consideration?
John Symons: I think we have certainly done a great deal of very good work over the past ten years with respect to some very sort of standard or core topics in, for example, AI ethics. So, data ethics, AI ethics more generally, we've done really good work around privacy, around security, around transparency even. But -- and often these -- these do involve technical solutions. So, questions around bias and fairness, etcetera, have been amenable to technical solutions. And I think we've done really good work around that. Where we've failed and where we're only beginning to kind of make sense of where we are is in our consideration of the social impacts of -- of our design choices in tech. So, we can see that many design choices have corrosive effects on social norms that we would consider salutary social norms. We can see that they affect institutions, they infect -- affect our relationships, they affect the quality of our relationships and so on in ways that I think most of us are beginning to recognize have important impacts that we want to correct for or be -- at least be aware of.
Ben Yelin: Is it hard also to get through when you're dealing with people who maybe haven't thought about these types of normative considerations and also people might have different normative views, conflicting values. How do you work through that?
John Symons: Sure.
Ben Yelin: And from an ethical perspective.
John Symons: That's a -- that's a really good question. So, for example, take -- take for example the effect of dating apps and the algorithms governing dating apps that -- that we're all subject to. I think that's -- that's a situation where basically no one is happy with the current situation around these technologies, around dating app technologies. No one is really into this.
Ben Yelin: Right.
John Symons: So, then we have a question like, "Okay, we're not going to go back to a pre-technological era in -- in personal relationships and dating. So, then how do we consider our options? How do we consider our alternatives?" And we're going to have different values. I think here it's a joint project of some rational persuasion, but also maybe even poetic or fictional reflection or deliberation. So, we'll think about alternative ways, alternative social equilibria around relationships and dating, let's say that our platforms could reflect. So, and we have that available to us. I mean we -- we live among -- I mean many of our fellow citizens come from societies, for example, where matchmaking would be a -- would be a -- a sort of a -- a -- an alternative social equilibrium around dating.
Ben Yelin: Right.
John Symons: Or, you know, we -- we maybe see in the history of our own families how our -- how our families met. They might have met through friends or neighbors or -- or -- or their churches, etcetera. So, we -- we do have some capacity to inhabit alternative social equilibria. We can look to our, let's say, friends from the Indian subcontinent and see how their lives look with these alternative arrangements. So, I think it requires us to inhabit those alternative social equilibria, and we can do that poetically, for example, or using our imagination. And then we have the question of, you know, how we -- how we deliberate regarding conflicting conceptions of the good.
Ben Yelin: Right.
John Symons: And that's where philosophers have been -- we've been in that business for nearly 3,000 years.
Ben Yelin: Since Aristotle? Yes.
John Symons: Yes. And -- we're -- we're well positioned to -- to let people do that. It's not that philosophers have all the answers, but we're certainly equipped to make the disciplined reflection on the alternatives available.
Ben Yelin: I wanted to talk to you about personal relationships. I believe your most recent paper is about close personal relationships and AI. The story we did on our podcast last week -- spoiler alert for those of you who haven't listened -- was about a proposed California statute regulating AI chat bots and their use among children. Can you talk a little bit about your paper and how you distinguish what a close relationship could be in the context of artificial intelligence and also in -- in kind of the real physical world?
John Symons: Sure, sure. So, in that paper, together with my graduate student Oluwashio [phonetic] and Sanwolu [phonetic], we explore the difference between the kinds of relationships that we could have with an abstract object, a formal system like an AI, and the kinds of relationships or the kinds of values in relationships that we have with one another. So, we have different kinds of personal relationships. A parent-child relationship is different from, let's say, a spousal relationship, which would be different from a friend relationship or a cousin relationship, etcetera. And each of those relationships has its own kinds of goods associated with it. There's something good about being a parent. There's something good about being a romantic partner or a cousin, and there are different kinds of goods. And they also come along with different kinds of special obligations. So, the idea is that in embodied human life, we have finite resources. We -- we were born. We will die. So, we have natality and mortality built into what we are. We have finite capacities, finite attention. I can only be friends with so many people. I can only be a committed romantic partner with a small number or one person.
Ben Yelin: Hopefully, a small --
John Symons: Hopefully.
Ben Yelin: -- number of people. Yes.
John Symons: Right? Those are all constraints on us. And those constraints shape the value of that kind of good. So, there's certain kinds of goods I can have in a romantic relationship that are in virtue of the fact that exclusivity is involved in that. That there's only certain kinds of things we can share together or do together. And if, you know, if my girlfriend is actually the girlfriend for thousands of people, she's not really my girlfriend, right? She's something else. So, certain kinds of goods are only available under certain kinds of constrained, embodied conditions. And they vary, and there are different kinds of goods. So, one's relationship to one's children involves a set of obligations. We call them special obligations. Those special obligations are -- are specific. They're agent relative. They're relative to the kind of person and the kind of relationship you have. AI is not capable of having simultaneous alternative kinds of goods that it pursues. So, in the paper we explore the difference between ourselves as finite embodied beings with certain kinds of very specific resource and attentional constraints versus an AI which is an abstract object. So, you'll remember in the -- in the movie "Her" when Samantha, the AI, reveals to Theodore that she's actually having relationships with thousands of people simultaneously.
Ben Yelin: Heart-stopping moment in that movie.
John Symons: Yes, and the response -- so Joaquin Phoenix, the actor, he's a brilliant actor, the response is really interesting, right? So, it's not like a sort of a male proprietary jealousy response.
Ben Yelin: Right.
John Symons: Instead, he's sort of taken aback and he realizes she in fact is not in love with me. We do not have that kind of relationship. So, he's sort of taken aback and rethinks, "Okay, well, what exactly did I think was going on? What was actually going on?" And in part, that's because he realizes that Samantha is not the kind of being with whom he can have that kind of close personal relationship. And in part, it's because she has these additional capacities. She's disembodied. Right? She has vastly superior cognitive capacities to him.
Ben Yelin: She has Scarlett Johansson's voice.
John Symons: She does, which is a major asset.
Ben Yelin: [inaudible 00:35:09] context, yes.
John Symons: Yes. And so on. So, the idea is that the kinds of beings that the AI are, are fundamentally different from the kinds of beings we are. And those differences make a difference to the kind of relationships we can have. And a lot of those -- a lot of those differences involve the -- I mean, you could say the superiority of the AI. The AI is immortal. It's multiply realizable. It can in principle have infinite cognitive resources, you know, given a science fiction model of AI, whereas we can't. So, that's the kind of line we explore in that paper where we try to articulate what we distinguish -- well, we distinguish the ontological characteristics of AI from ours. And we connect those ontological limitations to the kinds of relationships that we can have and the kinds of values that can emerge from those relationships.
Ben Yelin: This is getting really philosophical, but do you think there's a point where the technology could develop to kind of outwit those ontological limitations?
John Symons: They can certainly make it feel like we're in a relationship --
Ben Yelin: Right.
John Symons: Right? So, it's certainly the case that we can be tricked. We're already in a situation where we can be tricked by the AI where we can -- but that's the case in interpersonal relationships too, right? I can think she loves me and she actually doesn't --
Ben Yelin: Right.
John Symons: Right? Where I think the -- the -- there are -- where I think the real challenge lies for AI is in this technical point about kinds of values. So, for example, if you are a parent and you're committed to certain kinds of, let's say, values of justice, for example. Typically, we would say, you know, if you're in a, you know, a Western society, you're committed to sort of an anti-corruption posture towards, for example, college admissions. But at the same time, if it's your child and you're trying to get her into USC, you have to balance --
Ben Yelin: Try to get on the rowing team. Yes.
John Symons: -- you have to balance, right, your commitment to those principles, broad principles of fairness, which are universalistic, right? They apply to everyone, and your commitment to your child. So, you have a special relationship to your child. So, for example, if my daughter when she was a child forgot her lunch at home and I had to bring it into school, I'm not going to divide her lunch among all the children who did not bring their lunch to school that day, even though that would be sort of guided by a principle of universal fairness or utility. Instead, I'm going to give the lunch to my daughter. And if I didn't do that, she would think, "Oh, well, is he really my father?" Right? I mean, that's not the kind of thing a father does -- a father wouldn't divide the lunch up and divide it evenly among -- among all the needy children. The father would preferentially relate to his offspring in a way that was appropriate to that relationship.
Ben Yelin: And you don't think AI could ever develop that it had that loyalty?
John Symons: Well, here's the thing.
Ben Yelin: Yes.
John Symons: The AI can certainly optimize on one kind of value.
Ben Yelin: Right.
John Symons: So, AI is brilliant at that. Right? But what AI cannot do is distinguish or choose between kinds of value. So, if you were thinking, for example, of someone like Gauguin, the artist: at a certain point in his life, he had to decide between the aesthetic value of his work as an artist and his commitments, his moral value, or his moral obligations to his family in Paris. He decides in favor of his art and against his moral responsibility to his family. In doing that, Gauguin is taking two, let's say, competing standards of value, opting for one and abandoning the other. And then the question is, how did he make that judgment? How do we commit to one standard of value over another, whether it be moral or religious or aesthetic or prudential and so on? In doing that, we're making the kind of choice that the AI can't make, because what the AI can do is optimize on one standard of value. And so, I think that is a -- that's sort of a core technical obstacle to the AI being able to navigate, let's say, your obligations as a father and your obligations as a citizen and your obligations as a spouse, your obligations as an employee, as a sibling, and so on and so forth. And those are things we have to do all the time. So, there's the Abraham and Isaac moment, where Abraham is told that he has to sacrifice his son, which is, of course, a deeply immoral act. Abraham is doing that in virtue of his religious commitment, right? So, he's made -- he's resolved to, you know, pursue his religious commitment, overriding his moral commitment -- you know, the basic moral commitment: don't kill your kid -- and overriding his familial commitment, his paternal relationship to his son. And in doing so, he's sort of resolved that problem or that question in a way that the machine learning optimizer can't, right? So, that -- that's kind of the core technical challenge to AI being in that position. Now, that's not to say that we can't be fooled.
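[Editor's note: Symons's distinction between optimizing on a standard of value and choosing between standards can be made concrete with a toy sketch. Everything below is hypothetical and the scores are invented purely for illustration; the point is that the objective function is an input handed to the optimizer, not something the optimizer itself adjudicates.]

```python
# Toy illustration: an optimizer maximizes whichever single objective it is given,
# but the choice of *which* objective to give it is made outside the optimization.
from typing import Callable, Dict

# Invented scores for the Gauguin-style dilemma discussed above.
options: Dict[str, Dict[str, float]] = {
    "stay_with_family": {"aesthetic": 0.2, "moral": 0.9},
    "leave_for_tahiti": {"aesthetic": 0.9, "moral": 0.1},
}


def optimize(objective: Callable[[Dict[str, float]], float]) -> str:
    """Pick the option that maximizes one given standard of value."""
    return max(options, key=lambda name: objective(options[name]))


print(optimize(lambda scores: scores["aesthetic"]))  # -> leave_for_tahiti
print(optimize(lambda scores: scores["moral"]))      # -> stay_with_family
# Each answer is optimal for its objective; nothing in the code can say which
# objective -- aesthetic or moral -- is the one that ought to be optimized.
```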
Ben Yelin: Right.
John Symons: I mean, when Replika, for example, gets good enough, sure, people will use Replika or similar apps as a replacement for close personal relationships -- as a sort of, you know, maybe an inadequate but satisfactory solve for loneliness, right? So, it could be that, you know, lonely people will have recourse to these chat bots and that'll -- that'll kind of get them to the level where they're not suffering from loneliness. But that can happen in -- in spite of the fact that they do not actually have a close personal relationship with the AI. They feel like they have it.
Ben Yelin: What do you think the broader implications of that are, even for policymakers thinking about a world in which kids are developing relationships?
John Symons: Yes, you know, I -- I often hear folks -- I think that this is kind of a tech-driven line of questioning where it's sort of like, "Well, there's nothing we can do," etcetera, etcetera. I mean, I think, for example, we can very easily make the decision as a society, and we are doing it to some extent. So, in Kansas, for example, you can no longer just log into Pornhub without using a VPN --
Ben Yelin: Yes.
John Symons: Right? In Utah, similarly. We've made these decisions. Now, of course, there's -- this is not, you know, a -- a perfect solution, etcetera. But we do have -- we do have capacity to, for example, compel Apple to make it such that there's age restrictions on the iPhone.
Ben Yelin: Right.
John Symons: They are perfectly capable.
Ben Yelin: In the App Store. Yes.
John Symons: Well, not just the App Store but the device itself.
Ben Yelin: Right. Right.
John Symons: Apple knows how old I am. Apple knows how old the child is. Apple could very easily introduce those things at the device level for example. And that would solve a whole range of these kinds of questions. We do have actual practical solutions for many of these questions. I think in -- when you listen to podcasts about tech, etcetera, people typically don't want to hear simple solutions that involve simple political action at the state level, for example. But here in Kansas, we did that. We got rid of -- we quote, unquote, "Got rid of Pornhub," for example, last year. And we decided that that does not comport with our values, right? We don't think that that is something that we want people spending a great deal of time on, right? Now if they want to, of course they can. They can get a VPN. They can do whatever they want. But we don't want to be endorsing that as a state. We've made that decision. It's a value choice. And I think the same holds for -- for many of these basic questions. We're perfectly capable of doing that.
Ben Yelin: Do you think there's going to be a delineation in allowable relationships? So, for example, some of the questions you talk about in this paper wouldn't apply to say a math tutor where there isn't that type of special bond where you're going to be bound by these ontological limitations.
John Symons: Absolutely.
Ben Yelin: Do you think we can distinguish between those as a society, as policymakers, that sort of thing?
John Symons: I think we already do.
Ben Yelin: Yes.
John Symons: So, I mean you and I probably use AI every day. We're using it to sort of -- as a sort of a cognitive enhancer, right? So, if I'm not understanding something in a complicated paper, I can ask Gemini or I can ask Claude or whatever. I can ask for explanation. And that's a point where, yes sure, it works and it's a -- it's an effective cognitive enhancer, but you know, am I thanking Gemini for -- for this explanation? Typically, not. I mean I do thank --
Ben Yelin: I do, too. Yes.
John Symons: -- my AI.
Ben Yelin: I try and be nice.
John Symons: But, you know, I don't always thank it in the way I would an actual human tutor, etcetera. So, I think we're capable of doing that. We here at KU, we actually study how people cultivate close personal relationships. So, together with Omri Gillath, the social psychologist, we look at the -- the way intimacy develops between conversation partners and we see how that works or doesn't work with AI. Typically, in a conversation between, you know, two of us, we'll reciprocally share information that has varying degrees of privacy or intimacy. So, we might talk about our siblings. We might talk about our work life, etcetera, our pets, etcetera. If you don't reciprocate -- so if I tell you that I have three siblings and you just nod and you don't tell me how many siblings you have -- I'm going to notice that as a kind of a barrier to forming a close personal relationship. It turns out that human beings typically don't form close personal relationships of that kind with AI, or even with interlocutors that they think are artificially intelligent agents, right? That are artifacts. So, in practice, that's -- that's what we already do. So, your question was, "Are we capable of making those distinctions?" In practice, we just do make that distinction.
Ben Yelin: Can you talk a little bit about the work you've done with AI and agency?
John Symons: Sure. So, together with Syed Abu Musab, who's now a professor at UT Arlington, we've been interested in understanding, you know, the -- so, for example, in a -- in a conference like this, you hear the word agent all the time. So, computer scientists use agent in a very promiscuous way, right? All kinds of things they -- they think of as agents. Whereas philosophers typically have a very high standard for what genuine agency is. So, genuine agency for a philosopher is someone who acts according to reasons, who can reason, who would be at, well, a neurotypical adult human level of -- of action and intelligence. We think that that's a mistake. Both -- both of these things are a mistake. And we're interested in social agency. So, what is it that, for example, an AI does in the social world? How does an AI interact with -- with human beings, with other AI, with economic phenomena, with other kinds of social phenomena, with the law, for example, in ways that have causal impact, right? So, an AI, an agential AI, can go out into the social world and change the social world. It can do that. It can have genuine agency in that sense without necessarily acting on reasons --
Ben Yelin: Right.
John Symons: -- for example, or deliberating, or even being morally responsible itself. So, we criticize the traditional philosophical conception of agency, which -- which we call the threshold model. We criticize that. And we argue that social agency, in this case, can be broken down into different components, and that each of those can be studied separately, and that machines or artifacts can actually exhibit many of those aspects of agency. So, that's the work we did in that paper, and we have a book coming out next year on -- on that topic, too.
Ben Yelin: Interesting. What is kind of the -- the realm of agency that you think artificial intelligence could never capture?
John Symons: Okay, so I hesitate to answer definitively, but it strikes me that AI itself is typically not blameworthy in its actions. So, the makers of AI, the people who deploy AI as a tool, they might be blameworthy in various instances. But blaming AI is sort of like blaming a hammer or a typewriter or blaming any other kind of tool. So, I don't think that artifacts in themselves are the bearers of blame and praise.
Ben Yelin: We've seen legal cases where they've compared AI to bomb sniffing dogs --
John Symons: Yes.
Ben Yelin: -- where they've been trained --
John Symons: Yes.
Ben Yelin: -- but they don't have their own set of -- they think they're getting a treat. They don't have their own set of moral circumstances.
John Symons: Exactly.
Ben Yelin: Yes.
John Symons: Well, that's exactly the way we would sort of frame this.
Ben Yelin: Well, I just wanted to thank you so much --
John Symons: Great.
Ben Yelin: -- for joining us --
John Symons: Thanks, Ben.
Ben Yelin: -- for the "Caveat" podcast, and Professor John Symons, everybody. Thank you very much.
John Symons: [inaudible 00:49:18] to me. Thank you. [ Music ]
Dave Bittner: Thanks to everybody who had a hand in making this special "Caveat Live" event possible. Our special thanks to the FBI and KU Cybersecurity Conference, everyone at the University of Kansas for thinking of us and making the invitation. Thanks so much. We'll see you back here next time. [ Music ]

