EP111: Dr. Geoff Keeling, AI Ethics Fellow at Stanford University: AI and morality
Sep 30, 2020 · 55m 9s
Description
💬 “The point of AI is to satisfy people's preferences, but there are two ways to do this. One is to adjust the world to fit the preferences. The other is to adjust the preferences to fit the world. And it seems like the latter is what's increasingly going on. That’s definitely not what we want, so we need a plausible criterion for when somebody’s preferences are being unduly influenced in a way which is not appropriate.” Dr. Geoff Keeling
🎙️ Dr. Geoff Keeling is a Research Fellow in AI Ethics at Stanford University, based in the Center for Ethics in Society and the Institute for Human-Centered Artificial Intelligence. Geoff’s work concerns the safe and ethical design of emerging technologies. His current research focuses on the ethical design and regulation of machine learning in medicine, and on the ethics of automated vehicles. Geoff’s broader interests include ethical decision-making under uncertainty, alongside foundational questions about the nature of rationality, reasons, evidence, and probability.
🎧 In this episode, Geoff and I talk pragmatically about how capable and useful human decision making really is when we objectively examine the torrid state of nations, equality and the climate. There’s a huge amount of bias in that question, but it’s something I want to hear the experts answer, even though many people do not agree with its premise. Geoff addressed this, and we examined the roles of the individual and society in decision making, the associated payoffs, and exactly what reasoning we want machines to do, and when. We also considered whether machine reasoning would be better as reactive to situations or proactive: for example, should machines be permitted to intervene proactively when they recognise opportunities to save lives? We talked about the difference between AI meeting needs and influencing needs, and I asked Geoff how thin the line is between influence and exploitation. We talked about Cambridge Analytica, and I asked: if that group of people could work out how to exploit our innate human biases, could we instead use AI to illuminate those biases?
Coming to autonomous vehicles, Geoff explained and explored the trolley problem, the subject of his 2019 thesis, and he was quick to point out the limitations of this idea when applied to self-driving cars. I asked him how two non-experts could have a useful and relevant conversation about autonomous cars while avoiding the pitfalls and the hype, and where it’s useful to think and talk about Artificial General Intelligence at this point in history.
We came full circle, and I asked Geoff again about human decision making and whether our perspective is limited to a few living voices. I gave my own thoughts on how we largely ignore the needs of animals, nature and the unborn, and finally I asked Geoff what people ask him about his work at dinner parties. It was a real privilege to spend time with Geoff; he was very generous with his time and enthusiasm for this conversation. I find his perspectives fascinating, and I hope you do too.
Information
Author: Richard Foster-Fletcher
Organization: Richard Foster-Fletcher