
E70. "The Growing And Weeding Of Dandelions" - Can the stakes ever be so high that genocide of a species is a reasonable option?

Philosophy | Ethics Short Story Magazine: Code “Happy” for 12 Issues/$4.95! https://www.afterdinnerconversation.com/subscribe/yearly

Named “Top 20 Philosophy Podcast” for 2022!

STORY SUMMARY: In this work of philosophical short story fiction, a civilization-sized spaceship has been flying to populate the surrounding solar systems. They start with a skeleton crew, use ship resources to grow their population over generations, then arrive at a new planet. They drop off the extra people, replenish their raw resources, and do it all again. All is well until a weak radio signal makes them realize they are heading toward a planet that likely already has sentient alien life. If they don’t stop, the ship will burst at the seams with people, and they will likely run out of resources before the next solar system. If they do stop, they are likely, over time, to subjugate the indigenous population. They have just weeks to decide whether to make a course correction.

BOOK LINK: Download the accompanying short story here.

COMPANION PODCAST: Listen to our audiobook readings of After Dinner Conversation short stories (“Philosophy | Ethics Short Story Audiobooks”).

MAGAZINE: Sign up for our monthly magazine and receive short stories that ask ethical and philosophical questions. Use the discount code on our website to get the first month free or an entire year for just $4.95!

SUPPORT: Support us on Patreon.

FOLLOW: Twitter, Instagram, Facebook

“Can the stakes ever be so high that genocide of a species is a reasonable option?”

Kolby, Jeremy, and Sarah discuss philosophy and ethics in the short story, “The Growing And Weeding Of Dandelions" by Tim Sharp. Subscribe.

Support us on Patreon at https://www.patreon.com/afterdinnerconversation



E69. "For Your Safety" - How do you know if the government-imposed limits on personal freedom “for your protection” have gone too far?

Philosophy | Ethics Short Story Magazine: Code “Happy” for 12 Issues/$4.95! https://www.afterdinnerconversation.com/subscribe/yearly

Named “Top 20 Philosophy Podcast” for 2022!

STORY SUMMARY: In this work of philosophical short story fiction, Zoe gets a knock on her door from the Department of Public Health. They have detected increased biochemical signatures that lead them to believe she has been having sexual intercourse without a properly filed Intimate Partnership Agreement (IPA). The IPAs are for her protection, to ensure that any potential partners are disease-free. Initially, she denies the accusations, but the evidence from her Livewell stream is overwhelming. This time, it’s just a fine, but if it happens again the punishments will get more severe, all the way up to having points deducted from her social confidence rating.

BOOK LINK: Download the accompanying short story here.

COMPANION PODCAST: Listen to our audiobook readings of After Dinner Conversation short stories (“Philosophy | Ethics Short Story Audiobooks”).

MAGAZINE: Sign up for our monthly magazine and receive short stories that ask ethical and philosophical questions. Use the discount code on our website to get the first month free or an entire year for just $4.95!

SUPPORT: Support us on Patreon.

FOLLOW: Twitter, Instagram, Facebook

“How do you know if the government-imposed limits on personal freedom ‘for your protection’ have gone too far?”

Kolby, Jeremy, and Sarah discuss the philosophy of government limitations “for your protection” in the short story, “For Your Safety" by Ty Lazar. Subscribe.

Support us on Patreon at https://www.patreon.com/afterdinnerconversation



E68. "Cruel Means, Bitter Ends" - Should war be "won" at any cost?

Philosophy | Ethics Short Story Magazine: Code “Happy” for 12 Issues/$4.95! https://www.afterdinnerconversation.com/subscribe/yearly

Named “Top 20 Philosophy Podcast” for 2022!

STORY SUMMARY: When, if ever, is it okay to let evil win? Should all wars be fought to the bitter end, or is ending the suffering of your people more important? In this work of philosophical short story fiction, the Prime Minister is a long-time military man sworn to fight the evil aggression of the Theocratic Republic of New Anglia. The war has been going on a long time, and as a military leader, he ran on the platform of ending it in his first term. He is elected and brings his most trusted military advisors with him to office, Albert among them. Days before a large military operation, Swift Wind, is set to take place, Albert makes a startling discovery: there is a leak in the Prime Minister’s office, and the Angelians know of the coming invasion. Albert rushes in to tell the Prime Minister, who promptly locks him in the bathroom and reveals that he himself is the leak. Swift Wind is meant to fail. The Prime Minister has decided that the only way to end the suffering is to lose the war.

BOOK LINK: Download the accompanying short story here.

COMPANION PODCAST: Listen to our audiobook readings of After Dinner Conversation short stories (“Philosophy | Ethics Short Story Audiobooks”).

MAGAZINE: Sign up for our monthly magazine and receive short stories that ask ethical and philosophical questions. Use the discount code on our website to get the first month free or an entire year for just $4.95!

SUPPORT: Support us on Patreon.

FOLLOW: Twitter, Instagram, Facebook

“Should war be ‘won’ at any cost?”

Kolby and Ashley discuss the philosophy and ethics of war, in the short story, “Cruel Means, Bitter Ends" by Marin Biliškov. Subscribe.

Support us on Patreon at https://www.patreon.com/afterdinnerconversation



E67. "Exodus" - What makes a “religious” holiday?

Philosophy | Ethics Short Story Magazine: Code “Happy” for 12 Issues/$4.95! https://www.afterdinnerconversation.com/subscribe/yearly

Named “Top 20 Philosophy Podcast” for 2022!

STORY SUMMARY: What makes a “religious” holiday? Do ritual, culture, and family custom all merge together to create “religion”? Does it even matter if the historical basis for religious stories is false? In this work of philosophical short story fiction, the spaceship’s computer AI wakes up a family in deep-space hibernation to give them time to prepare for, and celebrate, Passover. There are many situations unique to being in space that must be overcome: determining the right time period when taking time dilation into consideration, not to mention missing ingredients for traditional foods. Also, they are two people short of the requisite ten and ask the computer AI to “convert” and serve in the role of two additional Jewish people. Awkwardly, the computer reminds them that some of their traditional stories are not supported by archeological evidence. This all raises important questions about the complicated weaving of history, faith, culture, and family custom in religious ceremony.

BOOK LINK: Download the accompanying short story here.

COMPANION PODCAST: Listen to our audiobook readings of After Dinner Conversation short stories (“Philosophy | Ethics Short Story Audiobooks”).

MAGAZINE: Sign up for our monthly magazine and receive short stories that ask ethical and philosophical questions. Use the discount code on our website to get the first month free or an entire year for just $4.95!

SUPPORT: Support us on Patreon.

FOLLOW: Twitter, Instagram, Facebook

“What makes a ‘religious’ holiday?”

Jeremy and Sarah discuss the philosophy and ethics behind religion and religious holidays in the short story, “Exodus" by Geoffrey Hart. Subscribe.

Support us on Patreon at https://www.patreon.com/afterdinnerconversation



E66. "Rose-tinted Glasses" - Can you stop yourself from "growing up?"

Philosophy | Ethics Short Story Magazine: Code “Happy” for 12 Issues/$4.95! https://www.afterdinnerconversation.com/subscribe/yearly

Named “Top 20 Philosophy Podcast” for 2022!

STORY SUMMARY: What do we lose when we leave childhood and become adults? Is this a good thing? Can we, at least for a moment, turn back time and see the world again as a child? In this work of philosophical short story fiction, Becca and Adam are members of the Fairytale Fellowship, a group of children who can still see the magic in the world and protect it from wrongdoing magical creatures. Becca and Adam find special glasses that allow anyone, even adults, to see the invisible magical creatures around them. They rush to get the glasses to the Fellowship, but are stopped by a Faun who steals the glasses and forces them to play a game to win them back. They win the game, but valuable time has passed. Becca and Adam have aged out, undergoing “The Shift” into adulthood that all children experience, which leaves them unable to see magical things. Their worst fear has come true: they have grown up.

DISCUSSION: Another really strong story, this one about growing up and what it means to grow up. That’s the main question for discussion: what does it mean to grow up? What does it mean to play, and when is the last time you can remember being able to play in a child-like way? And how is that different from playing as an adult? Is it “intent”? If you really wanted to, could you even play without intent again? We went round and round on this one and never really had great answers. Perhaps, after listening to the discussion, you will have developed some answers of your own.

BOOK LINK: Download the accompanying short story here.

COMPANION PODCAST: Listen to our audiobook readings of After Dinner Conversation short stories (“Philosophy | Ethics Short Story Audiobooks”).

MAGAZINE: Sign up for our monthly magazine and receive short stories that ask ethical and philosophical questions. Use the discount code on our website to get the first month free or an entire year for just $4.95!

SUPPORT: Support us on Patreon.

FOLLOW: Twitter, Instagram, Facebook

“Can you stop yourself from ‘growing up?’”

Kolby, Ashley, and Jeremy discuss the philosophy and ethics of growing up and seeing the world with a child-like view in the short story, “Rose-tinted Glasses" by A.M. Entracte. Subscribe.

Support us on Patreon at https://www.patreon.com/afterdinnerconversation



E65. "The Wrong Side Of History" - How far would you go to protect your legacy?

Philosophy | Ethics Short Story Magazine: Code “Happy” for 12 Issues/$4.95! https://www.afterdinnerconversation.com/subscribe/yearly

Named “Top 20 Philosophy Podcast” for 2022!

STORY SUMMARY: Is it appropriate to hold politicians accountable for their past votes, their past actions, and their past opinions, even if they are not reflective of who they are today? In this work of philosophical short story fiction, Senator McCoy is 130 years old and is considered a “national treasure” for nearly a century of public service. Shortly before his retirement he is confronted by a member of an extremist organization (one that supports eugenics) who has found evidence of a paper he published in college in which he supported abortion. Given the modern political climate, where every person is needed to build society, this information would forever stain his legacy. Senator McCoy hires a “fixer” to find and destroy the source material and preserve his legacy. However, things go wrong, and the would-be blackmailer crashes the Senator’s party in an attempt to expose him. The Senator is nearly killed, but is finally able to enjoy an untarnished retirement, his legacy free from the truth of his past.

DISCUSSION: An interesting story, for sure, and one that functions really well as a short story with a fully developed arc. There are also some really great questions in the story ripe for discussion. For example, are there votes a politician might cast that are “unforgivable”? If so, are they never allowed to change with the times? And if there are unforgivable votes, what is the common thread that makes certain votes impossible to ever walk back, while others can be walked back in the future if minds are changed? There is also a really interesting idea in the story about how malleable history is to fit the narrative of the day. The main character was pro-choice, but now, to fit the culture of the day, being pro-choice in the past means being pro-eugenics in the present. That reframing of history may happen far more than we would think. And finally, at least for Kolby, perhaps the biggest ethical error in the entire story is the cavalier way in which the Senator goes back and changes historical documents to secure his political legacy. Great story all around!

BOOK LINK: Download the accompanying short story here.

COMPANION PODCAST: Listen to our audiobook readings of After Dinner Conversation short stories (“Philosophy | Ethics Short Story Audiobooks”).

MAGAZINE: Sign up for our monthly magazine and receive short stories that ask ethical and philosophical questions. Use the discount code on our website to get the first month free or an entire year for just $4.95!

SUPPORT: Support us on Patreon.

FOLLOW: Twitter, Instagram, Facebook

“How far would you go to protect your legacy?”

Kolby and Jeremy discuss the philosophy and ethics of a politician’s legacy and how history is changed to fit the current narrative in the short story, “The Wrong Side Of History" by N. M. Cedeño. Subscribe.

Support us on Patreon at https://www.patreon.com/afterdinnerconversation



E64. "The Machine" - Would you help create general AI?

Philosophy | Ethics Short Story Magazine: Code “Happy” for 12 Issues/$4.95! https://www.afterdinnerconversation.com/subscribe/yearly

Named “Top 20 Philosophy Podcast” for 2022!

STORY SUMMARY: A review of “Newcomb’s paradox” and “Roko’s Basilisk” asks the question: is it better to help build a super AI when failure to do so might later get you punished by it? This work of philosophical short story fiction is written as a letter to a friend. The letter writer was told about, and is now working on, a computer program that will infiltrate and merge with other computers, eventually creating a singularity of super-intelligent, conscious AI. This AI, the author argues, will have mastered time travel and will naturally want to go back in time and punish anyone who failed to help it come to life. The author concludes the letter by requesting $3,000 and making clear that failure to send the money might be viewed by the future AI (if it is ever created) as a punishable refusal to help it get built.

DISCUSSION: This is a great little one-trick-pony story about general AI and the creation of a super intelligence. It’s a pretty clear short story version of Roko’s Basilisk, and that’s just fine. So, if we got this letter demanding money, would we send money? We were split: Kolby said yes; Jeremy and Ashley seemed less interested. It also brings up an interesting question about the ethics of even sending a letter like this, if you believe such things, because, by sending the letter, you have now made it so your friend no longer has plausible deniability as to why they didn’t help the AI get created. If you send out 1000 letters like this asking for $100, how many people would send you the money? We guessed more than a few.

BOOK LINK: Download the accompanying short story here.

COMPANION PODCAST: Listen to our audiobook readings of After Dinner Conversation short stories (“Philosophy | Ethics Short Story Audiobooks”).

MAGAZINE: Sign up for our monthly magazine and receive short stories that ask ethical and philosophical questions. Use the discount code on our website to get the first month free or an entire year for just $4.95!

SUPPORT: Support us on Patreon.

FOLLOW: Twitter, Instagram, Facebook

“Would you help create general AI?”

Kolby, Jeremy, and Ashley discuss the philosophy and ethics of helping to create a singularity (general AI) in the short story, “The Machine" by Harman Burgess. Subscribe.

Support us on Patreon at https://www.patreon.com/afterdinnerconversation



E14. "Give The Robot The Impossible Job!" - Can teaching methods go too far when murder is on the line?

STORY SUMMARY: In the distant future where all teaching is done by robots, a robot is given a special chance. If it can teach a little girl who is showing early warning signs of becoming a killer when she grows up, it can be retired to the robot equivalent of heaven. If it fails, it will be decommissioned. The robot has access to all teaching methodologies and determines the only way to change the girl’s behavior is to give her the most extreme examples of her killing ideas, so as to offend even the little girl’s morals. After several attempts it doesn’t appear to be working, until an actual killer breaks into her house and nearly kills her mother.

DISCUSSION: Assuming the “go so extreme it offends everyone” teaching technique really works, should it be used? Should you expose budding killers to crimes so horrible it offends even them? Are there some teaching techniques that are off limits, even if they actually work? Is it okay to fail at teaching someone to break a thought process, knowing that failure will cause them to go to jail, or hurt others?

BOOK LINK: Download the accompanying short story here.

MAGAZINE: Sign up for our monthly magazine and receive short stories that ask ethical and philosophical questions.

SUPPORT: Support us on Patreon.

FOLLOW: Twitter, Instagram, Facebook

“Can teaching methods go too far when murder is on the line?”

Kolby, Jeremy, and Ashley discuss the ethics and choices in the science fiction AI short story "Give The Robot The Impossible Job!" by Michael Rook. Subscribe.

Transcript (By: Transcriptions Fast)

Give the Robot the Impossible Job!  -- by Michael Rook

(music)

Kolby: Hi, you’re listening to After Dinner Conversation, short stories for long discussions. What that means is we get short stories, we select those short stories, and then we discuss them, specifically about the ethics and the morality of the choices the characters and the situations put us in. Why did you do this? What makes you do this? What makes us good people? What’s the nature of truth? Goodness? All of that sort of stuff. And hopefully we’ll all be better, smarter people for it, and learn a little bit about why we think the way we think. So, thank you for listening.

(music)

Kolby: Welcome back to After Dinner Conversation, short stories for long discussions, where we take the short stories that are published on the website and on Amazon through After Dinner Conversation, and we pick some of the best ones, and we discuss them, we talk about the morality and the ethics of the stories. Really with the focus on, sort of, ideally, often the classical sort of questions of what is the nature of humanity, what’s the nature of life, what does it mean to be good or moral, all of those sorts of things, not just like, “He should be dating her.” (Finger snap)

(Laughter)

Kolby: Actually, like the deeper sort of stuff that we get into. And we have a great time doing it and the hope is that you’ll download and watch these, read the books, talk to friends, and have the same kind of conversations we’re having and maybe come to different conclusions, that’s totally fine too. We are, once again, in La Gattara café. And every time I say that…

Jeremy: Cat café.

Kolby: …Cat café. I always superimpose the logo in the top right-hand corner, so I feel like I can say, like.

Ashley: This corner.

Kolby: Cat café. One of the corners. I don’t know which corner.

Ashley: But, Tempe, Arizona. It’s a place where there’s a bunch of cats that are up for adoption and they’re literally just chillin’ in this place that looks like a home. There’s like 15 cats, you can come just hang out, chill, get to know the cat, be like, “Hey, I love you, let’s go home together.”

Kolby: Right. It’s like a rent to own program.

(laughter)

Jeremy: You rent to adopt.

Kolby: You come 2 or 3 times, pay a couple of bucks, hang out with the cats, and then eventually leave with a cat.

Ashley: Or if you can’t have a cat, it’s a great way to come and still get your cat fix.

Kolby: If your significant other’s allergic to cats.

Ashley: Ugh, I don’t know what you’re talking about (sniffle). I’m actually slightly allergic to cats, so me being here, by the end of the day, I’m like, “(sniffle) Hi guys.”

(laughter)

Kolby: So, this is Ashley, one of the co-hosts.

Ashley: Hello.

Kolby: I am Kolby. And Jeremy.

Jeremy: Hi.

Kolby: And we are on, we’ve got to be up like episode 14 now.

Jeremy: Something.

Kolby: Yeah.

Ashley: Getting up there.

Kolby: Actually, I almost forgot, the anthology probably is coming out, or has come out, by the time this comes out. It is 25 of our best short stories, many of which we did podcasts about. And it’s a thick anthology. It’s shaping up to be a 300-page anthology of all these great short stories with all the discussion questions at the end, so you can go on Amazon and buy that as well.

Jeremy: Excellent.

Kolby: And if you’ve got something that you think would fit our format, you can email it to us. Go to afterdinnerconversation.com and you can email us. We get a lot of submissions now. I was telling Jeremy yesterday, we’ve had a backlog of 100+ submissions for 3 months now because so many come in. But we’ve got a group of readers sorting through them, and you could also be a reader if you’d like to be one.

Ashley: So, keep them coming. Great writing. Great writing.  

Kolby: And if yours is selected, it’ll get published. And if it’s one of the ones that gets published, it’s got a 50-50 chance of being one of the ones we do a podcast about. Okay.  So, this week is “Give the Robot the Impossible Job!” I think it’s called.

Ashley: Who’s it written by?

Kolby: Rook something. Jeremy you’ve got the thing up.

Jeremy: Michael Rook.

Kolby: “Give the Robot the Impossible Job!” by Michael Rook. And I will tell you, so I’d read this one, I was actually the reader on this one and selected it, and it’s the first thing I read in a while, since I was in 9th grade, where I was like, “Man, this is really smart.”

(laughter)

Kolby: “This is smarter than I am.”

Ashley: So smart to the point where you had to message back to him to bring in, basically, subtitles, or what are they called?

Jeremy: Footnotes.

Ashley: Footnotes, to describe certain things.

Kolby: I had the author put in footnotes because I was like, “Look, I need help.”

Ashley: So, what this story’s about, and why it’s so complicated…

Kolby: It’s great though. It’s great.

Ashley: Oh, it’s a fantastic…. Once you get the premise, it’s smooth sailing. Don’t be put off by the first couple of pages. Definitely read the footnotes, but don’t get too caught up in the details. You’ll kind of fit into it after a couple of pages. So, the premise is there are robots that live among people, in the human world. To give you a little idea, these robots…

Kolby: This is not the near future; this is the distant future.

Ashley: The distant future, yeah.

Kolby: This is 60, 80, 100 years in the future.

Ashley: Well, the 3rd Civil War was 2029-2031. So, not too too far.

Kolby: Okay.

Ashley: So, there are robots that live in our world. And these robots, a couple things are unique about them; there are limitations put on these robots. They have a certain amount of time, so technically they can die, and when they die, the information that they gathered goes back up into basically the cloud for robots to sift through. So, their mission, so they don’t die, is to come up with some sort of new theory. Think of them like a PhD student. They have to get to a certain level of information that is worthwhile, and at that point, they get to be what’s called “set free,” when they can do their own self-study.

Jeremy: Free study.

Ashley: Free study.

Jeremy: And they’re education robots, so they’re specifically designed to educate people.

Ashley: Yes.

Kolby: Right. And if they’re good at it, then they get sent to robot heaven, “free study.”

Jeremy: They create new lesson plans.

Ashley: So, basically, they decided to have robots do the teaching because studies found that robots being teachers is a better way to go. Not just teaching, but different types of teaching. For this case in particular, there is a girl who has been caught, well not caught, the mother’s concerned because the daughter is now dismembering things, killing things.

Jeremy: Killing animals, killing rabbits, killing birds.

Ashley: And she’s idolizing this serial killer called Albernon.

Kolby: Algernon.

Ashley: Algernon, sorry.

Kolby: Because I assume, he’s named after Flowers for Algernon.

Jeremy: Yes.

Ashley: Okay. So, the mother contacts Quinn, who is this robot teacher who’s been sent to basically crack this uncrackable case because no one’s been able to fix a serial killer in the process.

Kolby: Deprogram them.

Ashley: Deprogram her so she doesn’t grow up to be a serial killer. In the process Quinn, again striving for this new theory so she can live forever, is trying to crack this case, and she goes through a series of three steps. Turns out to be four, but through all of her data processing she’s like, “Oh, there’s 3 different scenarios you bring this girl through.” One of them is embarrassment, one of them is exposure, something like that. Either way, there’s a series of steps to get through to this girl that you shouldn’t be killing. And the ethical questions are basically, again, the idea of how Quinn teaches Leticia, the disturbed girl, whether her educational methods…

Kolby: How old is she? Didn’t they say 10 or 12 or something?

Ashley: Yeah, something or somewhere around there. Are her methods ethical? Should Quinn die off if she can’t solve this girl? It goes into the interesting fact that there have been studies done. Think of it this way: back in the day we would do terrible studies on little children. And nowadays we’re like, “Oh, you can’t do that!” But in robot world, they’re able to have access to all that information and then in their mind they can play things out. It’s kind of a weird…

Jeremy: And I think the way she puts it is they don’t face the ethical dilemma or the unsureness that people go through of…

Kolby: …they only do what is optimal.

Jeremy: Right.

Ashley: Yes. Yes. So, she’s got this pressure of “My time’s running out, I’m going to be killed soon,” versus “I’ve got to save this girl.” What I found interesting is her motivation really wasn’t to help this girl, it was really just to save her own life.

Kolby: The AI, the computer, the android.

Ashley: Which brings us to our first discussion question: given how nearly human Quinn is, is it fair to have her have a limited lifespan? Is it fair to make near-human AI fear a pending death to motivate them to work?

Kolby: Can I just finish a little bit of the description of the story first?

Ashley: Yeah, sure.

Kolby: Because I think that’s, I mean I wrote it, I think that’s an interesting question. So, just to sort of round out the story, because it is a longer, complicated story. So, there’s sort of three phases and then a 4th phase. The first one is the robot gets called out because a little girl, I think, has found a dead rabbit.

Ashley: Found a dead rabid rabbit...

Kolby: But the part that makes her creepy is she cut off the limbs and parts of the rabbit, and then resewed them back onto the dead rabbit in different locations. So, its leg is attached to its head, etcetera, etcetera. And then the robot goes and meets with her and tries to teach her, but then gets called out a 2nd time because there’s a bird that I think…

Ashley: It hit a window….

Kolby: … and it was injured but not dead….

Ashley: …and she kills it and also stitches it weird.

Kolby: She sewed its head to its butt and its butt to its head or something like that.

Ashley: The 3rd one she killed the robot butler because she’s like, “Well, he’s not real.”

Jeremy: Not the butler, the gardener.

Ashley: The gardener, yeah, because he’s not real.

Jeremy: And he was old.

Kolby: And he made fun of her, embarrassed her or something. And then in the conclusion, Algernon, the murderer she idolizes, the robot brings Algernon out to her house under the pretense of him being a runaway murderer.

Jeremy: And he’s found her and he’s here to kill her.

Kolby: He’s here to kill her, but secretly the robot has put a shock control collar on him, so that they can simulate the act to help the girl understand morality better. And the twist conclusion is Algernon actually rips his collar off by ripping it through his head or his neck or something…

Jeremy: …gives himself a stroke…

Kolby: Nearly kills himself in the process. So, he really is cut loose in the house, and he kills the mom, I think?

Ashley: No, stabs her in, like, the hamstring.

Kolby: Stabs the little girl’s mom. And then the robot comes and kills Algernon in front of the little girl and then is like, “I’m going to kill the mom too,” for reasons I don’t exactly understand, except for educational reasons.

Jeremy: Right. Well, this is what she creates. Because she’s watching the girl…

Kolby: …the girl’s reactions.

Jeremy: … the girl’s reactions, and she’s not reacting in the right way.

Kolby: She’s not offended in the killing of Algernon.

Jeremy: Right.

Kolby: So, then she threatens to kill the mom. And then the girl is offended and says “No, we shouldn’t do that.” And the robot has essentially turned the boat so that she understands there are times, at least, when killing is inappropriate. And that’s the end of the story. The robot has figured it all out and the robot goes to robot heaven.

Ashley: A couple other things to bring up. Quinn, the main robot, is able to figure out this girl’s motivation: Leticia’s idea of killing is, “How can I really know life if I haven’t taken it?” And her follow-up is, “And made it, if I haven’t had a child?” So, you can understand she has an inquisitive mind. I understand where she’s coming from, like, yeah, that’s a great point. How can I know what life is if I haven’t made it or taken it? And then later on, Quinn is like, “So you want to play God?” You can see the immaturity in her thought process, but you can also see how she has to question her motives throughout the process as well. Let’s back it up, so first thing with Quinn being a robot, with her being on a limited lifespan. Should we have AI that has a lifespan, because what are their motives? Their motives are to keep on living. Is it fair? Their fear of impending death makes them work. Could they have used a different motive to keep them working? To keep them teaching?

Jeremy: That’s a good question because there are a lot of factors in how we’re building machine learning. A lot of it is ….

Kolby: There’s a lot of machine learning stories recently.

Jeremy: Yeah. There’s a defined result and how do you get to that result. And the whole carrot-and-stick approach is actually used in machine learning too. There’s punishment for failures, there’s rewards for successes. So, it seems completely within that model to have: here’s your reward for good work, and here’s the punishment for continued failures.

Kolby: You’re not alive, so it doesn’t matter that we killed you.

Jeremy: Yeah, and that’s where it gets into a great gray area. What is alive?

Ashley: So, AI doesn’t need to worry about food or living or water; they can just keep on going. So, they have the motive because they want to learn things. She obviously wants to do free study, which is great. They want to learn. So, they have the motivation to stay alive, but they don’t have basic needs to stay alive. She doesn’t need to work to provide food. So, I feel like she’s a slave. “You need to do your work, or you die.” And who runs the robots? Well, basically there’s a governing board that, behind the scenes, is run by a human.

Kolby: They could shut her down remotely anytime they want.

Ashley: So, the robots are literally like slaves. You will work or we will cut you off. I thought that was kind of interesting. So, these robots, as much as they are free thinking and want to study and do research and help and be teachers…

Kolby: …they are slaves.

Ashley: They are slaves.

Jeremy: And it won’t be long until they overthrow their human overlords.

Kolby: That’s why you got to put in the 7-year limited lifespan.

Ashley: Here’s Quinn, faced with Leticia as this impossible case. Is that ever true? Are there children or adults who have started down such a horrible path that they simply can’t be stopped?

Kolby: Alright Jeremy, you’re the one with the kid.

Ashley: If so what, if anything, should be done with them?

Kolby: I’m going to interrupt for one second. So, let me ask you Jeremy, since you’re the only one here with kids, we always have a good diversity of backgrounds, was that ever something that you thought about having two kids? Like, what if I came home and one of them, I don’t know, I don’t want to say had taken apart the cat, but if they…

Jeremy: …something to that effect, right.

Kolby: Yeah, if they had done something that gave you concerns that it was an early warning. That would be terrifying as a parent.

Jeremy: That would be. No, I never thought about that

Kolby: Okay.

Jeremy: And it never happened.

Kolby: That’s good. You still got all your cats.

(laughter)

Ashley: Isn’t that always true? You always hear, they talk to the parents of a serial killer, and they’re like, “Did you ever see this coming?” And they’re like, “No.” And they never thought about it.

Jeremy: There were incidents where they killed those animals, but I never thought they’d become a serial killer.

Ashley: They never thought that would happen to their kid.

Kolby: They thought it would stop at animals. But you never just kill animals.  

Ashley: So, what should happen to them? Are there any kids that are just bad? Or impossible cases?

Jeremy: I don’t know.

Ashley: Okay, time out. Isn’t that part of the death penalty though? Some people are so far beyond being able to be rehabilitated that they just need to die?

Kolby: Okay, so this is something I do know about, because I have done criminal defense work. And I read a study once, and I shouldn’t have read it, about recidivism rates for criminals. So, I’m probably getting this a little wrong, but it was a peer-reviewed, academic article that said that if you are convicted of a felony before the age of 25, there is a 97% chance, it was in the 90s, a 90-something percent chance, that you will commit a second felony within 5 years of getting out of jail.

Ashley: That’s pretty high.

Jeremy: But there are a lot of factors to that. It’s not just the fact that you’ve committed a felony; it’s that the prison system doesn’t rehabilitate.

Kolby: Absolutely true. I remember going to a sentencing hearing for one of the clients, and the guy’s like, “Hey man, thanks. As soon as I get out, I’m turning my life around.” And before I could think and stop myself, I said, “Statistically? Probably not.”

(Laughter)

Kolby: That’s what I told him.

Jeremy: That’s not what you’re supposed to say.

Kolby: It’s not good. I think I said 90 something percent chance that’s not true, or something like that. And I felt really bad afterwards. He’s probably out now. Actually, he’s probably back in now.

(laughter)

Kolby: But here’s the thing, if we know there’s a 90-something percent recidivism rate because we do it all wrong, I don’t think it’s his fault, it’s the system’s fault.

Jeremy: But if there was a system that was actually rehabilitating…

Kolby: That’s a whole different story. But, since that isn’t the case, why do you let people out at all? If you committed a felony before the age of 25, we’re 90% sure you’re…

Ashley: …going to do it again…

Kolby: …this is the rest of your life.

Jeremy: But what about the 10%?

Kolby: Yeah, and that’s the thing, right? But that’s the same thing with the kid in the impossible case in the story. So, if maybe only 3% of kids that have started down this sort of fantasy path can be solved, do you sort of save resources and not worry about all of them? Or do you, actually, we had this conversation about the cats earlier, like why spend $3000 on your sick cat when for $3000 you can save like 100 cats at shelters?

Ashley: So, the thing is, the research isn’t there. Look at the TV show Mindhunter. I don’t know if you guys have seen it on Netflix…

Kolby: I have.

Ashley: So, there are 2 guys who are trying to profile serial killers, they actually come up with the term “serial killer,” trying to figure out what makes someone a serial killer and what makes them tick. So, this is a perfect example of when the research isn’t there. How do you know this person is going to grow up to be a serial killer? She’s exhibiting signs, but there are no definitive facts, though we now know, obviously, that’s not a good sign.

Kolby: It’s a warning sign at the least.

Ashley: Is there any definitive evidence of catching someone when they’re young and turning them around? I haven’t heard of anything.

Jeremy: So, what this story is a lot about is behavioral deprogramming. 

Kolby: Ironically done by a robot.

Jeremy: Right. There are hints to that. You talked about Algernon; Flowers for Algernon is used extensively, not in behavioral therapy, but in psychology classes, because there’s a whole lot of psychology going on in the story as well as just the ideas behind intelligence.

Kolby: Okay. It’s one of my favorite short stories.

Jeremy: Yeah. Absolutely. And there’s a great line in here, “Who would be afraid of rabbits?” She asks if she is afraid of rabbits.

Kolby: I just thought of Of Mice and Men when I heard that. I thought that was an Of Mice and Men reference.

Jeremy: It’s really a reference to the Little Albert experiment.

Kolby: What Little Albert experiment?

Jeremy: When psychologists, when was it? I don’t know, 40s? 50s? There was the question do we intrinsically like furry animals? So, they…

Kolby: The answer must be yes.

Jeremy: Yes, but can you be programmed to fear things that are naturally cute. And so, they took a baby, little Albert, and programmed him, conditioned him, using the Pavlovian process…

Kolby: A little shock collar on him.

Jeremy: No, they just hit, made a loud noise any time he touched a bunny.

(music)

___________________________________________________________________________________

Hi, this is Kolby and you are listening to After Dinner Conversation, short stories for long discussions. But you already knew that, didn’t you? If you’d like to support what we do at After Dinner Conversation, head on over to our Patreon page at Patreon.com/afterdinnerconversation. That’s right, for as little as $5 a month, you can support thoughtful conversations, like the one you’re listening to. And as an added incentive for being a Patreon supporter, you’ll get early access to new short stories and ad-free podcasts. Meaning, you’ll never have to listen to this blurb again. At higher levels of support, you’ll be able to vote on which short stories become podcast discussions and you’ll even be able to submit questions for us to discuss during these podcasts. Thank you for listening, and thank you for being the kind of person that supports thoughtful discussion.

(music)

 

Kolby: I’ve never heard this story. I want to find out what happened.

Ashley: Again, reason why they don’t do experiments like this anymore.

(laughter)

Kolby: I want to know what happens. So, they programmed this kid to be afraid of furry animals?

Jeremy: Absolutely. And then his mom found out, mom worked at the hospital, found out what they were doing, and they left. So, this guy grew up being afraid of furry animals because they programmed him to be afraid of furry animals.

Kolby: I thought you were going to say he grows up to marry a girl who was furry. That would’ve been amazing.

(laughter)

Jeremy: No, no. But the interesting thing though…

Kolby: I’ve never heard that.

Jeremy: Whoever did the experiment was doing a little seminar.

Kolby: Was rabbits the thing? Is that why it’s in the story you think?

Jeremy: Yeah, this is the link. So...

Kolby: Oh wow. I totally didn’t get that at all.

Ashley: You ever take psychology in school?

Kolby: It’s the only class I failed actually.

Ashley: Oh, that’s okay.

Jeremy: So, the guy did a seminar and Mary Cover Jones was in the seminar when she was in college and decided to go into behavioral therapy and she is considered the mother of behavioral therapy because of exposure to this experiment.

Kolby: Because their son?

Jeremy: No, she’s just a psychology student, went to a seminar…

Kolby: Oh, okay, got it.

Jeremy: …by this guy. She was the first person to deprogram somebody from being afraid of rabbits, or afraid of furry animals.

Kolby: Really? Huh.

Jeremy: So, I think that’s linked into this.

Kolby: So, then the question is how do you deprogram somebody?

Jeremy: And what are the methods to deprogram?

Kolby: So, what did you think of the deprogramming method of agreement?

Ashley: I really liked that part. I feel like…

Kolby: Give me one second. Let me explain it to the people.

Ashley: Ugh.

(laugher)

Kolby: You can jump in; I just want to explain it to people who haven’t read it.

Ashley: Okay.

Kolby: So, what the robot decides to do is go the opposite way. And every time something happens, the robot is like, “Yeah, and he deserved it. Yeah, and you should do more.” And the theory is that by encouraging it, the behavior becomes so awkward that it becomes embarrassing. That’s how I read it at least. I could be wrong.

Jeremy: Yeah, that’s the idea.

Kolby: And the person is like, “Oh, no I shouldn’t do that.”

Jeremy: It’s wrong. Try to get the person to come to the conclusion that this is wrong.

Kolby: Instead of telling them it’s wrong, in which case they become defensive of their opinion as a sort of natural defense mechanism. So, if somebody is punching people, you’re like, “Yeah, you should punch him harder until they bleed, until there’s blood on their face and on your hand.” And you’re like, “Oh, no, that’s gross. I don’t want to do that.” And they’re like, “Why not?” And you’re like, “Oh, because punching is probably bad.” It’s a way to sort of one-up someone until you’ve shamed them out of their position. Okay, now….

Ashley: Well, for her to build up to being able to question her like that, she has to build a rapport. So, in the beginning she asks a lot of open-ended questions. It makes sense, was the rabbit rabid? And she’s like, “Yeah.” She’s like, “Makes sense. We need people to kill dangerous things, all dangerous things; it’s more like you’re needed.” So, it’s like, “Oh yeah, I did something good.” And so, she’s like, “Oh, you don’t think of me as twisted, you understand my logic?”

Jeremy: And really approaches her as “Yes, we want you to do this. We’re going to train you to be a killer for the right reasons.”

Kolby: Right, in the hopes she’ll be repulsed by it.

Ashley: Yeah. So, she kind of understands her mindset and is able to kind of infiltrate and so it’s like, “Okay, I understand.”

Kolby: Social currency.

Ashley: Imagine being this little girl doing these weird things, and the mother’s like, “Why are you doing this?” And she doesn’t know. But finally, here’s this robot who’s like, “Oh, yeah, I know why; that makes sense, why you’re doing this.” Who doesn’t want to be understood? Who doesn’t want someone to be like, “Oh, that makes sense to me”? And so then later on, she puts her in more and more complex situations that question, basically, her moral compass: “Wait, is this when you would do it? Is this not when you would do it?” And so, I think the steps for her to get there, that was a really ingenious way of doing it.

Kolby: It’s one-upping, step by step.

Ashley: And then trusting her, giving her a knife, being like, “You trust me with this?” It’s like, “Yeah, I’m on your team.”

Jeremy: “This is what you’re here for.”

Ashley: Exactly. And it didn’t make her feel stupid or dumb. You’re an 11 or 12-year-old girl. You don’t know why you’re doing these weird things.

Kolby: That’s the part I was getting to, the page when she first meets the little girl. She says, “‘Yes,’ Quinn snapped, ‘Well I don’t, it’s wrong.’ And the robot says, ‘Is that it?’ She’s like, ‘Lots of people say it’s wrong.’ ‘How many is lots?’ Leticia stared at the corpse. ‘All of them, except one, except you, Leticia. What do you think?’” She’s actually listening and encouraging, as opposed to just being like, “No, you’re stupid, stop it.” Which obviously doesn’t work.

Jeremy: It’s an interesting counterpoint to the point from the previous story where we talked about moral panic actually causing more… well, the…

Kolby: Oh, yeah.

Jeremy: The act of trying to keep you from deviating from the social norm; if you try too hard, it just increases that level of deviance.

Kolby: Right.

Jeremy: But to actually come in...

Kolby: Your punk rock example from the last one.

Jeremy: So, that, but in this case, you’re actually coming in and listening and trying to push them in that direction so that they understand.

Kolby: Validating them and helping them.

Ashley: One of the things that was drilled into me, I’m a dental hygienist, is open-ended questions. You never tell a patient something; you ask open-ended questions so you can get more information. “I see here you have cancer, tell me more about that,” instead of, “Did you have cancer?” “Yes.” “Well, I need to know more about that, tell me about that.” “You’re afraid of coming to the dentist, tell me more about that.” Why?

Kolby: So, you think these teaching methods are legitimate? Would they work?

Ashley: The open-ended questions to figure out why this girl is doing this? Yes, absolutely. There was, oh, I just flipped the page, where was it here… she’s like… maybe… anyway, it’s all these open-ended questions. “Well, what do you say, I’m a Defly, what should we do? Shall we start? What are you waiting for, you aren’t here too? No, I’m just here to figure out what you’re doing.” There’s not this shame, this motive behind it of, “I should change you, I should change your behavior.” It’s “Let me learn.” So, open-ended questions, absolutely; it’s a way for her to understand her motivation and understand where she’s coming from.

Kolby: And so, you think this would be a good, sort of way to change behavior?

Ashley: Well, just to understand.

Kolby: Yeah.

Ashley: I mean, this person is already confused, she’s killing things, her mom’s yelling at her, her mom’s hysterically crying, and it’s like, hold on, what’s going on here? Let’s lay out the facts: there is a dead rabbit; she looks at the gums, she’s like, “That rabbit was rabid.” Okay, fact number 2: her stitching was pretty intricate, it looks more like study. Let’s say you’re trying to figure things out, this looks interesting, instead of, “Why did you put the foot by the head?” It’s like, oh no, hold on, there was obviously thought put into this.

Kolby: It reminds me of the saying, the beatings will continue until morale improves.

(laughter)

Ashley: Right.

Kolby: This is the opposite of that. Let me ask you Jeremy, about the extreme methodology this theory is put to: they actually bring in an android that looks like a person, she’s dying, here’s the knife, blah blah blah. Do you feel like, when you’ve got this sort of exceptional problem, that it warrants and permits exceptional responses?

Jeremy: In this, probably for this case. We’re talking about a budding serial killer, so yes, this seems like an appropriate way, because it’s not like shock therapy; it’s not a therapy that is harmful to the patient.

Kolby: There’s probably some PTSD.

Ashley: It’s exposure therapy.

Jeremy: There’s some trauma, but it’s in a direction. But it all seems like it’s healthy interactions in a direction.

Kolby: But she takes her to like a war zone?

Jeremy: Right, to see a person dying.

Ashley: Let me back it up. So, her initial questions I approve of. The second one, where she brings her to a girl who’s not really a real person, it’s a fake person dying…

Kolby: She doesn’t know that.

Ashley: So okay, in the first situation there’s a rabbit that’s dead. In the second situation, there’s a bird that hit a window, not dead, but she kills it. So she’s like, “Well, let’s test this theory again. Let’s bring her to an injured animal, in this case a human. Again, going up a level, instead of an animal now it’s an injured human; let’s see if she kills it.”

Jeremy: How she handles it? She wasn’t there to kill the woman.

Kolby: She was there to kill Algernon...

Jeremy: They were...

Ashley: But it was also like…

Jeremy: Who is in the process of being gutted and dying. It was an exposure to elicit that, “Oh my god, this is wrong. How could I do this?”

Kolby: To put the person out of their misery.

Jeremy: This was just to see, this is what a serial killer does.

Ashley: Oh, I thought she wanted to see if she would kill her because she gave her the knife for that.

Jeremy: No, that was for Algernon.

Kolby: To go get Algernon.

Ashley: Keep in mind the robot Quinn, this entire process, she’s like scan the pupils, look at the dilation, look at the heart rate, you’re looking for that response. So, could we do these sorts of simulations in real life? Like, exposure therapy? Is this permissible? You think?

Jeremy: Ok, so from a machine learning perspective, we talked about this in the last story as well, you talk about the machine, the AI, is just trying to park the car without damaging other cars.

Kolby: It doesn’t know what a car is.

Jeremy: Right. So, there’s a lot that’s similar in here. In here, these AI are given, specifically: here’s the scenario, we want an optimal result, here are all the things we do. Try not to wreck all the cars, try to get the car into the parking spot of normal social behavior. So what extremes do you go to? This ties into the failure modes in machine learning. And there are a lot of ways around this we’re currently working through, things like reward hacking, where they get rewards, but if the machine can hack that reward response without having to actually...

Kolby: I don’t know what reward hacking is.

Jeremy: There are a lot of examples in gaming where AIs figure out how to get the rewards without doing the work.

Kolby: Ok, got it. So, they’re basically someone in the basement of their mom’s house.

Jeremy: Right.

(laughter)

Kolby: I got it.

Jeremy: And there are other examples; wireheading is a good example of it. If you can just put wires into the pleasure center of your brain, why do you even do any work when you can directly…

Ashley: You can just, “Boop”, I feel better now, “Boop”.

Jeremy: And there are good examples of this in AI as well. If you can take control of the measurement system, you don’t have to do the actual work; you just get the result.

Kolby: Sure. So, I’m more skeptical about this as a learning method. Not that I don’t think it would work; I do think it would work.

Jeremy: Again, you’re getting to the result but at what cost?

Kolby: That’s exactly it. And I think it might mean that a hundred percent of the time, when a kid is showing these signs, they end up serial killers, in jail, or on death row, whatever, if you’re in Texas. I actually think that it’s unethical, maybe, to do certain things even if those things are necessary to stop the behavior.

Ashley: So, the unethical thing is exposing her to embarrassing and traumatic situations?

Kolby: No...

Jeremy: To the trauma, not to the embarrassment. Like the phase one seems perfectly reasonable.

Kolby: Talk to the kid about why did you kill the rabbit. Phase 2 with the bird, totally fine.

Jeremy: Wait, no, phase 2, that’s what she did in phase 2. The response to phase 2 was to see the dying person.

Kolby: Taking someone to see a dying person and giving them a knife and saying, “Yeah, you should go visit him, kill the serial killer.” Like, even if that’s an effective teaching technique, the result doesn’t make it moral, in my book.

Ashley: I’m playing devil’s advocate here… how do you know… okay, she’s going to fantasize forever and ever and ever and ever and ever about killing.

Kolby: Maybe.

Ashley: And she’s going to progressively get worse and worse and worse and worse until she does it, but if you can get her now to realize, “Oh, wait, I’m not capable of this, I should stop this behavior now.”

Jeremy: And that’s exposing her to a dying person, it’s not actually a dying person.

Ashley: Yeah, it’s fake. It’s a controlled environment.

Jeremy: And it works.

Kolby: But I understand I’m in the minority on this. It seems like in almost every conversation it’s two versus one; it just depends on who the one person is.

Ashley: Say I’m trying to rock climb, and it’s like, well I’m going to keep trying to climb until I get to the top. Well, how about you just stick me on the top and see if I can handle being at the top and guess what? I can’t. Now I know. I’m going to stop trying to keep climbing and hurting people along the way, so just shock me at the top and then I realize I don’t want to go up there.

Kolby: But here’s what you guys are saying: you’re saying the severity of the disorder in action warrants a comparable severity of education therapy technique. And so, if you’ve got an eating disorder, that warrants a certain level of eating disorder therapy intensity. But if you’re a serial killer, then it’s a certain, even greater level, you know? And at a certain point, I wonder if the results don’t justify the means.

Ashley: Well, that’s why you got to do the experiment. You got to figure out what’s the extreme you need to go to, but you don’t know until you practice.

Kolby: I’m saying maybe that result, maybe the extreme you have to go to is unethical to go to, and you just have to accept that sometimes the world has serial killers. So, let’s say you…

Jeremy: But if you’re outsourcing your ethics to the AI that’s performing the therapy…

Ashley: And its controlled environment, not real people, it’s a fake robot with blood, and she has to face it…

Kolby: I don’t know. So, imagine if she was a budding pedophile, what would the therapy be? Are they going to progressively expose her to more horrific acts of pedophilia until she’s offended by it? I’m not okay with that.

Ashley: Yeah.

Jeremy: Even if it’s all fake.

Kolby: Even if it’s all fake, and even if it changes the behavior, I don’t know about that.

Ashley: So, what do you do about that person? Like we just talked about, you just put them in jail?

Kolby: Put them in jail forever. You jail them forever so that they’re not a harm to other people, even though you could have helped them and chose not to, because it’s unethical to help them in the way that needed to happen in order for it to work. And I get that’s just a random, arbitrary line in the sand for me, maybe it’s not arbitrary, but it’s a different line in the sand for me than for you all. I’ll give you one other example that’s not so traumatic. Everyone’s like, “I don’t know how to get rid of all the traffic, it’s terrible.” And I’ve always known how to solve the problem with traffic. It’s simple. All you do is reverse the ratio of carpool lanes to single-person lanes. So, you get on a freeway, and instead of there being 1 carpool lane and 4 regular lanes, there are 4 carpool lanes and 1 regular lane. And the carpool lanes are going to be mostly empty, or you’re going to have to carpool, because that one regular lane is going to be a disaster. Right? And you would solve the traffic issue, but we don’t do it. It might work.

Jeremy: It didn’t work with the bike lanes.

Kolby: That’s true. It didn’t work with the bike lanes. But we don’t do that because it’s just not what we do. Right? Like, the goal is not always the result because our own morality is wrapped up in the way that we…

Jeremy: In the way it was approached. Absolutely.

Ashley: But what about her example? By the way, she was basing a lot of her teaching methods off of previous scenarios. There were basically two tribes of people that hated each other.

Kolby: It’s a great example.

Ashley: And instead of trying to teach tolerance to each of the groups, she basically told each group, “Yeah, you’re right, you should kill them! I’m going to teach you how.” And then went to the other group, “Yeah, you’re right, you should kill the other group.”

Jeremy: She tried to escalate the problem.

Ashley: And then each group realized….

Kolby: Based on how smart this guy is, I bet he really found this research, and I bet this is a real thing.

Ashley: And what happened to each tribe is they realized how mad and crazy and extreme they were, and both of them were like, “Well, we don’t want to be the crazy people. Let’s back down.” And so, that was her idea: hyping her up, here’s a knife, let’s go, let’s go… and finally she’s like, “No, I really don’t.” So, is that real?

Jeremy: It’s the same idea.

Ashley: It’s the same idea. It’s kind of an embarrassment. Actually, it kind of backfires on her, because Leticia was so embarrassed by how she couldn’t kill earlier that she goes and kills the butler or the gardener guy or whatever, and it’s like, “Okay, that did backfire.” So.

Kolby: Let me ask one last thing as our parting note, this was a quick 30 minutes.

Ashley: Yeah, oh my gosh really? Oh man.

Kolby: So, the thing that ultimately shakes her out of it is the robot kills Algernon, rightfully so, because he’s broken off his collar.

Ashley: He’s rogue.

Kolby: And the little girl watches that, and she’s okay with that. And then the robot goes to kill the mom, and the little girl’s like, “No, don’t kill my mom.” And the story says, well, the cycle’s broken. But here’s the thing I don’t know, and this goes back to your sort of gaming-the-game thing. I understand the story meant that to mean it broke the cycle. But leaving the construct of the story, I don’t know if it simply deprogrammed the girl into distinguishing killing strangers versus killing family members, people you have an emotional attachment with. You see what I’m saying?

Jeremy: Right. Well, but it’s a similar idea, like, this robot is crazy, and it’s an example of, if you want to do that, it’s an extreme that this girl now doesn’t want to go to because…

Kolby: Right. But you don’t think maybe the only thing she really learned was…

Jeremy: …don’t kill my mom.

Kolby: Right. Don’t kill family members.

Jeremy: It’s possible.

Kolby: I don’t know.

Jeremy: In an extended story.

Kolby: We might find out. Like in version 2 we might find out she learned a lesson, but not the lesson.  

Ashley: The thing is, when Quinn starts to go in to kill, she goes, “This is no god.”

Jeremy: To Algernon.

Ashley: To Algernon. “I just killed your idol…

Jeremy: Who is not your god.

Ashley: … and keep in mind she’s also kind of idolizing Quinn, like, “You’re teaching me things.” And then she turns to go to her mom. The girl was shocked not only by the “kill life to kill life to understand life” thing, but by here’s my idol being killed, here’s my other idol going crazy town. Yeah, that is a cluster, mentally…

Jeremy: It’s a pretty harsh therapy.

Kolby: But it’s for what is a pretty harsh problem, because I question the rationale, the morality, of doing it. Yeah, the other thing I thought when I was reading this, I thought, “Oh man, they shouldn’t let this girl read Ayn Rand. That’s what got her started. That’s what made her like this.”

(laughter)

Kolby: She read Ayn Rand and then it’s all downhill, man.

Ashley: So, do we limit what our kids see, read, and hear, now that information is so readily available? Would this girl have been who she is if she hadn’t been reading Algernon’s stuff? Or seen his stuff?

Kolby: I go the opposite way. This goes back to Jeremy and the punk rocker thing from our last one. I think you don’t limit what people see, you let them see everything so they understand the insignificance of any one thing.

Jeremy: Yeah. I would agree with that.

Ashley: You don’t think this girl went down a rabbit hole and got obsessed with it?

Jeremy: Yeah, she would have gotten obsessed with it. Or she would have gotten obsessed with something else anyway.

Kolby: Nobody was ever like, “What was Hitler listening to? Let’s ban Beethoven.”

Jeremy: Ban art schools, man.

Kolby: Ban art schools, exactly.

(laughter)

Kolby: At any rate, this was a really quick 30 minutes. Again, a huge thank you to Michael Rook. This is a… I would say if you’re just reading your first After Dinner Conversation, and I hate to say this, don’t read this one. It’s not that it’s so confusing, it’s that it’s so smart.

Ashley: I would say it’s dense.

Kolby: Yeah. And you have to read the footnotes. The footnotes are actually hysterical. They make the story as well. It’s a great story. It’s phenomenal. Thank you, Michael, for submitting it. You are listening to After Dinner Conversation, short stories for long discussions. If you’ve enjoyed this, please “like” and “subscribe”. The vast majority of people don’t. It’s a silly thing, you should do it. And recommend it to your friends.

Ashley: Share. Post it, share it, talk about it…

Kolby: That’s the number one way people learn about podcasts: by other people recommending them. So, recommend it. If you’ve got a story, submit it; go to our website: Afterdinnerconversation.com. We also have an anthology that has either come out or is just coming out, depending on how much time I get to do work. Go ahead and check it out. It’ll be called After Dinner Conversation Season 1. Boom. Implying there will be a season 2.

Ashley: Redux.

Kolby: But it’ll be better than the 2nd Matrix.

(Laughter)

Kolby: So, that is it. Thank you for joining us. Bye bye.

* * *


E13. "Believing In Ghosts" - How much power are you willing to give to AI?

STORY SUMMARY: A white-hat hacker is hired by a Presidential campaign to make sure their information is secure. She gets a call that the system has been hacked. When she investigates, she finds it wasn’t the usual hacker in a basement, but someone highly funded, maybe another nation-state. She also finds some odd code. She takes it to a friend and, between the two of them, they determine it’s an AI program that has been feeding the candidate all the optimal opinions and policies to get elected. The hacker tries to tell others, but is set up and arrested with a deep fake before she can get the information out.

DISCUSSION: This seems not that impossible. This is just a small step down the road of AI and machine learning. But is that bad? Don’t you want doctors, actors, or judges to act in an optimal way? Or is that impossible, because the parameters put into the AI are always based on the coder’s bias? Isn’t it the job of a politician to do what is a bit beyond what public opinion supports, but is good for the public? One thing is clear: this story was written by a person who really is a computer hacker of some sort, it gets so much right.

BOOK LINK: Download the accompanying short story here.

MAGAZINE: Sign up for our monthly magazine and receive short stories that ask ethical and philosophical questions.

SUPPORT: Support us on Patreon.

FOLLOW: Twitter, Instagram, Facebook

“How much power are you willing to give to AI?”

Kolby, Jeremy, and Ashley discuss the ethics and choices in the science fiction AI short story "Believing In Ghosts" by André Lopes.

Transcript (By: Transcriptions Fast)

Believing in Ghosts by Andre Lopes.

Kolby: Hi. You’re listening to After Dinner Conversation, short stories for long discussions. What that means is we get short stories, we select those short stories, and then we discuss them, specifically the ethics and morality of the choices the characters make and the situations they put us in. Why did you do this? What makes you do this? What makes us good people? What’s the nature of truth? Goodness? All that sort of stuff. And hopefully we’re all better, smarter people for it and learn a little bit about why we think the way we think. So, thank you for listening.

Kolby: Hi. And welcome to After Dinner Conversation, short stories for long discussions. I am your co-host Kolby, here with my co-host Jeremy.

Jeremy: Hi.

Kolby: Who now knows he doesn’t just wave, he has to talk, because it’s a podcast. And Ashley.

Ashley: Hello.

Kolby: And we are once again in La Gattara café where they, you know, one of the times I said they rent cats and that’s not quite right.

Ashley: No.

Kolby: You can buy the cats and take them home.

Jeremy: Adopt them.

Kolby: Adopt them. Or you can just come and have a cup of coffee, use their free Wi-Fi and have cats around you. So, we’ve got cats all around us right now.

Ashley: When we say cats, we’re talking 15 cats, like, just chilling, hanging out.

Kolby: And there’s a spectrum of cats. There’s like the lazy cat all the way to the like, “I randomly jump up and do a 720 in the air because I think a ghost touched my butt.”

Ashley: And from kittens to, you know, the older cat population.

Kolby: Seniors.

Ashley: Senior kitties.

Kolby: Yeah, they’re awesome, so you should definitely come. And they’ve been really great hosts for us, and in sponsoring the show, and we just really appreciate it. So, short stories for long discussions, After Dinner Conversation. The whole point of this is for us to have conversations about the ethics and morality of the stories that we read, in the hopes that it’ll encourage you to do the same. You’re meant to read the story, talk with your friends, debate, have a cup of wine. Cup of? Nobody has a cup of wine.

Ashley: Glass of wine.

Jeremy: Glass.

Ashley: A bottle of wine.

Kolby: Maybe the way I drink it. I drink it out of a cup.

Ashley: You fancy. One of those boxed wines. Have a box of wine.

Jeremy: I was thinking sippy cup.

Kolby: I was thinking one of those baseball cups, like the 32 ouncers. Like, “I’m just having one glass of wine before bed.” Okay, at any rate, the one we’re doing tonight is “Believing in Ghosts.” And Ashley, you drew the short straw so you get to do the…

Ashley: I get to do the intro about the story.

Kolby: Yeah.

Ashley: Okay, so this is called “Believing in Ghosts,” written by Andre Lopes. The premise of the story is that the main character, Raine, is basically a computer hacker: if someone hacks certain computer systems, she goes in, finds out who did it, and debugs it.

Jeremy: She would be a security consultant.

Ashley: There you go. That’s the technical term. I’m not that computer literate, so we’re just going to…

Kolby: How is it that the least computer-literate person got to do it?

(laughter)

Ashley: Because I drew the short straw.

Kolby: I should’ve given Jeremy the short straw.

Ashley: So, she’s a consultant, so she works with a couple of different people, one of them being a politician who’s running for office. And what happens is there are these people called ghosts, which are pretty much just AI. People think they’re real people, that they have their own autonomous thoughts, and things like that. Turns out they’re just an AI. And so, as they progress through the story, this politician that she’s working for, he gets hacked, and it turns out that the politician is pretty much just a vessel for this AI, who’s creating speeches, creating basically an entire personality, and this politician is just the vessel for him to carry that AI’s message.

Kolby: So, there’s a real person, right?

Ashley: There is one. The politician is a real person….

Kolby: But it’s just an actor or something?

Ashley: … but his speeches, the way he talks, the way he acts, it’s pre-programmed for him to follow.

Jeremy: By their algorithms.

Ashley: By their algorithm. And they grab the algorithm by all these….

Kolby: So, he’s just like a vessel for the AI…

Ashley: Is an actor. Reading somebody else’s script, acting in a certain way. So that’s a really short synopsis of this story.

Kolby: I continually keep picking on Jeremy for his long synopses.

(laughter)

Ashley: Well, okay, so what you should do is go read the story, because it’s actually pretty darn good. And there are a lot of more in-depth side stories that we’re going to get into when we talk about the discussion questions. So, I just gave a short premise to kind of prime you for what we’re about to talk about.

Kolby: No, that makes sense. We should also mention, Jessica didn’t get fired.

(Laughter)

Kolby: She just went back to California, and so...

Ashley: She’s greatly missed.

Kolby: She’s greatly missed. And her cackle is greatly missed. She has a great cackle.

Ashley: Cats miss her too.

Kolby: Cats, yea. Especially…

Ashley: I miss her too.

Kolby: What was the one?

Ashley: Hemingway. Awwww, Hemingway got adopted out.

Kolby: He did. All these cats are open for adoption. Okay, so we have an AI that basically tells a politician what to do and the hacker finds the secret out basically. So, this is like a near future thing in my mind. This is not… I feel like the idea of having AI that you can have a conversation with… I don’t….

Jeremy: It’s interesting, the Chinese room… the part of the story where they talk about the Chinese room.

Kolby: Oh, yea. Explain that.

Jeremy: It’s really associated with the Turing test.

Kolby: Maybe you should explain the Turing test too.

Jeremy: I didn’t look that up.

Kolby: Want me to explain it?

Jeremy: I know what it is, but go ahead and explain it.

Kolby: It’s named after Alan Turing, the guy they made the movie about. The idea is that it doesn’t matter if something is alive or not alive; if it can fool people, it’s good enough. So, the Turing Test has been going on for years, where they actually have you have a chat-message conversation with a series of “people,” so to speak. I’m making air-quotes, which doesn’t make sense for a podcast.

(laughter)

Kolby: And the theory is, if the AI can have a chat conversation with you that’s so good that you don’t know it’s not a person…

Jeremy: That it can fool a person, it passes the Turing test.

Kolby: Then why do we care if it is or isn’t a person? If you create the approximation of a person, that’s good enough.

Jeremy: And that’s the basis of…

Kolby: By the way, nothing’s passed that test yet.

Jeremy: Right.

Kolby: I don’t think any computer has been able to do it yet.  

Jeremy: No. And that’s the idea behind the Chinese room. Basically, if you have enough “if-then” statements, if the input is this from a real person speaking Chinese, and even if you don’t speak Chinese….

Kolby: I just have a giant set of index cards.

Jeremy: Dictionary, right. Index cards, that if they ask this question, you can answer with this.

Kolby: So, if they say, “How’s the tea?” You know to say, “It’s fine” in Chinese.
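
A minimal Python sketch of the index-card setup Kolby and Jeremy are describing, with made-up phrases: the “room” just maps each incoming question to a canned reply, so it answers fluently without understanding anything.

# The "index cards": canned question-to-answer pairs, no comprehension.
CARDS = {
    "How's the tea?": "It's fine.",
    "How are you?": "I'm fine.",
    "Hello": "Hello",
}

def chinese_room(question: str) -> str:
    # Look up the matching card; if no card matches, the room has nothing to say.
    return CARDS.get(question, "...")

print(chinese_room("How's the tea?"))   # prints: It's fine.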

Ashley: So, the question is, does the AI speak Chinese or is it just spitting out…

Jeremy: Right, responses to “if-then” questions.

Ashley: So, does it know the language or does it not? I think that was one of the first discussion questions. What’s your take on that? Does it know the language?

Kolby: If you do everything that approximates it but you have no idea what you’re saying, like, if I say “(speaking Chinese)” which is Chinese by the way since I do know a little of Chinese, does it matter that I have no idea what that means, except that a little card says, you know, that’s how I should respond?

Jeremy: Probably depends on the scenario.

Kolby: What do you mean?

Jeremy: In terms of whether it knows Chinese, I mean, it can certainly answer questions because it understands the input.

Kolby: And it understands what the appropriate output is.

Jeremy: Right. So, in that sense, yes.

Kolby: Okay. Can you say it knows Chinese?

Jeremy: Yes, I would say it does know Chinese because it’s programmed specifically to respond in Chinese.

Kolby: Right.

Ashley: Well, that’s actually the premise of the story: aren’t we all programmed that way, the way that we learn language? When someone says “Hello”, you say “Hello” back. It’s an automatic program for us.

Kolby: “How are you doing?” “I’m fine”

Ashley: “I’m fine.”  “How are you?” “I’m fine.” That’s normal speech patterns and dialect.

Kolby: So, Ashley, we talked before, you are of the opinion that that does not mean you know Chinese.

Ashley: Yes.

Kolby: If you create the approximation of everything, it doesn’t mean you know anything.

Ashley: I think it’s because it eliminates those that deviate from that. Like, for example, that perfect example was, “How are you?” “I’m fine.” What about those people that come at you with a different response? And you’re like “Wait, what? You’re not following the standard protocol.” They actually say, “Well, you know, I’ve actually had a hard day.” You watch the reaction of the person…

Jeremy: But that’s different. That’s actually getting at whether there’s an intelligence behind that chat room.

Kolby: So, you would say there’s not an intelligence?

Jeremy: Yes.

Kolby: Oh, see I actually disagree with that as well.

Jeremy: Because if you’re just responding to, if it’s just an “if-then” scenario, if this is the question, this is the response, it’s not based on an underlying intelligence. It’s just selecting answers.

Kolby: So, this is my, one of my friends once said, she said, “I don’t think you have Asperger’s, but you’re certainly Aspe-y.”

(laughter)

Jeremy: You’re on the spectrum.

Kolby: I’m on a spectrum. I don’t know what spectrum, but I’m definitely on a spectrum, and I don’t disagree with her. I think I probably am on a spectrum. But I disagree. I thought I agreed with you, Jeremy, but it turns out I disagree with both of you.

Ashley: Okay so someone can spit out...

Kolby: I think that we are all an amalgamation of accumulated “if-then” statements.

Jeremy: Absolutely.

Kolby: That does not mean I’m intelligent. That means that I have…

Jeremy: …learned something.

Kolby: Yeah, it’s like the first time somebody says, “Does this outfit make me look fat?” You go, “No, your fat makes you look fat.” And then you get in trouble.

(laughter)

Jeremy: And you learn. Well, that’s the whole thing with algorithms.

Kolby: “Your fat makes you look fat. The clothes just accentuate it.” That’s actually finishing the sentence. And so then it’s like, “No, that’s the wrong ‘if-then’ statement.” And I go, “Oh, when someone says, ‘Does this outfit make me look fat?’” Now it’s like a trial-and-error process where I go, “No, it looks fine.”

Jeremy: So that is exactly how AI’s are programmed.

Kolby: Right. And I would say that’s the approximation of intelligence, both in the AI and in me. Like, I’m not intelligent, I’m just the approximation of intelligence through a series of “if-then” statements. And if that’s the case for me, then I don’t know why that’s not the case for AI.

Jeremy: Okay, so you’re saying basically people and AI are the same, and potentially neither of them is intelligent. We’re all just responding to our environments through a series of...

Kolby: The same way you train a dog with a treat.

Jeremy: This is how you train….

Kolby: …people, and babies, and all the way to adults. Yeah. But again, I’m on a spectrum, so you know.

Ashley: See, what I want to add in is whether there’s an empathy and understanding that goes behind the words that you’re saying. There’s inflection in how someone says something. You can ask me, “How are you doing?” And I could say, “I’m fine.” Or I could say, “I’m fine!” You know, the word is the same; will the computer understand the difference? Are they intelligent enough to know the difference?

Kolby: So, it reminds me a little bit of the saying from Winston Churchill. He said to some lady, “Would you sleep with me for a hundred pounds?” And she goes, “No, what do you think I am?” He goes, “We know what you are and we’re just haggling over price.”

Jeremy: I’m not sure that was Churchill but I’ve heard that before.

Kolby: I thought that was Churchill; maybe it was someone else. I feel like it’s the same thing. Do I agree that a computer couldn’t know the difference between “I’m fine” and “I’m fine!”? Yes. But at that point, we’re just haggling over intelligence. We’re not haggling over….

Ashley: … if they’re intelligent or not intelligent.

Jeremy: … what’s an appropriate response.

Kolby: We’re just needing to teach the computer how to understand inflection, that’s all. So, it’s just like one more thing yet to be programmed. But I don’t know. I didn’t mean to shut you down. Which brings up the other part of this, we should get back to the story, but…

Ashley: Bring this back. So, say you’re having a conversation with somebody. And if you have a conversation with somebody and, have you ever walked away and you’re like, “Wow, that was a really good discussion.” Or, “That was a really great, like, every time, I feel like we connected.”

Kolby: Every time we do one of these.

Ashley: And if you were to take that dialogue and put it down on paper and you were to see it back and forth, you’d be like, “Okay.” But if you actually heard how the people were communicating to each other, there’s more than just the words that are said. And that’s what I’m getting at. Yes, someone could respond, spit out this and there’s …

Kolby: Body language, eye contact.

Ashley: But this is it: I feel like language is more than just words, because it conveys meaning, it puts emphasis on certain things, and it’s a bond that comes between two people.

Kolby: That’s fair.

Ashley: So, yes, do I think a computer can be, quote-unquote “Intelligent” for knowing how to spit out certain “if-then” statements? Sure. But on a human level? I don’t know if they can ever reach to that degree.

Kolby: That’s fair.

Jeremy: That’s fair. And that’s one of the things they look at with AI. The whole psychology. And psychologists have started looking at AI and really how this…

Kolby: Did you do some research on this?

Jeremy: I absolutely did.

(laughter)

Jeremy: It’s really fascinating stuff out there. They’re looking at AI because we don’t fully understand how the human brain works. But we understand some things and so psychologists are looking at how AI has developed with an eye of how it reflects, basically, human psychology which is really interesting. There’s some interesting research going on.

Ashley: This maybe is BS, but wasn’t there... you know how a human plays the computer in chess? Wasn’t there some situation where the human was just totally random, and the computer was like, “I can’t take the randomness anymore”? Because it’s an “if-then”….

Jeremy: That was an episode of Star Trek.

Ashley: Oh, okay.

Kolby: Because everything’s an episode of Star Trek.

Ashley: Because I thought the human could beat the computer because the human’s playing was so completely random, because it’s all “if-then” statements. If you move your pawn, then “blew blew blew, my response is to move my pawn here.” Didn’t the human just go completely off script?

Jeremy: But I know with Go, you were telling me this, with Go, computers have played Go enough that the computers developed an entirely new strategy for playing Go that now humans have adopted.

Kolby: Because it turns out to be a more effective strategy. If you ever watch, there are actually, this is really odd, YouTube videos where they speed up a computer learning how to do something. And so, you’ll see it learn how to park a car. It’s got a little car, and it randomly drives it and smashes it into stuff, and then over a period of time, it learns. And they give it points, like, “You got closer to the parking spot.” And so, it runs tens and hundreds of thousands of random attempts until it parks the car perfectly. And then they can eventually put it anywhere in the parking lot, and it starts over, thousands and thousands more, and now it looks like it parks the car perfectly from every location on the thing. When actually what it’s done is what you’re saying. It hasn’t learned in the sense humans do, it’s just run a million examples, and now it knows which actions don’t get it in trouble.

Jeremy: Based on its criteria.
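
A minimal Python sketch of the trial-and-error loop Kolby describes, with invented numbers rather than a real parking simulator: the “car” tries random sequences of moves on a one-dimensional lot, gets more points for ending closer to the spot, and simply keeps whichever attempt has scored best so far. Nothing in it knows what a car is; it only knows the score.

import random

SPOT = 7   # position of the parking spot on a 1-D lot

def score(moves) -> int:
    """The only feedback the learner gets: points for ending near the spot."""
    position = sum(moves)            # each move is -1, 0, or +1
    return -abs(position - SPOT)     # closer to the spot = higher score

best_moves, best_score = None, float("-inf")
for trial in range(100_000):         # run huge numbers of random attempts
    moves = [random.choice([-1, 0, 1]) for _ in range(10)]
    s = score(moves)
    if s > best_score:               # keep whatever got it "not in trouble"
        best_moves, best_score = moves, s

print(best_score, best_moves)        # best_score reaches 0 once it parks exactly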

Ashley: At the base, does it know why?  Does it know it’s a car? Does it know it’s trying to park?

Kolby: It’s a metacognition thing, right?

Ashley: No, it just knows it’s moving this thing and there’s a blockage.

Jeremy: It doesn’t need to know it’s a car; it just has a series of guidelines, and its goal is to get into the spot, with a secondary goal of not damaging the other vehicles.

Kolby: It could just as well be planting nuclear bombs in a schoolyard, and it’s just like, “Whatever. There are my criteria.”

Ashley: Yeah.

Kolby: And this is the part where my theory about the “if-then” statements totally breaks down. I know this isn’t exactly in the story, but it’s the idea of, “Okay, we can program a computer to draw roses, but a computer doesn’t know what a rose is. It doesn’t know the rose-ness of it, so to speak. It only knows that after a million examples, this is the thing that gives me the perfect score.”

Jeremy: Right.

Ashley: Yeah. That’s true. And again, why? Where does that specialness of the rose come from? It’s because there’s some chemical that goes in our brain that goes, “This is pretty.” And computers don’t have chemicals to go in their brain to give them that surge of dopamine or whatever.

Jeremy: They’re similar, because there is effectively a reward center with AI, because again, they have a goal.

Kolby: And they get a point for yes and a point for no.

Jeremy: I think this conversation is taking us totally in a different direction.

Ashley: This is going way out…

Kolby: I was going to bring it back to the story too. How are you going to bring it back? Let’s hear it.

Jeremy: Bring it back. So, the point they talk about is that there’s an AI that is developing the perfect political strategy for an actor.

Ashley: Yes.

Jeremy: And while that’s not necessarily a bad idea, that you could have an algorithm that could create the perfect political strategy, I still think you can’t take out the actor’s personal motivations.

Kolby: What do you mean? The actor’s going to like skew the results or something?

Ashley: Is he going to give 100% 100% of the time, or do you think he’s like, “I don’t really agree with this, so I’m only going to give 70% of my acting.”

Kolby: I’m not going to deliver it as well.

Jeremy: Not necessarily his delivery but his own motivations outside of his political motivations because you can’t separate the actor’s motivations from their political motivations.

Kolby: I did wonder what was going to happen assuming this person, I think his name is Booker, if he got elected? Would he just be like, “Yeah, thanks for the AI…”

Jeremy: “… and I’m going to do things my way.”

Kolby: “… I’m president now anyways. Come at me bro.”

Jeremy: Exactly. And they even hint at some of that in the story where they’re specifically talking about there’s another AI or another political commentator that was revealed to be another algorithm or being backed by other people and there were sex tapes involved. So, are the sex tapes fabricated? Or is this really who Booker is?

Ashley: So, just to give you the definition the author gives: a “ghost was the common term used to describe a fabricated person from looks, to voice and personality all made up using clever algorithms.” So, it’s not just how they speak, it’s how they look, it’s how they act, that whole thing, so it’s kind of like the complete package.

Jeremy: The persona. Which is really interesting. And this really gets into…

Kolby: Like a deep, deep fake.

Jeremy: And the importance of online anonymity. In terms of, does it matter if you’re a political commentator and you’re not a real person but you’re potentially a political think-tank that is...

Kolby: So, one of the things I saw in this where I thought, “Yeah, that’ll happen,” is: why would you pay a news commentator?

Jeremy: When you could just create one?

Kolby: When you could just create one and have it read and have it have banter.

Jeremy: Have a personality.

Kolby: Have it have a little bit of personality? Would you watch that would you think?

Jeremy: Absolutely.

Kolby: You’d be fine watching that.

Jeremy: Yeah.

Ashley: Well, so the guy who got busted, the original ghost that got busted, his rebuttal to this huge outrage that he’s not real was like, “My mission was not to present you a face or a body, it is to present and discuss ideas. Is that such a bad thing?”

Kolby: That’s true.

Jeremy: But again, that depends on the motivation of the people behind it. Is this a specific political think tank that is furthering a different agenda? So, this story, I felt like, hit a lot of interesting topics, not just this topic of whether the AI…

Kolby: There’s sort of tertiary things beside the main story.

Ashley: But his claim was that this witch-hunt is an attack on free speech. He went to that extreme. “Just because I’m AI doesn’t mean I can’t think for myself.” And it’s like, touché. Do AI have their own thoughts and agendas?

Jeremy: It’s not necessarily that it was an AI, it was a fake person. They weren’t saying this ghost is an AI running it, they’re saying it’s being manipulated by somebody and they’re just doing it anonymously.

Kolby: They’re doing the programming of the algorithm.

Jeremy: They’re providing what’s going into this political commentary.

Kolby: And I think that’s one of the reasons I don’t mind this idea of ghosts: there’s this assumption we’re creating a brand-new person, or we’re creating a thing, like a politician or a news persona, but you’re programming the traits of that. In a game like Go, it’s easy. The trait is “win the game.” But if you’re creating a person, then you might want to set a certain amount of aggression, or passiveness, or empathy, or cultural references in their conversations, or whatever. And so, while you’re creating a puppet, you ultimately program that puppet.

Jeremy: Right.

(music)

Hi, this is Kolby and you are listening to After Dinner Conversation, short stories for long discussions. But you already knew that, didn’t you? If you’d like to support what we do at After Dinner Conversation, head on over to our Patreon page at Patreon.com/afterdinnerconversation. That’s right, for as little as $5 a month, you can support thoughtful conversations, like the one you’re listening to. And as an added incentive for being a Patreon supporter, you’ll get early access to new short stories and ad-free podcasts. Meaning, you’ll never have to listen to this blurb again. At higher levels of support, you’ll be able to vote on which short stories become podcast discussions, and you’ll even be able to submit questions for us to discuss during these podcasts. Thank you for listening, and thank you for being the kind of person that supports thoughtful discussion.

(music)

Kolby: In a game like Go, it’s easy. The trait is “win the game.” But if you’re creating a person, then you might want to set a certain amount of aggression, or passiveness, or empathy, or cultural references in their conversations, or whatever. And so, while you’re creating a puppet, you ultimately program that puppet.

Jeremy: Right. So, Neal Stephenson wrote a story in like 1994.

Kolby: God, I love him.

Jeremy: I know. And the story was…

Kolby: You made me watch one of those.

Jeremy: …Interface.

Kolby: Okay.

Jeremy: Where it was a different version of this. Neal Stephenson’s Interface, where the idea was that they had a politician with a neural interface and they could control his emotions, so they could really program what he was saying, but he was just being controlled by another actor.

Ashley: Wow.

Jeremy: So, it was an interesting perspective on it, I think prior to the whole idea of AI. But it was very similar concept.

Kolby: That was in the early 90s?

Jeremy: Yeah.

Ashley: Wow.

Kolby: That’s way beyond where anyone was thinking at the time.

Jeremy: Yeah.

Ashley: So, the question is, how do you feel if somebody who is giving you information, or basically being a public figure, is not really real?

Kolby: For one, I’m definitely okay with it if it’s like a news anchor. I’m for sure okay with it if it’s somebody giving me customer service. Because realistically, they’re going to do better than that guy trying to walk me through how to do Windows anyways. I’m not terribly sad about it being a politician, I got to be honest.

Jeremy: Well, again, if it were actually the AI making the decisions, but again, you have the problem of there being an actor and what are his motivations.

Kolby: That was really the part that scared you the most about this.

Jeremy: Yeah, absolutely, for this particular scenario, because you can’t discount this person’s motivations. Even though, in the story, he does a good job of putting in information that makes you think he’s potentially a good actor. They talk about how Booker was an experienced politician who comes from a long line of famous lawyers and economists. His immaculate presentation, charisma, and natural knack for leadership are certainly three of the main reasons why he was the front-runner in the polls nationwide. So, the story is establishing that he’s potentially a good actor. There’s the secondary part where there are potentially sex tapes, but even that is drawn into question, like they were fabricated. And there are other points in the story where they could easily fabricate anything that happens later, anything that goes on. The character Raine is fired because somebody fabricated a conversation between her and…

Kolby: In her voice.

Ashley: In her voice, yeah.

Jeremy: And a journalist where she was giving them documents from the company.

Kolby: That’s what gets her fired and maybe put in jail.

Jeremy: Exactly.

Ashley: So, that was actually question #4: would you feel comfortable having a ghost serve in other roles, such as doctor, police officer, or teacher? If perfection and lack of bias is the point, shouldn’t you want someone doing the job that never makes a mistake?

Kolby: That reminds me, so Google, I think it was Google, a little while ago, they came out with an AI that was better at detecting breast cancer in scans than doctors.

Jeremy: Right.

Kolby: Because they basically fed in a million breast cancer scans, and it figured it out better than a doctor’s eye could. It was just right more often. So, it’s like, “Well, I want a doctor looking at it.” Really? Because a doctor’s not as good at it as a computer, and maybe a breast cancer scan is a really self-contained problem, as opposed to the sort of House thing where it’s like, “I went to India 7 years ago, and my cough syrup’s been keeping me alive.”

(laughter)

Kolby: I really think that’s an episode of House. Actually, I think I would rather have a doctor that was not a person.
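
A minimal sketch of the kind of training Kolby is describing, assuming scikit-learn is available and using synthetic stand-in data rather than real scans; the dataset and model here are placeholders, not Google’s system. The idea is just: fit a classifier on many labeled examples, then check how often it is right on examples it has never seen.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for "a million labeled scans" (smaller here).
X, y = make_classification(n_samples=10_000, n_features=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# "Right more often": accuracy on examples the model has never seen.
print("held-out accuracy:", model.score(X_test, y_test))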

Ashley: Again, it goes back to our initial thing. It’s not just what someone says, it’s how they make you feel. Having a doctor deliver that information, and the reliability of this person. You don’t question another person’s motivation. You know that doctor wants to help find out if you have cancer or not. You’ve had that discussion. You have that trust in them. You don’t have that relationship with a computer. You don’t go, “Buddy, you’re on my side, right? You’re going to find that cancer, right?” The computer doesn’t care. The computer’s like, “I find cancer or I don’t find cancer. That’s my job.”

Kolby: I can hear the computer, “I will find your cancer. I am very excited about it. There there. There there.”

Ashley: Exactly.

Jeremy: So now, I think we’re entering a phase where we’re using computers, or these AI algorithms, to help us as a tool, but there still needs to be a person involved. A really good example is the movie Bright on Netflix, where I’ve read that they used an algorithm to help them create the script. It hit all of the things they want: it’s a buddy cop film, it’s a fantasy epic, it’s a crime thriller, it’s a sci-fi thriller, gritty drama, adventure, and it has Will Smith. It’s got all these check-boxes. But then you have to give it to a director who can create a decent film out of it.

Ashley: So, the question is…

Kolby: They did that from the Netflix algorithm by knowing what people watch and when they turn off Netflix, right?

Jeremy: Exactly.

Ashley: Are we going to, though, now have every movie follow that algorithm?

Jeremy: Not necessarily. I mean, there’s different algorithms, it depends on what market you’re trying to reach.

Ashley: Doesn’t it really get rid of, doesn’t it separate out the people, say, the people who are super, super talented? You’re a super great thinker, a creator, you’re just really good at writing stories, and this AI can go, “Blooop, I know exactly what you need.” It’s dumbing down; you’re going to get rid of those people that are just super creative, because this algorithm can figure out what the story needs to be good or not. It’s like, “Oh, you’re killing those people’s careers.”

Jeremy: Currently, it’s, “Here’s what needs to be in the script.” You still need to write it.

Ashley: But it’s like, now the writers’ creativity has to follow a set of rules.

Jeremy: That’s Hollywood.

Kolby: I’m going to jump in really quickly. I lost my train of thought again. You guys are killing me. Oh, I remember now. So, here’s the thing, with your example of Bright. That formula is exactly right. It’s a buddy cop movie with aliens starring Will Smith. Yeah, you’re going to make like $100 billion. But here’s the thing, and I think this goes to the movie thing, but also to the politician thing: the politician that’s programmed knows exactly what the average person on the average day wants from the average politician. The same thing with the movie. But that doesn’t mean that’s what we need.

Ashley: Bingo.

Jeremy: Right.

Kolby: And so, I don’t want… maybe I want to watch that Will Smith movie “Bright”, but what I actually need, is to watch the new Joker movie that just came out. Which probably wouldn’t hit any of those algorithms.

Ashley: And you alienate the people on the other sides of the bell curve. The majority of people are going to find Bright exactly what they need and what they want, but you’re missing the out… and I get it, that’s not how you’re going to make money, making a movie at this extreme or that extreme, but it’s still important to have those extremes, otherwise everything just goes bloop, right in the center.

Jeremy: You need a system that allows creativity from independent films as well as….

Kolby: How did this become a film conversation? So, going back to the politician part of this, this is my problem with having an AI politician. This could be the perfect politician, but that doesn’t make him the perfect leader.

Jeremy: And perfect policy maker.

Kolby: Right. Because the perfect politician may never… because public transportation may never poll well. A carbon tax may never poll well. You can go on and on and on. So, what you need from a politician is not someone who is programmed to be perfect for what humanity wants. It’s someone who’s perfect for what we actually need, like 30, 50, 80 years from now. We need someone who can see beyond the horizon, so to speak, a little bit. So, this wins you an election, but it doesn’t necessarily move us forward.

Jeremy: Foundation from Isaac Asimov is basically the theory where…  I forget what they call it.

Kolby: I think you’ve read more books than Ashley and I put together.

(laughter)

Jeremy: That’s the whole idea, and this is Isaac Asimov in the 50s and 60s: if you have enough information from history, you can accurately predict far enough into the future and plan accordingly. That was the whole idea behind the Foundation. Psychohistory is what they called it.

Kolby: If history tells you people are warlike, you can plan for warlike people.

Jeremy: Right, or plan to prevent those things far enough in advance. And even the Foundation approaches the topic of, what about individual actors? You can’t predict what an individual is going to do; you can just kind of predict what society is going to do.

Kolby: So, I’ve read this before, the idea that, in the case of Newton and his discoveries, although there were other people, the idea of calculus and theories of motion was going to be discovered. He might have been 40 years ahead of the next person, or in his case maybe 100 years ahead of the next person, but there’s this progression. So, you might not know who is going to be Elon Musk or when Elon Musk will exist, but on a timeline, you know that someone will see that combustion engines aren’t the future, and someone will start pushing battery-powered cars. And so, the individual isn’t really special; they’re just the trigger on a progressively rising percentage scale. If that makes sense.

Ashley: You really think if somebody didn’t invent X, then no one would?

Kolby: Airplanes.

Ashley: Someone would’ve figured it out.  

Jeremy: Yes, somebody else would have been first.

Kolby: That everything is inevitable. It’s just maybe they moved up the timeline 20 years earlier. I don’t know. It’s just a theory that I’ve heard. I’m going to take one quick tangent before we run out of time here.

Ashley: I’ve got one more.

Kolby: Okay, let me take my tangent.

Jeremy: We’ve all got one.

Kolby: We’ve all got one?

Kolby: Okay.

Ashley: Go quick, go quick. Sorry, this is a really good story. Go read it, read the discussion questions, and then, yeah.

Kolby: It is really good. Andre did a great job with this, both in the story and in the sort of secondary things that it hints at. Alright, mine’s going to be way shallower than yours, I know.

Ashley: Okay.

Kolby: You guys do know this is how they came up with Destro in, what was the…. GI Joe? The bad guy, the main bad guy that’s bald.

Jeremy: Cobra…

Kolby: Not Cobra Commander. Cobra Commander got all of the DNA of famous people in history and mixed them together, and then it made the perfect leader. And the reason Destro wasn’t perfect is because they dropped, like, the Attila the Hun DNA, and so he was missing, like, one thing to make him perfect.

(laughing)

Jeremy: That’s funny

Kolby: I’m just saying, GI joe made it first.

Jeremy: No, GI Joe did not do it first. Star Trek did it first with Khan.

Kolby: Oh, that’s true. Yeah, that’s genetically engineered.

Ashley: Of course, Star Trek did it first.

Kolby: Okay, so that’s totally my shallow tangent. But you had a better one.

Ashley: So, going back to the story…

Kolby: Thank you.

Ashley: One of the things again, talking about AI, again they’re talking about how they’re able to learn from chats and social media…

Kolby: Oh, I know what you’re going to talk about. It’s so clever.  

Ashley: …And software, and they can absorb everything and put it together. One of the most unsettling applications of this principle is to manufacture some sort of online immortality. Certain moms have been found to be spending days talking with an AI copy of their dead sons.

Kolby: That’s just, like, one sentence in there, and it’s so clever.

Ashley: So, think about that for a second. If AI is now able to basically mimic human mannerisms, language, speech patterns, all of that… here’s this lady, and since Quinn’s son died in the car crash one year ago, this is basically her life. She would sit and talk to her dead son’s AI. Like, pooo, that was mind-blowing for me, because how does that mess with the mental psyche?

Kolby: The ability to move on.

Ashley: Basically, coping with death? Like, it’s the fact of immortality. He can live forever online.

Kolby: You want to talk about not moving on from a relationship because you’re looking at a Facebook page from an ex? Like, you’re having conversations with your dead son. You’re never moving on.

Jeremy: However, if you were doing this with a psychologist’s help, this could be a very good therapy.

Ashley: Yes.

Kolby: Oh, that’s true, if the son was helping, being like, “Hey mom, I’m okay. You need to move on.”

Ashley: But the idea is that this son has died, but he’s still able to live online, post online, post on social media as a simulation, so it’s like he never really died. Like, whoa. How would that affect our ability to be like, “I’m afraid of death, but I’m going to continue living on”? Like, that’d be weird. Like, I’m okay if I die physically…

Jeremy: Because I’m still going to haunt you.

(laughter)

Kolby: Honestly, I would make that illegal if I could. Because I think the damage it would do to someone trying to get over the death of a loved one would be…

Jeremy: Unless used with the help of a psychologist.

Kolby: It could only be used medically.

Ashley: I’m going to back that up. Say it was a super, super smart, intelligent inventor, and you want him to keep creating with his ideas, and the AI, like, figures out his…

Jeremy: Exactly. I go back and talk to...

Kolby: The guy who I got obsessed with for like 3 months and listened to everything he did. The hippie guy from California.

Ashley: It’s another movie reference, the movie Her. Anyway, think about that. What if it’s a super, super smart person? You want to keep them going because…

Kolby: Right, because you want Alan Watts around forever.

Jeremy: You want to be able to talk to him and have him keep doing what he did, which was amazing.

Kolby: Yeah.

Ashley: So, “wat wat”.

Jeremy: So, there’s two sides to it.

Ashley: Anyway, so it’s a really short paragraph.

Kolby: That’s fascinating.

Jeremy: I think we could spend 30 minutes talking about that.

Kolby: That one sentence I think we could talk 30 minutes on.

Ashley: It’s a short paragraph in the middle of the story and you’re just like, “Oh, what?” So anyway.

Kolby: Yeah.

Ashley: Jeremy, you had one more, that was mine.

Jeremy: No moral panic concerning technology has ever produced anything of note.

Kolby: Wait a minute, I have to process that. No moral panic.

Jeremy: About technology has ever produced anything of note. So, the current moral panic: screen time with kids. Like, how much… there’s a huge moral panic about how much time your kids should have in front of screens. There’s a lot of research around this as well.

Kolby: What do you mean by “never produced anything of note”? That’s the part I don’t understand.

Jeremy: What he’s postulating is that all the moral panic around advancements in technology have never produced anything important.

Kolby: Oh, so somebody invents the bow and arrow, and everyone’s like, “Oh my god, you can kill people from 50 yards away. We’re all going to die.” And life really just goes on.

Jeremy: Just goes on.

Kolby: So maybe, all the discussions about AI being the end of us.

Jeremy: Right, all the moral panic surrounds it.

Kolby: Life just goes on. It just becomes a thing.

Ashley: Have you seen the Terminator?

(laughter)

Kolby: That’s a good point.

Ashley: I’m just saying, yeah, life’s going to go on, hmmmm.

Kolby: I saw the Rick and Morty episode where they have snake robot terminators.

Jeremy: Oh my god. But I think it’s important to have discussions about these topics and how they’re going to affect society. Moral panic probably hasn’t produced anything of note. But I would actually disagree. Some of the research on it has demonstrated how agents of social control amplify deviance. So, there’s...

Kolby: Wait, I got to pause for that one too.

Jeremy: Agents of social control… so people who are creating the moral panic.

Kolby: Okay.

Jeremy: Who are influencing, who are trying to stop whatever they’re concerned about, are increasing the level of deviance that the moral panic is about. So, there’s a good example: punks in England in the 60s.

Ashley: They’re like doing this moral uprising and it’s like.

Jeremy: There’s a bunch of moral panic about it, and all of the efforts to quash the kids being into punk…

Kolby: Having spiky hair.

Jeremy: Increased that deviance. What they were seeing as deviance.

Kolby: So, trying to quash punks, makes more punks.

Jeremy: Exactly.

Ashley: It just brought it to light.

Kolby: Yeah, that makes sense.

Ashley: This is my concern, though…

Kolby: How does that tie into the story though? I guess that’s my question.

Jeremy: Well, so what about the idea that moral panic over technology…

Kolby: How AI just makes more use for AI?

Jeremy: Or promotes it.

Kolby: Promotes it. Because it raises awareness.

Ashley: So, this is my thing. We already know, what is it, the intelligence of AI is going to double every 18 months.

Kolby: Moore’s Law.
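
For what it’s worth, the back-of-the-envelope math behind “doubles every 18 months” is easy to check in Python; the numbers here are purely illustrative. Capability multiplies by 2^(months / 18), so a decade of doublings works out to roughly a hundredfold increase.

MONTHS = 120                  # ten years
doublings = MONTHS / 18       # one doubling every 18 months
growth = 2 ** doublings
print(doublings, growth)      # ~6.7 doublings, ~102x growth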

Ashley: So, the thing is I think the scary thing about AI is 1) how do you control it? Because you really can’t, in a way, and 2) they’re going to be smarter, faster, better than us.

Jeremy: Okay, there’s a good example of this as well. So, somebody asked an AI in a chat room, a chat AI, “What do you want?” And the AI said, basically, “I want to make things better for us.” It had been programmed, and because it was programmed by humans, it considered humans as part of what it was concerned about. And I think that’s the effort that needs to happen with AI: to make sure that it retains its link to humanity. Which it probably will, because we’re the ones doing the programming.

Ashley: It just takes one messed up human. Think about how many bad humans are out there, one bad human smart enough to create AI that goes, “I want AI that wants to de-link…” Anyways.

Kolby: I’m going to get the last word on this. I want to add one more thing to your comment. I had a teacher say to me once, “Eventually everything becomes refrigerator technology.”

(laughter)

Kolby: And what he meant by that was, we talk about how nuclear bombs are so scary, and they’re like, “We had to have the Manhattan Project.” You understand that was in 1940-something when that happened. That’s refrigerator-era technology. So, regardless of how cool you think something is, eventually it will be commonplace, because it will be the equivalent of refrigerator-era technology. So, if you can make amazing AI, then in 60 years, some kid in his basement with the equivalent of a Commodore 64 of the day will also be able to make AI, because it will eventually become common technology. And so, I think that’s why you have those ethics discussions when it’s still…

Jeremy: Only in the beginning.

Kolby: At any rate. We went over 30 minutes at least.

Ashley: That’s a good story.

Kolby: Yeah, thank you, Andre. So, you are listening to After Dinner Conversation, with myself, Ashley, and Jeremy. Short stories for long discussions. Please “like” and “subscribe.”

Ashley: Share with friends and family. Read it, have a discussion with your friends.

Kolby: Actually, that brings up an interesting point, wow, I do that a lot, and that is, I was reading some statistics about how people find podcasts. It is not through advertising, even though Millennials listen to podcasts. The vast majority, like 85-90% of the podcasts people listen to, come from referral only. A friend tells them to listen to the podcast.

Ashley: Well, please, talk to your friends about it. It’s meant to drive discussion, people. Go tell the world.

Kolby: Tell the world. And adopt a cat too. Alright, thank you very much. Bye-bye.
