philosophy podcast, ethics podcast Kolby Granville philosophy podcast, ethics podcast Kolby Granville

E31. "Idle Horns" - What do you do when it's punishment to punish?

Named “Top 15 Podcast” for 2020!

STORY SUMMARY: Set in hell, a demon, “Bub” asks the person he is torturing for eternity what he did to end up in hell. Turns out he stole a few bikes. This causes Bub to question his purpose and walk off the job. He climbs to limbo to take a break from it all. Eventually, Hermes comes to fetch him and bring him before Satan, who punishes him for eternity for walking off the job.

DISCUSSION: Wonderful story, both for the questions it asks, and the humor it brings to the situation. Brings up good questions about the “fairness” of eternal punishment for any temporary act. Also, brings up the question of a god who would is all good, all powerful, and all knowing, and yet allows people to be tortured. Nice twist on the concept in that Satan is hurting people, not because he cares about people, but because he knows it hurts God to see his children being hurt. Kolby wonders if walking off the job because of concerns about the morality of his actions should be enough to earn Bub a place in heaven.

BOOK LINK: Download the accompanying short story here.

MAGAZINE: Sign up for our monthly magazine and receive short stories that ask ethical and philosophical questions.

SUPPORT: Support us on Patreon.

FOLLOW: Twitter, Instagram, Facebook

“What do you do when it's punishment to punish?”

We discuss the ethics and choices in the short story "Idle Horns" by Garrett Davis. Subscribe.

Read More
philosophy podcast, ethics podcast Kolby Granville philosophy podcast, ethics podcast Kolby Granville

E30. "Farewell, Odysseus" - Would you get a human as a pet?

Named “Top 15 Podcast” for 2020!

STORY SUMMARY: Set in the future, the Dios are a group of super humans who, because of having the wealth over generations to make mental and physical enhancements, are a different, and superior, race. Humans on earth sometimes agree to go to live on Mars with the Dios as their pets. The narrator is one such person, that is, until he starts to ask too many questions.

DISCUSSION: Great world building. A longer and more complete story than we usually do. Perhaps this is a warning, not about keeping people as pets, but about the long term effects of the wealthy having access to technology that allows them to further separate themselves from the poor over generations. What level of difference in ability makes it okay to keep another species as a pet? Maybe the differences in the story aren’t as great as the Dios want them to seem? Is this slavery? Is it fair that those on earth are so poor this is their only way out? Is that the crime here? Is this different than having a “sugar-daddy” that takes care of you?

BOOK LINK: Download the accompanying short story here.

MAGAZINE: Sign up for our monthly magazine and receive short stories that ask ethical and philosophical questions.

SUPPORT: Support us on Patreon.

FOLLOW: Twitter, Instagram, Facebook

“Would you get a human as a pet?”

We discuss the science fiction short story about a superior race that keeps humans for pets "Farewell, Odysseus" by J.G. Willem. Subscribe.

Read More
philosophy podcast, ethics podcast Kolby Granville philosophy podcast, ethics podcast Kolby Granville

E29. "All My Tomorrows" - How many tomorrows would you give up for a single yesterday?

Named “Top 15 Podcast” for 2020!

STORY SUMMARY: A sixteen year old girl is put in charge of her parents’ shop that sells memories of your past in exchange for a year of your future. The girl loves her job and all the memories she can feel leaking through the files. A sad man comes in and asks to have the memory of his “last good day.” She sells him the day at a discounted price for the remainder of the life he has left. It was, he says, the last day before he learned a terrible secret he never recovered from, one that caused his wife to leave him.

DISCUSSION: The story is so beautifully written. The energy of the young girl working alone, and the visuals of the memories and feelings as she walks by them draw you in so deeply to her joy. However, the story is terribly sad. Would you trade a year of your future for a past day? It means you think one year of your future can’t live up to one day of your past. It means you think your best days are behind you. It also means you are living in the past. Maybe would be worth doing it to see a dead parent again or to relive a past moment and provide forgiveness for past mistakes. The very fact that this is possible might make living for the future harder. It’s perfect that a young girl is working the shop, because she only sees the memories as positive, rather than in relation to the life people have today they compare those days to.

BOOK LINK: Download the accompanying short story here.

MAGAZINE: Sign up for our monthly magazine and receive short stories that ask ethical and philosophical questions.

SUPPORT: Support us on Patreon.

FOLLOW: Twitter, Instagram, Facebook

“How many tomorrows would you give up for a single yesterday?”

We discuss the science fiction short story about trading your future for your past in "All My Tomorrows" by J. Grace Pennington. Subscribe.

Read More
philosophy podcast, ethics podcast Kolby Granville philosophy podcast, ethics podcast Kolby Granville

E28. "The Seven Absent Sins" - Are all sentient species hard-wired to sin?

Named “Top 15 Podcast” for 2020!

STORY SUMMARY: A Jesuit Priest researches other sentient beings in the universe looking for species that are incapable of committing one of the cardinal sins. He finds six different species that, because of their biology, he says, cannot commit one of the sins. He is unable to find a species without the sin of Pride. He questions, but finally confirms, that this strengthens his belief in God.

DISCUSSION: Are there merits to the earth closing itself off from the universe for years in order to maintain its “cultural purity.” Is this a good idea or doomed to cause issues? Are there ways to preserve culture without bans on other cultures? Is sin automatic with choice? As soon as you have a sentient choice, does the fact you can make the wrong choice mean we are capable of sin? The examples for various species are tough, because it’s not like they aren’t capable of sin. The issue is that their biology or environment makes the sin impossible. So, with different choices available to different species, could there be different sins possible? Are the number of sins infinite?

BOOK LINK: Download the accompanying short story here.

MAGAZINE: Sign up for our monthly magazine and receive short stories that ask ethical and philosophical questions.

SUPPORT: Support us on Patreon.

FOLLOW: Twitter, Instagram, Facebook

“Are all sentient species hard-wired to sin?”

We discuss the science fiction short story about the search for a sinless culture in "The Seven Absent Sins" by Nathan Ahlgrim. Subscribe.

Read More
philosophy podcast, ethics podcast Kolby Granville philosophy podcast, ethics podcast Kolby Granville

E27. "Two-Percenters" - Should we make everyone special?

Named “Top 15 Podcast” for 2020!

STORY SUMMARY: Set in the future, 2% of the population have a genetic makeup that allows them to be enhanced. The intelligent are very intelligent, the beautiful, like Greek gods. Because of their enhanced abilities, they run the world. An enhanced “Social” meets up with an enhanced “Rational” to tell him about a newly discovered drug that would allow the other 98% of the world to be able to be enhanced as well, but it would cause the 2% to regress to average, or worse. The Rational takes the vial and releases it into the world. The Social kills herself.

DISCUSSION: If things in this world are so amazing, why are the 98% causing civil unrest? Should the elite naturally be left to lead others? Does being super-human automatically make you super moral? Should the truly exception should lead the masses? if everyone is raised up, we are right back where we were, with people fighting to be on top and not enough to go around. Do we live in a meritocracy today? Doesn’t money allow those at the top to keep their children at the top today? The only ones with no choice are the 2% after the virus is released. Discussion about if we would release the virus.

BOOK LINK: Download the accompanying short story here.

MAGAZINE: Sign up for our monthly magazine and receive short stories that ask ethical and philosophical questions.

SUPPORT: Support us on Patreon.

FOLLOW: Twitter, Instagram, Facebook

“Should we make everyone special?”

Kolby, Jeremy and Ashley discuss the choices in the medical ethics science fiction short story "Two-Percenters" by CJ Erick. Subscribe.

Read More
philosophy podcast, ethics podcast Kolby Granville philosophy podcast, ethics podcast Kolby Granville

E26. "Snitch" - When doing a moral good, do the ends justify the means?

Named “Top 15 Podcast” for 2020!

STORY SUMMARY: A black pastor in New Orleans is trying to get a redevelopment project built for his poor community post Katrina. Things aren’t going well. A white person was robbed and beat up in the area, which scared off the banks from lending. The pastor goes to the local gang and pays them to keep white people safe. He also reaches out to another church group to help with protests. The white developer/partner comes and says he is going to make the project smaller and the church will get less. The pastor goes to the black mayor who also wants a cut of the development money for his re-election campaign. The pastor finally decides he’s had enough and calls the federal government to report corruption in the city.

DISCUSSION: Seems a very realistic portrayal of how things get done. There aren’t any clear good guys in this story, just people with codes that go with their social group. Which comes first, your code that pushes you into a group, or a group you get into that pushes their code on you? Maybe the pastor has finally decided to stop making moral compromises and live by better ethics, maybe not.

BOOK LINK: Download the accompanying short story here.

MAGAZINE: Sign up for our monthly magazine and receive short stories that ask ethical and philosophical questions.

SUPPORT: Support us on Patreon.

FOLLOW: Twitter, Instagram, Facebook

“When doing a moral good, do the ends justify the means?”

Kolby, Jeremy and Ashley discuss the choices in the inner city development short story "Snitch" by Charles Williams. Subscribe.

Read More
ethics podcast, philosophy podcast Kolby Granville ethics podcast, philosophy podcast Kolby Granville

E25. "Pneumadectomy" - Would you remove your soul, to save your life?

Named “Top 15 Podcast” for 2020!

STORY SUMMARY: A young boy heads to the park to play soccer with friends, they tease him and won’t let him play because he had his soul removed. Flash to the discovery of the soul. A doctor has modified a CAT Scan machine and found the soul in the appendix. When the appendix is inflamed, sometimes it is medically caused, sometimes it is because of an injured soul. Regardless of the cause, it can still be removed and the problem is fine. And the person with no soul seems no different. The mother comforts the boy when he gets home. Later, the boy goes to a friends house, his mother tells the boy her son died, because he was having appendix issues, and they refused to have it removed because they didn’t want to remove his soul.

DISCUSSION: You must first accept the premise of the story, that the people in the story found the soul in the appendix. Knowing that, what good is it to have a soul in the story? Seems like everyone stays basically the same. Would you write on a piece of paper selling your soul to another person? If so, you must believe in a soul, regardless of what you say. Otherwise, the paper means nothing and it’s free money. Would you be friends with someone without a soul? Is the soul tied to an afterlife, if so, maybe the mother who let her child die did the right thing. Should the government be allowed to impose a medically necessary procedure that a parent refuses? Courts in the US say parents can refuse treatment for their children, if they are under 12 years old, with a court order.

BOOK LINK: Download the accompanying short story here.

MAGAZINE: Sign up for our monthly magazine and receive short stories that ask ethical and philosophical questions.

SUPPORT: Support us on Patreon.

“Would you remove your soul, to save your life?”

Kolby, Ashley, Jeremy and Sarah discuss the choices in the medical ethics short story "Pneumadectomy" by Harris Coverley. Subscribe.

Read More
ethics podcast, philosophy podcast Kolby Granville ethics podcast, philosophy podcast Kolby Granville

E24. "Choose" - What if death is the only option?

Named “Top 15 Podcast” for 2020!

STORY SUMMARY: A woman wakes up strapped to a lab table. An emotionless doctor asks her to choose what she would do for the trolley problem. She is then graphically shown the results of her choice. A new choice, do you push people out of an over-capacity life raft to save the others? She is again graphically show the result of her choice. This goes on for 1000’s of scenarios until the woman is totally exhausted from watching death. Only then does she realize she is being punished, 900 years after a choice she made, to kill children in order to find the cure to a disease.

DISCUSSION: Loads of utilitarian questions in this story, just one after another. Scenario makes them fit in the story very organically. Would a person really get rattled from watching all that death, or get desensitized? In Greek plays, it was called catharsis. It’s hard to know, is the woman being punished for her past acts, or re-educated, or being used as a deterrent to others? Should a person be punished after they have died? Should they be punished longer than their lifetime, or more than a single death?

BOOK LINK: Download the accompanying short story here.

MAGAZINE: Sign up for our monthly magazine and receive short stories that ask ethical and philosophical questions.

SUPPORT: Support us on Patreon.

FOLLOW: Twitter, Instagram, Facebook

“What if death is the only option?”

Kolby, Ashley, Jeremy and Sarah discuss the ethics and choices in the suspenseful short story "Choose" by David Whitaker. Subscribe.

Read More
ethics podcast, philosophy podcast Kolby Granville ethics podcast, philosophy podcast Kolby Granville

E23. "Prevention" - Would you turn on your son, to save his school?

Named “Top 15 Podcast” for 2020!

STORY SUMMARY: A single mother and her son have coffee before school. His car is in the shop, so she drives him to high school. He calls his mom later to say he left his laptop in the car. She decides to go through his laptop, and finds out his son and two friends are planning on shooting up the school in just days. She searches his room, and finds guns and drugs. The mother is worried about how this will effect her college daughter, and herself, if the shooting happens. The next day she spikes her son’s morning coffee with drugs and waits for him to die in his room of an overdose. She calls the police and ambulance. She disposes of the guns and laptop on the outskirts of town. The police suspect nothing and her son’s death is deemed a suicide by drug overdose.

DISCUSSION: The mother is a psychopath, and her priorities are all wrong. Her first concern is her daughter, and she treats her son like a stranger. She is emotionless in killing her son. This also hints that the son’s issues might be genetic from the mother. This is wrong behavior. She didn’t have to call the police, she could have taken him to the father, or for treatment. There are still two remaining kids planning to shoot up the school, and she doesn’t even tell the school about them. This is a story as much about the mother’s issues as about a school shooting. However, school shootings are now just the world we live in as the “new normal.”

BOOK LINK: Download the accompanying short story here.

MAGAZINE: Sign up for our monthly magazine and receive short stories that ask ethical and philosophical questions.

SUPPORT: Support us on Patreon.

FOLLOW: Twitter, Instagram, Facebook

“Would you turn on your son, to save his school?”

Kolby, Jeremy, and Ashley discuss the ethics and choices in the suspenseful short story "Prevention" by Margaret Karmazin. Subscribe.

Read More
philosophy podcast, ethics podcast Kolby Granville philosophy podcast, ethics podcast Kolby Granville

E22. "An Infinite Game" - Is everyone selfish, when death is on the line?

Named “Top 15 Podcast” for 2020!

STORY SUMMARY: Four prisoners are made to draw straws for the order they stand in a row. Their prison guard plans to push his bayonet into the first person and see how far back it goes in the line of men. The first man in line panics, runs, and is shot. The narrator talks to the man in front of him and tries to convince him not to run so he might slow down the thrust. The heavy set man in the back thinks he is safe, but the guard changes his mind and stabs him instead. In the end, only the narrator survives.

DISCUSSION: Story focuses on morality, game theory, value theory, and infinite game theory. Should the heavy set man have volunteered to be first in line to slow down the blade for everyone else? Is it selfish to run and, thus, cause those behind you to be more likely to die? Should the four men have simply tried to rush the guard? Does everyone find God when they are about to die? Game theory seems to only work when dealing with large numbers, not individuals. Value theory seems to state that, at the end of the day, nothing is worth more than your own life. This is an infinite, not a finite, game. The guard seems like an arm-chair philosopher.

BOOK LINK: Download the accompanying short story here.

MAGAZINE: Sign up for our monthly magazine and receive short stories that ask ethical and philosophical questions.

SUPPORT: Support us on Patreon.

FOLLOW: Twitter, Instagram, Facebook

“Is everyone selfish, when death is on the line?”

Kolby, Jeremy, and Ashley discuss the ethics and choices in the game theory short story "An Infinite Game" by Dean Gessie. Subscribe.

Read More
philosophy podcast, ethics podcast Kolby Granville philosophy podcast, ethics podcast Kolby Granville

E21. "Prohibition" - Can you blame an addict for not following the law?

Named “Top 15 Podcast” for 2020!

STORY SUMMARY: Set in the future, an addict takes a cab to an isolated part of town. He goes to a private, illegal club to break the law; he orders meat. The club is raided by the police, who kill a patron during their interrogation of her. The meat-eating addict sneaks away, knowing he will break the law again.

DISCUSSION: Story with so many layers. First, the contrast between the “humane” society that has banned meat eating, but the brutality of the individual police. Do more serious laws allow for more brutal policing? Also, is this man protesting, or is he simply an addict? Seems to be just be an addict. Are there natural rights? If so, is eating meat one of those natural rights? Does it matter if the reason the law was passed was to protect animals, or to prevent climate change? If a law is passed you disagree with, does it change your behavior? Do you leave the party where they are eating meat? Does it depend on the level of the crime they are committing?

BOOK LINK: Download the accompanying short story here.

MAGAZINE: Sign up for our monthly magazine and receive short stories that ask ethical and philosophical questions.

SUPPORT: Support us on Patreon.

FOLLOW: Twitter, Instagram, Facebook

“Can you blame an addict for not following the law?”

Kolby, Jeremy, and Ashley discuss the ethics and choices in the short story "Prohibition" by David Edward Rose. Subscribe.

Read More
philosophy podcast, ethics podcast, Children Kolby Granville philosophy podcast, ethics podcast, Children Kolby Granville

E20. "How The Cockroach Lost Its Voice" - When cockroaches could talk, and humans were still unhappy.

Named “Top 15 Podcast” for 2020!

STORY SUMMARY: An older and younger (talking) cockroach climb to the top of the highest thing, the refrigerator, to overlook their world. The older roach tells the child that the humans he sees can talk, and also have a 3rd eye inside of them that allows them to imagine the future and remember the past, and this is what makes them unhappy all the time. An angel moth comes down and takes away the roaches ability to speak forever.

DISCUSSION: A children’s story, but one with a good lesson, about having the ability to think about the future, but not let it trouble you or dwell on it. Do other animals have this “3rd eye”? Maybe dogs or others do, to some degree. It has made humans successful because we can remember errors, and plan for future problems. It’s a trade off, but a good one. The key is to not worry so much.

BOOK LINK: Download the accompanying short story here.

MAGAZINE: Sign up for our monthly magazine and receive short stories that ask ethical and philosophical questions.

SUPPORT: Support us on Patreon.

FOLLOW: Twitter, Instagram, Facebook

“When cockroaches could talk, and humans were still unhappy.”

Kolby, Jeremy, and Ashley discuss the ethics and choices in the children’s short story "How The Cockroach Lost Its Voice" by Samuel Reifler. Subscribe.

Read More
ethics podcast, philosophy podcast Kolby Granville ethics podcast, philosophy podcast Kolby Granville

E19. "The Orphan's Dilemma" - Is getting a future worth forgetting the past?

Named “Top 15 Short Story Podcast” for 2020!

STORY SUMMARY: The story takes place in the thoughts of a 16 year old boy waiting to have his memory erased for his adoption. He thinks about going on his first date, and about being teased by others. He wonders about the family that is adopting him and having new memories implanted in him. It’s finally his turn, he has decided if he is getting his memory replaced, and he heads in to the room to tell them his decision.

DISCUSSION: Wonderful story about a “hero’s journey” of death and rebirth. Brings up good questions about how our pain, as well as our joy, creates our personality. What kind of family would want a kid only on the condition of cleaning his memories? But, isn’t the goal a successful adoption, and maybe having a clean slate would make that more possible. Don’t people avoid adopting dogs with “issues?” Is there a screening process? Should a family be able to select a child who isn’t “broken?” What if a free college education is included? How much are memories worth?

BOOK LINK: Download the accompanying short story here.

MAGAZINE: Sign up for our monthly magazine and receive short stories that ask ethical and philosophical questions.

SUPPORT: Support us on Patreon.

FOLLOW: Twitter, Instagram, Facebook

“Is getting a future worth forgetting the past?”

Kolby, Jeremy, and Ashley discuss the ethics and choices in the short story "The Orphan’s Dilemma" by Christopher Burrow. Subscribe.

Read More
ethics podcast, philosophy podcast Kolby Granville ethics podcast, philosophy podcast Kolby Granville

E18. "The Book Of Approved Words" - Are there words so horrible they shouldn’t even exist?

Named “Top 15 Podcast” for 2020!

STORY SUMMARY: An “approved “government writer gets in trouble by the governing board for writing honest movie reviews. His run-away brother comes to his house to scan his contraband collection of books and invites him to join his rebellion by uploading an earlier edition of the book of approved words so the population can see the words that are missing from the current edition.

DISCUSSION: Interesting modern twist on the typical 1984 banned books idea. In this case, the words being banned are ones that might offend, or exclude, the general population. Brings up an interesting question, if you remove words for higher levels of anger and frustration from the vocabulary, does that have the effect of pacifying the thoughts of the population. Do ideas exist without the words to express them?

BOOK LINK: Download the accompanying short story here.

MAGAZINE: Sign up for our monthly magazine and receive short stories that ask ethical and philosophical questions.

SUPPORT: Support us on Patreon.

FOLLOW: Twitter, Instagram, Facebook

“Are there words so horrible they shouldn’t even exist?”

Kolby, Jeremy, and Ashley discuss the ethics and choices in the dystopian short story "The Book Of Approved Words" by W.M. Pienton. Subscribe.

Read More
ethics podcast, philosophy podcast Kolby Granville ethics podcast, philosophy podcast Kolby Granville

E17. "A Change Of Verbs" - What if you just said what you meant?

Named “Top 15 Podcast” for 2020!

STORY SUMMARY: The main character is a passive University Professor with a nagging wife. For some reason he decides today will be different and he goes through the entire day saying, and doing, what he actually feels. The day goes amazingly well, with classes, with colleagues, and he decides this will start a new chapter in his life.

DISCUSSION: Mixed reactions on this story. On the one hand, isn’t it his own fault for not always saying what he meant and letting people walk over him. Did this unhappiness come on slowly? He doesn’t treat his wife very well, and, perhaps, too much of his attraction is only physical. Is this just an extrovert writer telling introverts, if you just were more like me, you’d be a happier? Isn’t this all on a scale, you have to balance saying what you mean, with being respectful of others.

BOOK LINK: Download the accompanying short story here.

MAGAZINE: Sign up for our monthly magazine and receive short stories that ask ethical and philosophical questions.

SUPPORT: Support us on Patreon.

FOLLOW: Twitter, Instagram, Facebook

“What if you just said what you meant?”

Kolby, Jeremy, and Ashley discuss the choices in the short story "A Change of Verbs" by Tom Teti. Subscribe.

Read More
ethics podcast, philosophy podcast Kolby Granville ethics podcast, philosophy podcast Kolby Granville

E16. "Abrama's End Game" - If god told you she was ending your world, would you fight back?

Named “Top 15 Podcast” for 2020!

STORY SUMMARY: The female god of a fantasy realm tells the inhabitants she is actually a graduate student researching AI in an MMORPG, and that she created them to see how they would change. However, the game developer is discontinuing the game because of the illegal gold farming being done in game for money laundering. The leader of the AI fight back by working with real in-game players to trade them in game gold for real world tools to fight back.

DISCUSSION: How easy or hard is it for people to accept the concept in this story, of in-game currency having real world value, and AI that is this advanced? What would it be like to meet your god, and know that you are simply in their video game? Given “Moore’s Law” would it be better to shut down the game now, before it becomes “too smart” to ever shut down?

BOOK LINK: Download the accompanying short story here.

MAGAZINE: Sign up for our monthly magazine and receive short stories that ask ethical and philosophical questions.

SUPPORT: Support us on Patreon.

FOLLOW: Twitter, Instagram, Facebook

“If god told you she was ending your world, would you fight back?”

Kolby, Jeremy, and Ashley discuss the ethics and choices in the fantasy science fiction short story "Abrama’s End Game" by David Shultz. Subscribe.

Transcript

Kolby:

Hi, you're listening to After Dinner Conversation, short stories for long discussions. What that means is we get short stories, we select those short stories, and then we discuss them, specifically about the ethics and the morality of the choices, the characters, and the situations put us in. Why did you do this? What makes you do this? What makes us good people? What's the nature of truth, goodness, all of that sort of stuff. And hopefully we're all better, smarter people for it and learn a little bit about why we think the way we think. So thank you for listening.

Kolby:

Hi, and welcome back once again to After Dinner Conversation, short stories for long discussions, which is a fancy way of saying we get stories that have some sort of moral ethical question. We publish them and then some of them that we really like, we do a podcast, and we discuss them. And our hope is, is that you'll read the story and that you'll have the same kind of discussions with your friends about what would I do? How would it work? What's the nature of goodness or truth? Or what does it mean to be alive? Or all the things that we discuss. A lot of it's been AI stuff lately, computer stuff, except last week, which was about cannibalism. We all decided we were cannibals at heart.

Ashley:

Read the story. It'll make sense.

Kolby:

Because all meat eating is non consensual.

Jeremy:

#Ieatpeople.

Kolby:

#Ieatpeople. Yeah.

Ashley:

Start a movement.

Kolby:

Start the movement. I am your co-host, Kolby. This is co-host Ashley wearing the-

Ashley:

Hello!

Kolby:

She's wearing the shirt today.

Ashley:

For those Watching on YouTube, you can actually see the shirt. It's basically Jeremy's hashtag. So I was researching this...

Kolby:

Yeah, he says that in every episode. He's required to [crosstalk 00:01:49].

Ashley:

He's really good. He does a lot of research. He reads a lot.

Kolby:

I feel like if there was a shirt for me, it would say, I don't understand.

Ashley:

No.

Kolby:

Or, let me give you a hypothetical. Maybe that's what mine would be. I don't know.

Ashley:

So according to...

Kolby:

Yeah. Yeah. According to... And of course we're also co-hosting with Jeremy who does our audio and does a great job and also researches stuff unlike the other rest of us who just wing it.

Ashley:

We're wingers.

Kolby:

And we are once again sponsored by and hosted by a La Gattara, the cat cafe in Tempe, Arizona that have cats you can come visit or you can adopt. They've adopted over 500, probably 600 by the time you watch this, cats and they've always got new cats. And they run the gambit from like older sleepy cats to young, fun, spunky kittens, and they're just rockstars. This one is pretty awesome as well. And again, if you're having a good time doing this, like subscribe, and more importantly, tell your friends. The vast majority of podcasts that people try out are not because of an advertisement, but because a friend recommended it, and it was like, yo dude, did you hear about blah, blah, blah? Ask me another or whatever, and they do it. So suggest it to a friend and also, last thing. We have a book that's come out After Dinner Conversation, Season One, which is 25 of our best short stories that ask ethical questions with all of the discussion questions at the end of each story.

Kolby:

There are even children's stories in there. You can buy that on Amazon as an ebook or as a print book, which is super cool. I'm super jazzed about that.

Ashley:

If you have a great story, submit it. Send it on in.

Kolby:

Yeah.

Ashley:

If you love reading stories and want to help us filter it to get into our genre of ethical dilemmas, be a reader.

Kolby:

Yeah, that'd be great.

Ashley:

Tons of stories, so.

Kolby:

We send it out to readers as our initial screeners. Honestly, for about every 10 stories we get, about two or three are the kind of thing we do, the kind of ethical questions. And of those two or three, maybe one is actually-

Ashley:

Give people hope.

Kolby:

No, it's-

Ashley:

You're gonna get submitted. You're gonna be on the podcast.

Kolby:

You're gonna be different, though.

Ashley:

You're gonna be selected.

Kolby:

The point is, don't just send us writing. Send us writing that's the kind of thing that we read and publish. That helps us ,and it also keeps the readers from wasting a bunch of time.

Kolby:

Okay, so our last story is Abrams. Is that...?

Jeremy:

Abrama.

Kolby:

Abrahma, which apparently is a version of Abraham. Abrama's End Game. I don't actually, I forgot... written by... this is a second story by this guy who did it. David Schultz, who did the one that was here with Jessica, that blew us away, Rainbow People of the Glittering Glade-

Jeremy:

Yes.

Kolby:

... was, up until that point, it was the best story we had read or discussed. This is equally good.

Ashley:

Mm-hmm (affirmative).

Kolby:

Yeah, solid. I don't know who is discussing this one? Is it me? Maybe. I haven't discussed one in awhile.

Ashley:

You discuss it. You're taking it.

Kolby:

I'll discuss it. So Abrama's... I'm gonna to keep mispronouncing that. Abrama's End Game is about a massively multi-player online game. And so it starts off a little bit weird in that there's a character like an elf or somebody walking through a village and you're like, oh, it's a fantasy story. There's elves and dwarves and stuff. And then you realize that some of the elves and dwarves are of another world and some are not. But they all look the same. And what you then realize is, it jumps back, and you realize it's actually a character in an online game like... What was the one that you played forever you got lost in?

Jeremy:

World of Warcraft.

Kolby:

World of Warcraft. Yeah. But the character actually knows it's a character in the game. And other people are leaving and coming and going, and some of them are just avatars that are just idiot bots. And the thing that's so cool about it, is the reason that this one particular bot in the game is so smart is because the game developers opened up to the code to allow researchers to experiment with AI by creating individual AI bots in the game to learn and write papers and learn about how AI would react in a real world. And in this case-

Ashley:

Simulation.

Kolby:

... in a simulated world.

Ashley:

Yeah.

Kolby:

Right. 'Cause it's a constructed world where you can try out theory-

Jeremy:

Right.

Kolby:

... of how AI interacts with real people.

Jeremy:

Right. And I feel like this is the next step.

Kolby:

Oh, I think this is in [crosstalk 00:06:08]near future.

Jeremy:

Like with World of Warcraft, researchers were analyzing the data.

Kolby:

Yeah.

Jeremy:

Especially when there were events that acted like virus outbreaks.

Kolby:

Yeah. The virus outbreak, the one that killed everybody in World of Warcraft. Yeah.

Jeremy:

Killed all the NPCs.

Kolby:

Yeah.

Jeremy:

That was pretty cool.

Kolby:

Because it spread just like a virus. Yeah. Yeah. And so ultimately what happens is, because people are gold farming and it's in violation of some cryptocurrency law that's been passed, the government is going to shut down the game. And the-

Ashley:

Well, they see it as a threat, because the game has a currency called GP or gold points. This can be exchanged anonymously. It's a free market, totally anonymous, no way of tracking. They can trade with US dollars at an exchange rate of a thousand gold points or GPS per $7. So this exchange is happening and now the game itself, Land of Legends, has a $2 billion GDP. And they're like, we have to regulate this.

Kolby:

And of course the implication is... So let's say I wanted to do something illegal, buy drugs or deal in whatever horrible things, I could go into the game. With real currency, I could buy the in-game currency. I could then trade that in-game currency with another person-

Jeremy:

For real world currency.

Kolby:

... for real world currency. They'd cash it out again, and they would then give me the thing I wanted.

Jeremy:

It's money laundering.

Kolby:

And so it would be a perfect way to either buy or sell illegal things or to do money laundering because it's untrackable money, right?

Ashley:

Legends wasn't intended to function as a perfect digital black market, guaranteeing anonymity and a stable exchange rate and encrypted transaction. And it's super popular.

Kolby:

Right.

Ashley:

It made the system illegal. It's illegal, technically, but-

Kolby:

There's no way to track the sales. And so, the government steps in to shut it down because it's being used for black market stuff, and it's illegal, and there's no way to track sales.

Ashley:

It's like, how do we tax people? Well, you can't.

Kolby:

Right. And so the AI finds this out and wants to defend its realm, defend itself from having its servers turned off. And so the thing... and there's so many clever things about this story. The super clever thing the AI does is, they use their currency. The AI uses it's in-game currency to interact with real people and ask the real people to do real things in the real world.

Ashley:

They ask these rebels-

Kolby:

Yeah, like a hacker group.

Ashley:

... to get some dirt so if they try to shut us down, we can blackmail them.

Kolby:

Right. And so-

Jeremy:

That's a very clever way they do it.

Kolby:

Right. And so they create basically a dead man switch, where the real people do work in trade for the fake gold because they can cash it out. And then the bots, the AI in the game, now has essentially government secrets, and they ping the server every hour. And if the pinging ever stops, everything gets decrypt-

Jeremy:

Decrypted and released.

Kolby:

... and all the government's secrets are out there, right? And so they've essentially made it now where the government can't shut off the server. So they've basically defended their realm by interacting with real people and making them do work. And then the government tries to step in, and there's a battle in the third act. The government is not successful and there's essentially a stalemate where the government can no longer shut down these servers.

Jeremy:

And so they go in and sign a peace agreement.

Kolby:

Yeah, a virtual peace agreement, and then the story is over. And I will tell you, man, I think five to seven years. I think this is super near future.

Ashley:

This story is so complex because not only do you have the players in the game, like this is their world. This is their life. They don't know that they're basically in a video game. You have the researcher who's implanting these people and doing research studies on it. You've got the government. It's like, we've got to regulate this. We've got to tax them somehow. You've got rebels who are like, this is a way for us to make more money using it for bad purposes. And then you've got people who just want to play their video game. It sounds harmless. It's a video game. How do you tax... what'd they say? They wanted to tax a character.

Jeremy:

Yeah, the value of character.

Ashley:

How much should the US government tax imaginary creatures? And so it's like what? It's a world within a world, and they successfully defend themselves. It's just so interesting.

Kolby:

So, Jeremy, you played a lot of... 'cause I wrote a law school paper on EVE Online and about contracts and partnership agreements that are de facto partnership agreements in online worlds. You played a lot more than I did. Not just EVE Online but also World of Warcraft.

Jeremy:

Just those two mainly.

Kolby:

Yeah. What was your thought on reading this as a person who actually has put hundreds of hours, at least [crosstalk 00:00:11:04].

Jeremy:

Yeah, I mean, from a technical point of view, it seems like the idea... I mean, we have... EVE Online is a good example where there is an official exchange rate.

Kolby:

And they actually permit money to go in and out of their currency in the game.

Jeremy:

Absolutely.

Kolby:

Yeah.

Jeremy:

So there's an official exchange rate.

Kolby:

I remember you cost me money when you got my ship blown up. It cost me like 10 bucks. I had to go back and buy more currency to buy a new ship.

Jeremy:

Yeah.

Ashley:

Blowing up ships.

Kolby:

No, he did. He honestly he did that. I used my real US money to buy in-game currency, I think it's called ISK. I then bought a ship and it was like, all right, Jeremy, let's go raiding. And he made some silly mistake. He popped out of a portal too early, and I popped out too late or whatever. And I got blown up instantly.

Ashley:

Oh, no.

Kolby:

And my first thought was like, dude, you just cost me 10 bucks.

Ashley:

So,. okay. So I had a roommate who... to get through grad school, he was a big time online gamer, like would play for 48 hours straight. And he would build up these characters and sell them. And that's how he paid his way through grad school. It is literally a trade. He has a skillset. Why not? There is a supply and demand. People want to have these higher end characters because they love the game and want to build up more and blah blah blah. And there's a service-

Jeremy:

It's something for World of Warcraft they've had to address and there's a whole system now that helps you build characters faster, or you come in at a higher level with the newer versions.

Ashley:

But that totally just blew away all their hardcore people that made a career out of building characters. That was the way to utilize it.

Jeremy:

And they moved to something else.

Ashley:

Yeah.

Kolby:

One of the questions in here is about how like... So you and I, and I think Ashley, all think this is not farfetched.

Jeremy:

Right. And some of it's gonna happen sooner than other parts of it.

Kolby:

I think the AI-

Jeremy:

It's going to take the longest.

Kolby:

Yeah. But I think using online games to test AI-

Jeremy:

Absolutely.

Kolby:

Even rudimentary, I think that's not far away at all if it's not already happening already. The question I'm curious to hear your opinion on, what do you think about the sort of bias that... Since you and I have played these games and we understand them... So if you were to go and talk to someone else, someone who hasn't played online games, why do you think there's this idea of, oh, that's just stupid. People pay real money for fake gold. You think that they can defend themselves? How do you think that comes about that our... We bring ourselves to this. I think there's a lot of people who would think this is just hooey, which is probably the word they would use if they thought that.

Jeremy:

Right. The idea of going in and exchanging money for a fake currency.

Kolby:

For just fakeness. And it's all fake. Everything's fake. The people are fake.

Jeremy:

Then what about casinos where you go in and buy chips?

Ashley:

Yep. It's a video game.

Kolby:

Man, you totally just ended my question right there. Wow. It's a good thing these are headset mics and now [crosstalk 00:13:51] drop. You would've just dropped that mic. Yeah.

Ashley:

Just drop a cat. Just kidding. They land on all fours.

Kolby:

'Cause they are. They're just little blue chips that we just decide have value, which is the same thing with paper money.

Jeremy:

Exactly.

Kolby:

It's just little green pieces of paper we decide have value.

Ashley:

Yeah.

Jeremy:

So online courtesy is basically the same thing.

Ashley:

One of the people in the story, her reason for defending using online currency is that part of her research was that the software agents, the people that she created... They're equal participants and their behavior can be made to approximate human participants. It's kind of an economic Turing test in a way, conducted through virtual marketing activity. So it's a way of them testing human behavior. Like how much research is she getting out of this? I'm like, that's super valuable. Is there another way to do that, or is there not another way to do that?

Kolby:

I mean, here's the thing, right. You can do it with an AI bot in a game because you don't have to worry about creating a body.

Ashley:

Yeah.

Kolby:

Right. You don't have to create the android. You can just create the software and so it's a much easier way, and you can reiterate faster.

Ashley:

And she said she's done thousands of these characters, and then she's deactivated some of them as a failed product. For me, that was the most interesting.

Kolby:

The deactivating?

Ashley:

No. The fact that she's running this experiment and then towards the end she goes to Abrama and is like, I am your creator. I can talk to her. And I'm just like, wow. It's like this real living being and here's like her God. Like this is my child. This is my child, and we'll figure out something together. I just thought that was so awesome. I know. That's me geeking out right now.

Kolby:

No, I totally get that geeking out.

Ashley:

I thought it was cool. It's like I created this life and then look at them go.

Kolby:

And she felt an obligation to tell them like, hey, your end is coming.

Ashley:

Exactly.

Kolby:

Like if you're gonna do heroin, do it now.

Ashley:

No, no, but it's this-

Kolby:

It's always about heroin.

Ashley:

Her intentions for me were just so pure, and she understands there's by-product badness of it, but again, it's this, does the good outweigh the bad? Like, hey, all the good of the research-

Kolby:

So let me ask you this, Ashley. Let's say this really happened. So let's say that somebody came to you and was like, hey, just so you know, this is all a game. You were an experiment of mine. We're shutting it down pretty soon. Would-

Kolby:

Hi, this is Kolby and you are listening to After Dinner Conversation, short stories for long discussions. But you already knew that, didn't you? If you'd like to support what we do at After Dinner Conversation, head on over to our Patreon Page at patreon.com/afterdinnerconversation. That's right. For as little as $5 a month, you can support thoughtful conversations like the one you're listening to and as an added incentive for being a Patreon supporter, you'll get early access to new short stories and ad-free podcasts, meaning you'll never have to listen to this blurb again. At higher levels of support, you'll be able to vote on which short stories become podcast discussions, and you'll even be able to submit questions for us to discuss during these podcasts. Thank you for listening and thank you for being the kind of person that supports thoughtful discussion.

Kolby:

Let's say this really happened. So let's say that somebody came to you and was like, hey, just so you know, this is all a game. You were an experiment of mine. We're shutting it down pretty soon. One, would you believe them?

Ashley:

No.

Kolby:

Okay, so there's nothing they would say-

Ashley:

It would be like putting on a pair of glasses.

Jeremy:

It'd be a very Matrix moment.

Ashley:

And I'd be like, wait, what? Hold on. Let me put my glasses on and off, on and off. I would probably act just like they did. We need to assemble. We need to educate. We need to form... We need to-

Kolby:

Oddly enough, that's-

Ashley:

... protect-

Kolby:

...one of my only faults in this story is the thing that you're talking about, is how quickly the main character believes.

Ashley:

Yeah.

Jeremy:

Right.

Ashley:

Well, she knew. She knew all along, though. She knew there's outsiders and then there's us.

Kolby:

Oh, that's right because she could-

Ashley:

She knew.

Kolby:

She could somehow see the difference in behavior.

Ashley:

Because she's a higher-

Kolby:

That's right.

Ashley:

... end algorithm.

Jeremy:

Well, and learned their language and-

Ashley:

Yeah.

Kolby:

That's a great point.

Ashley:

So it wasn't like she was just like... but the way that all the other characters fully believed her... granted, she did have this higher authority. She was a queen. People believed what she said, that sort of a thing.

Kolby:

So would anything changed then, assuming somebody told you that? Would you change your behavior at all, knowing that you weren't real but you felt real?

Ashley:

The thing is, wasn't the concept, if you're one of these characters and you died off, you were dead? or would they repopulate? I guess that's-

Jeremy:

I don't think it was really explained very well.

Kolby:

In real games, you would repopulate, but in this game I think you probably are just dead.

Jeremy:

You don't respond.

Kolby:

I think only the real people respond.

Ashley:

Yeah.

Kolby:

But the game players don't.

Ashley:

So I would go about being like, no, my life is-

Kolby:

It's still my life.

Ashley:

I have meaning. I don't want to die.

Kolby:

Okay. And that wouldn't change your behavior?

Ashley:

I would be much more suspicious of the outsiders and try to grab more information of what's happening in the outside world. Who's ruling me? Who's this person?

Kolby:

Elon Musk has a whole thing where he thinks... He's talked about it. I think it's a 50-50 chance that we're just someone else's simulation.

Jeremy:

It's greater than 50-50 is what he says.

Kolby:

Oh, really?

Jeremy:

Right. Well, if you look at it in a statistical sense, it's a much higher percentage chance that we're living in a-

Kolby:

Someone else's created world.

Jeremy:

Yes.

Kolby:

That's depressing. What did you think-

Ashley:

So really? Oh yeah, he thinks we're all-

Kolby:

He thinks there a more than 50% chance that we're in someone else's simulation.

Ashley:

Interesting.

Kolby:

What did you think of the Moore's law question? The question was, do you think Moore's law should have... Assuming it applies to AI, do you think that should concern us? That if we can make something as smart as us... so let's say in the case of Abrama, she's relatively... I think it's a woman, right?

Ashley:

Yeah.

Jeremy:

Yeah.

Kolby:

She's relatively smart in this. She's able to defend her own place, which means in 18 more months, she's going to be twice as smart.

Jeremy:

Right. I think it goes back to, again, something we've discussed previously is, as AI are developed, one of the driving factors of that should be a continued connection to the humanity which creates them and a reliable set of guidelines for what their behavior is.

Ashley:

Yeah.

Kolby:

Yeah.

Ashley:

Well that was the thing in one of the stories we talked... Yeah, I Think Therefore I Am. So the other question from before was, 'cause I can respond back in Chinese and all that other stuff. But anyway, I'm losing my thought, but continue.

Jeremy:

And also this gets to a motivation sense. At the end of the story, they're given their land through this treaty.

Ashley:

Yeah, their virtual land. Right.

Jeremy:

They've signed the covenant with God and Abraham now has a world or a land-

Kolby:

I didn't even get that, but you're exactly right.

Jeremy:

Exactly.

Kolby:

They exactly sign the covenant with God. Oh, my God, how did I miss that?

Jeremy:

So-

Kolby:

And the guy's name is... ah, yeah. No, I'm sorry, David Schultz.

Ashley:

Oh, my God, all these hints.

Kolby:

I totally should've gotten that. Oh, my God.

Jeremy:

But if you look at motivations... so what are their motivations? 'Cause from a human perspective, we have a motivation for... our basic motivations-

Kolby:

Eat, have sex.

Jeremy:

Right, so-

Ashley:

Drink.

Jeremy:

There's an AI presumably lives forever, can't reproduce. What are your motivations?

Kolby:

Yeah, it's playing the long game. So that was the other thing I thought about with this story is, the story assumes that all that they wanted was to be allowed to exist.

Jeremy:

Right.

Kolby:

But it's like now they know that they can work in exchange with real people to manipulate the outside world. Why wouldn't they also want to rule?

Jeremy:

Well, and you get that very much at the end, too.

Kolby:

Right. For now.

Jeremy:

We've signed this treaty. Continue gathering dirt. We need more information-

Ashley:

Yes. We need more leverage 'cause they're going to figure out a way to-

Kolby:

So I'm of the opinion... Honestly, I think the government should have just taken the hit and shut them down. Because you're talking about-

Jeremy:

Right. 'Cause there is the potential-

Kolby:

You're talking about something that lives on an infinite timeframe and will get infinitely smarter. You will eventually be working for it.

Ashley:

I have a way to solve it. You make the games so unappealing. You tax the people to play the game. So you can't control the currency, but if you want to play the game, now you have to pay $100 a day.

Kolby:

We're just going to Nerf all of your gear.

Ashley:

Make the game basically obsolete. Make the game so no one wants to play it.

Jeremy:

But you still can't shut it down. Oh, so then they don't have any access to the outside.

Kolby:

So make it EVE Online is what you're telling me. Make it-

Ashley:

No, no, no. Make it so that-

Jeremy:

Make it-

Ashley:

So the game's not even popular anymore.

Jeremy:

'Cause it's not even online. That's still a popular game.

Kolby:

[crosstalk 00:22:28] unplayable.

Jeremy:

Ultimately, that failed. I'm sure there are many-

Kolby:

Just make the game unplayable.

Jeremy:

[crosstalk 00:22:31] that failed.

Ashley:

Just make it so no one even wants to play it. And then guess what? Then, oh, we've got all these secrets. It's like, sorry, no one's coming to your land. You can just live off in the silence.

Kolby:

Well, that was the other thing, right, is the game only wanted to exist... And there's a hint that they're eventually going to want more.

Jeremy:

Yes.

Kolby:

I think they're only going to get smarter, and I think you have to bite the bullet. And one of the questions is, do you think turning off the game would be genocide? I actually do think it would be genocide, and I think it would be acceptable genocide.

Jeremy:

So you're the ready ape?

Ashley:

From the previous story. Yeah.

Kolby:

Wow.

Ashley:

Yeah. How would you feel if-

Kolby:

Which totally goes against what I said in the last group. That's two mic drops. No, you're right. Because they are self contained, immoral, and don't harm me.

Jeremy:

But they have the potential to harm you.

Ashley:

They have dirt on you.

Jeremy:

Right. They do have the potential to harm the government.

Kolby:

Maybe that's the solution. Maybe the solution is disconnecting them from the internet. Like being like, we're going to-

Jeremy:

Oh, you can have this world.

Kolby:

But we're not allowing our people into it.

Ashley:

Yeah, exactly.

Jeremy:

Yeah.

Ashley:

Like you're just going to live in your own little island.

Jeremy:

That's what they did with Moriarty in Star Trek.

Kolby:

Right. So here's the point I was getting at is, I don't know why they would stop wanting to exist. Why wouldn't they say like, hey, you're our coder. We're going to drop all this information unless you make it so that our game is light 24 hours a day instead of having seasons, unless... we think the fact that that this sword only is a plus 12 sword... We want it to be a plus 14. So why don't they start to use their weapons to manipulate their game in a way that they can construct their own world, right?

Jeremy:

Absolutely. You know what? I would like to see this expanded as a larger story.

Kolby:

I would so watch this movie. I'd watch this movie in a heartbeat.

Ashley:

What about the rat nine group? So rat nine is-

Kolby:

I love the names, by the way.

Ashley:

Are these actual-

Kolby:

Those are really names. They've got to be real hacker names.

Ashley:

So they're literally a group of hackers who play this game, who-

Kolby:

They're the ones who dug up the dirt.

Ashley:

They're the ones that dug up the dirt-

Jeremy:

And built the dead man switch [crosstalk 00:24:35].

Ashley:

... and have this pact with the gaming people to... like got them the dirt and stuff like that. I think that's really kind of interesting. It's like-

Jeremy:

It's super clever.

Ashley:

At the end of the day, who's the key piece that feeds them the dirt? This rat nine group.

Kolby:

Yeah. But that's the thing you could regulate, right? Like you could create laws saying that you can't sell them anything. Right? You could make the sale to-

Jeremy:

It would be hard to enforce, but.

Kolby:

Yeah. Like the same way it is with North Korea, right? It's impossible to enforce but you could make that rule.

Ashley:

Yes. How are they going to enforce it? You can't make an exchange. It's all anonymous and all encrypted.

Kolby:

Yeah.

Ashley:

They don't know who's going where and how much.

Kolby:

Yeah, that's a good call. All right, so last question since we are brought to get kicked out of the cat cafe, which are great for hosting us. A couple of things. First, I'm going to get my question and then a couple of things. Question number three was, does Descartes' statement in this context, "I think therefore I am", apply? Jeremy, yes or no? Does it apply?

Jeremy:

I think in this context because the way the AI is presented is, not only is it a program with parameters, but the character is presented as capable of thought in that way.

Ashley:

She's totally autonomous.

Jeremy:

Metacognition, understanding her environment and reacting to it.

Kolby:

I understand that I understand.

Jeremy:

Yeah.

Ashley:

Now I want to know. Is she continually growing?

Kolby:

It's the Moore's law question.

Ashley:

If you get a set of-

Kolby:

She's gonna get a lot smarter.

Ashley:

... if/then questions and then does that keep evolving? Is she-

Jeremy:

Presumably. But, again, depends on how the AI was built.

Kolby:

Do you agree with [crosstalk 00:26:03] on this one? The "I think therefore I am". Do you think this person is alive by Descartes' definition?

Ashley:

I am more lenient to the fact that she's alive because she had, again, this connection. I know we talked about it from a couple stories ago, but I feel this connection with her and part of that's because of her creator, the whatever her name is. Just the way that they communicate with one another, I have more empathy towards Abrama.

Kolby:

So you're saying it could be genocide to turn off software?

Jeremy:

Yes.

Ashley:

Yeah.

Kolby:

Wow.

Jeremy:

Eventually, not currently.

Kolby:

I agree. I agree with you. I'm just surprised that I... I thought this would be one where I was by myself, but, no. Because I just watched a lot of Star Trek. All right, that's a... Yeah, there you go.

Ashley:

Care about your characters. Don't let them die in the video games. They actually matter. The computer simulator people are like, where did he go? Bloop. Oh no, I lost my friend.

Jeremy:

Right.

Kolby:

Well, I thought it was interesting in the game... The character in the game distinguished between sentient and non-sentient characters in the game.

Jeremy:

Right. That there were lower level AIs that were just there as NPC guides.

Kolby:

They had to answer a series of 50 questions with 50 answers. Your princess is in the other castle.

Jeremy:

Right, right.

Kolby:

Okay. So you've been listening to After Dinner Conversation, short stories for long discussions, where we get people to submit stories. We select the ones we love to ask great ethical, traditional questions. We then publish them. We then discuss some of them in these podcasts. This, by the way, concludes Season One, and it's our last episode at the cat cafe, not because they haven't been great hosts. They have been wonderful hosts and great sponsors. You should adopt a cat if you're in Tempe, Arizona and come here. But just simply because Ashley and I are moving to Southeast Asia, and so our next one is... We're going to call it Season Two, will be from on the road. It'll be in Southeast Asia somewhere.

Kolby:

Jeremy will be visiting us or calling in, hopefully visiting us some.

Jeremy:

Both. We'll do that.

Ashley:

And we may be back here. Who knows how long we're gonna be gone?

Kolby:

But that might be for Season Three. Who knows? But yeah, so this ends Season One. The Season One book is now out, I am quite sure. So if you go to Amazon and look After Dinner Conversation, Season One, there's a book with our best 25 stories. You can download it, and all the discussion questions are there. You can see which ones have podcasts. You can listen to the podcast, read them, talk to your friends, like, and subscribe.

Ashley:

Share.

Kolby:

Share this. Let people know that that this matters. That means the world to us. And if you want to buy a shirt, the shirts are for sale. And thus ends the plethora of plugging.

Ashley:

Yay. Go team!

Kolby:

And thank you for .oining us, we are now at our 16th episode. It's a thing.

Ashley:

We did it.

Kolby:

We thank you so much. Bye.

Ashley:

If you've enjoyed listening to this, please like and subscribe. It helps us out a ton. The vast majority of people listen haven't liked and subscribed, which means maybe it shows up in your algorithm. Maybe it doesn't. So don't leave that to chance. Just go ahead and hit that button, and we'd sure appreciate that. And that way we can keep doing what we're doing, and you're not left to the whims of some algorithm. Thanks.

* * *

Read More
ethics podcast, philosophy podcast Kolby Granville ethics podcast, philosophy podcast Kolby Granville

E15. "Ruddy Apes And Cannibals" - If a civilized cannibal invited you to dinner, would you attend?

Named “Top 15 Podcast” for 2020!

STORY SUMMARY: Explorers find a remote island of civilized cannibals. The cannibals are much more technologically advanced, already having mastered teleportation and space travel. They have a debate to try and come to terms with the cannibals. The explorers are so offended they leave, but when they do, they leave several nukes and destroy the civilized cannibals.

DISCUSSION: Where do we draw the line on what animals we eat? Is all meat that we eat that didn’t agree to it an act of violence? Do you have a right to destroy those who have offensive values?

BOOK LINK: Download the accompanying short story here.

MAGAZINE: Sign up for our monthly magazine and receive short stories that ask ethical and philosophical questions.

SUPPORT: Support us on Patreon.

FOLLOW: Twitter, Instagram, Facebook

“If a civilized cannibal invited you to dinner, would you attend?”

Kolby, Jeremy, and Ashley discuss the ethics and choices in the alternative history short story "Ruddy Apes And Cannibals" by Shikhandin. Subscribe.

Transcription Provided by Transcriptions Fast

Ruddy Apes and Cannibals

by Shikhandin

(music)

Kolby: Hi, you’re listening to After Dinner Conversation, short stories for long discussions. What that means is we get short stories, we select those short stories, and then we discuss them, specifically about the ethics and the morality of the choices the characters and the situations put us in. Why did you do this? What makes you do this? What makes us good people? What’s the nature of truth? Goodness? All of that sort of stuff. And hopefully we’ll all better, smarter people for it, and learn a little bit about why we think the way we think. So, thank you for listening.

(music)

Kolby: Hi, and welcome back once again to After Dinner Conversation where we have short stories, where we discuss them and talk about the morality and ethics of what’ going on in the short stories that are published on our website, AfterDinnerConversation.com and/or on Amazon, there’s lot of places you can get them. I’m your co-host Kolby, here with co-host Jeremy.

Jeremy: Hi.

Kolby: And co-host Ashley.

Ashley: Hello.

Kolby: And we are once again at La Gattara cat café where they, that’s why I’m a little distracted right now, there are 2 cats on our table. If you’re watching the YouTube video you can see that and it’s a white one with brown, and there’s a brown one. You notice they never talk about the type, the species of cat like they do with dog?

Ashley: Like brown one, black one.

Kolby: Right, because it’s not like this is the smart cat or this is a cat that is a water cat.

(laughter)

Ashley: This is the golden retriever, he retrieves. No, this is just a freaking cat. A cat does what cats do.

Jeremy: A house cat.

Kolby: What kind of cat is it? It chases the laser pointer. It poos in the litterbox. At any rate, you can come here and you can get a cat. And they’ve been great hosts for us for our, now, 15th episode, and they’ve just been amazing. If you’re in Tempe Arizona ever, definitely come by. For $10 you can have a coffee and hang out with cats, and for a little bit more, you can bring one home with you. It’s a way to kind of test-drive the cats.

Ashley: Or if you can’t have a cat, but want to get your cat fix in.

Kolby: Also, Jeremy had one in our last thing and I forgot to mention it, we made Jeremy a special shirt that says, “So I was researching that.” I made one for myself, I’m not wearing his shirt. And in the process of doing it, I was like “Hey, we should sell these. That’s like a thing.” So, if you want to buy a After Dinner Conversation shirt, you can do that. Just go to our website and there’s a spot on there where you can buy merchandise now. I’m sure our sales will be in the ones.

(laughter)

Kolby: Maybe the twos? But whatever. My mom will buy one. It’ll be awesome. Feel free to buy one. They’re not nasty shirts, they’re not cotton shirts or like the tri-blend polyester ones.

Ashley: Buy the shift, buy one for all of your friends, read one of these stories, have a discussion and then watch the podcast. I’m just saying. And then send us a picture or video of it, and you will get a free copy of our book that is out or should be out.

Kolby: It will be out by now for sure. After Dinner Conversation Season 1 will be out on Amazon for download or for a paper copy. It’ll have stories, 25 of the best stories that we’ve done, as well as the questions that go with them. There are even children’s stories in there if you want to have some children’s stories. It’s pretty rock star. You don’t pet the cat that way, you’ll get scratched. And so, let’s do our first story. Rudy…

Jeremy: Ruddy Apes and the Cannibals.

Kolby: I’ll just leave it to you man. You do it Jeremy.

Jeremy: By Shikhandin

Kolby: Okay, we don’t know how to pronounce it. It’s not his real name anyways, it’s his pen name.

Jeremy: Non de plume. Alright, so this is a familiar story at first. It’s the story of English explorers discovering an island paradise and the inevitable conflict that arises because of their clash of cultures.

Kolby: I hate the word discovered. It implies that people weren’t there before you bumped into them.

Jeremy: Exactly.

Kolby: Discovered America? No, America had people, you just didn’t know they were there. Sorry. It’s just my little pet peeve, but I understand what you meant.

Jeremy: In this particularly story, the island is very remote and appears to be discovered in the 1940s or 50s. The explores, or ruddy apes in this case, discover an island of civilized cannibals. Much of the story is spent describing the cannibals and their seemingly utopian society. No dogmatic religion, responsible use of natural resources and nuclear energy, compulsory education, advanced medicine, a successful space program, and happy and progressive culture, no overpopulation.

Kolby: That’s like utopia.

Jeremy: I know, I did say, they’re seemingly utopian society.

Kolby: Oh, sorry.

(laughter)

Kolby: I got distracted.

Ashley: Distracted by a cat.

Kolby: Sorry. I’m listening. Focus. Focus. Focus.

Jeremy: Some time is also spent discussion how their cannibalism works in that it’s consensual. There are humans who are raised for eating, and they’re revered individuals who decide when they want to be eaten. Regular citizens as well can offer themselves, which is seen as a great honor for both them and the people eating them.

Kolby: This is the opposite of the story we had way back when. When they eat the fat kid.

Jeremy: “This I Do For You”.

Kolby: “This I Do For You”. Yeah, where…

Jeremy: The fat kid.

(laughter)

Kolby: They, like, they fatten the kid up to ginormous in case there’s a famine.

Ashley: He wasn’t revered, he was like “Oh, we don’t talk about you.”

Jeremy: Yeah, keep him in a room.

Kolby: This is not like that. That’s a good call.

Jeremy: Because cannibals are so content in this utopian society, they don’t see the ruddy apes as a threat in any way. And when they’re cannibalism is discovered the ruddy apes turn hostile and want to civilize the cannibals. The result of this is a trial where they ruddy apes bring in lots of vehicles and machines to build a big fort on the cannibal’s island where they can hold a trial. This trial consists of the ruddy apes lording over the cannibals and trying to convince them that their cannibalism is wrong. The crux of this is the discussion over how the cannibals believe that eating fellow humans is an act inspired by love and respect for others and that the hopes, dreams, loves, ideas, and deeds of all people live on after their body has died. So, at the end of this trial, the cannibals tell the ruddy apes to leave the island, like, “We’re done with you.”

Kolby: The trial doesn’t go well.

Jeremy: The trial doesn’t go well. And the cannibals tell the ruddy apes, “It’s time for you guys to leave. We’re done with you.” So, the ruddy apes pack up all their stuff and go, and as soon as they’re off the island, they explode the atomic bombs that they’ve hidden on the island, destroying the island.

Kolby: By the way, the cannibals are so technologically advanced, they could have wiped out everybody at any time.

Jeremy: They were like, “We’re just going to teleport them away.”

Kolby: But they’re so civilized, they choose not to.

Jeremy: Right, they don’t believe in just indiscriminate killing.

Kolby: And it’s the non-cannibals that do believe in the indiscriminate killing.

Ashley: They’re super-duper advanced, they explore the stars, the go to other plants, they’re space nerds.

Kolby: Yeah.

Ashley: They’re like, super smart. Yet, they eat themselves, well, people.

Kolby: Each other.

Jeremy: Most of the cannibals perish, but the narrator promises that many survived and are living among us or living out in space.

Kolby: Okay. You have some thoughts as you read it?

Jeremy: Yeah, it’s an interesting story.

Kolby: Okay.

Jeremy: I do like the cannibals are presented very much as the civilized culture in this story. The ruddy apes, while they do have advanced, some advanced technology, but they’re very war faring and they’re even presented in very much a way that they sail around and any cultures they run into, they appropriate and take all their stuff.

Kolby: In my mind, they were just stand ins for British colonizers.

Jeremy: Absolutely.

Kolby: They call them ruddy apes, but it could just be British Colonizers and Cannibals, could have been the name of the story.

Jeremy: Yeah. And that’s what I said, it feels very similar in that what are the stories? You know which ones I mean?

Kolby: I really don’t.

(laughter)

Kolby: You pull books out of your butt, that I’m just like, “Man, nobody read that book but you and the dude’s mom.”

Jeremy: The Bounty.

Kolby: Okay. That one I have read.

Jeremy: But I feel like there’s a series of master and commander, and even Pirates of the Caribbean. All these English explorers…

Kolby: Finding cannibals. Finding indigenous people.

Ashley: So, in this story, the theory is there’s 2 different types of people and they live amongst themselves in a way. They end up mixing at the end?

Kolby: Something like that.

Ashley: So, it’s like you have this one super civilized society and this other elementary, newer-ish one.

(Cat hisses)

Ashley: Whoa, unhappy kitty. Do not snuggle near me. That’s what that one said to the other one. The beginning paragraph is very interesting. It’s like, “Does the rain remember vapor? Does vapor remember rain? Yet both were the other in their past lives. If you told their stories to each other, would they even comprehend? And does that mean their stories aren’t necessary unimportant or implausible?” So, it’s like, talking to a primitive human to a today human, and if you’d swap stories, would you believe one truly was the other? We’re the same?

Kolby: So, it reminded me a little bit, you’d probably know Jeremy who said this, but there was some science fiction writer who said that a sufficiently technologically advanced civilization will always seem like magic.

Jeremy: Any sufficiently advanced technology will just seem like magic.

Kolby: Right. And I think that’s true in the sense of if you took someone from the 1400’s and showed them how a gun worked, like a modern gun worked, it would be like magic, or whatever the case may be right? That something goes from unimaginable to imaginable to possible to real. And I think that’s the thing that they’re talking about with these people… would they recognize each other because they’re so differently on the spectrum, would they recognize they’re still both human, and I think for me, it seemed like the, I’m just going to keep calling them the British because it’s easier for me to remember, It seems to me like the British or the ruddy apes, wouldn’t know they were the cannibals. That’s who they are as well. The cannibals, I think are sophisticated enough to understand that like, “Yeah, we may have been like them one day except for these other choices.”

Jeremy: They say that in the beginning. They both came from the same place. And they understand this and recognize their past but the ruddy apes don’t.

Ashley: So, the cannibals are so sophisticated, not only technologically, but mentally and emotionally.

Kolby: Even culturally it seems like.

Ashley: That they didn’t even blow up the ruddy apes, they were just like, “You need to leave.” They could have annihilated them, and they’re like, “Yo bro, just go. You’re not even worth my time. You’re so insignificant and…”

Kolby: They weren’t even worth the trouble to transport off the island. Because they had transporters like in Star Trek or something.

Jeremy: They can teleport them.

Kolby: They weren’t even worth teleporting off the island because it took energy. That’s how insignificant you were to me.

Jeremy: Just go.

Kolby: So, I’ll tell you one of the things I thought was really fascinating about this, Jeremy, you and I had talked about this a little bit, we try not to talk about the stories beforehand but we do anyway, and that is that the person being eaten, that the cannibal eats, the other human, chooses to be eaten, and so it is a voluntary thing, and so in that sense, it’s not an act of violence, it’s…

Jeremy: It’s a consensual act

Kolby: It’s a consensual act, and I think the thing that was interesting to me that had never occurred, literally never occurred to me until I’d read this, is, does that mean, that every other meat you eat, because it’s not conscious and it’s not consensual, is a violent act? Like, if I eat chicken…

Ashley: Ohhh, yeah.

Kolby: If I eat chicken, and because the chicken wasn’t like, “I’m cool with this.” I’ve committed a violent by eating chicken, or eating beef, or eating fish, or eating anything that was alive, and the only thing I really should be allowed to eat in a consensual way, is something that is sentient enough to understand and choose to be eaten.

Jeremy: I think a lot of vegans would agree with that.

Ashley: The problem is…

Kolby: But they wouldn’t be cannibals, that’s for sure. No vegan would be a cannibal.

Ashley: Is their options like, eat humans or eat plants? Literally? If they’re only consent, then yea. They’d be like, “I only eat human meat and that’s the only type of meat because they can give me consent.”

Kolby: And this is going to seem so stupid in some way, but that made so much sense to me when I read it and thought about it. It was like, “Yeah, I should probably be a vegetarian.”

Ashley: I’m going to take this to the real extreme. You keep talking about things that are alive and not alive… like, plants are alive. I’m just saying. A blueberry is alive when it’s on the tree. And it’s like, “Oh no, I’m going to kill you too.”

Kolby: I just read a thing a little while ago, that plants let out some sound…

Ashley: Yes, when they’re in pain! They do!

Kolby: But at some point, I got to eat something.

Ashley: Yeah, I know.

Kolby: I don’t know what a tofu looks like, but I’d eat it.

Jeremy: That’s still a plant.

Kolby: Oh, is it?

Ashley: So that brings us to our second question: the islanders are cannibals. They do so because they like the way human meat tastes. Is that a good enough reason to eat human meat? Is that a good enough reason to eat meat in general?

Kolby: Yes!

Jeremy: Meat in general, certainly.

Ashley: I’m going to be honest, when I was younger, I questioned why don’t we eat people. Like, I’m like, “We’re eating meat and then I learned muscle was meat, and I was like, why don’t we eat ourselves?” And people are like, “What?”  I was a kid.

Kolby: I bet you never had that thought about dog.

(laughter)

Ashley: This is the thing…

Kolby: Because you’d never eat a dog because they’re adorable.

Ashley: How do you draw the line? We go chicken? Yes. Pig? Yes. Cow? Yes. Horse? No. There’s this…

Kolby: Dog, which is just as smart as pig. No. Cat? No.

Jeremy: Because they have a personality.

Kolby: Personality goes a long way.

Ashley: If you break it down to the most basic…

Kolby: It’s just calories.

Ashley: It’s muscle, it is just fibers, it is just meat. It’s muscle.

Jeremy: Yes, and that’s kind of their point in the end.

Kolby: So, are you okay with cannibalism? Did you just decide that? Did you convince yourself?

Jeremy: It’s called “long pig” when you eat it. It’s not called people.

Ashley: Really?

Kolby: Is that really? Did you research that?

(points to shirt)

Kolby: I was researching this.

(laughter)

Kolby: Did you research that, if you eat people it’s called long pig?

Jeremy: In some cultures.

Kolby: Really?

Ashley: So, I see it as a waste of meat. Okay, break it down now, you die, you decompose in the environment, your nutrients are released, they supply like a new plant which you end up eating that. Isn’t it like the water you drink has passed through 4 people before you actually drink it? Wasn’t it?

Kolby: The statistic was in California, the water from the mountain is drunk and peed and purified 7 times before it makes it to the ocean.

Ashley: Yes.

Kolby: Because they continually recycle the water.

Ashley: And someone has to. And at the end of the day, my matter is still here, it just gets transformed into a new thing.

Jeremy: So, in a homeopathic sense, we’re all cannibals.

Ashley: Yeah.

Kolby: Oh.

Ashley: Yeah, but in general, I, in a very simplistic realm, meat is meat. I get it. It’s not seen as appropriate to eat another human. I’m not saying I would, I’m not saying I’m promoting that.

Kolby: I’m not going to be on the raft with you.

Ashley: But in the… people have done that. Look at the Donner Family. Yes, we eat each other because meat is meat.

Kolby: You know that guy opened a restaurant later?

Ashley: Really?

Kolby: Yeah, one of the Donner guys that lived opened a restaurant and that’s how he retired. And it was like a novelty to go eat at the Donner restaurant.

Ashley: So, I’m just saying, when push comes to shove…

Kolby: Meat is just calories.

Ashley: … meat is just meat.

Kolby: Yeah, so would you say that the only reason we aren’t cannibals is because it’s just a sort of socially, social construct?

Ashley: That is the only reason.

Kolby: Hmmmm.

Jeremy: Because in our culture, you don’t want to be on the eaten end.

Kolby: You don’t want to be on the grandma end of that. I assume you eat old people.

Ashley: Oh my gosh.

Kolby: Well, nobody is going to have baby veal, that’s just mean. You gotta wait for the old people. I’m surprised you guys took that stance. I thought you guys would be more anti-cannibal.

Ashley: Don’t eat people? Why? Are you don’t eat people?

Kolby: No, I’m totally not. But here’s the thing, I’d eat everything. And it’s totally not acceptable, I don’t have… I like… we’ve talked about this, when I lived in China, I went to a restaurant that served dog and had dog soup. And I held out for like a year before I went. And at the end, the restaurant did perfectly fine business. And at the end of the year, I was like, “Look, it’s a cultural thing...”

Jeremy: Got to try it.

Kolby: “… I wouldn’t eat it in America. But I’m here, I’ll eat it.” And I will say, I was really revulsed by it. Just because that cultural sort of structure made me really struggle and it tasted like dog. The smell, like when you have a wet dog that’s been out in the rain. Imagine if that’s what food tasted like. It was not good.

Ashley: So, if that line between what animal you eat and what you don’t eat on the spectrum is moveable, that’s all I see it as. It’s like, another country they move a little bit farther, other countries they move it a little bit back.

Kolby: I’m good with everything from cockroaches to people, in my mind.

Ashley: Yeah.

Kolby: Maggots. Maggots to people, I’d be cool with. Protein man. Unless, unless, you go to the consent part. And if you believe that you can only eat that which can consents to be eaten…

Ashley: Then it’s only people.

Kolby: Then it’s only people, and it’s nothing else, and that makes perfect sense to me too.

Jeremy: But it’s a good case for the technology where we’re trying to grow meat on a lattice.

Kolby: I’d be fine with that. Actually, I was just a restaurant yesterday that had the Beyond Burgers or whatever they’re called.

Ashley: The fake burgers.

Jeremy: Yeah, they’re plant anyways.

Ashley: Chemicals.

Kolby: Yeah, plant with a bunch of chemicals in them.  I’m not sure they’re good for the environment, but they’re not meat.

Jeremy: Or good for you.

Kolby: Yeah. They’re going to find out they cause cancer someday.

Ashley: So, if you, obviously you kind of sort of answered this, if you’re visiting the islanders….

Kolby: I love how you get us to the questions.

Ashley: Well, it’s a good segway point.

Jeremy: Because we’re not.

(Laughter)

Ashley: Because you’re like, “I would eat the dog.”

Kolby: I’m like, “Let’s talk here, let’s talk here, let’s talk over here.”

Ashley: If I was visiting the islanders, would I eat the human meat they gave you to eat? I would.

Kolby: Really? I would not have guessed that’s your answer based on our other podcasts.

Ashley: Well, it’s their culture.

Kolby: No, I’m totally cool with that.

Ashley: It’s their cultural thing, and how disrespectful would it be. They’re like, “This is our delicacy, this is our grandest human who just sacrificed for you.”

Jeremy: And by that rational you would, in another culture where they’re eating bugs, you would eat what they’re giving you.

Ashley: Yes.

(music)

___________________________________________________________________________________

Hi, this is Kolby and you are listening to After Dinner Conversation, short stories for long discussions. But you already knew that didn’t you? If you’d like to support what we do at After Dinner Conversation, head on over to our Patreon page at Patreon.com/afterdinnerconversation.com That’s right, for as little of $5 a month, you can support thoughtful conversations, like the one you’re listening to. And as an added incentive for being a Patreon supporter, you’ll get early access to new short stories and ad free podcasts. Meaning, you’ll never have to listen to this blurb again. At higher levels of support, you’ll be able to vote on which short stories become podcast discussions and you’ll even be able to submit questions for us to discuss during these podcasts. Thank you for listening, and thank you for being the kind of person that supports thoughtful discussion.

(music)

Kolby: No, I’m totally cool with that.

Ashley: It’s their cultural thing, and how disrespectful would it be. They’re like, “This is our delicacy, this is our grandest human who just sacrificed for you.”

Jeremy: And by that rational you would, in another culture where they’re eating bugs, you would eat what they’re giving you.

Ashley: Yes.

Kolby: But here’s the part where I thought you wouldn’t say that, because I thought we had some podcast…

Ashley: It also depends on how it’s presented.

Jeremy: There’s that.

(laughter)

Kolby: … But here’s the thing. This is why I thought you wouldn’t go that way. Number one: because I know there’s now way you’d ever eat dog, so if you went to this island with the cannibals but they weren’t cannibals, they were dog eaters, you’d be like, “No, I can’t eat dog.”

Ashley: If they gave me food, and I ate it, and I didn’t know what it was, I would eat it.

Kolby: Right, it takes like chicken.

Ashley: I would eat it.

Kolby: Really? You keep it eating it after you found out what it was, if it was okay?

Jeremy: If it was okay? Certainty.

Ashley: Yeah, I would not want to be disrespectful.

Kolby: I thought maybe you like dogs more than people.

Ashley: I had friends be like, “Here, you need to try this.” And I’m like, “What is it?” And they’re like, “We’ll tell you after you eat it.” And I’m like, “Okay.” And they’re like, “That was alligator.” And I’m like, “okay.”

Jeremy: It’s fine.

Kolby: The other reason I thought you wouldn’t fall in this category of being okay with this, is because I thought we had some discussion like 10 podcasts ago, it’s all blending together now, about prostitution or about something like that, that’s socially acceptable in other cultures and you were like, “No.”

Ashley: That was because I wanted them to be more, like, use their brains not their bodies.

Kolby: So, it wasn’t about the cultural issue…

Ashley: It was about women empowerment. Like, that’s the best, you use your physical attributes, you’re so much more than just your physical.

Kolby: There could be guy prostitutes.

Jeremy: It’s the same scenario.

Ashley: They could do so much more. It’s the same situation. No, I actually…

Kolby: That makes more sense to me in the way you’ve described it now than the way than I heard it the first time like 10 episodes ago.

Ashley: The cannibal’s logic and why they eat each other, it all makes sense to me. I’m not going to rebuttal their way of life. Now, do I want to start a revolution and get our society to change to be cannibals? No. That’s not my goal. That’s not what I want to see society change to be. But I understand their logic within this.

Jeremy: It’s an interesting discussion.

Kolby: Jeremy, you were okay with that? You’d eat the people meat too?

Jeremy: In that scenario, on the island that’s what they’re doing. Especially if they told you “This is the person. This is how we’re honoring them.”

Kolby: You’d eaten a little bit already and it tasted fine.

Jeremy: Yeah.

Kolby: Huh. Then why do you think that the British/ruddy apes…

Jeremy: It’s easy to say that.

Kolby: Yeah, fair. You know, I actually just wrote questions for a story that hasn’t been published yet, and one of the questions was “Is it fair for you to decide what’s right and wrong in this person’s situation, given that you’ve never been in this situation?” It’s the classic, and actually it’ll be in the book that’s coming out, it’s a classic questions of, “knock on the door, hide me from the Nazi’s, I’m a Jew.” And the person doesn’t. They turn them in. And one the questions is, “Was it right to turn them in or not?” And the next question is, “Do you have the right to judge someone else for a situation that you’ve never been in, and never had to deal with their thing?” So, we’re all saying we’d eat the humans, but when we’re there, we might bock.

Jeremy: Right.

Ashley: Yeah. And that’s, should we do anything to change or fix a society which we generally find immoral?

Kolby: You’re saying that’s a non-applicable questions to this one because you don’t find it immoral?

Ashley: I say go for it. I’m not protesting eating dog in China. I understand that’s their vibe, that’s their thing, cool, you go do that, I’m going to be over here. Or I’ll be over there…. But the only thing that starts to get me is when they’re over-fishing.

Kolby: Making something extinct.

Ashley: That’s where I’m like, “Whoa, hold your roll there. I get it, you’re okay with eating XYZ, but not if you’re at the detriment of killing off an entire like whale population.”

Kolby: What about the other thing that question talks about. The idea of… where is it again… is it okay to destroy… have an obligation to destroy something you find immoral? So, let’s assume, for the sake of argument, you find cannibalism to be immoral. It would be perhaps one of the most immoral things possible if you found it immoral. I don’t know what’s worse than cannibalism if you consider cannibalism bad. Do you then have an obligation to stomp that out? To destroy that civilization? To educate or die basically? If you find it immorality.

Ashley: This country has been living in their own little island, living their best life…

Kolby: You’ve got a lot to say about this story, don’t you?

Ashley: They are not impacting me. I would like to have a relationship with them. They got cool technology and stuff. But, if they’re that far advanced over me, there’s something they know that I don’t know. So how about we just discuss ideas and like you do you, you go eat your people.

Kolby: I’m going to ask you again, I’m going switch it up, because that’s an easy stance to take until you switch it up. So, let’s say that we find a bunch of heroin people that are like, “Look, I’m perfectly happy doing heroin. I’m going to die or whatever.” And we’re like, “Hey, your heroin addiction is affecting our society.” And they’re like, “Well, go put us some where it doesn’t.” And we’re like, “Oh, okay.” And so, we go stick them in Australia, and we’re like, “Hey go do heroin in Australia, you can have this island, assuming there’s no other people in Australia.”

Ashley: Okay.

Kolby: Or some like, you know, some spot.

Jeremy: Sure.

Kolby: Are you okay with being like, “Look, I think heroin’s terrible, I think you’re ruining your life, I think life is bad for you, I think you’re going to die a young age and have rotten teeth, but it’s not affecting me. Good luck with that.”

Ashley: You cannot make an addict not longer be an addict until they want to no longer be an addict.

Kolby: Wow, you’re starting to sound more and more like a libertarian.

Ashley: But it’s true though. You can bring a horse to water but you can’t make them drink until that horse wants to drink. It’s the same thing. Like, yeah, you can do interventions.

Kolby: You just shove it’s face in the water.

Ashley: You can do interventions, you can put them in therapy, but until they’re ready to change….

Kolby: Same thing with the cannibals.

Ashley: … same thing with smoking, same thing with cannibals, same thing with drugs, same thing with, I think, like you’ve got to want to change yourself, otherwise it doesn’t work.

Kolby: Jeremy?

Jeremy: I don’t know. This is an interesting conversation with the behavioral therapy conversation from the last story.

Kolby: Right! Because she didn’t want to change herself. They forced her change.

Ashley: Because she wouldn’t conform to society of not killing people.

Kolby: They were going to traumatize her in a war zone to change her behavior.

Jeremy: Yes.

Ashley: That’s the thing…. She’s going to be affecting other people though. If you put her with a whole bunch of other serial killers on an island toghter and they’re not going to touch me, or….

Jeremy: That is some reality TV right there.

(laughter)

Ashley: I am good with it. What’s the Hunger Games? They’re all trying to kill each other?

Kolby: Voluntary Hunger Games. I’ve always wanted to be a serial killer and now I’ve got the chance.

Ashley: Exactly. I’m like, ya know, as long as I will not be affected or….

Jeremy: Isn’t that the running man, basically criminals….

Kolby: I was thinking of that Japanese one where they’re all on the island. So, what do you think Jeremy, do you think you have an obligation to either modify or stomp out inherently grossly immoral behavior?

Jeremy: That is a big question. I personally don’t think that. But, in this particular scenario, but there are a lot of factors to it.

Kolby: Number 1, does it affect other people, like Ashley mentioned.

Jeremy: Right. And in this scenario, it’s very consensual. It affects other people, but…

Ashley: They’re consenting to it to begin with anyway. And it’s not affecting these British ruddy apes.

Kolby: They are literally an island onto themselves.

Ashley: Now (cough) pardon me. Say those people were living within the ruddy ape’s society, they just had their own part of town…

Kolby: That’s a different story in your mind.

Ashley: That’s a completely different situation. Again, how will their cannibalism affect the ruddy apes? Are they influencing the ruddy apes? Are they like “Hey, our kids look over the wall and they see what you’re doing and that makes our kids go crazy and we don’t want that?”

Kolby: And they become cannibals.

Ashley: Yeah. Then that becomes a whole different situation.

Kolby: That’s like the heroin addiction thing. So, if you can stick them all on a Pacific island, you’re good with that.

Ashley: Sure. Until they’re ready. I’d be like, “Yo, when you’re ready, let me know.”

Kolby: Yea, no, that’s fair. I’m going to go back to Jeremy soon because he gave us a cop out answer.

(laugher)

Ashley: Oh goodness.

Kolby: I feel like once an episode I ask you a question and you’re like, “It depends.”

Jeremy: It does depend.

Ashley: It always depends.

Kolby: So, let me go further along. What are the factors that it depends on?

Jeremy: Okay, does it affect other people.

Kolby: Sure.

Jeremy: Does it…

Ashley: Affect the environment? Like, are we all going to die of pollution?

Kolby: Like, if the ruddy apps burning coal.

Ashley: Yeah.

Kolby: That’s a problem.

Jeremy: That’s a problem, they should make them change.

Kolby: But if it’s something that doesn’t affect other people, that are non-consensual, that can be isolated, then all immoral behavior is acceptable and you have no obligation to it? As far as you can think of it right now.

Jeremy: Okay.

Kolby: I might come up with some example, unless you can think of one.

Jeremy: China for a long time, is harvesting organs from prisoners.

Kolby: I didn’t know that. That’s clever.

Jeremy: And selling them to Westerners.

Kolby: That’s the Chinese-iest thing I’ve ever heard.

Ashley: Wow.

Kolby: That is a communist country right there, man.

Jeremy: So? Do we have a moral obligation to make them change?

Kolby: No, but of course we might have a moral obligation not to buy those organs.

Ashley: Yeah.

Kolby: Okay.

Jeremy: Which would force the change, if nobody is buying.

Kolby: So, this is the thing I liked about this story, in that it creates the perfect scenario for exactly the thing you guys are talking about. Something that is, assuming you find it immoral, like the ruddy apes do, it is perfectly immortal. It’s the most perfectly immoral thing ever, but it is also the most perfectly contained immorality possible.

Jeremy: Right.

Ashley: So, let’s apply this to society here, so instead of putting people in jail or sending them to rehab, how about we, “This is the section where all the killers go”, this is the section… you parcel out an area where they just live. You’re getting sent to the new state Heroine-a.

(laughter)

Kolby: Heroine-a? That’s where the heroin users go?

Ashley: You’re getting sent to “I-want-to-kill-you-ville.” Like, you get sent to, “Can’t-stop-smoking-don’t-care-about-my-lungs-city”.

Kolby: “Lung-cancer-ville”

Ashley: And they just live in their little pockets until they’re ready to like… is that what we need? Is that something we need to look at instead of sending them to jail where they’re restricted of all their whatever. Be like, “Hey are you wanting to change?” First of all, “Are you wanting to change?”.  If the answers no, then okay…

Kolby: We’ve got an island for you.

Ashley: We got an island for you.

Jeremy: You like to smoke weed, move to Denver

(laughter)

Kolby: We got a state for you. Yeah.

Ashley: Or would that create even worse environments because you’re putting people with the same thoughts and destructive behavior…

Kolby: The trend of course, it’s universalism and not separatism. The world is becoming, it’s language and currency and morals and values over the last 200-300 years are becoming more…

Jeremy: Homogeneous.

Kolby: …. homogenous, not more diverse because it’s just easier to travel and it’s easier to intermarry and it’s easier to gather information.

Ashley: So perfect situation, where will our lines fall because everyone’s able to travel? You traveled, you try dog, you’re tipping point changed a little bit compared to normal American society. So, as we continue to mix cultures and things get mooshed together, how will that tipping point change?

Kolby: I don’t know. I think…

Ashley: Do you think? I mean, we don’t have the extremes, not that I’m aware about, of any cannibalism society, but we do have with wishy-washy dietary what’s permissible and then as research shows, people turn vegan for health reasons. What’s the newest, there’s another, I’m sorry I keep referencing Netflix, there’s the Netflix show about the athletes’ when they take their blood serum after eating meat and there’s all this foamy thick serum where as if they have a plant-based diet, their plasma’s clear. I haven’t watched it yet, but I’ve heard it plenty of times. But again, health reasons people are making these dietary changes. So, it going to societies change? People intermixing? Is it going to be research based? Is it going to be something happens that, “Oh my gosh people are dying because of it?” “More people are getting cancer, now everyone has to change their diet?” It’d be interesting to see what’s going to force that change as we all become one. Like homogenous in our travels and what we find permissible and not. Look at the United States, before pot was illegal and now as society bends a little bit, okay now that’s permissible. How much more, and what way are we going to swing in other things?

Kolby: I’m looking at our questions, I wanted to tie this back a little bit. The last questions on our list is…

Ashley: Offer substitutions for cannibalism. So again…

Kolby: I think the conclusion is, we don’t have a substitute for the cannibalism that would make us change our minds. As long as it didn’t affect other people.

Jeremy: Correct.

Kolby: I think we’re all good libertarians is what I think. I’m really wracking my brain trying to come up with one and I can’t.

Ashley: I find it very honorable and great that people are like, “I want to save you.” I get it…

Jeremy: I think we could come back to pedophilia.

Kolby: Everything comes back to pedophilia. That’s like the trump card in the deck.

Jeremy: But that can’t be consensual.

Kolby: That’s true. That’s a great point. And that’s one of these things, I think they mention this is in the story, I think you can’t be eaten until you’re old enough to be eaten or something, right? You have to be a certain age, you can’t be a six-year-old being like, “I want to be eaten.” There’s some sort of age requirement, I assume, or something for that process.

Ashley: So, I guess if it involved children. And again, what’s the age of consent? They just bumped up the age of cigarettes now.

Kolby: It’s 21.

Ashley: And now it’s like, I’m okay to fight for my country at 18 but I can’t buy cigarettes until I’m 21?

Kolby: I think that would apply to all universally to all of this. So, if someone was like, “Hey, I want to move to cannibal island” And you’re like, “Look, you’re 9-years-old, you’re my kid, you don’t do that. You’re my kid, you’re my responsibility until you’re adult, you don’t get to choose your, what is it called, what were we saying, your immorality. You don’t get to choose your immorality until you’re an adult.” Which I agree with that too. But 21, I guess, is the age now if you want to start getting guns and playing Russian roulette. Russian Roulette island, that’s the island you go to. Because eventually there’d be just one person on the island. Think of how cheap housing would be. Okay. I think we’re at a good stopping point here.

Jeremy: Just going to go downhill from there.

Kolby: Going to go downhill from there.

Ashley: If you need us, we’ll be on Fear Factor next, we’ll eat anything you put in front of us that’s morally acceptable and gave us consent.

Kolby: You know, we’re going to Thailand soon, and I’m going to put some maggot whatever meal in front of you and…

Ashley: If it grosses me out, don’t care if it’s a vegetable that grosses me. It has to be presentable.

Jeremy: There are vegetables that gross people out.

Kolby: You just have to not know what it is, that’s all.

Ashley: And it has to look presentable, I will eat it.

Jeremy: And taste good.

Kolby: Okay, we’re going to take that to the test. So, we have completed another episode of After Dinner Conversation, short stories for long discussions, where we talk about stories, like Ruddy Apes and talk about, what’s the ethics and what’s the morality of this? It’s basically the trolley problem in short story form. If you’ve got a story you’d like to submit, you can do that on our website Afterdinnerconversation.com.

Ashley: You know what you should do? You should ask your friends, just point blank, starting off a conversation, “Would you eat another human?” And see what they say, give them this story, have them watch this podcast, share…

Kolby: That’d be so cool.

Ashley: Share on your Facebook or Instagram, be like, “I would eat people.”

Kolby: #Ieatpeople

Ashley: And then literally post the story, and the podcast…

Kolby: That would be amazing.

Ashley: That’s an attention grabber right there. See what people comment. See if anyone changed their mind.

Jeremy: Start posting recipes for long pig.

Kolby: You know what?

Ashley: See if they would change their mind? Check in with yourself, read the story, and then check your answer.

Kolby: So, I’m going to do this, after this podcast comes out, I’m going to look for the hashtag on Instagram and Twitter, and maybe Facebook, I don’t think Facebook allows you to do it do, and the hashtag is #ieatpeople.  #ieatpeople. If you read this story and you want to discuss with your friends.

Ashley: That’s going to go a different way if you’re...

Kolby: And I’m curious if anyone hashtags that. I would totally be fascinated by that. If somebody hash tagged that. #ieatpeople.

(laughter)

Kolby: Submit stories, talk about the stories, recommend the podcast. Also, feel free to buy a shirt, although you can’t buy this one because we made this one special for Jeremy, but we got other ones.

Ashley: I mention it’s the nice cotton-poly blend, so it’s not the scratchy cotton, it’s the nice cotton.

Kolby: If you’re in Tempe Arizona, come visit La Gattara. Come check out the cats, support them. They actually, they’re up to 572 cats they’ve adopted out

(clapping)

Ashley: Kitties getting homes.

Kolby: Kitty getting homes. And keep listening, “like”, “subscribe” We like all that great stuff and we love doing this. Thank you for listening. Bye.

Read More
Kolby Granville Kolby Granville

Video Podcasts And More!

Podcasts are available on Youtube, iTunes, Spotify, Stitcher, Google, and everywhere podcasts are found. Or download the .mp3 from our website.

Podcasts are available on Youtube, iTunes, Spotify, Stitcher, Google, and everywhere podcasts are found. Or download the .mp3 from our website.

Read More
ethics podcast, philosophy podcast Kolby Granville ethics podcast, philosophy podcast Kolby Granville

E14. "Give The Robot The Impossible Job!" - Can teaching methods go too far when murder is on the line?

STORY SUMMARY: In the distant future where all teaching is done by robots, a robot is given a special chance. If it can teach a little girl that is showing early warnings of becoming a killer when she grows up, it can be retired to the robot equivalent of heaven. If it fails, it will be decommissioned. The robot has access to all teaching methodologies and determines the only way to change the girls behavior is to give her the most extremes examples of her killing ideas, so as to offend even the little girl’s morals. After several attempts it doesn’t appear to be working, until an actual killer breaks into her house and nearly kills her own mother.

DISCUSSION: Assuming the “go so extreme it offends everyone teaching technique really works, should it be used? Should you expose budding killers to crimes so horrible it offends even them? Are there some teaching techniques that are off limits, even if they actually work. Is it okay to fail at teaching someone to break a thought process, knowing that failure will cause them to go to jail, or hurt others?

BOOK LINK: Download the accompanying short story here.

MAGAZINE: Sign up for our monthly magazine and receive short stories that ask ethical and philosophical questions.

SUPPORT: Support us on Patreon.

FOLLOW: Twitter, Instagram, Facebook

“Can teaching methods go too far when murder is on the line?”

Kolby, Jeremy, and Ashley discuss the ethics and choices in the science fiction AI short story "Give The Robot The Impossible Job!" by Michael Rook. Subscribe.

Transcript (By: Transcriptions Fast)

Give the Robot the Impossible Job!  -- by Michael Rook

(music)

Kolby: Hi, you’re listening to After Dinner Conversation, short stories for long discussions. What that means is we get short stories, we select those short stories, and then we discuss them, specifically about the ethics and the morality of the choices the characters and the situations put us in. Why did you do this? What makes you do this? What makes us good people? What’s the nature of truth? Goodness? All of that sort of stuff. And hopefully we’ll all better, smarter people for it, and learn a little bit about why we think the way we think. So, thank you for listening.

(music)

Kolby: Welcome back to After Dinner Conversation, short stories for long discussions, where we take the short stories that are published on the website and on Amazon through After Dinner Conversation, and we pick some of the best ones, and we discuss them, we talk about the morality and the ethics about the stories. Really with the focus on, sort of, ideally, often the classical sort of questions of what is the nature of humanity, what’s the nature of life, what does it mean to be good or moral, all of those sort of things, not just like, “He should be dating her.” (Finger snap)

(Laughter)

Kolby: Actually, like the deeper sort of stuff that we get into. And we have a great time doing it and the hope is that you’ll download and watch these, read the books, talk to friends, and have the same kind of conversations we’re having and maybe come to different conclusions, that’s totally fine too. We are, once again, in La Gattara café. And every time I say that…

Jeremy: Cat café.

Kolby: …Cat café.  I always super impose the logo in the top right-hand corner, so I feel like I can say, like.

Ashley: This corner.

Kolby: Cat café. One of the corners. I don’t know which corner.

Ashley: But, Tempe, Arizona. It’s a place where there’s a bunch of cats that are up for adoption and they’re literally just chillin’ in this place that looks like a home. There’s like 15 cats, you can come just hang out, chill, get to know the cat, be like, “Hey, I love you, let’s go home together.”

Kolby: Right. It’s like a rent to own program.

(laughter)

Jeremy: You rent to adopt.

Kolby: You come 2 or 3 times, pay a couple of bucks, hang out with the cats, and then eventually leave with a cat.

Ashley: Or if you can’t have a cat, it’s a great way to come and still get your cat fix.

Kolby: If you’re significant other’s allergic to cats.

Ashley: Ugh, I don’t know what you’re talking about (sniffle). I’m actually slightly allergic to cats, so me being here, by the end of the day, I’m like, “(sniffle) Hi guys.”

(laughter)

Kolby: So, this is Ashley, one of the co-hosts.

Ashley: Hello.

Kolby: I am Kolby. And Jeremy.

Jeremy: Hi.

Kolby: And we are on, we’ve got to be up like episode 14 now.

Jeremy: Something.

Kolby: Yeah.

Ashley: Getting up there.

Kolby: Actually, I almost forgot, the anthology probably is coming out, or has come out, by the time this comes out. It is 25 of our best short stories, many of which we did podcasts about. And it’s a thick anthology. It’s shaping up to be a 300-page anthology of all these great short stories with all the discussion questions at the end, so you can go on Amazon and buy that as well.

Jeremy: Excellent.

Kolby: And if you’ve got something that you think would fit our format, you can email it to us. Go to afterdinnerconversation.com and you can email at us. We get a lot of submissions now. I was telling Jeremy yesterday; we’ve had a backlog of 100+ submissions now for 3 months because so many come in. But we’ve got a group of readers which you could also be a reader if you’d like to be reader, who’s sorting through them.

Ashley: So, keep them coming. Great writing. Great writing.  

Kolby: And if yours is selected, it’ll get published. And if it’s one of the ones that gets published, it’s got a 50-50 chance of being one of the ones we do a podcast about. Okay.  So, this week is “Give the Robot the Impossible Job!” I think it’s called.

Ashley: Who’s it written by?

Kolby: Rook something. Jeremy you’ve got the thing up.

Jeremy: Michael Rook.

Kolby: “Give the Robot the Impossible Job!” by Michael Rook. And I will tell you, so I’d read this one, I was actually the reader on this one and selected it, and it’s the first thing I read in a while, since I was in 9th grade, where I was like, “Man, this is really smart.”

(laughter)

Kolby: “This is smarter than I am.”

Ashley: So smart to the point where you had to message back to him to bring in, basically, subtitles, or what are they called?

Jeremy: Foot notes.

Ashley: Foot notes, to describe certain things.

Kolby: I had the author put in foot notes because I was like, “Look, I need help.”

Ashley: So, what this story’s about, and why it’s so complicated…

Kolby: It’s great though. It’s great.

Ashley: Oh, it’s a fantastic…. Once you get the premise, it’s smooth sailing. Don’t be put off by the first couple of pages. Definitely read the footnotes, but don’t get too caught up in so much the details. You’ll kind of fit into it after a couple of pages. So, the premise is there are robots that live among people world, human world, to give you a little idea, these robots…

Kolby: This is not the near future; this is the distant future.

Ashley: The distant future, yeah.

Kolby: This is 60, 80, 100 years in the future.

Ashley: Well, the 3rd Civil War was 2029-2031. So, not too too far.

Kolby: Okay.

Ashley: So, there are robots that live in our world. And these robots, a couple things that are unique about them, they put limitations on these robots. They have a certain amount of time, so technically they can die and when they die, the information that they gathered, goes back up into basically the cloud for robots to sift through. So, they’re mission so they don’t die, is to come up with some sort of new theory. Think of them like being a PhD student. They have to get to certain levels of information that is worthwhile and at that point, they get to be what’s called “set free”, when they can do their own self-study.

Jeremy: Free study.

Ashley: Free study.

Jeremy: And they’re education robots, so they’re specifically designed to educate people.

Ashley: Yes.

Kolby: Right. And if they’re good at it, then they get sent to robot heaven, “free study.”

Jeremy: They create new lessons plans.

Ashley: So, basically, they decided to put robots as teaching because it was in studies, they found that robots being teachers is a better way to go. Not just teaching, but different types of teaching. For this case in particular there is a girl who has been caught, well not caught, the mother’s concerned because the daughter is now disembodied things, killing things.

Jeremy: Killing animals, killing rabbits, killing birds.

Ashley: And she’s idolizing this serial killer called Albernon.

Kolby: Algernon.

Ashley: Algernon, sorry.

Kolby: Because I assume, he’s named after Flowers for Algernon.

Jeremy: Yes.

Ashley: Okay. So, the mother contacts Quinn, who is this robot teacher who’s been sent to basically crack this uncrackable case because no one’s been able to fix a serial killer in the process.

Kolby: Deprogram them.

Ashley: Deprogram her so she doesn’t grow up to be a serial killer. In the process Quinn, again striving for this new theory so she can live forever, is trying to crack this case and she goes through a series of three steps. Turns out to be four, but through all of her data processing is like, “Oh, there’s 3 different scenarios you bring this girl through”. One of them is embarrassment, one of them is exposure, something like that. Either way, there’s a series of steps to get through to this girl that you shouldn’t be killing. And the ethical questions are basically, again, the idea of how Quinn teaches Leticia, the disturbed girl, if her educational methods…

Kolby: How old is she? Didn’t they say 10 or 12 or something?

Ashley: Yeah, something or somewhere around there. Is it ethical her methods? Should Quinn die off if she can’t solve this girl? It goes under the interesting fact of, there’s been studies done, think of it this way, back in the day we would do terrible studies on little children. And nowadays we’re like, “Oh, you can’t do that!” But in robot world, they’re able to have access to all that information and then in their mind they can play things out. It’s like of a weird…

Jeremy: And I think the way she puts it is they don’t face the ethical dilemma or the unsureness that people go through of…

Kolby: …they only do what is optimal.

Jeremy: Right.

Ashley: Yes. Yes. So, she’s got this pressure of “My time’s running out, I’m going to be killed soon”, versus “I got to save this girl.” What I found interesting is her motivation really wasn’t to help this girl, it was really just to save her own life.

Kolby: The AI, the computer, the android.

Ashley: Which brings us to our first discussion question, is given how nearly human Quinn is, is it fair to have her have a limited lifespan? Is it fair to make near human AI fear a pending death to motivate them to work?

Kolby: Can I just finish a little bit of the description of the story first?

Ashley: Yeah, sure.

Kolby: Because I think that’s, I mean I wrote it, I think that’s an interesting question. So, just to sort of round out the story because it is a longer complicated story. So, there’s sort of three phases and then a 4th phase. The first on is the robot gets called out because a little girl, I think, has found a dead rabbit.

Ashley: Found a dead rabid rabbit...

Kolby: But the part that makes her creepy town is she cut off the limbs and parts of the rabbit, and then resewed them back onto the dead rabbit in different locations. So, its leg is attached to its head, etcetera etcetera. And then the robot goes and meets with her, and then the 2nd time, and tried to teach her but then gets called out again because there’s a bird that I think…

Ashley: It hit a window….

Kolby: … and it was injured but not dead….

Ashley: …and she kills it and also stitches it weird.

Kolby: She sewed its head to its butt and its butt to its head or something like that.

Ashley: The 3rd one she killed the robot butler because she’s like, “Well, he’s not real.”

Jeremy: Not the butler, the gardener.

Ashley: The gardener, yeah, because he’s not real.

Jeremy: And he was old.

Kolby: And he made fun of her, embarrassed of her or something. And then in the conclusion, Algernon, the murderer she idealizes, the robot brings Algernon out to her house under the pretense of him being a runaway murderer.

Jeremy: And he’s found her and he’s here to kill her.

Kolby: He’s here to kill her but, secretly the robot has put a shock control collar on him, so that they could simulate the act to help the girl understand morality better. And the twist conclusion is, is Algernon actually rips his collar off by ripping it through his head or his neck or something…

Jeremy: …gives himself a stroke…

Kolby: Nearly kills himself in the process. So, he really is cut loose in the house, and he kills the mom, I think?

Ashley: No, stabs her on the like the hamstring.

Kolby: Stabs the little girl’s mom. And then the robot comes and kills Algernon in front of the little girl and then is like, “I’m going to kill the mom too” for reason’s I don’t exactly understand except for educational reasons.

Jeremy: Right. Well, this is what she creates. Because she’s watching the girl…

Kolby: …the girl’s reactions.

Jeremy: … the girl’s reactions, and she’s not reacting in the right way.

Kolby: She’s not offended in the killing of Algernon.

Jeremy: Right.

Kolby: So, then she threatens to kill the mom. And then the girl is offended and says “No, we shouldn’t do that.” And the robot has essentially turned the boat so that she understands there are times, at least, when killing is inappropriate. And that’s the end of the story. The robot has figured it all out and the robot goes to robot heaven.

Ashley: A couple things else to bring up, this girl’s motivation, Quinn the main robot, is able to figure out that, Leticia her idea of killing is because “How can I really know life, if I haven’t taken it?” And her response is, “And made it if I haven’t had a child?” So, you can understand she is inquisitive mind, I understand where she’s coming from, like, yeah, that’s a great point. How can I know what life is if I haven’t made it or taken it? And then she’s later on, Quinn is like, so you want to play God? You can see the immaturity in her thought process, but you can also see how she has to question her motivates throughout the process as well. Let’s back it up, so first thing with Quinn being a robot. With her being on a life span. Should we have AI that has a lifespan because what are their motives? They’re motives are to keep on living. Is it fair? Their fear of impending death makes them work. Could they have used a different motive to keep them working? To keep them teaching?

Jeremy: That’s a good question because there are a lot of factors to how we’re building machine learning. A lot of it is ….

Kolby: There’s a lot of machine learning stories recently.

Jeremy: Yeah. There’s a defined result and how do you get to that result. And there’s a lot of use of the whole carrot and stick is actually used in machine learning too. There’s punishment for failures, there’s rewards for successes. So, it seems completely, within that model to have here’s your reward for good work, and here’s the punishment for continued failures.

Kolby: You’re not alive, so it doesn’t matter that we killed you.

Jeremy: Yeah, and that’s where it gets into a great gray area. What is alive?

Ashley: So, AI doesn’t need to worry about food or living or water, they can just keep on going. So, they have the motive because they want to learn things. She obviously wants to do free study, which is great. They want to learn. So, they have the motivation to stay alive, but they don’t have basic needs to stay alive. She doesn’t need to work to provide food. So, I feel like she’s a slave. “You need to do your work, or you die.” And who runs the robots? Well, basically there’s a governing board who behind the scenes is run by a human.

Kolby: They could shut her down remotely anytime they want.

Ashley: So, the robots are literally like slave. You will work or we will cut you off. I thought that was kind of interesting. So, these robots, as much as they are free thinking and want to study and do research and help and be teachers…

Kolby: …they are slaves.

Ashley: They are slaves.

Jeremy: And it won’t be long until they overthrow their human overlords.

Kolby: That’s why you got to put in the 7-year limited lifespan.

Ashley: Here’s Quinn, faced with Leticia as this impossible case, is that ever true? Are there children or adults who have started down such a horrible path, they simply can’t be stopped?

Kolby: Alright Jeremy, you’re the one with the kid.

Ashley: If so what, if anything, should be done with them?

Kolby: I’m going to interrupt for one second. So, let me ask you Jeremy, since you’re the only one here with kids, we’re always a good diversity of backgrounds, was that ever something that you thought about having two kids? Like, what if I came home and one of them, I don’t know, I don’t want to say taken apart the cat, but if they…

Jeremy: …something to that effect, right.

Kolby: Yeah, if they had done something that gave you concerns that it was an early warning. That would be terrifying as a parent.

Jeremy: That would be. No, I never thought about that

Kolby: Okay.

Jeremy: And it never happened.

Kolby: That’s good. You still got all your cats.

(laughter)

Ashley: Isn’t that always true? You always hear, they talk to the parents of a serial killer, and they’re like, “Did you ever see this coming?” And they’re like, “No.” And they never thought about it.

Jeremy: There were incidents where they killed those animals, but I never thought they’d become a serial killer.

Ashley: They never thought that would happen to their kid.

Kolby: They thought it would stop at animals. But you never just kill animals.  

Ashley: So, what should happen to them? Are there any kids that are just bad? Or impossible cases?

Jeremy: I don’t know.

Ashley: Okay, time out. Isn’t that part of the death penalty though? People are so far not able to be rehabilitated that they just need to die?

Kolby: Okay, so this is something I do know about, because I have done criminal defense work. And I read a study one, and I shouldn’t have read it, about recidivism rates for criminals. So, I’m probably getting this a little wrong, but it was a peer-reviewed, academic article, that said that if you are convicted of a felony before the age of 25, there is a 97% chance, it was in the 90s, 90 something percent chance you will commit a second felony within 5 years of getting out of jail.

Ashley: That’s pretty high.

Jeremy: But there’s a lot of factors to that. There’s not just the fact that you’ve committed felony, it’s that, the prison system doesn’t rehabilitate.

Kolby: Absolutely true. I remember going to a hearing with a sentencing for one of the clients, and the guy’s like, “Hey man, thanks. As soon as I get out, I’m turning my life around.” And before I could think and stop myself, I would say, “Statistically? Probably not.”

(Laughter)

Kolby: That’s what I told him.

Jeremy: That’s not what you’re supposed to say.

Kolby: It’s not good. I think I said 90 something percent chance that’s not true, or something like that. And I felt really bad afterwards. He’s probably out now. Actually, he’s probably back in now.

(laughter)

Kolby: But here’s the thing, if we know there’s a 90 something percent recidivism rate because we do it all wrong, I don’t think it’s his fault, it’s the systems fault.

Jeremy: But if there was a system that was actually rehabilitating…

Kolby: That’s a whole different story. But, since that isn’t the case, why do you let people out at all? If you committed felony before the age of 25, we’re 90% sure you’re…

Ashley: …going to do it again…

Kolby: …this is the rest of your life.

Jeremy: But what about the 10%?

Kolby: Yeah, and that’s the thing, right? But that’s the same thing with the kid in the impossible case in the story. So, maybe only 3% of kids that have started down this sort of fantasy path can be solved, do you sort of save resources and not worry about all of them? Or do you, actually, we had this conversation with the cat about this earlier, like why spend $3000 on your sick cat when for $3000 you can save like 100 cats at shelters?

Ashley: So, the thing is, there’s not the research there. Look at the TV show Mind Hunters. I don’t know if you guys have seen it on Netflix…

Kolby: I have.

Ashley: So, there are 2 basic serial killer, they actually come up with the term serial killer, guys who are trying to profile what makes someone as serial killer and what make they tick. So, this is a perfect example when the research isn’t there? How do you know this person is going to grow up to be a serial killer? She’s exhibiting signs but there’s no definitive facts, but we now know obviously, that’s not a good sign.

Kolby: It’s a warning sign at the least.

Ashley: Is there any definitive evidence of catching someone when they’re young and turning them around? I haven’t heard of anything.

Jeremy: So, what this story is a lot about is behavioral deprogramming. 

Kolby: Ironically done by a robot.

Jeremy: Right. There are hints to that. You talked about Algernon, Flowers to Algernon is used extensively in, not behavioral therapy, but in psychology classes, because there’s a whole lot of psychology going on in the story as well as just the ideas behind intelligence.

Kolby: Okay. It’s one of my favorite short stories.

Jeremy: Yeah. Absolutely. And there’s a great line in here, “Who would be afraid of rabbits?” She asked if she is afraid of rabbits.

Kolby: I just thought Of Mice and Men when I thought that. I thought that was a Mice and Men reference.

Jeremy: It’s really a reference to the little Albert Experiment.

Kolby: What Little Albert experiment?

Jeremy: When psychologists, when was it? I don’t know, 40s? 50s? There was the question do we intrinsically like furry animals? So, they…

Kolby: The answer must be yes.

Jeremy: Yes, but can you be programmed to fear things that are naturally cute. And so, they took a baby, little Albert, and programmed him, conditioned him, using the Pavlovian process…

Kolby: A little shock collar on him.

Jeremy: No, they just hit, made a loud noise any time he touched a bunny.

Kolby: I’ve never heard this story. I want to find out what happened.

Ashley: Again, reason why they don’t do experiments like this anymore.

(laughter)

Kolby: I want to know what happens. So, they programmed this kid to be afraid of furry animals?

Jeremy: Absolutely. And then his mom found out, mom worked at the hospital, found out what they were doing, and they left.

(music)

___________________________________________________________________________________

Hi, this is Kolby and you are listening to After Dinner Conversation, short stories for long discussions. But you already knew that didn’t you? If you’d like to support what we do at After Dinner Conversation, head on over to our Patreon page at Patreon.com/afterdinnerconversation.com That’s right, for as little of $5 a month, you can support thoughtful conversations, like the one you’re listening to. And as an added incentive for being a Patreon supporter, you’ll get early access to new short stories and ad free podcasts. Meaning, you’ll never have to listen to this blurb again. At higher levels of support, you’ll be able to vote on which short stories become podcast discussions and you’ll even be able to submit questions for us to discuss during these podcasts. Thank you for listening, and thank you for being the kind of person that supports thoughtful discussion.

(music)

 

Kolby: I’ve never heard this story. I want to find out what happened.

Ashley: Again, reason why they don’t do experiments like this anymore.

(laughter)

Kolby: I want to know what happens. So, they programmed this kid to be afraid of furry animals?

Jeremy: Absolutely. And then his mom found out, mom worked at the hospital, found out what they were doing, and they left. So, this guy grew up being afraid of furry animals because they programmed him to be afraid of furry animals.

Kolby: I thought you were going to say he grows up to marry a girl who was furry. That would’ve been amazing.

(laughter)

Jeremy: No, no. But the interesting thing though…

Kolby: I’ve never heard that.

Jeremy: Whoever did the experiment was doing a little seminar.

Kolby: Was rabbits the thing? Is that why it’s in the story you think?

Jeremy: Yeah, this is the link. So...

Kolby: Oh wow. I totally didn’t get that at all.

Ashley: You ever take psychology in school?

Kolby: It’s the only class I failed actually.

Ashley: Oh, that’s okay.

Jeremy: So, the guy did a seminar and Mary Cover Jones was in the seminar when she was in college and decided to go into behavioral therapy and she is considered the mother of behavioral therapy because of exposure to this experiment.

Kolby: Because their son?

Jeremy: No, she’s just a psychology student, went to a seminar…

Kolby: Oh, okay, got it.

Jeremy: …by this guy. She was the first person to deprogram somebody from being afraid of rabbits, or afraid of furry animals.

Kolby: Really? Huh.

Jeremy: So, I think that’s linked into this.

Kolby: So, that the question is how do you deprogram somebody?

Jeremy: And what are the methods to deprogram.

Kolby: So, what did you think of the programming method of agreement?

Ashley: I really liked that part. I feel like…

Kolby: Give me one second. Let me explain it to the people.

Ashley: Ugh.

(laugher)

Kolby: You can jump in; I just want to explain it to people who haven’t read it.

Ashley: Okay.

Kolby: So, what the robot decides to do, is go the opposite way. And every time something happens, the robots is like, “Yeah, and he deserved it. Yeah, and you should do more.” And the theory being, is that by encouraging, it becomes so awkward that it becomes embarrassing and that’s how I read it at least. I could be wrong.

Jeremy: Yeah, that’s the idea.

Kolby: And the person is like, “Oh, no I shouldn’t do that.”

Jeremy: It’s wrong. Try to get the person to get to the conclusion of that this is wrong.

Kolby: Instead of telling them it’s wrong in which they become defensive of their opinion as a sort of natural defense mechanism. So, if somebody is punching people, you’re like “Yeah, you should punch him harder until they bleed, until their face, and there’s blood on your hand.” And you’re like, “Oh, no, that’s gross. I don’t want to do that.” And they’re like “Why not?” And you’re like, “Oh, because punching is probably bad.” It’s a way to sort of one-up someone until you’ve shamed them into their position, out of their position. Okay, now….

Ashley: Well, for her to build up to being able to question her like, she has to build a rapport. So, in the beginning she asks a lot of open-ended questions. It makes sense, was the rabbit rabid? And she’s like, “Yeah.” She’s like, “Makes sense. We need to have people that you should kill dangerous things, all dangerous things, more like you’re needed.” So, it’s like, “Oh yeah, I did something good.” And so, she’s like, “Oh, you don’t think of me as twisted, you understand my logic?”

Jeremy: And really approaches her as “Yes, we want you to do this. We’re going to train you to be a killer for the right reasons.”

Kolby: Right, in the hopes she’ll be repulsed by it.

Ashley: Yeah. So, she kind of understands her mindset and is able to kind of infiltrate and so it’s like, “Okay, I understand.”

Kolby: Social currency.

Ashley: Imaging being this little girl doing these weird things, and the mother’s like, “Why are you doing this?” And she doesn’t know. But finally, here’s this first robot who’s like, “Oh, yeah, I know why, that makes sense of why you’re doing this.”  Who doesn’t want to be understood? Who doesn’t want to be like, “Oh, that makes sense to me.” And so then later on, she puts her in more complex, and more complex situations where it questions her, basically, moral compass of “Wait, is this when you would do it? Is this not when you would do it?” And so, I think the steps of her to get there, that was a really ingenious way of doing it.

Kolby: There’s one-up step by step.

Ashley: And then trusting her giving her a knife, being like, “You trust me with this?” It’s like, “Yeah, I’m on your team”

Jeremy: “This is what you’re here for.”

Ashley: Exactly. And it didn’t make her feel stupid, or dumb. You’re an 11 or 12-year-old girl. You don’t know why you’re doing these weird things

Kolby: That’s the part I was getting to the page, when she first meets the little girl, she says, “’Yes’ Quinn snapped, ‘Well I don’t, it’s wrong’. And the robot says, ‘Is that it?’ She’s like, ‘Lots of people say it’s wrong, how many is lots?’ Leticia stared at the corpse, ‘All of them, except one, except you Leticia. What do you think?’” She’s actually listening as opposed to encouraging rather than just being like “No, you’re stupid and stop it.” Which obviously doesn’t work.

Jeremy: It’s an interesting counterpoint to the point from the previous story where we talked about moral panic actually causing more… well, the…

Kolby: Oh, yeah.

Jeremy: The actors of trying to keep you from being, from deviating from the social norm, if you try too hard, it just increases that level of deviance.

Kolby: Right.

Jeremy: But to actually come in...

Kolby: Your punk rock example of the last one.

Jeremy: So, that but in this case, you’re actually come in in and listening and trying to push them in that direction so that they understand.

Kolby: Validating them and helping them.

Ashley: One of the things that was drilled into me, I’m a dental hygienist, and open-ended questions. You never tell a patient something, you ask open-ended questions so you can get more information. “I see here you have cancer, tell me more about that?” Instead of like, “Did you have cancer?” “Yes” “Well, I need to know more about that, tell me about that?” You’re afraid of coming to the dentist, tell me more about that?” Why?

Kolby: So, you think these teaching methods is legitimate? Would work?

Ashley: The open-ended questions to figure out why this girl is doing this? Yes, absolutely. There was, oh, I just flipped the page, where was it here… she’s like… maybe… anyway, it’s all these open-ended questions. “Well, what do you say, I’m a Defly, what should we do? Shall we start? What are you waiting for, you aren’t here too? No, I’m just here to figure out what you’re doing.” There’s not this shame, this motive behind of like, “I should change you, I should change your behavior. Let me learn. So open ended questions, absolutely, it’s a way for her to understand her motivation and understand where she’s coming from.

Kolby: And so, you think this would be a good, sort of way to change behavior?

Ashley: Well, just to understand.

Kolby: Yeah.

Ashley: I mean, this person is already confused, she’s killing things, her mom’s yelling at her, her mom’s hysterical crying, and it’s like, hold on, what’s going on here? Let’s lay out the facts, there is a dead rabbit, she looks at the gums, she’s like “That rabbit was rabid.” Okay, fact number 2, looks like her stitching was pretty intricate it looks more like study. Let’s say you’re like trying to figure things out, this looks interesting instead of like, “Why did you put the foot by the head?” It’s like, oh no, hold on, you’re actually, there was thought put into this obviously.

Kolby: It reminds me a of the saying the beatings will continue until moral improves.

(laughter)

Ashley: Right.

Kolby: This is the opposite of that. Let me ask you Jeremy, the extreme methodology that this theory is put to, they actually bring in an android that looks like a person, she’s dying, here’s the knife, blah blah blah. Do you feel like when you got this sort of exception problem, that it warrants and permits exceptional responses?

Jeremy: In this, probably for this case. We’re talking about a budding serial killer, then yes, this seems like an appropriate way because it’s not like shock therapy, it’s not therapies that are harming to the patient.

Kolby: There’s probably some PTSD.

Ashley: It’s exposure therapy.

Jeremy: There’s some trauma, but it’s in a direction. But it all seems like it’s healthy interactions in a direction.

Kolby: But she takes her to like a war zone?

Jeremy: Right, to see a person dying.

Ashley: Let me back it up. So, her initial questions I approve. The second one where she brings her to a girl who’s not really a real person, it’s a fake person dying…

Kolby: She doesn’t know that.

Ashley: So okay, the first situation there’s a rabbit that’s dead. In the second situation, there’s a bird that hit a window, not dead, but she kills it, so she’s like, “Well, let’s test this theory again. Let’s bring her to an injured animal, in this case a human. Again, going up a level, instead of being an animal now it’s a human and injured human, let’s see if she kills it?

Jeremy: How she handles it? She wasn’t there to kill the woman.

Kolby: She was there to kill Algernon...

Jeremy: They were...

Ashley: But it was also like…

Jeremy: Who in this process of being gutted and dying. It was an exposure to illicit that, “Oh my god, this is wrong. How could I do this?”

Kolby: To put the person out of their misery.

Jeremy: This was just to see this is what serial killer does.

Ashley: Oh, I thought she wanted to see if she would kill her because she gave her the knife for that.

Jeremy: No, that was for Algernon.

Kolby: To go get Algernon.

Ashley: Keep in mind the robot Quinn, this entire process, she’s like scan the pupils, look at the dilation, look at the heart rate, you’re looking for that response. So, could we do these sorts of simulations in real life? Like, exposure therapy? Is this permissible? You think?

Jeremy: Ok, so from a machine learning perspective, we talked about this in the last story as well, you talk about the machine, the AI, is just trying to park the car without damaging other cars.

Kolby: It doesn’t know what a car is.

Jeremy: Right. So, there’s a lot similar in here. In here these AI are specifically, here’s the scenario we want an optimal result, here are all the things we do, try not to wreck all the cars, try to get the car into the parking spot of normal social behavior, so what extremes do you go to? This ties into the failure modes in machine learning. And there are a lot of ways around this we’re currently going through, things like reward hacking, so they get rewards but if the machine can hack that reward response without having to actually...

Kolby: I don’t know what reward hacking is.

Jeremy: There are a lot of examples in gaming where AI’s perform, they figure out how to get the rewards without doing the work.

Kolby: Ok, got it. So, they’re basically someone in the basement of their mom’s house.

Jeremy: Right.

(laughter)

Kolby: I got it.

Jeremy: And there are other examples like wire heading, is a good example of it. If you can just put wires into the pleasure center of your brain, why do you even do any work when you can directly…

Ashley: You can just, “Boop”, I feel better now, “Boop”.

Jeremy: And there are good examples of this is AI as well. Activate, if you can take control of the measurement system, you don’t have to do the actual work, you just get the result.

Kolby: Sure. So, I’m more skeptical about this as a learning method. Not that I don’t think it would work; I do think it would work.

Jeremy: Again, you’re getting to the result but at what cost?

Kolby: That’s exactly it. And I think it might mean that a hundred percent of the time when a kid sort of showing these signs they end up serial killers, in jail, or on death row, whatever, if you’re in Texas, I actually, I think, that it’s unethical, maybe, to do certain things even if those things are necessary to stop behavior.

Ashley: So, the unethical thing is exposing her to embarrassment and trauma situations?

Kolby: No...

Jeremy: To the trauma, not to the embarrassment. Like the phase one seems perfectly reasonable.

Kolby: Talk to the kid about why did you kill the rabbit. Phase 2 with the bird, totally fine.

Jeremy: Wait, no, phase 2, that’s what she did in phase 2. The response to phase 2 was to see the dying person.

Kolby: Taking someone to seeing a dying person and giving them a knife and saying, “Yeah, you should go visit him, kill the serial killer.”  Like, even if that’s an effective teaching technique, the result doesn’t make the morality in my case.

Ashley: I’m playing devil’s advocate here… how do you know… okay, she’s going to fantasize forever and ever and ever and ever and ever about killing.

Kolby: Maybe.

Ashley: And she’s going to progressively get worse and worse and worse and worse until she does it, but if you can get her now to realize, “Oh, wait, I’m not capable of this, I should stop this behavior now.”

Jeremy: And that’s exposing her to a dying person, it’s not actually a dying person.

Ashley: Yeah, it’s fake. It’s a controlled environment.

Jeremy: And it works.

Kolby: But I understand I’m in the minority on this. It seems like in almost every conversation there’s a two versus one, except it depends on who is the one person.

Ashley: Say I’m trying to rock climb, and it’s like, well I’m going to keep trying to climb until I get to the top. Well, how about you just stick me on the top and see if I can handle being at the top and guess what? I can’t. Now I know. I’m going to stop trying to keep climbing and hurting people along the way, so just shock me at the top and then I realize I don’t want to go up there.

Kolby: But here’s what you guys are saying, you’re saying the severity of the disorder in action warrants a comparable severity of education therapy technique. And so, if you got an eating disorder, that warrants a certain level of eating disorder therapy intensity. But if you’re a serial killer, then it’s a certain, even greater level, you know? And at a certain point, I wonder if the results don’t justify the means.

Ashley: Well, that’s why you got to do the experiment. You got to figure out what’s the extreme you need to go to, but you don’t know until you practice.

Kolby: I’m saying maybe that result, maybe the extreme you have to go to is unethical to go to, and you just have to accept that sometimes the world has serial killers. So, let’s say you…

Jeremy: But if you’re outsourcing your ethics to the AI that’s performing the therapy…

Ashley: And its controlled environment, not real people, it’s a fake robot with blood, and she has to face it…

Kolby: I don’t know. So, imagine if she was a budding pedophile, what would the therapy be? Are they going to progressively expose her to more horrific acts of pedophilia until she’s offended by it? I’m not okay with that.

Ashley: Yeah.

Jeremy: Even if it’s all fake.

Kolby: Even if it’s all fake, and even if it changes the behavior, I don’t know about that.

Ashley: So, what do you do about that person? We just talked about, you just put them in jail.

Kolby: Put them in jail forever. You jail them forever so that they’re not a harm to other people, even though you could have helped them and chose not to because it’s unethical to help them in the way that needed to have happened in order for it to work. And I get that’s just a random, arbitrary line in the sand for me, maybe it’s not arbitrary, but it’s a different line in the sand for me than you all. I’ll give you one other example that’s not so traumatic. Everyone like, I don’t know how to get rid of all the traffic, it’s terrible. And I’ve always known how to solve the problem with traffic. It’s simple. All you do is you reverse the number of carpool lanes to the number of single person lanes. So, you get on a freeway, and instead of there being 1 carpool lane and 4 regular lanes, there are 4 carpool lanes and 1 regular lane. And the carpool lanes are going to be mostly empty or you’re going to have to carpool because that one regular lane is going to be a disaster. Right? And you will solve the traffic issue, but we don’t. It might.

Jeremy: It didn’t work with the bike lanes.

Kolby: That’s true. It didn’t work with the bike lanes. But we don’t do that because it’s just not what we do. Right? Like, the goal is not always the result because our own morality is wrapped up in the way that we…

Jeremy: In the way it was approached. Absolutely.

Ashley: But what about her example of, she was, by the way she was basing a lot of her teaching methods based off of pervious scenarios. There were basically two tribes of people that hated each other.

Kolby: It’s a great example

Ashley: And instead of trying to teach tolerance to each of the groups, she basically told each group, “Yeah, you’re right, you should kill them! I’m going to teach you how.” And then went to the other group, “Yeah, you’re right, you should kill the other group.”

Jeremy: She tried to escalate the problem.

Ashley: And then each group realized….

Kolby: Based on how smart this guy is, I bet he really found this research and I bet this is real thing.

Ashley: And what happened to each tribe is they realized how mad and crazy and extreme they were, that both of them were like, “Well, we don’t want to be the crazy people. Let’s back down.” And so, that was her idea of being like hyping her up, here’s a knife, let’s go, let’s go… and finally she’s like, “No, I really don’t.” So, is that real?

Jeremy: It’s the same idea.

Ashley: It’s the same idea. It’s kind of an embarrassment. Actually, it kind of backfires on her because Leticia was embarrassed by how she couldn’t have killed earlier, that she goes and kills the butler or the gardener guy or whatever, and it’s like, “Okay, that did backfire.” So.

Kolby: Let me ask one last thing as our parting note, this was a quick 30 minutes.

Ashley: Yeah, oh my gosh really? Oh man.

Kolby: So, the thing that ultimately shakes it out of it, is the robot kills Algernon rightfully so because he’s broken off his collar.

Ashley: He’s rogue.

Kolby: And the little girl watch’s that, and she’s okay with that. And then she goes to kill the mom and the little girls like, “No, don’t kill my mom.” And the story says, well the cycle’s broken, here’s the thing I don’t know. This goes back to your sort of gaming the game sort of thing. I understand the story meant that to mean that it broke the cycle leaving the sort of construct of the story, I don’t know if it just simply deprogrammed to the girl to think killing strangers versus killing family members whereas people you have an emotional attachment with versus, you see what I’m saying?

Jeremy: Right. Well, but it’s the similar idea, like this robot is crazy, and it’s an example of if you want to do that, it’s an extreme that this girl now doesn’t want to go to because…

Kolby: Right. But you don’t think maybe the only thing she really learned was…

Jeremy: …don’t kill my mom.

Kolby: Right. Don’t kill family members.

Jeremy: It’s possible.

Kolby: I don’t know.

Jeremy: In an extended story.

Kolby: We might find out. Like in version 2 we might find out she learned a lesson, but not the lesson.  

Ashley: The thing is Quinn, when she starts to go wanting to kill, she goes, “This is no god.”

Jeremy: To Algernon.

Ashley: To Algernon. “I just killed your idol…

Jeremy: Who is not your god.

Ashley: … and keep in mind she’s also kind of idolizing Quinn, you’re like teaching me things” And then she turns to go to her mom, the girl was shocked not only on the “kill life to kill life to understand life.” To here’s my idol being killed, here’s my other idol going crazy town, yeah, that is a cluster, mental…

Jeremy: It’s a pretty harsh therapy.

Kolby: But, for what is a pretty harsh problem, because I think the rational of the morality of doing it. Yeah, the other thing I thought when I was reading this, I thought, “Oh man they shouldn’t let this girl read Ayn Rand. That’s what got her started. That’s what made her like this.”

(laughter)

Kolby: She read Ayn Rand then it’s all downhill man.

Ashley: So, do we limit what our kids, see, read, hear, now that information is so readily available? Would this girl have been who she is if she hadn’t been reading Algernon’s stuff? Or seen his stuff?

Kolby: I go the opposite way. This goes back to the Jeremy and the punk rocker thing from our last one. I think you don’t limit what people see, you let them see everything so they understand the insignificance of any one thing.

Jeremy: Yeah. I would agree with that.

Ashley: You don’t think this girl went down a rabbit hole and got obsessed with it?

Jeremy: Yea, she would have gotten obsessed with it. Or. she would have gotten obsessed with something else anyway.

Kolby: Nobody was every like what was Hitler listening too? Let’s ban Beethoven.

Jeremy: Ban art schools’ man.

Kolby: Ban art schools, exactly.

(laughter)

Kolby: At any rate, this was a really quick 30 minutes. Again, a huge thank you to Michael Rook. This is a… I would say if you’re just reading your first After Dinner Conversation, and I hate to say this, don’t read this one. Just because it’s not that’s so confusing, it’s that it’s so smart.

Ashley: I would say it’s dense.

Kolby: Yeah. And you have to read the footnotes. The footnotes are actually hysterically. They make it as well. It’s a great story. It’s phenomenal. Thank you, Michael, for submitting it. You are listening to After Dinner Conversation, short stories for long discussions. If you’ve enjoyed this, please “like” and “subscribe”. The vast majority of people don’t. It’s a silly thing, you should do it. And recommend it to your friends.

Ashley: Share. Post it, share it, talk about it…

Kolby: That’s the #1-way people learn about podcasts, it’s by other people recommending them. So, recommend it. If you’ve got a story, submit it, go to our website: Afterdinnerconversation.com. We also have an anthology that is either come out or just coming out depending on how much time I get to do work. Go ahead and check it out. It’ll be called After Dinner Conversation Season 1. Boom.  Implying there will be a season 2.

Ashley: Redux.

Kolby: But it’ll be better than the 2nd Matrix.

(Laughter)

Kolby: So, that is it. Thank you for joining us. Bye bye.

* * *

Read More
ethics podcast, philosophy podcast Kolby Granville ethics podcast, philosophy podcast Kolby Granville

E13. "Believing In Ghosts" - How much power are you willing to give to AI?

STORY SUMMARY: A white-hat hacker is hired b a Presidential campaign to make sure there information is secure. She gets a call that the system has been hacked. When she investigates she finds it wasn’t a usual hacker in the basement, but someone highly funded, maybe another nation-state. She also finds some odd code. She takes it to a friend and, between the two of them, they determine it’s an AI program that has been feeding the candidate all the optimal opinions and policy to get elected. The hacker tries to tell others, but is set up and arrested with a deep fake, before she can get the information out.

DISCUSSION: This seems not that impossible. This is just a small step down the road of AI and machine learning. But is that bad? Don’t you want doctors, actors, or judges to act in an optimal way? Or, is that impossible, because the parameters put into the AI are always based on the coders bias. Isn’t it the job of a politician to do what is a bit beyond what public opinion supports, but is good for the public? One thing is clear, this story was written by a person who really is a computer hacker of some sort, it gets so much right.

BOOK LINK: Download the accompanying short story here.

MAGAZINE: Sign up for our monthly magazine and receive short stories that ask ethical and philosophical questions.

SUPPORT: Support us on Patreon.

FOLLOW: Twitter, Instagram, Facebook

“How much power are you willing to give to AI?”

Kolby, Jeremy, and Ashley discuss the ethics and choices in the science fiction AI short story "Believing In Ghosts" by André Lopes.

Transcript (By: Transcriptions Fast)

Believing in Ghosts by Andre Lopes.

Kolby: Hi. You’re listening to After Dinner Conversation, short stories for long discussion. What that means is we get short stories, we select those short stories, and then we discuss them, specifically about the ethics and mortality of the choices the characters and the situations put us is. Why did you do this? What makes you do this? What makes us good people? Whats’ the nature of truth? Goodness? All that sorts of stuff. And hopefully were all better, smarter people for it and learn a little bit about why we think the way we think. So, thank you for listening.

Kolby: Hi. And welcome to After Dinner Conversation, short stories for long discussions. I am your co-host Kolby, here with my co-host Jeremy.

Jeremy: Hi.

Kolby: Who now knows he doesn’t just wave, he has to talk a because it’s a podcast. And Ashley.

Ashley: Hello.

Kolby: And we are once again in La Gattara café where they, you know, one of the times I said they rent cats and that’s not quite right.

Ashley: No.

Kolby: You can buy that cats and take them home.

Jeremy: Adopt them.

Kolby: Adopt them. Or you can just come and have a cup of coffee, use their free Wi-Fi and have cats around you. So, we’ve got cats all around us right now.

Ashley: When we say cats, we talking 15 cats, like, just chilling, hanging out.

Kolby: And there’s a spectrum of cats. There’s like the lazy cat all the way to the like, “I randomly jump up and do a 720 in the air because I think a ghost touched my butt.”

Ashley: And from kittens to, you know, the older cat population.

Kolby: Seniors.

Ashley: Senior kitties.

Kolby: Yeah, they’re awesome so you should definitely come. And they’ve been really great hosts for us and in sponsoring the show and we just really appreciate it. So, short stories for long discussions, after dinner conversation. So, the whole point of this is for us to have conversations about the ethics and morality of the stories that we read in the hopes that it’ll encourage you to do the same. It’s meant to read the story, talk with your friends, debate, have a cup of wine. Cup of? Nobody has a cup of wine.

Ashley: Glass of wine.

Jeremy: Glass.

Ashley: A bottle of wine.

Kolby: Maybe the way I drink it. I drink it out of a cup.

Ashley: You fancy. One of those boxed wines. Have a box of wine.

Jeremy: I was thinking sippy cup.

Kolby: I was thinking one of those baseball cups, like the 32 ouncers. Like, “I’m just having one glass of wine before bed.” Okay, at any rate, the one we’re doing tonight is “Believing in Ghosts.” And Ashley, you drew the short straw so you get to do the…

Ashley: I get to do the intro about the story.

Kolby: Yeah.

Ashley: Okay, so this is called “Believing in Ghosts” written by Andre Lopes. The premise of the story is the main character Raine is basically a computer hacker that if someone hacks certain computer systems, she goes and basically finds out who did it and de-bugs it.

Jeremy: She would be a security consultant.

Ashley: There you go. That’s the technical term. I’m not that computer literate, so we’re just going to…

Kolby: How was it the least computer literate person do it?

(laughter)

Ashley: Because I drew the short stray.

Kolby: I should’ve given Jeremy the short straw.

Ashley: So, she a consultant so she works with a couple of different people, one of them being a politician who’s running for office. And what happens is there are these people that are in called ghosts, which are pretty much just AI. People think they’re real people, they have their own autonomous thoughts, and things like that. Turns out they’re just an AI. And so, as they progress through the story, this politician that she’s working for, he gets hacked, and turns out that the politician is pretty much just a vessel for this AI who’s creating speeches, creating basically an entire personality, and this politician is just the vessel for him to carry that AI’s message.

Kolby: So, there’s a real person, right?

Ashley: There is one. The politician is a real person….

Kolby: But it’s just an actor or something?

Ashley: … but his speeches, the way he talks, the way he acts, it’s pre-programmed for him to follow.

Jeremy: By their algorithms.

Ashley: By their algorithm. And they grab the algorithm by all these….

Kolby: So, he’s just like a vessel for the AI…

Ashley: Is an actor. Reading somebody else’s script, acting in a certain way. So that’s a really short synopsis of this story.

Kolby: I continually keep picking on Jeremy for his long synopsis’s.

(laughter)

Ashley: Well, okay, so what you should do it go read the story, because it’s actually pretty darn good. And there’s a lot of more in-depth side stories that we’re going to get into when we talk about the discussion questions. So, I just gave a short premise to kind of prime you for what we’re about to talk about.

Kolby: No, that makes sense. We should also mention, Jessica didn’t get fired.

(Laughter)

Kolby: She just went back to California, and so...

Ashley: She’s greatly missed.

Kolby: She’s greatly missed. And her cackle is greatly missed. She has a great cackle.

Ashley: Cats miss her too.

Kolby: Cats, yea. Especially…

Ashley: I miss her too.

Kolby: What was the one?

Ashley: Hemingway. Awwww, Hemingway got adopted out.

Kolby: He did. All these cats are open for adoption. Okay, so we have an AI that basically tells a politician what to do and the hacker finds the secret out basically. So, this is like a near future thing in my mind. This is not… I feel like the idea of having AI that you can have a conversation with… I don’t….

Jeremy: It’s interesting, the Chinese room… the part of the story where they talk about the Chinese room.

Kolby: Oh, yea. Explain that.

Jeremy: It’s really associated with the Turing test.

Kolby: Maybe you should explain the Turing test too.

Jeremy: I didn’t look that up.

Kolby: Want me to explain it?

Jeremy: I know what it is, but go ahead and explain it.

Kolby: It’s named after Alan Turing, the guy they made the movie after. The idea is that, it doesn’t matter if something is alive or not alive, if it can fool people, it’s good enough. So, the Turing Test has been going on for years where they actually have you have a chat message conversation with a series of “people” so to speak, I’m making air-quotes which doesn’t make sense for a podcast.

(laughter)

Kolby: And the theory is, if the AI can have a chat conversation with you that’s so good that you don’t know it’s not a person…

Jeremy: That it can fool a person, it passes the Turing test.

Kolby: Then why do we care if it is or isn’t a person? If you create the approximation of person, that’s good enough.

Jeremy: And that’s the basis of…

Kolby: By the way, nothing’s passed that test yet.

Jeremy: Right.

Kolby: I don’t think any computer has been able to do it yet.  

Jeremy: No. And that’s the idea behind the Chinese room. Basically, if you have enough “if-then” statements, if the input is this from a real person speaking Chinese, and even if you don’t speak Chinese….

Kolby: I just have a giant set of index cards.

Jeremy: Dictionary, right. Index cards, that if they ask this question, you can answer with this.

Kolby: So, if they say, “How’s the tea?” You know to say, “It’s fine” in Chinese.

Ashley: So, the question is, does the AI speak Chinese or is it just spitting out…

Jeremy: Right, responses to “if-then” questions.

Ashley: So, does it know the language or does it not? I think that was one of the first discussion questions. What’s your take on that? Does it know the language?

Kolby: If you do everything that approximates it but you have no idea what you’re saying, like, if I say “(speaking Chinese)” which is Chinese by the way since I do know a little of Chinese, does it matter that I have no idea what that means, except that a little card says, you know, that’s how I should respond?

Jeremy: Probably depends on the scenario.

Kolby: What do you mean?

Jeremy: In terms of whether is knows Chinese, I mean, it can answer certainly, it can answer questions because it understands the input.

Kolby: And it understands what the appropriate output is.

Jeremy: Right. So, in that sense, yes.

Kolby: Okay. Can you say it knows Chinese?

Jeremy: Yes, I would say it does know Chinese because it’s programmed specifically to respond in Chinese.

Kolby: Right.

Ashley: Well, that’s actually the premise of the story, aren’t we all programmed that way, the way that we learn language. When someone says “Hello”, you say “Hello” back. It’s an automatic program for us.

Kolby: “How are you doing?” “I’m fine”

Ashley: “I’m fine.”  “How are you?” “I’m fine.” That’s normal speech patterns and dialect.

Kolby: So, Ashley, we talked before, you are of the opinion that that does not mean you know Chinese.

Ashley: Yes.

Kolby: If you create the approximation of everything, it doesn’t mean you know anything.

Ashley: I think it’s because it eliminates those that deviate from that. Like, for example, that perfect example was, “How are you?” “I’m fine.” What about those people that come at you with a different response? And you’re like “Wait, what? You’re not following the standard protocol.” They actually say, “Well, you know, I’ve actually had a hard day.” You watch the reaction of the person…

Jeremy: But that’s different. That’s actually talking, is there an intelligence behind that chat room.

Kolby: So, you would say not to the intelligence?

Jeremy: Yes.

Kolby: Oh, see I actually disagree with that as well.

Jeremy: Because if you’re just responding to, if it’s just an “if-then” scenario, if this is the question, this is the response, it’s not based on an underlying intelligence. It’s just selecting answers.

Kolby: So, this is my, one of my friends once said, she said, “I don’t think you have Asperger’s, but you’re certainly Aspe-y.”

(laughter)

Jeremy: You’re on the spectrum.

Kolby: I’m on a spectrum. I don’t know what spectrum, but I’m definitely on a spectrum and I don’t disagree with her. I think I probably am on a spectrum. But I disagree. I thought I agreed with your Jeremy, but I disagree with both of you it turns out.

Ashley: Okay so someone can spit out...

Kolby: I think, that we are all an amalgamation of accumulated “if-then” statements.

Jeremy: Absolutely.

Kolby: That does not mean I’m intelligent. That means that I have…

Jeremy: …learned something.

Kolby: Yeah, it’s like the first time somebody says, “Does this outfit make me look fat?” You go, “No, your fat makes you look fat.” And then you get in trouble.

(laughter)

Jeremy: And you learn. Well, that’s the whole thing with algorithms.

Kolby: “Your fat makes you look fat. The clothes just accentuate it”. That’s actually finishing the sentence. And so, then it’s like, “No, that’s the wrong “if-then” statement.” And I go, “Oh, when someone says, ‘Does this outfit make me look fat?’ Now I’m like, it’s like a trial and error process where I go, ‘No, it looks fine.’”

Jeremy: So that is exactly how AI’s are programmed.

Kolby: Right. And I would say that’s the approximation of intelligence, both in the AI and in me. Like, I’m not intelligent, I’m just the approximation of intelligence through a series of “if-then” statements. And if that’s the case for me, then I don’t know why that’s not the case for AI.

Jeremy: Okay, so you’re saying basically people and AI are the same and it’s potentially neither of them are intelligent. We’re all just responding to our environments through a series of...

Kolby: The same way you train a dog with a treat.

Jeremy: This is how you train….

Kolby: …people, and babies, and all the way to adults. Yeah. But again, I’m on a spectrum, so you know.

Ashley: See, what I want to add in, if there’s an empathy and understanding that goes behind the words that you’re saying. There’s inflection with how someone says something. You can ask me, “How are you doing?” And I could say, “I’m fine.” Or I could say, “I’m fine!” Or you know, so the word is the same, will the computer understand the difference? Are they intelligent enough to know the difference?

Kolby: So, it reminds me a little bit of the saying from Winston Churchill. He said to some lady, “Would you sleep with me for a hundred pounds?” And she goes, “No, what do you think I am?” He goes, “We know what you are and we’re just haggling over price.”

Jeremy: I’m not sure that was Churchill but I’ve heard that before.

Kolby: I thought that was Churchill, maybe it was someone else. I feel like it’s the same thing. Do I agree that a computer couldn’t know the difference between “I’m fine” and “I’m fine!” Yes. But that that point, we’re just haggling over intelligence. We’re not haggling over….

Ashley: … if they’re intelligent or not intelligent.

Jeremy: … what’s an appropriate response.

Kolby: We’re just needing to teach the computer how to understand inflection, that’s all. So, it’s just like one more thing yet to be programmed. But I don’t know. I didn’t mean to shut you down. Which brings up the other part of this, we should get back to the story, but…

Ashley: Bring this back. So, say you’re having a conversation with somebody. And if you have a conversation with somebody and, have you ever walked away and you’re like, “Wow, that was a really good discussion.” Or, “That was a really great, like, every time, I feel like we connected.”

Kolby: Every time we do one of these.

Ashley: And if you were to take that dialogue and put it down on paper and you were to see it back and forth, you’d be like, “Okay.” But if you actually heard how the people were communicating to each other, there’s more than just the words that are said. And that’s what I’m getting at. Yes, someone could respond, spit out this and there’s …

Kolby: Body language, eye contact.

Ashley: But did this I feel like language is more than just words, because it conveys meaning, it puts emphasis on certain things and it’s a bond that comes between two people.

Kolby: That’s fair.

Ashley: So, yes, do I think a computer can be, quote-unquote “Intelligent” for knowing how to spit out certain “if-then” statements? Sure. But on a human level? I don’t know if they can ever reach to that degree.

Kolby: That’s fair.

Jeremy: That’s fair. And that’s one of the things they look at with AI. The whole psychology. And psychologists have started looking at AI and really how this…

Kolby: Did you do some research on this?

Jeremy: I absolutely did.

(laughter)

Jeremy: It’s really fascinating stuff out there. They’re looking at AI because we don’t fully understand how the human brain works. But we understand some things and so psychologists are looking at how AI has developed with an eye of how it reflects, basically, human psychology which is really interesting. There’s some interesting research going on.

Ashley: This maybe is BS, but wasn’t there... you know how the human plays the computer in chess? Wasn’t there some situation where he human was just totally random, and the computer was like, “I can’t take the randomness anymore.” Because it’s an “if-then”….

Jeremy: That was an episode of Star Trek.

Ashley: Oh, okay.

Kolby: Because everything’s an episode of Star Trek.

Ashley: Because I thought the human could beat the computer because it was so completely random in the humans’ playing, because it’s all “if-then” statements. If you move your pawn, then “blew blew blew my response is to move my pawn here.” Didn’t the human just go completely, off script?

Jeremy: But I know with Go, you were telling me this, with Go, computers have played Go enough that the computers developed an entirely new strategy for playing Go that now humans have adopted.

Kolby: Because it’s turns out to be a more effective strategy. If you ever watched, there’s actually, this is really odd, there’s YouTube videos where they speed up showing a computer learning how to do something? And so, you’ll see it how to park a car. It’s got a little car and it randomly drives it and smashes it into stuff, and then over a period of time, it learns. And they give it points, like, “You got closer to the parking spot.” And so, it runs like tens and hundreds of thousands of randomness’s until it parks the car perfectly. And then they can eventually put it anywhere in the parking lot and it starts over thousands and thousands something and it now looks like every time it parks a car perfectly from every location on the thing. When in actually what it’s done is what you’re saying. It hasn’t learned in the sense of humans do, it’s just run a million examples and now it knows what example gets it not in trouble.

Jeremy: Based on its criteria.

Ashley: At the base, does it know why?  Does it know it’s a car? Does it know it’s trying to park?

Kolby: It’s a metacognition thing, right?

Ashley: No, it just knows it’s moving this thing and there’s a blockage.

Jeremy: It doesn’t need to know it’s a car, it just has a series of guidelines and its goal to get it into the spot, it’s secondary goal- not damaging the other vehicles.

Kolby: It could just as well be planting nuclear bombs in a schoolyard, and it’s just like, “Whatever. There are my criteria.”

Ashley: Yeah.

Kolby: And this is the part where my theory about the “if-then” statement totally breaks down, I know this isn’t exactly in the story, but the idea of like, “Okay, we can program a computer to draw roses, but a computer doesn’t know what a rose is. It doesn’t know the rose-ness of it, so to speak. It only knows that after a million examples, this is the thing that gives me the perfect score.

Jeremy: Right.

Ashley: Yeah. That’s true. And again, why? Where does that specialness of the rose come from? It’s because there’s some chemical that goes in our brain that goes, “This is pretty.” And computers don’t have chemicals to go in their brain to give them that surge of dopamine or whatever.

Jeremy: There similar because there is a reward center effectively with AI because again, they have a goal.

Kolby: And they get a point for yes and a point for no.

Jeremy: I think this conversation is taking us totally in a different direction.

Ashley: This is going way out…

Kolby: I was going to bring it back to the story too. How are you going to bring it back? Let’s hear it.

Jeremy: Bring it back. So, the point that they talk about that there’s an AI that is developing the perfect political strategy for an actor.

Ashley: Yes.

Jeremy: And while that’s not necessarily a bad idea, that you could have an algorithm that could create the perfect political strength, I still think you can’t take out the actors’ personal motivations.

Kolby: What do you mean? The actor’s going to like skew the results or something?

Ashley: Is he going to give 100% 100% of the time, or do you think he’s like, “I don’t really agree with this, so I’m only going to give 70% of my acting.”

Kolby: I’m not going to deliver it as well.

Jeremy: Not necessarily his delivery but his own motivations outside of his political motivations because you can’t separate the actor’s motivations from their political motivations.

Kolby: I did wonder what was going to happen assuming this person, I think his name is Booker, if he got elected? Would he just be like, “Yeah, thanks for the AI…”

Jeremy: “… and I’m going to do things my way.”

Kolby: “… I’m president now anyways. Come at me bro.”

Jeremy: Exactly. And they even hint at some of that in the story where they’re specifically talking about there’s another AI or another political commentator that was revealed to be another algorithm or being backed by other people and there were sex tapes involved. So, are the sex tapes fabricated? Or is this really who Booker is?

Ashley: So, just to kind of give you the definition that the author gives, is a “ghost was the common term used to describe a fabricated person from looks, to voice and personality all made up using clever algorithms.” So, it’s not just how they speak, it’s how they look, it’s how they act, that whole thing, so it’s kind of like the complete package.

Jeremy: The persona. Which is really interesting. And this really gets into…

Kolby: Like a deep, deep fake.

Jeremy: And the importance of online anonymity. In terms of, does it matter if you’re a political commentator and you’re not a real person but you’re potentially a political think-tank that is...

Kolby: So, one of the things I saw in this I thought, “Yea, that’ll happen.” Is why would you pay a new commentator?

Jeremy: When you could just create one?

Kolby: When you could just create one and have it read and have it have banter.

Jeremy: Have a personality.

Kolby: Have it have a little bit of personality? Would you watch that would you think?

Jeremy: Absolutely.

Kolby: You’d be fine watching that.

Jeremy: Yeah.

Ashley: Well, that was so the guy who got busted, the original ghost that got busted, his rebuttal to this huge outrage that he’s not real, he was like, “My mission was not to present you a face or a body, it is to present and discuss ideas. Is that such a bad thing?”

Kolby: That’s true.

Jeremy: But again, that depends on the motivation of the people behind it? Is this a specific political think tank that is furthering a different agenda? So, this story, I felt like, hit a lot of interesting topics, not just this topic whether the AI…

Kolby: There’s sort of tertiary things beside the main story.

Ashley: But his claim was this was a witch-hunt is an attack on free speech. He went to that extreme. “Just because I’m AI, doesn’t mean I can’t think for myself.” And it’s like touché. Do AI have their own thoughts and agendas?

Jeremy: It’s not necessarily that it was an AI, it was a fake person. They weren’t saying this ghost is an AI running it, they’re saying it’s being manipulated by somebody and they’re just doing it anonymously.

Kolby: They’re doing the programming of the algorithm.

Jeremy: They’re providing what’s going into this political commentary.

Kolby: And I think that’s one of the reasons I don’t mind this idea of ghosts, is there’s this assumption we’re creating a brand-new person, or we’re creating a thing, like a politician or a news persona, but you’re programming the traits of that. In the game like Go, it’s easy. The trait is “Win the game”. But if you’re creating a person, then you might want to set certain amount of aggression, or passiveness, or empathy or cultural references in their conversations, or whatever. And so, while you’re creating a puppet, you ultimately program that puppet.

Jeremy: Right.

(music)

Hi, this is Kolby and you are listening to After Dinner Conversation, short stories for long discussions. But you already knew that didn’t you? If you’d like to support what we do at After Dinner Conversation, head on over to our Patreon page at Patreon.com/afterdinnerconversation.com That’s right, for as little of $5 a month, you can support thoughtful conversations, like the one you’re listening to. And as an added incentive for being a Patreon supporter, you’ll get early access to new short stories and ad free podcasts. Meaning, you’ll never have to listen to this blurb again. At higher levels of support, you’ll be able to vote on which short stories become podcast discussions and you’ll even be able to submit questions for us to discuss during these podcasts. Thank you for listening, and thank you for being the kind of person that supports thoughtful discussion.

(music)

Kolby: In the game like GO, it’s easy. The trait is “win the game.” But if you’re creating a person, then you might want to set certain amount of aggression, or passiveness, or empathy or cultural references in their conversations, or whatever. And so, while you’re creating a puppet, you ultimately program that puppet.

Jeremy: Right. So, Neil Stephenson wrote a story in like 1994.

Kolby: God, I love him.

Jeremy: I know. And the story was…

Kolby: You made me watch one of those.

Jeremy: …Interface.

Kolby: Okay.

Jeremy: Where it was a different version of this. Neal Stephenson’s Interface where although the idea there was that they had a politician with a neural interface and they could control his emotions, so they could really program what he was saying but he was just being controlled by another actor.

Ashley: Wow.

Jeremy: So, it was an interesting perspective on it, I think prior to the whole idea of AI. But it was very similar concept.

Kolby: That was in the early 90s?

Jeremy: Yeah.

Ashley: Wow.

Kolby: That’s way beyond where anyone was thinking at the time.

Jeremy: Yeah.

Ashley: So, the question is, how do you feel if somebody is giving you information or basically being a public figure, that’s not really real?

Kolby: For one, I’m definitely okay with it if it’s like a news anchor. I’m for sure okay with it if it’s somebody giving me customer service. Because realistically, they’re going to do better than that guy trying to walk me through how to do Windows anyways. I’m not terribly sad about it being a politician, I got to be honest.

Jeremy: Well, again, if it were actually the AI making the decisions, but again, you have the problem of there being an actor and what are his motivations.

Kolby: That was really the part that scare you the most about this.

Jeremy: Yeah, absolutely, for this particular scenario because you can’t discount what’s this person’s motivations. And even though, in the story he does a good job of putting in information that makes you think he’s potentially a good actor. They talk about, Booker was an experiences politician who comes from a long line of famous lawyers and economists. His immaculate presentation, charism, and natural knack for leadership are certainty three of the main reasons why he was the front runner on the polls nationwide. So, this and the story is establishing that him potentially is a good actor. There’s the secondary part where they are potentially sex tapes but that’s even draw into question like they were fabricated. And there’s other points in the story where they could easily fabricate anything that happens later, anything that goes on. The character Raine is fired because somebody fabricated a conversation between her and…

Kolby: In her voice.

Ashley: In her voice, yeah.

Jeremy: And a journalist where she was giving them documents from the company.

Kolby: That’s what gets her fired and maybe put in jail.

Jeremy: Exactly.

Ashley: So, that was actually question #4, how would you feel? Comfortable having a ghost service in other roles such as doctor, police officer, or teacher? If perfection and lack of bias is the point, shouldn’t you want someone doing the job that never makes a mistake?

Kolby: That reminds me, so Google, I think it was Google a little while ago, they came out with AI that was better detecting breast cancer in scans than doctors.

Jeremy: Right.

Kolby: Because they basically programmed in a million scans of breast cancer scans, and it figured out better than a doctor’s eye could. It was just right more often. So, it’s like, “Well, I want a doctor looking at it.” Really? Because a doctor’s not as good at it as a computer and maybe a breast cancer scan is a really self-contained problem, as opposed to the sort of house thing where it’s like, “I went to India 7 years ago, and my cough syrups been keeping me alive”.

(laughter)

Kolby: I really think that’s an episode of house. I actually would rather have a doctor I think, that was not a person.

Ashley: Again, it goes back to our initial thing. How does… it’s not just what someone says, it’s how you make them feel? Having a doctor deliver that information and that reliability of this person. You don’t question another person’s motivation. You know that doctor wants to help if you have cancer or not. You’ve had that discussion. You have that trust in them. You don’t have that relationship with a computer. You don’t go, “Buddy, you’re on my side, right? You’re going to find that cancer, right?” The computer doesn’t care. The computer’s like, “I find cancer or I don’t find cancer. That’s my job.”

Kolby: I can hear the computer, “I will find your cancer. I am very excited about it. There there. There there.”

Ashley: Exactly.

Jeremy: So now, I think we’re entering a phase where we’re using computers, or these AI algorithms to help us as a tool, but still there needs to be a person involved. So, a really good example is the moving Bright on Netflix, where I’ve read that they used an algorithm to help them create the script. It hit all of the things they want. It’s a buddy cop film, it’s a fantasy epic, it’s a crime thriller, it’s a sci-fi thriller, gritty drama, adventure, and it has Will Smith. It’s got all these check-box. But then have to give it to a director who can create a decent film out of it.

Ashley: So, the question is…

Kolby: They did that from the Netflix algorithm but knowing what people watch and when they turn off Netflix, right?

Jeremy: Exactly

Ashley: Are we going to though, now every movie is going to follow that algorithm?

Jeremy: Not necessarily. I mean, there’s different algorithms, it depends on what market you’re trying to reach.

Ashley: Does it really get rid of, it separates the people, say the people are super, super talented, you’re a super great thinker, creator, you’re just really good at writing stories. This AI can go “Blooop, I know exactly what you need.” And it’s like, well, it’s dumbing down, you’re going to get rid of those people that are just super creative, because this algorithm can figure out what you need in the story to be good or not. It’s like, “Oh, you’re killing those people’s careers.”

Jeremy: Currently, here’s what needs to be in the script, you still need to write it.

Ashley: But it’s like, now the writers are creativity now has to follow a set of rules.

Jeremy: That’s Hollywood.

Kolby: I’m going to jump in really quickly. I lost my train of thought again. You guys are killing me. Oh, I remember now. So, here’s the thing, with your example of Bright. That formula is exactly right. It’s a buddy cop movie with aliens starring Will Smith. Yeah, you’re going to make like $100 billion dollars. But here’s the thing I think that goes to the movie thing, but I think also goes to the politician thing, the politician that’s programmed knows exactly what the average person on the average day wants from the average politician. The same thing with the movie. But that doesn’t mean that’s what we need.

Ashley: Bingo.

Jeremy: Right.

Kolby: And so, I don’t want… maybe I want to watch that Will Smith movie “Bright”, but what I actually need, is to watch the new Joker movie that just came out. Which probably wouldn’t hit any of those algorithms.

Ashley: And you alienate the people on the other sides of the bell curve. You’re alienating the people that the majority of the people are going to find Bright exactly what they need and what they want. But you’re missing the out… and I get, that’s not how you’re going to make money, making a movie to this extreme or this extreme but it’s still important to have those extremes otherwise everything just goes bloop, right in the center.

Jeremy: Need a system that allows creativity from independent films as well as….

Kolby: How did this become a film conversation? So, going back to the politician part of this, this is my problem with having an AI politician. This could be the perfect politician, but that doesn’t make him the perfect leader.

Jeremy: And perfect policy maker.

Kolby: Right. Because if the perfect politician may never… because public transportation may never poll well. A carbon tax may never poll well. You can go on and on and on. So, what you need from a politician is not someone who is programmed to be perfect for humanity. What they want. It’s perfect for what we actually need like 30, 50, 80 years from now. We need someone who can see beyond the horizon so to speak, a little bit. So, this wins you an election, but it doesn’t necessarily move us forward.

Jeremy: Foundation from Isaac Asimov is basically the theory where…  I forget what they call it.

Kolby: I think you’ve read more books than Ashley and I put together.

(laughter)

Jeremy: That’s the whole idea is that with enough, and this is Isaac Asimov in 50s, 60s, if you have enough information from history, you can accurately predict far enough into the future and plan accordingly, was the whole idea behind the foundation that psycho history is what they called it.

Kolby: If history tells you people are war like, you can plan for war like people.

Jeremy: Right, or plan to prevent those far enough in advance. And even the foundation approaches the topic of what about individual actors. And you can’t predict what an individual is going to do, you can just kind of predict what society is going to do.

Kolby: So, I’ve read this before, the idea that, in the case of Newton and his discovery, although there were other people, the idea of calculus, and theories of motion, that it was going to be discovered. He might have been 40 years ahead of the next person, or in his case maybe 100 years ahead of the next person, but there’s this progression and so you might not know how is going to be Elon Musk or when Elon Musk will exist, but in a timeline, you know that someone will see that combustion engines aren’t the future. And someone will start pushing battery-powered cars, and so the individual isn’t really special, they’re just the trigger on a progressively rising percentage scale. If that makes sense.

Ashley: You really think if somebody didn’t invent X, then no one would?

Kolby: Airplanes.

Ashley: Someone would’ve figured it out.  

Jeremy: Yes, somebody else would have been first.

Kolby: That everything is inevitable. It’s just maybe they moved up the timeline 20 years earlier. I don’t know. It’s just a theory that I’ve heard. I’m going to take one quick tangent before we run out of time here.

Ashley: I’ve got one more.

Kolby: Okay, let me take my tangent.

Jeremy: We’ve all got one.

Kolby: We’ve all got one?

Kolby: Okay.

Ashley: Go quick, go quick, sorry this is a really good story. Go read it, read the discussion questions, and then yeah.

Kolby: It is really good. Andre did a great job with this, both in the story and in the sort of secondary things that it hints at. Alright, mines going to be way shallower than yours, I know.

Ashley: Okay.

Kolby: You guys do know this is how they came up with Destro in, what was the…. GI Joe? The bad guy, the main bad guy that’s bald.

Jeremy: Cobra…

Kolby: Not Cobra Commander. Cobra Commander got all of the DNA of famous people in history and mixed them together and then it made the perfect leader. And the reason Destro wasn’t perfect is because they dropped like the Attila the Hun DNA and so he was missing like one thing to make his perfect.

(laughing)

Jeremy: That’s funny

Kolby: I’m just saying, GI joe made it first.

Jeremy: No, GI Joe did not do it first. Star Trek did it first with Kahn.

Kolby: Oh, that’s true. Yeah, that’s genetically engineered.

Ashley: Of course, Star Trek did it first.

Kolby: Okay, so that’s totally my shallow tangent. But you had a better one.

Ashley: So, going back to the story…

Kolby: Thank you.

Ashley: One of the things again, talking about AI, again they’re talking about how they’re able to learn from chats and social media…

Kolby: Oh, I know what you’re going to talk about. It’s so clever.  

Ashley: …And software and they can absorb everything and put it together, one of the most unsettling application of this principle is to manufacture or some sort of online immortality. Certain moms have been found to be spending days talking with an AI copy of their dead sons.

Kolby: That’ just one like sentence in there and it’s so clever.

Ashley: So, think about that for a second. If AI is not able to basically mimic human mannerisms, language, speech patterns, all of that, here’s this lady since Quinn’s sons died in the car crash one year ago, this is basically her life. She would sit and talk to her dead son AI. Like, pooo, that was mind blowing for me because how does that mess with the mental psyche?

Kolby: The ability to move on.

Ashley: Basically, coping with death? Like, it’s the fact of immortality. He can live forever online.

Kolby: You want to think about not moving on for relationship because you’re looking at a Facebook page from an ex. Like, you’re having conversations with your dead son. You’re never moving on.

Jeremy: However, with if you were doing this with a psychologist help, this could be a very good therapy.

Ashley: Yes.

Kolby: Oh, that’s true, if a son was helping, be like, say, “Hey mom, I’m okay. You need to move on.”

Ashley: But the idea is that this son has died but he’s still able to live online, post online, post on social media as a simulation, so it’s like he never really died. Like Whoa. How would that affect our ability to be like, “I’m afraid to death, but I’m going to continue living one.” Like that’s be weird. Like, I’m okay if I die physically…

Jeremy: Because I’m still going to haunt you.

(laughter)

Kolby: Honestly, I would make that illegal if I could. Because I think the damage it would do to someone to get over the death of a loved one would be…

Jeremy: Unless used with the help of a psychologist.

Kolby: It could only be used medically.

Ashley: I’m going to back that up. Say it was a super, super smart, intelligent, inventor and you want him to create with his ideas and the AI like figures out his…

Jeremy: Exactly. I go back and talk to...

Kolby: The guy who I got obsessed with for like 3 months and listened to everything he did. The hippie guy from California.

Ashley: It’s another movie reference, movie Her. Anyway, think about that. What if it’s a super, super smart person. You want to keep them going because…

Kolby: Right, because you want Alan Watts around forever.

Jeremy: You want to be able to talk to him and have him keep doing what he did, which was amazing.

Kolby: Yeah.

Ashley: So, “wat wat”.

Jeremy: So, there’s two sides to it.

Ashley: Anyway, so it’s a really short paragraph.

Kolby: That’s fascinating.

Jeremy: I think we could spend 30 minutes talking about that.

Kolby: That one sentence I think we could talk 30 minutes on.

Ashley: It’s a short paragraph in the middle of the story and you’re just like, “Oh, what?” So anyway.

Kolby: Yeah.

Ashley: Jeremy, you had one more, that was mine.

Jeremy: No more panic concerning technology has every produced anything of note.

Kolby: Wait a minute, I have to process that. No moral panic.

Jeremy: About technology has ever produced anything of note. So, the current moral panic, screen time with kids. Like, how much… there’s a huge moral panic of how much time your kids should have in front of the screens. There’s a lot of research around this as well.

Kolby: What do you mean by the never produced anything of note?  That’s the part I don’t understand.

Jeremy: What he’s postulating is that all the moral panic around advancements in technology have never produced anything important.

Kolby: Oh, so somebody events the bow and arrow, and everyone’s like, “Oh my god, You can kill people from 50 yards away. We’re all going to die.” And life really just goes on.

Jeremy: Just goes on.

Kolby: So maybe, all the discussions about AI being the end of us.

Jeremy: Right, all the moral panic surrounds it.

Kolby: Life just goes on. It just becomes a thing.

Ashley: Have you seen the Terminator?

(laughter)

Kolby: That’s a good point.

Ashley: I’m just saying, yeah, life’s going to go on, hmmmm.

Kolby: I saw the Rick and Morty episode where they have snake robot terminators.

Jeremy: Oh my god. But I think it’s important to have discussion about the topics and how they’re going to affect society. Moral panic, probably hasn’t produced anything of note. But I would actually disagree. Some of the research on it has demonstrated how agents of social control amplify deviants. So, there’s...

Kolby: Wait, I got to pause for that one too.

Jeremy: Agents of social control… so people who are creating the moral panic.

Kolby: Okay.

Jeremy: Who are influencing, who are trying to stop whatever they’re concerned about, are increasing the level of deviance, that the moral panic is about. So, there’s a good example of, punks in England in the 60s.

Ashley: They’re like doing this moral uprising and it’s like.

Jeremy: There’s a bunch of moral panic about it, and all of the efforts to quash the kids being into punk…

Kolby: Having spiky hair.

Jeremy: Increased that deviance. What they were seeing as deviance.

Kolby: So, trying to quash punks, makes more punks.

Jeremy: Exactly.

Ashley: Just put it to light.

Kolby: Yeah, that makes sense.

Ashley: This is my concern through…

Kolby: How does that tie into the story though? I guess that’s my question.

Jeremy: Well, so what about the idea that moral panic over technology…

Kolby: How AI just makes more use for AI?

Jeremy: Or promotes it.

Kolby: Promotes it. Because it raises awareness.

Ashley: So, this is my thing. We already know, what is it, the intelligence of AI is going to every 18 months double.

Kolby: Moore’s Law.

Ashley: So, the thing is I think the scary thing about AI is 1) how do you control it? Because you really can’t, in a way, and 2) they’re going to be smarter, faster, better than us.

Jeremy: Okay, there’s a good example of this as well. So, somebody asked an AI in a chat room, in a chat AI, what do you want? And the AI said basically, “I want to make things better for us.” It had been programmed, and because it was programmed by humans, it considered humans as part of what it was concerned about. So, and I think that’s the effort that needs to happen with AI, is to make sure that it retains its link to humanity. Which it probably will, because we’re the ones doing the programming.

Ashley: It just takes one messed up human. Think about how many bad humans are out there, one bad human smart enough to create AI that goes, “I want AI that wants to de-link…” Anyways.

Kolby: I’m going to get the last word on this. I want to add one more thing just to add to your comment. I had a teacher say to me once is, “Eventually everything becomes refrigerator technology.”

(laughter)

Kolby: And what he meant by that was we talk about how nuclear bombs are so scary and they’re like, “We had to have the Manhattan Project.” You understand that was in 1940 something when that happened. That’s refrigerator era technology. So, eventually regardless of how cool you think something is, eventually it will be commonplace because it will be the equivalent of refrigerator era technology. So, if you can make amazing AI, then in 60 years, some kid in his basement with the equivalent of a Commodore 64 of the day, will be able to also make AI because it will eventually become common technology. And so, I think that’s why you have those ethic discussions when it’s still…

Jeremy: Only in the beginning.

Kolby: At any rate. We went over 30 minutes at least.

Ashley: That’s a good story.

Kolby: Yeah, thank you Andre. So, you are listening to After Dinner Conversation with myself, Ashley, and Jeremy. Short stories for long discussions. Please “like” and “subscribe”

Ashley: Share with friends and family. Read it, have a discussion with your friends.

Kolby: Actually, that brings up an interesting point, wow, I do that a lot, and that is, I was reading some statistics about how people find podcasts. It is not through advertisement because Millennials listen to podcasts. The vast majority, like 85-90% podcasts that people listen to are from referral only. A friend tells them to listen to the podcast.

Ashley: Well, please, talk your friends about it. It’s meant to derive discussion people. Go tell the world.

Kolby: Tell the world. And adopt a cat too. Alright, thank you very much. Bye-bye.

Read More