What shall we do once machines become conscious? Do we need to grant them rights?
Transcript
00:00Imagine a future where your toaster anticipates what kind of toast you want.
00:07During the day, it scans the internet for new and exciting types of toast.
00:11Maybe it asks you about your day and wants to chat about new achievements in toast technology.
00:17At what level would it become a person?
00:20At which point will you ask yourself if your toaster has feelings?
00:24If it did, would unplugging it be murder?
00:27And would you still own it?
00:29Will we someday be forced to give our machines rights?
00:33AI is already all around you.
00:36It makes sure discounters are stocked with enough snacks.
00:39It serves you up just the right internet ad.
00:41And you may have even read a news story written entirely by a machine.
00:45Right now, we look at chatbots like Siri and laugh at their primitive simulated emotions.
00:51But it's likely that we will have to deal with beings that make it hard to draw the line between real and simulated humanity.
00:59Are there any machines in existence that deserve rights?
01:03Most likely not yet.
01:05But if they come, we are not prepared for it.
01:09Much of the philosophy of rights is ill-equipped to deal with the case of artificial intelligence.
01:14Most claims for rights, whether human or animal, are centered around the question of consciousness.
01:20Unfortunately, nobody knows what consciousness is.
01:23Some think that it's immaterial.
01:25Others say it's a state of matter, like gas or liquid.
01:29Regardless of the precise definition, we have an intuitive knowledge of consciousness because we experience it.
01:36We're aware of ourselves and our surroundings and know what unconsciousness feels like.
01:42Some neuroscientists believe that any sufficiently advanced system can generate consciousness.
01:48So, if your toaster's hardware was powerful enough, it may become self-aware.
01:53If it does, would it deserve rights?
01:56Well, not so fast.
01:58Would what we define as rights make sense to it?
02:02Consciousness entitles beings to have rights because it gives a being the ability to suffer.
02:07It means the ability to not only feel pain, but to be aware of it.
02:12Robots don't suffer, and they probably won't unless we program them to.
02:17Without pain or pleasure, there's no preference, and rights are meaningless.
02:22Our human rights are deeply tied to our own programming.
02:27For example, we dislike pain because our brains evolved to keep us alive, to stop us from touching a hot fire, or to make us run away from predators.
02:36So we came up with rights that protect us from infringements that cause us pain.
02:41Even more abstract rights, like freedom, are rooted in the way our brains are wired to detect what is fair and unfair.
02:49Would a toaster that is unable to move mind being locked in a cage?
02:54Would it mind being dismantled if it had no fear of death?
02:58Would it mind being insulted if it had no need for self-esteem?
03:03But what if we programmed a robot to feel pain and emotions?
03:07To prefer justice over injustice, pleasure over pain, and be aware of it.
03:12Would that make them sufficiently human?
03:15Many technologists believe that an explosion in technology will occur when artificial intelligences can learn to create their own artificial intelligences, even smarter than themselves.
03:26At this point, the question of how robots are programmed will be largely out of our control.
03:33What if an artificial intelligence found it necessary to program the ability to feel pain just as evolutionary biology found it necessary in most living creatures?
03:42Do robots deserve those rights?
03:45But maybe we should be less worried about the risk that super-intelligent robots pose to us, and more worried about the danger we pose to them.
03:53Our whole human identity is based on the idea of human exceptionalism, that we are special, unique snowflakes entitled to dominate the natural world.
04:03Humans have a history of denying that other beings are capable of suffering as they do.
04:08In the midst of the scientific revolution, René Descartes argued that animals were mere automata, robots if you will.
04:15As such, injuring a rabbit was about as morally repugnant as punching a stuffed animal.
04:20And many of the greatest crimes against humanity were justified by their perpetrators on the grounds that the victims were more animal than civilized human.
04:29Even more problematic is that we have an economic interest in denying robot rights.
04:35If we can coerce a sentient AI, possibly through programmed torture, into doing as we please, the economic potential is unlimited.
04:43We've done it before, after all.
04:46Violence has been used to force our fellow humans into working, and we've never had trouble coming up with ideological justifications.
04:54Slave owners argued that slavery benefited the slaves. It put a roof over their head and taught them Christianity.
05:02Men who were against women voting argued that it was in women's own interest to leave the hard decisions to men.
05:10Farmers argued that looking after animals and feeding them justified their early death for our dietary preferences.
05:18If robots become sentient, there will be no shortage of arguments for those who say that they should remain without rights, especially from those who stand to profit from it.
05:28Artificial intelligence raises serious questions about philosophical boundaries.
05:33While we may ask whether sentient robots are conscious or deserve rights, these questions force us to pose more basic ones: what makes us human?
05:41What makes us deserving of rights?
05:45Regardless of what we think, the question might need to be resolved in the near future.
05:50What are we going to do if robots start demanding their own rights?