I had an interesting conversation this week with Sarah on the podcast. We explored the idea of artificial superintelligence and of robots becoming self-aware, and what the consequences might be for us human beings. Of course that led us down all sorts of rabbit holes, like: should a self-aware robot have rights? There are those who believe we should teach robots how to feel so they can be empathetic toward humans. Which leads to the argument that if they can feel, they can suffer, and if they can suffer, then they should have rights.
One of my favourite sci-fi books is Philip K. Dick’s Do Androids Dream of Electric Sheep? It’s the book that Blade Runner is based on. It’s also the first book that made me stop and reflect on what our relationship should be to artificially intelligent robots and androids. Should we be obligated to treat one like a human if we make it look, act, sound, think, and feel like a human being? Technically it’s a machine – a machine that can be shut down, turned off, or decommissioned. Why should I treat it any differently than I treat my toaster or my smartphone?
Other rabbit holes we ended up down: is it intelligence that gives us our humanity, or is it something else? People like Bill Gates, Elon Musk, and Stephen Hawking say we should be very concerned about creating artificially intelligent beings or machines. Once a machine becomes intelligent enough to think independently of its creators, we lose control of it and end up in a Skynet situation.
So now I guess I need to prepare for both the zombie apocalypse and the machine apocalypse. Personally, I’d rather face zombies than self-aware machines.