We Don’t Trust Robots
I just got done watching the new Netflix movie I Am Mother. While we were watching it, my SO kept saying she didn’t trust the “mother” robot. Sure, the movie is set up to make you think something fishy is going on, but it got me thinking: in general, people don’t trust robots. Maybe it’s the Uncanny Valley, or maybe it’s all the media we’ve been consuming for decades telling us not to trust robots. Maybe it’s because I work in software, but I tend to think a robot can be trusted exactly as much as the engineer who designed it. Either way, I imagine this will be an issue for us in the near future.
An interesting thing I Am Mother touched on was a scene built around a complex Trolley Problem. Philosophers have been troubled by this type of question for centuries, but what will AI do with it? It’s even more pressing nowadays with self-driving cars. If the car is faced with a “choice” between killing 1 person or 4, what does it do? What about 1 child versus 4 adults? Does a programmer have to program that type of decision, and if not, should they? Lots to think about.
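To make the question concrete, here’s a deliberately naive sketch of what “programming that decision” could look like. Everything in it is invented for illustration (the function names, the weights, the idea of scoring people at all); no real self-driving system works this way, and that’s partly the point: somebody would have to choose these numbers.

```python
def choose_outcome(options):
    """Pick the option with the lowest total 'cost'.

    Each option is a list of affected people; each person is a dict
    with an invented 'weight' the programmer assigned. The uncomfortable
    part isn't the code, which is trivial -- it's deciding the weights.
    """
    def cost(option):
        return sum(person["weight"] for person in option)
    return min(options, key=cost)

# One person vs. four: a naive utilitarian sum picks the single casualty.
adult = {"weight": 1.0}
child = {"weight": 1.0}  # or should a child weigh more? who decides?

result = choose_outcome([[adult], [adult] * 4])
print(len(result))  # 1
```

The code fits in a dozen lines, which is exactly what makes it unsettling: the ethics live entirely in the weights, and the engineer (or their employer, or a regulator) has to pick them.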