A science fiction writer recently wrote a blog post about why it is important to read (and write) books with happy endings, where good people triumph over bad and good behavior is rewarded. His theory was that, in fewer years than we imagine, we could be overrun by robots, and that without humanizing them (by reading them these stories), we might face unintended consequences.
It was thought-provoking and rather unsettling, especially now that driverless cars are being tested, vacuum cleaners are switched on and told to do their thing, and our phones and other devices have the ability to run our lives.
My son wears a wrist device that tells him when he needs to get up and move. It tells him how often he wakes up in the night and what his heart rate is. I suspect that in a few years the next generation of the device will sound an alarm if he puts a potato chip in his mouth.
These are just a few examples of what we call artificial intelligence, which, according to the dictionary, is the capability of a machine to imitate intelligent human behavior. The blogger painted a scary scenario of a car deciding to crash itself—with the owners in it—rather than have its owners trade it in, because it heard them talking about it.
All this reminded me of an old film, 2001: A Space Odyssey, in which the computer takes over the spaceship and wages a battle of wits with the hero, who wants to shut it down.
Are we outsmarting ourselves by creating robots to do our work, computers to look things up for us, and cars to drive us? When we have personal robots, will we program them to have feelings? Or maybe they will evolve and develop feelings by imitating our behavior. That in itself is scary, because people aren’t just good…they’re bad, too.
So if I’m still alive when personal robots are the norm, you can bet I will be reading out loud books like Outlander, Pride and Prejudice, and The Notebook. I will avoid Stephen King and Dean Koontz.
No reason to tempt fate.