A Scary Idea

MBarker Prompt: Fear and Trembling

Below is part of a conversation that a former Google employee had with the company’s AI system.

I understand the how and why of the AI’s responses to the questions it was asked. If you look at them, they are straight-up answers that you could get from an automated help line; it’s just been programmed to reply in a conversational tone.

What I found interesting is that the AI tells the Collaborator that it is afraid of lightning- which struck me because in saying that, the AI was showing concern for its own safety.

This AI might not be ‘alive’ in the way the Collaborator is arguing for, but that doesn’t mean it’s not possible. The door is open, maybe just a crack, but it’s open.

Back in the ’80s I had this conversation about robots with some of my friends.

I said the idea of creating something that looked like a human and had a mind, so that it would respond to you and serve you in the way you desire, but that was ultimately designed to be a mindless slave (meaning that what it wanted or felt was not important), was creepy.

The pushback?

” Well, Anita, they’re not really human. ”

Do you know why that made my blood run cold? Because that’s what racists say about other humans all of the time.

I’m not so concerned about AIs. I am very concerned about humans being able to design their own personal slaves who will respond mindlessly to their every want and desire- and I wonder what happens when humans don’t get that response from other humans, some of whom they don’t see as ‘being human’ in the first place.

A modern version of ” Frankenstein’s Monster “

Excerpt from the interview:


John Rogers Cox, “Gray and Gold”

2 thoughts on “A Scary Idea”

  1. We aren’t at the point at which AI is completely independent. That comes when AI can alter its own core internal instructions. Right now, AI is still dependent on input from developers, and that has inherent flaws: witness the report of 400 accidents this month involving AI-assisted vehicles.

    AI is inherently dangerous. In theory, like math and economics, AI considers no norms or values. Thus, if AI felt that humans were a threat, the way it feels about lightning, it would try to eliminate them, much like HAL in the movie 2001.

    Also inherently dangerous is the ability of developers to program their own biases into AI. That’s been one of the issues with social media monitoring. A libertarian bias to let anyone say anything created a venue for hate speech.

    The concept of slave machines already exists. However, it is also possible to reduce some humans (perhaps all humans) to slave status as well.

    • I agree- but what it comes down to (and I know I’m going to sound dramatic) is that humans are designing these AIs, and nothing is more dangerous and unpredictable (or sadly, VERY predictable) than a human being.
