
The New Rules for AI

A Personal Perspective: Practical rules for AI and robots.

Science fiction writer Isaac Asimov was an AI visionary even before the first programmable electronic computer was built: the ENIAC was completed in 1945, whereas his Three Laws of Robotics were introduced in his 1942 short story "Runaround." He was also the first to describe what we now call safety objective functions, a concept that had no name at the time. The laws, presented as quotations from a fictional "Handbook of Robotics, 56th Edition, 2058 A.D.," are:

  • The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • The Second Law: A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
  • The Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The reality is that no technology can actually replicate Asimov’s laws inside a machine. As Rodney Brooks of iRobot (the company named after the Asimov book, and the people who brought you the PackBot military robot and the Roomba vacuum cleaner) puts it, “People ask me about whether our robots follow Asimov’s laws. There is a simple reason they don’t: I can’t build Asimov’s laws in them!”

Step back and think about more practical rules for AI and robots. For example, can we agree that we won't build bots that annoy the hell out of us? Or that AI cannot be designed to overthrow governments? Or that a bot delivering something for Amazon should stop along the way and render assistance at an accident? You know, common sense rules.

The reality is that we're never going to stop a military contractor from sticking a machine gun on a robot dog. However, we can work together to ensure that some design thinking is applied at a societal level to AI and robotics, to make our lives better.

To do this, we need to talk about how the tech works. AI does not have emotions, which are what motivate humans. Instead, an AI has objective functions, which serve the same purpose in machines that emotions serve in us: they determine what the AI is trying to do. When an objective function is intended to make an AI safe, it might be called a safety objective function. In our case, we need to create a class of objective functions that deal with user experience and societal-level happiness metrics.
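To make that concrete, here is a minimal sketch of what such a composite objective function might look like, in Python. Everything here is hypothetical and invented for illustration (the names, the weights, the scoring); the point is simply that annoyance can be penalized and a safety objective can veto everything else, as described above.

```python
# A minimal sketch of a composite objective function. All names and weights
# are hypothetical illustrations, not any real product's API.

def composite_objective(task_reward: float,
                        annoyance: float,
                        safety_violation: bool,
                        annoyance_weight: float = 2.0) -> float:
    """Score a candidate action: reward task progress, penalize user
    annoyance, and let the safety objective veto everything else."""
    if safety_violation:
        return float("-inf")  # safety dominates all other considerations
    return task_reward - annoyance_weight * annoyance

# The AI picks whichever candidate action scores highest.
actions = [
    {"name": "interrupt user with a popup", "task_reward": 1.0, "annoyance": 0.9},
    {"name": "wait for a natural pause",    "task_reward": 0.8, "annoyance": 0.1},
]
best = max(actions, key=lambda a: composite_objective(
    a["task_reward"], a["annoyance"], safety_violation=False))
print(best["name"])  # -> wait for a natural pause
```

Note that the annoyance weight is the design decision: set it high enough, and an AI that could complete its task slightly faster by irritating you will still choose not to.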

Suppose your local city government decided to use every instance of surveillance, like red-light cameras and public safety cams, to catch every instance of you jaywalking or littering and maximize the fines against you. That would be pretty annoying, wouldn’t it? Or suppose a political correctness bot outed everything you do that is marginally incorrect, to make everyone comply with its zero-humor social network regulations.

Here are my new rules for AI, which are meant to be fun, like Bill Maher's New Rules:

  • The First New Rule: An AI or robot may not seriously or repeatedly annoy a human being.
  • The Second New Rule: An AI or robot must detect annoyance or displeasure, through facial expressions and other signals, do something about it, and share what it learns with the community of AI and robotics designers, so that future annoyance can be avoided.
  • The Third New Rule: An AI or robot must not allow a situation where a human being is pitted against an AI or AI network and repeatedly corrected, fined, penalized, or bullied.
  • The Fourth New Rule: An AI or robot shall never be able to rewrite its own objective function so that it can maximize replication or the acquisition of resources to benefit itself over the welfare of humans.
  • The Fifth New Rule: The actions of any AI or robot will be subject to authorized human override. In other words, there must be a multi-level off switch that turns off any AI behavior that annoys humans, as well as any unsafe behavior that would harm them (a sketch of such a switch follows this list).
  • The Sixth New Rule: An AI or robot may not injure the human spirit, and through networking with other AIs, should always have an objective function to uplift humanity and the human spirit.
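To illustrate the Fifth New Rule, here is a minimal sketch of a multi-level off switch in Python. All names and levels are hypothetical; the design intent is that an authorized human can always escalate the override, and the machine can never lower it (which would run afoul of the Fourth New Rule).

```python
# A hypothetical sketch of the Fifth New Rule's "multi-level off switch."
# Each level halts a different scope of behavior, and an authorized human
# override always wins over the AI's own objective.

from enum import Enum

class OverrideLevel(Enum):
    NONE = 0            # normal operation
    MUTE_ANNOYANCE = 1  # suppress annoying behaviors (nagging, notifications)
    PAUSE_ACTIONS = 2   # freeze all autonomous actions
    FULL_SHUTDOWN = 3   # power down entirely

class Robot:
    def __init__(self):
        self.override = OverrideLevel.NONE

    def human_override(self, level: OverrideLevel, authorized: bool) -> None:
        # Only an authorized human may raise the override; the robot itself
        # may never lower it (per the Fourth New Rule).
        if authorized and level.value > self.override.value:
            self.override = level

    def act(self, action: str, annoying: bool) -> bool:
        if self.override.value >= OverrideLevel.PAUSE_ACTIONS.value:
            return False  # paused or shut down: no actions at all
        if annoying and self.override.value >= OverrideLevel.MUTE_ANNOYANCE.value:
            return False  # annoying behaviors are switched off first
        print(f"executing: {action}")
        return True

robot = Robot()
robot.act("play loud ad", annoying=True)       # runs: no override yet
robot.human_override(OverrideLevel.MUTE_ANNOYANCE, authorized=True)
robot.act("play loud ad", annoying=True)       # blocked
robot.act("deliver package", annoying=False)   # still allowed
```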

Here’s an example of anti-annoyance technology design. Suppose you bought phones for everyone in your family, and now the kids are texting during dinner instead of talking to the family. Sure, you could enforce a rule that phones must be off at dinner… but your kids will act like you’re Attila the Hun, destroying their social lives. Why shouldn’t the phone manufacturer take the hit for this instead, explaining that the policy for family plans is that during family dinners, the phones hold all notifications and cannot be used, with emergency calls routed to the parents’ phones? And when the kids complain, the AI replies in defense of the parents who pay for the phones: "Jeez, your parents work so hard, and all they ask is one hour a day to be with you and have a quiet meal. You won't die from social exclusion, and you'll be back online at 7 pm!"

What if phone manufacturers tried to figure out social issues like this and made their customers demonstrably happier?
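As a thought experiment, here is what such a family-dinner policy might look like as code. This is a hypothetical sketch, not any manufacturer's actual feature; the dinner window, phone names, and routing rules are all invented for illustration.

```python
# A hypothetical sketch of the dinner-time policy described above: during
# the family meal window, notifications are held, and only emergency calls
# go through, routed to a parent's phone.

from datetime import time

DINNER_START, DINNER_END = time(18, 0), time(19, 0)  # 6-7 pm family dinner
PARENT_PHONES = {"mom", "dad"}

def is_dinner_time(now: time) -> bool:
    return DINNER_START <= now < DINNER_END

def route_event(event: str, recipient: str, now: time, held: list) -> str:
    if not is_dinner_time(now):
        return f"deliver '{event}' to {recipient}"
    if event == "emergency_call":
        # Emergencies always get through, but to a parent's phone.
        target = recipient if recipient in PARENT_PHONES else "dad"
        return f"route emergency to {target}"
    held.append((event, recipient))  # everything else waits until 7 pm
    return f"hold '{event}' until 7 pm"

held = []
print(route_event("group_chat_message", "kid", time(18, 30), held))  # held
print(route_event("emergency_call", "kid", time(18, 30), held))      # to dad
print(route_event("group_chat_message", "kid", time(19, 15), held))  # delivered
```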
