

Understanding the Pathologies of Deep-Learning AIs

Engineering and clinical approaches will combine to improve the minds of AI.

"There is not much theory behind deep learning" is a quote from Douglas Heaven's new article in Nature, "Deep Trouble for Deep Learning." This isn't to say researchers studying deep neural networks don't have mathematical or conceptual theories about how neural networks work. They do. The quote is rather about how little we know about the psychology underlying the way deep neural networks learn and the many ways they can go wrong.

Psychologists have been tackling similar problems with the gangling cognitive systems called humans for a long time. It can be quite difficult to understand what a cognitive system knows and what it doesn't. You have to know how to ask.

Computer scientists are taught to debug their code. But just as human pathologies are not always apparent in our DNA, the pathologies of deep-learning AIs are not always apparent in their computational code.

Sometimes you have to understand the conditions under which the AI learned. You have to understand the input data (what were its parents like, and what kinds of schools did it attend?). What past behaviors did it use to solve problems? Does it have short-term or long-term strategies? Is it a battle-bot or a problem solver? And what are its blind spots?

As I've written elsewhere in "Does my algorithm have a mental health problem?", deep neural networks have precarious blind spots. These blind spots allow antagonistic manipulation via adversarial images: inputs specifically designed to fool deep-learning image-recognition systems. Adversarial images can fool driverless cars into mistaking stop signs for speed limit signs or disguise porn as kittens.
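To make the idea concrete, here is a minimal sketch of the fast gradient sign method, one common recipe for producing adversarial images, assuming a differentiable PyTorch image classifier. The `model`, `image`, and `true_label` names are placeholders for whatever network and input you want to probe, not a reference to any particular system.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.01):
    """Nudge every pixel slightly in the direction that most increases the model's error."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)  # true_label: tensor of class indices
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()   # tiny, often imperceptible change
    return perturbed.clamp(0, 1).detach()             # keep pixel values in a valid range

# Usage (hypothetical): adversarial = fgsm_attack(classifier, stop_sign_batch, labels)
```

The unsettling part is how small epsilon can be: a change invisible to a human can flip the network's answer entirely.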

These problems are symptomatic of representational and processing problems. And such problems are extremely difficult to detect, even in humans. They often require trained clinical professionals who are good at what they do because they know how to ask the right questions and how best to evaluate the answers. One could say they know the lay of the pathological land. AI researchers are just taking the first baby steps into this land.

Two Solutions

There are two approaches to overcoming pathologies in deep learning.

One is incorporating what might be considered executive processing skills. In psychology, executive processing is associated with the capacity to inhibit low-level impulses based on specific rules. A common children's joke demonstrates the point. Say the word SILK as fast as you can 10 times. Now quickly answer the following question: What do mother cows drink? The first time you hear this joke, there is a strong impulse to say MILK. But after you've heard it a few times, if you're lucky, your executive system kicks in and says, "No, no. He asked what do MOTHER cows drink. Mother cows drink water." That's your executive system, for better or worse, and it often involves conscious processing of explicit rules that someone tells you.

Researchers are investigating how to implement this in deep-learning AIs. The approach is called 'symbolic AI,' and it needs to be layered on top of the deep neural networks operating in a more automatic fashion under the hood. Of course, hard-coded rules are only as good as a system's ability to detect when they are appropriate. A child who is told that a pair of copulating dogs are making puppies and later, after catching his parents in the act, tells his friends he'll soon be getting a puppy is misapplying the rule.
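As a toy illustration of the idea (not anyone's actual implementation), the sketch below shows a symbolic "executive" layer vetoing the fast, associative answer of an underlying network. The `neural_guess` function and the rule table are hypothetical stand-ins for a trained model and an explicit rule store.

```python
# Hypothetical rule table: condition -> corrected answer.
RULES = {
    "what do mother cows drink": "water",  # overrides the primed impulse "milk"
}

def neural_guess(question: str) -> str:
    """Stand-in for the fast, associative response of a trained network."""
    return "milk" if "cows drink" in question.lower() else "unknown"

def answer(question: str) -> str:
    impulse = neural_guess(question)
    # The symbolic "executive" layer checks explicit rules before the impulse is voiced.
    for condition, correction in RULES.items():
        if condition in question.lower():
            return correction
    return impulse

print(answer("Quick: what do mother cows drink?"))  # -> "water", not "milk"
```

The hard part, as the puppy example shows, is not writing the rules but knowing when they apply.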

A second solution is getting algorithms to engage actively with the world and test their own hypotheses. Humans are hypothesis generators, and children at least are insatiably curious. Small children playing with objects turn the objects this way and that, almost as if they are asking themselves, "I wonder if it's still a rattle if I hold it this way?"
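One simple, well-established version of this kind of self-directed learning is uncertainty-based active learning, where a model queries the examples it is least sure about. The sketch below is illustrative only; the synthetic dataset and logistic-regression learner are stand-ins for whatever system is doing the exploring.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# A synthetic two-class problem stands in for "the world".
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Start with a small labeled pool containing examples of both classes.
rng = np.random.default_rng(0)
labeled = list(rng.choice(np.where(y == 0)[0], 5, replace=False)) + \
          list(rng.choice(np.where(y == 1)[0], 5, replace=False))
unlabeled = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(20):  # 20 rounds of "asking the world a question"
    model.fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[unlabeled])
    # Query the example the model is least certain about (probabilities closest to 50/50).
    query = unlabeled[int(np.argmax(1 - probs.max(axis=1)))]
    labeled.append(query)      # the "experiment": obtain that example's label
    unlabeled.remove(query)
```

The model is, in a rudimentary sense, choosing its own experiments rather than passively absorbing whatever data it is fed.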

I used to have an office with a button on the wall with a small permanent sign that said: "Do not press." The button was obviously a leftover from a bygone era, and its function was unknown to everyone I asked. Every day I thought about pressing that button, until I eventually told my brother about it and he pressed it immediately. And nothing happened, as far as we know. Good AI will need to be more like my brother.

The problem with letting algorithms learn in this way is that they often need bodies, or at least robot arms, to manipulate things. This might be conceived as a limitation compared with standard AI. AlphaGo, the deep-learning algorithm that beat Lee Sedol, taught itself to play Go by playing millions of games against itself, and it did so in days, not years. A robot body with a similar attitude toward learning is likely to end up more like a blender than a robot.

But rapid learning may not be as much of a problem as many people think. Many of us still tend to think of AI as happening in one computer or one robot at a time. But AIs have a capacity for horizontal transmission (that is, learning from one another) that is unprecedented in humans. They can 'teach' one another things, collectively solve problems by combining their knowledge, and download memories from one another in ways that make the limitations of individual bodies old-fashioned.
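A rough sketch of what such horizontal transmission can look like in practice is federated-style weight averaging, in which independently trained copies of the same network pool what they have learned. This is a simplification under the assumption that the models share an identical architecture, not a production recipe.

```python
import torch

def average_weights(models):
    """Average the parameters of identically structured models into one state_dict."""
    state_dicts = [m.state_dict() for m in models]
    return {
        key: torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
        for key in state_dicts[0]
    }

# Usage (hypothetical): each phone, car, or vacuum trains its own copy locally,
# then a central service merges them and ships the result back as an update.
# merged = average_weights([model_a, model_b, model_c])
# model_a.load_state_dict(merged)
```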

My phone has an operating system that millions of people test every day and when it has problems a central testing and consolidation center (like Google or Apple) can fix it and download an update to my phone in minutes. Our future intelligent cars, vacuum cleaners, lawnmowers, and computational friends and assistants will enjoy similar mental services. Many of the apparent cognitive and physical limitations that apply to us are unlikely to hold deep learning back.

If deep learning can be combined with symbolic AI, this will take us some way down the road toward algorithmic wellbeing. And if deep learning can also be aggregated across active individual instantiations of itself, allowing it to combine the advances in learning created by hypothesis testing, then these mental health problems may be solvable at scale.

Here is one scenario. In the wake of numerous airline fatalities, airline safety became a collective effort by organizations such as the International Air Transport Association, which has reduced airline accidents to about one per 7 million flights. If AI gets promoted to positions where it can kill a lot of people at once, for example by crashing ferries or hypothesis testing while on duty at the local nuclear reactor, then we may expect similar kinds of collective efforts and safety regulators.

But, on the other hand, medical errors still kill thousands of people a year. An article in The Lancet estimates that about 142,000 people died from medical errors in 2013. Though these errors kill far more people than airline crashes (188 deaths in the same year), they typically harm only one or a few individuals at a time. Thus, they are not perceived with the same sense of dread as air travel. Dread risks earn a special place in our nightmare scenarios and tend to trickle up to legal action. Individual risks are not met with the same frightened aversion. Perhaps, as a result, they are not handled with the same regulatory authority. Medical errors, as many have noted before me, are often hidden from sight to avoid litigation, instead of publicized to avoid future errors.

While there is not a wealth of theory about deep learning at present, that theory will develop. It will develop faster with the benefit of domain knowledge from psychology, especially in areas of human learning and memory and their associated pathologies. But it will also develop with the benefit of realizing where human limitations can be overcome by engineering solutions that humans have yet to figure out how to apply to themselves.
