
Why Studies of Insect Vision Will Change Your Life

See the future of self-driving cars and other marvels in the brains of flies.

Key points

  • The best AI vision systems don't come close to the performance of a fly's brain.
  • The fly's brain is a good starting point for modeling an entire animal brain to achieve much better AI.
  • To support breakthroughs in applications such as self-driving cars, researchers mapped out every neuron and synapse of the fly's brain.

Ever wondered why it’s so hard to swat a fly? The little buggers seem to have a sixth sense, taking off just when you think you’re about to nail them, then quickly darting here and there faster than your head can turn, navigating around obstacles and choosing to land where you can’t find them.

Sure, with enough patience, you can (sometimes) track the pesky insects down and swat them into the next plane of existence, but think how hard it is for you—possessing a brain with somewhere in the neighborhood of 100 billion neurons—to defeat an organism with only one-millionth that number of brain cells (100,000).

That’s right, an animal with a brain that’s literally only a millionth as complicated as yours—and about the size of a poppy seed—can outsmart and out-maneuver you for longer than you’d like to admit.

As humbling as that realization is, the virtuosity of flies' avoidance behavior carries with it the hope of better AI for all of us, in everything from self-driving cars to Internet search engines that know exactly what you’re really looking for to lightning-fast, accurate medical diagnoses and treatments.

I offer up the brains of flies as a model for future AI as someone who has struggled mightily for the last few years developing applications of more conventional versions of AI, including popular Machine Learning (ML) tools such as TensorFlow and random forests.

Such AI systems can do marvelous things, such as face recognition, voice-to-text transcription, and other “narrow” tasks that are highly constrained, provided that you present the AIs with gazillions of training samples so that the ML systems can (and I’m simplifying here) memorize every possible combination of stimulus cues they are likely to have to deal with in actual operation.

But ML systems are notoriously “brittle” and break down if you present them with stimuli they’ve never seen before or if you ask them to venture outside the narrow task they’ve been assigned.
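
To make that brittleness concrete, here is a toy sketch in Python, using scikit-learn and entirely made-up synthetic data rather than anything from a real vision system: a random-forest classifier scores well on stimuli like the ones it was trained on, then falls apart when the same two classes are shifted into a region it never saw during training.

```python
# Toy illustration of "narrow and brittle" machine learning (synthetic data only).
# Assumes NumPy and scikit-learn are installed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# "Narrow" training data: two classes of stimuli drawn from a limited range.
X_train = np.vstack([rng.normal(0.0, 1.0, (500, 2)),
                     rng.normal(3.0, 1.0, (500, 2))])
y_train = np.array([0] * 500 + [1] * 500)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# In-distribution test: fresh samples from the same ranges seen in training.
X_in = np.vstack([rng.normal(0.0, 1.0, (200, 2)),
                  rng.normal(3.0, 1.0, (200, 2))])
y_in = np.array([0] * 200 + [1] * 200)

# Out-of-distribution test: the same two classes, shifted into territory
# the model never encountered.
X_out = X_in + 10.0
y_out = y_in

print("accuracy on familiar stimuli:  ", clf.score(X_in, y_in))
print("accuracy on unfamiliar stimuli:", clf.score(X_out, y_out))
```

A fly’s brain, by contrast, doesn’t need the world to look like its training set; the same escape behavior works across wildly different scenes.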

Worse, AI experts I’ve consulted agree that there is nothing on the near horizon of AI research that promises to make ML systems or other forms of AI come remotely close to the awesome performance of a fly’s brain.

Think about what the fly does when it avoids an early demise at your hands in the kitchen: Regardless of your size, your apparel, the room’s lighting, your direction or speed of approach, or the size, shape, or color of obstacles it must avoid to escape you, the fly executes its evasive maneuvers brilliantly, taking flight just in time to avoid your swatter, zigging and zagging, all the while avoiding walls, hanging pots, refrigerators, windows… you name it. Then the fly must locate a safe place to land, land, wait for a safe interval, and navigate around arbitrary obstacles back to the food source in your kitchen it was originally pursuing.

In other words, in contrast to the best modern AIs, the fly’s brain is anything but brittle and narrow, capable of generalizing what constitutes a threat, or an obstacle, or a safe place to land under incredibly wide variations of stimulus conditions (lighting, color, shape, size, texture, etc.).

If only we could somehow duplicate a fly’s brain in computer chips and software, perhaps we might develop an AI visual system as flexible, adaptive, and “non-brittle” as a fly’s brain for important applications such as self-driving cars.

With this exact idea in mind, neuroscientist Louis Scheffer and colleagues at the Howard Hughes Medical Institute, using advanced techniques such as the “dense reconstruction” of many electron microscope sections through a fly’s brain, have mapped out not only all of the neurons in the brain of a fly but also all of the synaptic connections among those neurons, establishing a complete “connectome” of the small animal’s brain.

[Image: Map of the brain of Drosophila. Source: Basisnus, CC 4.0]

This was a daunting task because, as simple as the fly’s brain is, Dr. Scheffer et al. still had to map out both its roughly 100,000 neurons and the roughly 20,000,000 synapses among them to describe this “connectome.”

Dr. Scheffer and the other Howard Hughes researchers have made this “connectome” freely available to AI researchers who might use it to reconstruct the fly’s visual system in silicon, as it were, in order to imbue self-driving cars, for example, with capabilities comparable to those of a fly.
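
To give a feel for what a connectome is as a data structure (this is a toy sketch, not the Howard Hughes dataset or its actual interface), one might represent the neurons as nodes in a directed graph and the synapses as weighted edges. The neuron names and synapse counts below are invented for illustration.

```python
# Toy connectome-as-graph sketch; assumes the networkx library is installed.
# The neuron names and synapse counts are illustrative, not real data.
import networkx as nx

# Hypothetical (pre-synaptic neuron, post-synaptic neuron, synapse count) triples.
synapses = [
    ("photoreceptor_R1", "lamina_L1", 42),
    ("photoreceptor_R1", "lamina_L2", 38),
    ("lamina_L1", "medulla_Mi1", 17),
    ("lamina_L2", "medulla_Tm1", 21),
    ("medulla_Mi1", "lobula_T4", 9),
]

connectome = nx.DiGraph()
for pre, post, count in synapses:
    connectome.add_edge(pre, post, weight=count)

print("neurons:", connectome.number_of_nodes())
print("synaptic connections:", connectome.number_of_edges())

# Example query: which neurons receive direct input from photoreceptor R1?
print("targets of photoreceptor_R1:",
      list(connectome.successors("photoreceptor_R1")))
```

Scale that picture up to roughly 100,000 nodes and 20,000,000 edges, and you have the kind of map the Howard Hughes team published.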

Of course, if this idea works, the AI developers reconstructing the fly’s brain won’t know how the new vision system actually does what it does. But that’s already the case with “deep-learning” neural networks that perform well on tasks such as face recognition without the designers of those networks having the first clue how the networks they created do what they do.

All that AI designers know right now is that, over many, many training trials, the neural nets they’ve laid out somehow magically connect in ways that solve the problem at hand, without them having any deep understanding of the function of the resultant network. In the AI field, this is known as the “black box” problem, where the AI works, but the way it works is obscured as if the system were locked into an opaque black box.
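
Here is a minimal sketch of that “black box” point in Python; the task (XOR), the network size, and the library are illustrative choices of mine, not anything taken from a real AI vision system. The trained network usually gets the answers right, but the weights it learns along the way are just a pile of numbers that don’t explain how.

```python
# Toy "black box" illustration; assumes NumPy and scikit-learn are installed.
import numpy as np
from sklearn.neural_network import MLPClassifier

# The XOR problem: output 1 only when exactly one of the two inputs is 1.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

# A small neural network trained over many passes through the data.
net = MLPClassifier(hidden_layer_sizes=(16,), activation="tanh",
                    solver="lbfgs", max_iter=2000, random_state=1)
net.fit(X, y)

print("predictions:", net.predict(X))   # the network (usually) solves the task...
print("learned input-to-hidden weights:")
print(net.coefs_[0])                    # ...but these numbers don't say how
```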

There will be both bad news and good news if AI developers succeed in putting a fly’s brain into the AI vision system of your next car.

The bad news is that your car, although it navigates in complex environments flawlessly (like a fly does), is still a “black box” that may exhibit unpredictable behaviors. For instance, if your car sees you coming out of a hardware store with a new fly swatter, it might start its engine and drive quickly away from you.

But here’s the good news: When that happens, you should be able to quickly find your evasive new vehicle by looking for the nearest fresh pile of dog poop that your car has stopped to check out.

References

https://hoverflyvision.weebly.com/uploads/4/8/1/0/48109195/1-s2.0-s0960…

https://espace.library.uq.edu.au/view/UQ:693022

https://elifesciences.org/digests/57443/reconstructing-the-brain-of-fru….

https://elifesciences.org/articles/57443
