How AI Is Helping Scientists Decode Optical Illusions

Optical illusions have long shown that human vision does not simply record reality - it interprets it. A well-known example is the Moon illusion, where the Moon appears larger near the horizon even though its physical size never changes. Such effects reveal that the brain relies on shortcuts, filtering vast amounts of visual data to extract what seems most meaningful rather than processing every detail.

Artificial intelligence was once thought to be immune to these perceptual quirks because machines are designed to detect fine details with extreme precision. AI systems excel at spotting tiny patterns, which is why they perform so well in areas like medical imaging. Yet researchers have discovered that some advanced AI models fall for visual illusions in ways that closely resemble human perception. This surprising similarity is now helping scientists better understand how the brain works.

Deep neural networks (DNNs), the foundation of many modern AI systems, are modeled loosely on networks of neurons in the brain. According to Eiji Watanabe, a neuroscientist in Japan, these artificial systems provide a powerful research tool because they can be tested and altered freely - something that is not possible with living human brains.

In one study, Watanabe’s team used an AI model called PredNet, which is based on a theory known as predictive coding. This theory suggests that the brain constantly predicts what it expects to see and then compares those expectations with actual visual input. Instead of reacting passively, the brain actively anticipates the world.
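As a rough sketch of the idea (not the study's actual model), predictive coding can be written as a loop: the system holds an internal estimate, predicts the incoming signal from it, measures the prediction error, and nudges the estimate to shrink that error. Everything below - the names, the learning rate, the simple linear update - is an illustrative assumption, far simpler than PredNet itself.

```python
import numpy as np

def predictive_coding_step(estimate, observation, lr=0.1):
    """One illustrative predictive-coding update: predict the input,
    measure the error, and revise the internal estimate. The linear
    dynamics here are an assumption for illustration, not PredNet."""
    prediction = estimate                # top-down prediction of the input
    error = observation - prediction     # bottom-up prediction error
    return estimate + lr * error, error  # estimate moves to reduce error

# Toy run: with a static observation, the estimate converges and the
# prediction error (the "surprise") steadily shrinks.
rng = np.random.default_rng(0)
observation = rng.random((4, 4))         # stand-in for a video frame
estimate = np.zeros_like(observation)
for _ in range(50):
    estimate, error = predictive_coding_step(estimate, observation)
print(f"mean remaining error: {np.abs(error).mean():.4f}")
```

The key point is that perception, on this view, is the estimate rather than the raw input: what the system "sees" is its own prediction, continually corrected by error signals.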

PredNet works in a similar way. It was trained using videos of natural environments recorded from head-mounted cameras, allowing it to learn how objects usually move. The model was never shown optical illusions during training. However, when researchers later showed it the “rotating snakes” illusion - a static image that appears to move - it behaved just like humans, detecting motion where none existed. When shown a version of the image that does not fool people, the AI also correctly saw it as still.
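To make the test concrete: if a model outputs a predicted next frame, any illusory motion can be quantified by running an optical-flow algorithm between the static input and the prediction. The sketch below uses OpenCV's Farneback flow as a stand-in for the study's analysis; the file names and the threshold are hypothetical, and this is a simplified illustration, not the authors' pipeline.

```python
import cv2
import numpy as np

# Hypothetical inputs: the static illusion image and the frame a
# predictive model produced as its "next frame" for that image.
static = cv2.imread("rotating_snakes.png", cv2.IMREAD_GRAYSCALE)
predicted = cv2.imread("model_prediction.png", cv2.IMREAD_GRAYSCALE)
assert static is not None and predicted is not None, "missing input images"

# Dense optical flow between the input and the prediction. For a truly
# static scene, a faithful predictor yields near-zero flow everywhere;
# nonzero vectors mark motion the model has "hallucinated".
flow = cv2.calcOpticalFlowFarneback(
    static, predicted, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0,
)

magnitude = np.linalg.norm(flow, axis=2)   # per-pixel motion strength
print(f"mean flow magnitude: {magnitude.mean():.3f}")
if magnitude.mean() > 0.05:                # illustrative threshold
    print("the model predicts motion in a static image - illusion-like")
```

Run against a control image that does not fool people, the same measurement should come out near zero, mirroring the human result described above.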

This result supports the idea that both human vision and AI rely on prediction to interpret what they see. However, important differences remain. Humans can focus attention on one part of an image and reduce the illusion there, while the AI processes everything at once, lacking selective attention.

Although AI can imitate some aspects of human vision, it does not experience the world the same way we do. Still, these shared weaknesses reveal something profound: even artificial systems must guess at reality, just as our brains do.
