
New study shows humans can see hidden “ghost images”

A team at Heriot-Watt University has discovered that the human brain is capable of "seeing" ghost images hidden within the groups of patterns captured by single-pixel cameras.
Scientists discover a way we can see the invisible (DC)

Scientists at Heriot-Watt University in Edinburgh have just announced the results of a study that presents a startling breakthrough in how our brains visualize the world and opens the door to extending what humans can see.


Escaping the limitations of sight

Packing our cameras with millions of pixels makes economic sense since silicon is relatively inexpensive. Says Richard Baraniuk of Rice University, “The fact that we can so cheaply build [silicon camera chips] is due to a very fortunate coincidence, that the wavelengths of light that our eyes respond to are the same ones that silicon responds to.” But there are plenty of other regions of the electromagnetic spectrum we’d love to be able to visualize and for which silicon is no use: infrared, terahertz radiation, and radio frequencies, for example. Capturing these requires far pricier sensor materials, and building a megapixel array out of them could cost hundreds of thousands of dollars for a single “camera.”

Compressed sensing

Compressed sensing offers a solution to this problem. By letting cameras ignore low-value visual content, it produces clearer, less noisy images even when the digital sampling of the image (the number of snapshots a camera takes to build it) is reduced to a fraction of what a typical camera captures.
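To make that idea concrete, here is a minimal sketch in Python (NumPy), my choice of tool rather than anything named in the article. It records far fewer random measurements of a sparse signal than the signal has entries, then recovers it with iterative soft-thresholding; the sizes, sparsity level, and regularization weight are illustrative assumptions, not values from any camera described here.

```python
# Minimal compressed-sensing sketch: recover a sparse signal from far fewer
# random measurements than its length by solving an l1-regularized
# least-squares problem with ISTA (iterative soft-thresholding).
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 256, 80, 8                         # signal length, measurements, nonzeros
x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = rng.choice([-1.0, 1.0], k) * rng.uniform(1.0, 2.0, k)

A = rng.normal(size=(m, n)) / np.sqrt(m)     # random sensing matrix
y = A @ x_true                               # only m "snapshots" are recorded

def ista(A, y, lam=0.05, iters=2000):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by proximal gradient steps."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L        # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

x_hat = ista(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```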

The angiogram on the left was taken using standard compression, and as the number of samples goes down, so does picture quality. The compressed-sensing angiogram on the right, however, remains crystal clear, even during extreme undersampling (Michael Lustig).

This form of data collection allows for the use of single-pixel cameras (or sensors, really). Even when they’re made from expensive materials in order to capture invisible wavelengths, they’re a game-changer when it comes to cost. Single-pixel cameras produce what are called “ghost images,” so named because they’re derived from light that never actually interacts with the object being imaged, and because they exist only as mathematical differences between recorded values until post-processing renders them as visible images.

How a single-pixel camera works

A pattern based on a Hadamard transform is projected onto an object by an LED, and a single-pixel camera captures how much light the object reflects (for black-and-white images, simply its overall lightness or darkness). That reading is recorded as a numerical value, a single data point. The process is then repeated with a long series of different patterns. You might think the data points from these different patterns don’t have much to do with each other, but they all share one thing: They were all reflected by the same object. When they’re processed together, computer algorithms can reveal that object and produce an image of it.
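Here’s a toy simulation of that loop in Python (NumPy and SciPy, my own choices, not tools mentioned in the article). Each row of a Hadamard matrix plays the role of one projected pattern, and the single-pixel reading is just the dot product of that pattern with the scene; real hardware has to split the ±1 patterns into pairs of non-negative illuminations, which this sketch ignores.

```python
# Toy single-pixel camera: one number is recorded per Hadamard pattern, and
# the orthogonality of the Hadamard matrix (H @ H.T = n * I) turns that stack
# of numbers back into the image with a single matrix multiply.
import numpy as np
from scipy.linalg import hadamard

side = 16                                  # a 16x16 test scene
n = side * side
scene = np.zeros((side, side))
scene[4:12, 6:10] = 1.0                    # a bright rectangle as the "object"
x = scene.ravel()

H = hadamard(n).astype(float)              # one +1/-1 pattern per row

measurements = H @ x                       # n single-pixel readings, one per pattern

x_rec = (H.T @ measurements) / n           # reconstruction: no lens, no pixel array
print("max reconstruction error:", np.abs(x_rec - x).max())
```

The point is simply that a stack of single numbers, one per pattern, carries as much information as an ordinary photograph of the same scene.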

A soccer ball photographed with a normal camera on the left, and a ghost-imaging system using 1,600 Hadamard patterns on the right (R. G. Baraniuk).

Another version of ghost imaging cuts down the number of patterns required for a clear image. For each pattern, the process starts the same way: a single-pixel camera captures the light reflected off the object. But instead of that value simply being recorded, it’s sent to a second LED, whose intensity is modulated by that value. The second, modulated LED is then shone onto the pattern and reflected toward a second single-pixel camera, bypassing the object altogether. What that camera ultimately captures is the difference between the pattern itself and the earlier reflection of the pattern off of the object.

(Boccolini et al.)

Once again, computer processing can parse the values derived from repeating this process with multiple patterns and produce an image of the object.
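As a rough illustration of that final step, the sketch below uses the classic correlation recipe from computational ghost imaging: weight each pattern by how much its single-pixel signal deviates from the average, then add everything up. It’s a generic textbook reconstruction written in Python/NumPy, not the specific two-LED pipeline Boccolini’s team built, and the scene and pattern count are invented for the example.

```python
# Generic computational ghost imaging: the image emerges from correlating each
# projected pattern with the single "bucket" value it produced.
import numpy as np

rng = np.random.default_rng(1)
side = 16
scene = np.zeros((side, side))
scene[3:13, 5:11] = 1.0                      # a simple rectangular "object"

n_patterns = 4000
patterns = rng.integers(0, 2, size=(n_patterns, side, side)).astype(float)

# Single-pixel signal: total light reflected by the object for each pattern.
signals = np.einsum("pij,ij->p", patterns, scene)

# Weight every pattern by its signal's deviation from the mean and sum them.
ghost = np.einsum("p,pij->ij", signals - signals.mean(), patterns) / n_patterns

print("correlation with the true scene:",
      np.corrcoef(ghost.ravel(), scene.ravel())[0, 1])
```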

The processing power on our shoulders

Turning a stack of patterns into a picture obviously requires a lot of computational power. But Alessandro Boccolini and his team at Heriot-Watt University in Edinburgh, Scotland, found themselves wondering something bigger: Is it possible we ourselves have some undiscovered ability to do this without a computer? Maybe something along the lines of the way our brains turn a rapid succession of still images into moving pictures? The team’s experiments reveal, startlingly, that we do, when the conditions are right.

The experiments

Boccolini’s team recruited four subjects to view a series of patterns, giving them control over the rate at which the patterns appeared. At slow speeds, not surprisingly, the subjects simply saw a series of different patterns. At very high speeds, however, and in particular once the rate reached 20 kHz (that is, 200 patterns every 10 milliseconds), an amazing thing occurred: The subjects could see the object the ghost image had captured.

Further testing revealed that even slightly slowing down the display rate caused the image to degrade, and also that the visibility of the object didn’t last, just as it doesn’t in normal vision. The team notes, “We use this human ghost imaging technique to evaluate the temporal response of the eye and establish the image persistence time to be around 20 ms followed by a further 20 ms exponential decay.”
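One way to see why the display rate matters so much is to model the eye as a leaky integrator: it sums the weighted patterns it is shown, but each contribution fades with a time constant of roughly 20 ms. The sketch below is my own assumption, not the team’s model; it scales Hadamard patterns by their measured values (ignoring the optical trick needed to display negative weights) and compares a 20 kHz presentation with a much slower one.

```python
# Leaky-integrator toy model of persistence of vision: each displayed frame
# contributes weight_i * pattern_i, discounted by exp(-age / tau). At 20 kHz
# every frame still "lives" inside the ~20 ms window; at slow rates most
# frames have faded before the sequence ends, and the summed image degrades.
import numpy as np
from scipy.linalg import hadamard

side, tau_ms = 16, 20.0
n = side * side
scene = np.zeros((side, side))
scene[4:12, 6:10] = 1.0
H = hadamard(n).astype(float)                # one pattern per row
weights = H @ scene.ravel()                  # one measured value per pattern

def perceived(frame_rate_hz):
    dt_ms = 1000.0 / frame_rate_hz
    ages = dt_ms * np.arange(n)[::-1]        # age of each frame at the end
    decay = np.exp(-ages / tau_ms)           # persistence-of-vision discount
    return ((decay * weights) @ H).reshape(side, side)

for rate in (20_000, 200):                   # fast vs. slow display
    img = perceived(rate)
    c = np.corrcoef(img.ravel(), scene.ravel())[0, 1]
    print(f"{rate:>6} Hz: correlation with the scene = {c:.2f}")
```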

Why this is so exciting

As we noted earlier, the materials that respond to wavelengths beyond the visible spectrum are expensive, and single-pixel cameras combined with ghost imaging make using them economically feasible. Now we know that our brains are capable of processing, and thus “seeing,” the ghost images these systems produce, turning a series of patterns into an image all on their own. As the study notes, “Ghost-imaging with the eye opens up a number of completely novel applications such as extending human vision into invisible wavelength regimes in real time.”

