Scientists say they’ve invented a piece of technology that will let us peer through an animal’s eyes better than ever before. The tech uses a combination of novel hardware and software to produce images and videos that accurately represent the colors seen by animals, such as bees and birds. In new research this week, the team found that its innovation nearly matches the accuracy of conventional, yet more limiting, methods used to capture an animal’s color vision.
Human vision is certainly no slouch. Compared to most other animals, for instance, we see things with much better sharpness and detail. But some animals have their own unique visual abilities, such as the ability to perceive wavelengths of light invisible to the human eye. Scientists have long been able to create representations of the colors that animals see through false-color imagery. But while the methods used today to create these images are reliably accurate, they require a lot of effort to implement and have clear limitations, such as only working with still images under certain lighting conditions.
A team of researchers in the U.K. and U.S. believes it has now developed a more versatile and dynamic technique for translating an animal's color vision for our eyes. It works by combining existing photography methods with novel hardware and software.
“The system works by splitting light between two cameras, where one camera is sensitive to ultraviolet light while the other is sensitive to visible light. This separation of ultraviolet from visible light is achieved with a piece of optical glass, called a beam splitter. This optical component reflects UV light in a mirror-like fashion, but allows visible light to pass through just the same way as clear glass does,” study authors Daniel Hanley, an associate professor of biology at George Mason University, and Vera Vasas, a biologist at the Queen Mary University of London, told Gizmodo in a joint email. “In this way the system can capture light simultaneously from four distinct wavelength regions: ultraviolet, blue, green, and red.”
The team’s software then transforms the data received from the cameras into “perceptual units” that correspond to an animal’s known photoreceptor sensitivity. From there, you can create images or, for the first time, truly precise moving videos of the colors that nonhuman animals see in the world, the scientists claim.
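The transformation described above can be sketched in code. The snippet below is a hypothetical illustration, not the team's actual software: it weights four camera channels (UV, blue, green, red) by an invented sensitivity matrix for a trichromatic, bee-like viewer, producing relative photoreceptor "catches" as stand-ins for the paper's perceptual units.

```python
# Hypothetical sketch (not the study's code or calibration data):
# map a pixel's four camera channels (UV, blue, green, red) into
# relative photoreceptor catches for a trichromatic bee-like viewer.

# Rows: photoreceptor types (UV, blue, green); columns: camera channels.
# These weights are invented purely for illustration.
SENSITIVITY = [
    [0.90, 0.10, 0.00, 0.00],  # UV receptor
    [0.05, 0.80, 0.15, 0.00],  # blue receptor
    [0.00, 0.10, 0.75, 0.15],  # green receptor
]

def to_perceptual_units(pixel):
    """Weight each camera channel by receptor sensitivity, then
    normalize so the strongest catch equals 1.0."""
    catches = [sum(w * p for w, p in zip(row, pixel)) for row in SENSITIVITY]
    peak = max(catches)
    return [c / peak for c in catches]

# A UV-bright pixel dominates the modeled UV receptor even though
# its visible-light channels are dim.
print(to_perceptual_units([1.0, 0.2, 0.1, 0.05]))
```

In a real pipeline, the sensitivity matrix would come from measured photoreceptor spectral sensitivities for the species in question, and the same mapping would run per pixel, per video frame.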
“The idea of recording in UV has been around for a long time now, but there have been relatively few attempts due to the technical difficulties involved in it. Interestingly, the first published UV video is from 1969!” Hanley and Vasas said. “Our new approach provides a valuable degree of scientific accuracy enabling our videos to be used for scientific purposes.”
In their paper describing the technology, published Tuesday in PLOS Biology, the researchers tested their camera system by translating the colors of objects like butterflies as seen by honeybees and by an average UV-sensitive bird. They also compared the system's output to results obtained from spectrophotometry, a gold-standard method used to create false-color imagery. Depending on the environmental conditions, they found that their system ranged from 92 to 99 percent accurate.
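A comparison like the one above can be scored in a simple way. The sketch below is illustrative only: the values are invented, and the error metric is one plausible choice, not necessarily the one the paper used. It treats accuracy as 100 percent minus the mean absolute relative error between camera-derived estimates and spectrophotometer measurements.

```python
# Hypothetical sketch: score agreement between camera-derived
# photoreceptor estimates and spectrophotometry (the gold standard).
# The data and metric here are invented for illustration; the paper
# itself reports 92-99 percent accuracy.

def percent_accuracy(camera_vals, spectro_vals):
    """100% minus the mean absolute relative error, taking the
    spectrophotometer values as ground truth."""
    errors = [abs(c - s) / s for c, s in zip(camera_vals, spectro_vals)]
    return 100.0 * (1.0 - sum(errors) / len(errors))

camera = [0.91, 0.48, 0.22]   # made-up catches from a camera pipeline
spectro = [0.95, 0.50, 0.21]  # made-up gold-standard measurements

print(round(percent_accuracy(camera, spectro), 1))
```

Repeating a calculation like this across many measured surfaces and lighting conditions is what would yield the kind of accuracy range the researchers report.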
The team already has plans to use their novel technology to improve future nature documentaries. Their work has been funded by the National Geographic Society and the team includes award-winning nature photographer and filmmaker Neil Losin. They also believe that this system will allow them to make new scientific discoveries.
“We have a number of ideas that we are planning to address with our camera, but the most exciting questions will be those we have yet to consider,” Hanley and Vasas said. “Only now that we started taking videos of the natural world, we are beginning to see how much information is out there.”
The researchers have two working systems and are gearing up to build a third, but they also hope that others will be inspired to replicate their technology, too. They note that all of their hardware relies on cameras and parts readily available commercially, and they’ve even made their software code open source for people to peruse and refine to their heart’s content.
“We have intentionally made this all open-access specifically to encourage the research and film community to adapt and improve the system,” they said. “We believe that this will speed up the development, to everyone’s benefit.”