The work still has a long way to go, but it can already identify Kirby.

Scientists Create 3D Models from Eye Reflections

Scientists at the University of Maryland have transformed the reflections in people’s eyes into 3D scenes using Neural Radiance Fields (NeRF), an AI technique for reconstructing environments from 2D photos. While the eye-reflection approach is still in its early stages and has no practical applications yet, the study offers an intriguing glimpse of a technology that could one day reveal an entire environment from a handful of simple portrait photos.
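
For context, NeRF represents a scene as a learned function that maps a 3D position and viewing direction to color and density, then renders 2D views by blending many samples along each camera ray. The snippet below is a minimal sketch of that standard volume-rendering step in plain NumPy; the function name and inputs are illustrative, not taken from the study’s code.

```python
import numpy as np

def composite_ray(colors, densities, deltas):
    """Blend per-sample colors along one camera ray (standard NeRF rendering).

    colors:    (N, 3) RGB values predicted at N samples along the ray
    densities: (N,)   volume density (sigma) at each sample
    deltas:    (N,)   distances between consecutive samples
    """
    # alpha_i = 1 - exp(-sigma_i * delta_i): opacity of each ray segment
    alphas = 1.0 - np.exp(-densities * deltas)
    # T_i: transmittance, the probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = alphas * trans
    # The final pixel color is the transmittance-weighted sum of sample colors
    return (weights[:, None] * colors).sum(axis=0)
```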

The team used the subtle reflections of light captured in a person’s eyes, in sequential images taken with a single camera, to discern the person’s immediate surroundings. They started with several high-resolution images shot from a fixed camera position while the subject moved and looked toward the camera. They then zoomed in on the reflections, isolated them, and calculated where the eyes were looking in each image.
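
In effect, each pixel inside the cropped eye region corresponds to a camera ray that bounces off the cornea, which can be approximated as a small spherical mirror, before continuing into the surrounding scene; these bounced rays are what drive the reconstruction. Below is a minimal sketch of that reflection geometry, assuming a fixed average corneal radius; the names and constant are illustrative, and the actual system refines the cornea’s pose during training.

```python
import numpy as np

CORNEA_RADIUS = 7.8e-3  # rough average radius of corneal curvature, in meters

def reflect_off_cornea(ray_origin, ray_dir, cornea_center):
    """Bounce one camera ray off a sphere approximating the cornea.

    Returns (hit_point, reflected_direction), or None if the ray misses.
    """
    d = ray_dir / np.linalg.norm(ray_dir)
    oc = ray_origin - cornea_center
    # Solve |origin + t*d - center|^2 = R^2 for the nearest intersection t
    b = 2.0 * np.dot(d, oc)
    c = np.dot(oc, oc) - CORNEA_RADIUS**2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # the ray misses the cornea entirely
    t = (-b - np.sqrt(disc)) / 2.0
    hit = ray_origin + t * d
    normal = (hit - cornea_center) / CORNEA_RADIUS  # outward sphere normal
    # Mirror reflection: d' = d - 2 (d . n) n
    reflected = d - 2.0 * np.dot(d, normal) * normal
    return hit, reflected
```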

The results (the full sequence is shown as an animation) amount to a reconstruction of the environment that a human eye can make out reasonably well, at least under controlled conditions. A scene shot with a synthetic eye (below) produced a more impressive, dreamlike reconstruction. However, an attempt to model eye reflections from Miley Cyrus and Lady Gaga music videos produced only vague blobs that the researchers could only guess were an LED grid and a camera on a tripod, illustrating how far the technology is from real-world use.

The reconstructions made with the synthetic eye were much sharper and more lifelike, with a dreamlike quality. (Image credit: University of Maryland)

The team overcame significant obstacles to reconstruct even these rough, fuzzy scenes. For example, the cornea introduces “intrinsic noise” that makes it difficult to separate the reflected light from the complex patterns of the human iris. To address this, the researchers added corneal position optimization (estimating the position and orientation of the cornea) and iris texture decomposition (factoring out features unique to an individual’s iris) during training. Finally, a radial texture regularization loss (a machine-learning technique that produces smoother textures than the source material) helped further isolate and enhance the reflected scenery.
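
The article doesn’t give the exact form of that loss, but the general idea of a radial prior is easy to sketch. In the hypothetical PyTorch snippet below, texture_fn and every parameter are illustrative rather than the authors’ code: it samples a learned iris texture on a polar grid and penalizes differences between angular neighbors, pushing the texture toward smooth, radially symmetric patterns so the remaining detail gets attributed to the reflected scene.

```python
import torch

def radial_texture_loss(texture_fn, n_radii=16, n_angles=32):
    """Hypothetical radial regularizer for a learned iris texture.

    texture_fn maps (x, y) points on the unit iris disk to RGB values
    (e.g., a small MLP trained alongside the scene model).
    """
    radii = torch.linspace(0.1, 1.0, n_radii)
    angles = torch.linspace(0.0, 2 * torch.pi, n_angles + 1)[:-1]
    r, a = torch.meshgrid(radii, angles, indexing="ij")
    pts = torch.stack([r * torch.cos(a), r * torch.sin(a)], dim=-1)  # (R, A, 2)
    tex = texture_fn(pts.reshape(-1, 2)).reshape(n_radii, n_angles, -1)
    # Penalize differences between angular neighbors at the same radius,
    # nudging the texture toward radial symmetry (i.e., smoother patterns)
    angular_diff = tex - torch.roll(tex, shifts=1, dims=1)
    return (angular_diff ** 2).mean()
```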

Despite this progress and the clever workarounds, significant obstacles remain. “The current real-world results come from a ‘laboratory setup,’ such as a zoomed-in capture of a person’s face, area lights to illuminate the scene, and deliberate movement by the person,” the authors wrote. “We believe unconstrained settings will remain challenging (e.g., video conferencing with natural head movement) due to lower sensor resolution, dynamic range, and motion blur.” The team also notes that its general assumptions about iris texture may be too simplistic to apply widely, especially since eyes typically rotate more than they did in this controlled setting.

Still, the team sees its progress as a milestone that could spur future breakthroughs. “With this work, we hope to inspire future explorations that take advantage of unexpected, accidental visual signals to reveal information about the world around us, broadening the horizons of 3D scene reconstruction.” While more mature versions of this work could someday enable creepy and unwanted invasions of privacy, you can rest easy for now knowing that the current version can only dimly make out a Kirby doll even under the most ideal circumstances.
