A number of prominent journalists covered Rambus’ lensless smart sensors (LSS) demo during Mobile World Congress 2015, including CNET’s Stephen Shankland.
“With chip-based camera technology from Rambus, your next smartphone may understand a new range of commands issued by moving your hands and wiggling your fingers in front of the screen instead of by touching it,” Shankland reported.
“The technology could also help self-driving cars recognize oncoming traffic at night, improve virtual reality headsets and power eye-tracking equipment that monitors exactly where a person is looking.”
As Shankland explains, LSS-enabled gesture recognition will allow individuals to interact with devices by waving their hands or arms around.
“Today it’s generally good only for coarse, sweeping motions that can do things like flipping pages in an e-book or changing channels. Rambus’ approach is designed to recognize more detailed gestures like a finger-pinch to zoom in on an image. Unlike with touchscreens, though, the gestures are made in the air in front of the screen,” he noted.
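To make that distinction concrete, here is a minimal sketch of how a finger-pinch could be classified once fingertip positions have been extracted from the reconstructed frames. It is purely illustrative: the coordinates, threshold and upstream tracker are assumptions for the example, not details of the Rambus pipeline.

```python
# Minimal pinch-detection sketch (illustrative only, not Rambus' pipeline).
# Assumes an upstream tracker already yields (x, y) fingertip positions
# for the thumb and index finger in each reconstructed frame.

from math import hypot

PINCH_THRESHOLD = 0.05   # normalized distance below which we call it a "pinch"

def pinch_amount(thumb, index):
    """Return the normalized distance between two fingertip positions."""
    return hypot(thumb[0] - index[0], thumb[1] - index[1])

def detect_pinch(prev_frame, curr_frame):
    """Classify the transition between two frames as zoom-in, zoom-out, or none."""
    prev_d = pinch_amount(*prev_frame)
    curr_d = pinch_amount(*curr_frame)
    if curr_d < PINCH_THRESHOLD and prev_d >= PINCH_THRESHOLD:
        return "zoom-in"          # fingers just came together
    if curr_d >= PINCH_THRESHOLD and prev_d < PINCH_THRESHOLD:
        return "zoom-out"         # fingers just moved apart
    return None

# Example: thumb/index positions in two consecutive frames (normalized 0..1)
print(detect_pinch(((0.40, 0.50), (0.55, 0.50)),   # fingers apart
                   ((0.47, 0.50), (0.49, 0.50))))  # fingers together -> "zoom-in"
```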
“The technology cuts power consumption too, letting it work for battery-powered devices that need computer vision abilities. For example, a bus stop could recognize when people have arrived and tell the transit system it’s time to send a bus.”
According to Shankland, lensless smart sensors rely on computational photography to reconstruct a usable image from the raw sensor data – a process industry engineers refer to as “deconvolution.”
“The transparent plastic lens layer in front of the image sensor has been etched with very thin lines into what’s called a diffraction grating. These lines change how light travels through the lens and onto the sensor,” he said.
“The result isn’t a bitmap image of reality, like a conventional camera with a curved lens generates, but instead a bloblike pattern. But with the deconvolution process – baked into Rambus chip hardware for fast execution – the original scene can be reconstructed rapidly.”
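As a rough illustration of that idea, the toy sketch below blurs a synthetic scene with a known point-spread function standing in for the diffraction grating, then recovers it with frequency-domain (Wiener) deconvolution. The scene, PSF and noise term are all invented for the example; Rambus’ actual reconstruction is proprietary and baked into its chip hardware.

```python
# Toy frequency-domain (Wiener) deconvolution sketch. The real Rambus pipeline
# runs in hardware against its own grating; here we assume a known
# point-spread function (PSF) and a hypothetical 200x200 raw "blob" capture,
# just to illustrate the reconstruction idea.

import numpy as np

def wiener_deconvolve(blob, psf, noise_power=1e-3):
    """Recover an estimate of the scene from the blob-like sensor reading."""
    # Work in the Fourier domain, where convolution becomes multiplication.
    H = np.fft.fft2(psf, s=blob.shape)
    G = np.fft.fft2(blob)
    # Wiener filter: divide by H, but damp frequencies where H is weak/noisy.
    F_est = np.conj(H) * G / (np.abs(H) ** 2 + noise_power)
    return np.real(np.fft.ifft2(F_est))

# Synthetic demo: blur a simple scene with the PSF, then reconstruct it.
rng = np.random.default_rng(0)
scene = np.zeros((200, 200)); scene[80:120, 90:110] = 1.0     # a bright block
psf = rng.random((15, 15)); psf /= psf.sum()                   # stand-in grating PSF
blob = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf, s=scene.shape)))
recovered = wiener_deconvolve(blob, psf)
print(np.abs(recovered - scene).mean())   # small reconstruction error
```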
At 0.055mm across, the Rambus optical sensor itself is much smaller than the devices used today for gesture recognition. And packaged on a chip?
“It’s in the same ballpark as the 1.5mm ball lenses common in gesture-recognition cameras,” Shankland confirmed. “Still, it’s able to capture more detail — not simply the four-position up/down/left/right information of gesture recognition but a 200×200 pixel array that captures enough detail to count the fingers visible in a hand.”
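The toy sketch below hints at why a 200×200 array is enough to separate individual digits: it scans a single row of a hypothetical binary hand silhouette and counts contiguous runs of foreground pixels as fingers. It is not Rambus’ algorithm, just an illustration of the resolution argument.

```python
# Toy finger-counting sketch over a 200x200 binary silhouette. A scanline is
# taken across the upper part of the hand and contiguous runs of foreground
# pixels are counted as fingers. (Purely illustrative; not Rambus' algorithm.)

import numpy as np

def count_fingers(silhouette, scan_row=50):
    """Count contiguous foreground runs along one row of a binary hand mask."""
    row = silhouette[scan_row] > 0
    # A run starts wherever a foreground pixel follows a background pixel.
    starts = np.flatnonzero(row & ~np.r_[False, row[:-1]])
    return len(starts)

# Synthetic 200x200 frame with four separated vertical "fingers".
frame = np.zeros((200, 200), dtype=np.uint8)
for x in (40, 80, 120, 160):
    frame[20:120, x:x + 12] = 1
print(count_fingers(frame))   # -> 4
```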
Rambus lensless smart sensors draw less than a milliwatt, says Shankland, which makes the technology a good fit for a wide range of devices, including wearables such as smartwatches.
“Instead of making you touch the screen or wiggle your wrist just right, it could detect whether eyes are looking at it,” James Tringali, a senior systems engineer at Rambus, told the publication. “[This] enables aggressive power management.”
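The kind of duty-cycling Tringali describes might look something like the loop below, which keeps a display asleep until a cheap eye-detection pass reports that someone is looking. Every function name in it (read_lss_frame, eyes_detected, wake_display, sleep_display) is a hypothetical placeholder rather than a real Rambus API.

```python
# Sketch of the "aggressive power management" idea: the display stays asleep
# until a low-power eye-detection pass says someone is actually looking.
# All callbacks passed in are hypothetical placeholders.

import time

POLL_SLEEPING = 1.0    # seconds between checks while the display is off
POLL_AWAKE = 0.2       # check more often once the user is engaged

def gaze_power_loop(read_lss_frame, eyes_detected, wake_display, sleep_display):
    display_on = False
    while True:
        frame = read_lss_frame()             # tiny, sub-milliwatt capture
        looking = eyes_detected(frame)       # cheap on-device classification
        if looking and not display_on:
            wake_display()
            display_on = True
        elif not looking and display_on:
            sleep_display()
            display_on = False
        time.sleep(POLL_AWAKE if display_on else POLL_SLEEPING)
```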
Interested in learning more about Rambus lensless smart sensors? You can check out our extensive article archive on the subject here.