Computing—“Seeing” in real time

Dynamic vision sensors at Oak Ridge National Laboratory were trained to detect and recognize 11 different gestures, like waving and clapping, in real time. The resulting image shows movement on the pixel level. Credit: Kemal Fidan/Oak Ridge National Laboratory, U.S. Dept. of Energy

Oak Ridge National Laboratory is training next-generation cameras called dynamic vision sensors, or DVS, to interpret live information—a capability that has applications in robotics and could improve autonomous vehicle sensing. Unlike a traditional digital camera, which records large amounts of information in frames, a DVS transmits only per-pixel changes in light intensity. Each change is recorded at its pixel location and time stamped to the microsecond, creating data “events” that are processed by a neuromorphic network—a type of intelligent, energy-efficient computing architecture.

“Because the DVS records only changes in what it sees, there is no redundant data,” said Kemal Fidan, an intern in the Science Undergraduate Laboratory Internships (SULI) program who spent his summer learning about dynamic vision sensors under ORNL’s Robert Patton. This capability makes the sensors fast, power efficient and effective across a wide range of light intensities.

Fidan’s project taught a DVS to recognize human gestures such as waving, clapping and two-finger peace signs in real time. —Abby Bower
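To make the event-based idea concrete, here is a minimal sketch in Python of how a DVS differs from a frame camera. It is an illustration only, not ORNL’s software: the `dvs_events` function, its threshold, and the `(x, y, timestamp, polarity)` event format are assumptions standing in for the sensor’s actual hardware behavior.

```python
def dvs_events(prev_frame, curr_frame, timestamp_us, threshold=15):
    """Compare two grayscale frames and emit (x, y, t, polarity) events
    for pixels whose intensity changed by more than `threshold`.
    Polarity is +1 for a brighter pixel, -1 for a darker one.
    (Illustrative model; a real DVS does this asynchronously in hardware.)"""
    events = []
    for y, (prev_row, curr_row) in enumerate(zip(prev_frame, curr_frame)):
        for x, (p, c) in enumerate(zip(prev_row, curr_row)):
            delta = c - p
            if abs(delta) > threshold:
                events.append((x, y, timestamp_us, 1 if delta > 0 else -1))
    return events

# A static scene produces no events -- no redundant data is transmitted...
static = [[100, 100], [100, 100]]
assert dvs_events(static, static, timestamp_us=0) == []

# ...while a change produces events only at the affected pixels,
# each stamped with a microsecond timestamp.
moved = [[100, 200], [100, 100]]
print(dvs_events(static, moved, timestamp_us=125))  # -> [(1, 0, 125, 1)]
```

Because unchanged pixels produce nothing at all, the data volume scales with scene motion rather than with resolution and frame rate, which is the source of the speed and power efficiency described above.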