
These do not output raw images, the university's webpage states, but perform their own computations, offering "high speed and low power consumption" to enable "new embedded-vision applications in areas such as robotics, VR [virtual reality], automotive, toys [and] surveillance".