Spatiotemporal Features for Asynchronous Event-based Data

Bio-inspired asynchronous event-based vision sensors are currently introducing a paradigm shift in visual information processing. These new sensors rely on a stimulus-driven principle of light acquisition similar to biological retinas. They are event-driven and fully asynchronous, thereby reducing redundancy and encoding the exact times of input signal changes, leading to very precise temporal resolution. Approaches to higher-level computer vision often rely on the reliable detection of features in visual frames, but comparable definitions of features for the novel dynamic and event-based visual input representation of silicon retinas have so far been lacking. This article addresses the problem of learning and recognizing features for event-based vision sensors, which capture properties of truly spatiotemporal volumes of sparse visual event information.
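For concreteness, the sketch below shows one common way such an event stream can be represented in code: each event carries a pixel address, a timestamp, and a contrast-change polarity, and the stream is ordered by time rather than grouped into frames. The field names, dtype, and sample values are illustrative assumptions, not details taken from the article.

```python
import numpy as np

# Address-event representation (AER) commonly used by event-based sensors:
# each event is a pixel address, a microsecond timestamp, and a polarity
# (ON/OFF contrast change). Layout here is an illustrative assumption.
event_dtype = np.dtype([
    ("x", np.uint16),   # pixel column
    ("y", np.uint16),   # pixel row
    ("t", np.uint64),   # timestamp in microseconds
    ("p", np.int8),     # polarity: +1 (brightness increase) or -1 (decrease)
])

# Three hypothetical events: the stream is sparse and time-ordered,
# so there are no frames, only asynchronous signal changes.
events = np.array([(12, 40, 1000, 1),
                   (13, 40, 1250, 1),
                   (12, 41, 1600, -1)], dtype=event_dtype)
```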

A novel computational architecture for learning and encoding spatiotemporal features is introduced, based on a set of predictive recurrent reservoir networks competing via winner-take-all selection. Features are learned in an unsupervised manner from real-world input recorded with event-based vision sensors. It is shown that the networks in the architecture learn distinct and task-specific dynamic visual features, and can predict their trajectories over time.
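As a rough illustration of the idea, not the article's actual implementation, the following sketch pairs echo-state-style predictive reservoirs with a winner-take-all rule: every reservoir predicts the next input vector, and only the best predictor adapts its readout, so the individual networks come to specialize on distinct input dynamics. All names, network sizes, learning rates, and the online least-mean-squares readout update are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class PredictiveReservoir:
    """One echo-state-style recurrent reservoir that predicts the next
    input vector. Hyperparameters are illustrative assumptions; the
    article's exact formulation may differ."""
    def __init__(self, n_in, n_res=100, spectral_radius=0.9, lr=1e-3):
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        # Scale the recurrent weights for the echo-state property.
        W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
        self.W = W
        self.W_out = np.zeros((n_in, n_res))  # linear readout
        self.state = np.zeros(n_res)
        self.lr = lr

    def predict(self, u):
        # Update the internal state and emit a prediction of the next input.
        self.state = np.tanh(self.W_in @ u + self.W @ self.state)
        return self.W_out @ self.state

    def adapt(self, error):
        # Online least-mean-squares update of the readout weights only.
        self.W_out += self.lr * np.outer(error, self.state)

def wta_step(reservoirs, u, u_next):
    """Winner-take-all competition: every reservoir predicts the next
    input; only the one with the smallest prediction error adapts, so
    the networks specialize on distinct spatiotemporal features."""
    errors = [u_next - r.predict(u) for r in reservoirs]
    winner = int(np.argmin([np.linalg.norm(e) for e in errors]))
    reservoirs[winner].adapt(errors[winner])
    return winner
```

Feeding such a loop with input vectors derived from the event stream (for example, events binned into short time slices) would, under these assumptions, drive each reservoir toward a different dynamic visual feature.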
