Our patent-pending “4D” object-based technology.
This is a novel method for understanding scenes and for extracting high-precision, application-specific information from them.
At the heart of EyeSpace lies a smart, proprietary database storing a very large number of scene objects together with their descriptors and 3D positions. The database is built by processing video data synchronized with corresponding 3D data, which may be obtained from prior photogrammetry or laser scans, or crowd-sourced from multiple users’ video inputs.
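The actual EyeSpace database schema and descriptors are proprietary, but the idea of storing scene objects with descriptors and 3D positions, then looking them up by descriptor similarity, can be illustrated with a minimal sketch. All names and the nearest-neighbour lookup here are illustrative assumptions, not the real system:

```python
import math
from dataclasses import dataclass

# Hypothetical record for one stored scene object; the real EyeSpace
# schema and descriptor format are proprietary and not shown here.
@dataclass
class SceneObject:
    object_id: int
    descriptor: tuple  # appearance descriptor (e.g. a feature vector)
    position: tuple    # (x, y, z) position in scene coordinates

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_descriptor(database, query, max_dist=0.5):
    """Return the stored object whose descriptor is nearest to `query`,
    or None if nothing lies within `max_dist`. A production system would
    use an approximate nearest-neighbour index, not a linear scan."""
    best, best_dist = None, max_dist
    for obj in database:
        d = euclidean(obj.descriptor, query)
        if d < best_dist:
            best, best_dist = obj, d
    return best

db = [
    SceneObject(1, (0.1, 0.9), (2.0, 0.0, 5.0)),
    SceneObject(2, (0.8, 0.2), (-1.0, 1.5, 3.0)),
]
hit = match_descriptor(db, (0.12, 0.88))
print(hit.object_id, hit.position)  # → 1 (2.0, 0.0, 5.0)
```

Once a query descriptor from a live video frame is matched to a stored object, that object’s 3D position becomes a known reference point for the pose computations described next.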
Also at the heart of certain applications lie proprietary algorithms that take input from users’ mobile devices and process it, together with data from the database, to compute each device’s precise position and angle of gaze, or to insert Augmented Reality objects into the scene displayed on the device’s screen.
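The proprietary pose algorithms are not public; in practice, computing a camera’s position and gaze from matched image features and known 3D points is a Perspective-n-Point problem. As a much simpler stand-in, the sketch below assumes range measurements to four known anchor points from the database and recovers the device position by multilateration (subtracting sphere equations yields a linear system); every name here is illustrative, not EyeSpace’s method:

```python
import math

def solve3x3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    n = 3
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def locate(anchors, dists):
    """Recover a device position from four known 3D anchor points and
    measured ranges to each. Subtracting the first sphere equation
    (x-xi)^2 + (y-yi)^2 + (z-zi)^2 = di^2 from the others eliminates
    the quadratic terms, leaving a linear system in (x, y, z)."""
    (x0, y0, z0), d0 = anchors[0], dists[0]
    A, b = [], []
    for (xi, yi, zi), di in zip(anchors[1:], dists[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0), 2 * (zi - z0)])
        b.append(d0**2 - di**2
                 + xi**2 - x0**2 + yi**2 - y0**2 + zi**2 - z0**2)
    return solve3x3(A, b)

anchors = [(0, 0, 0), (4, 0, 0), (0, 4, 0), (0, 0, 4)]
true_pos = (1.0, 2.0, 3.0)                      # ground truth, for the demo
dists = [math.dist(true_pos, a) for a in anchors]
print(locate(anchors, dists))                   # → ≈ [1.0, 2.0, 3.0]
```

With the device position known, the angle of gaze toward any stored object follows from the direction vector between the two 3D points; a real system would instead estimate position and orientation jointly from 2D-3D correspondences in the camera image.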