These challenges affect both the navigation and mapping phases of autonomous robots.
Our fusion engine combines all raw sensor outputs to derive an optimal pose estimate. This tightly coupled fusion approach offers clear advantages over a loosely coupled combination or weighting of the individual sensors.
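To illustrate the general idea of sensor fusion (not Fixposition's proprietary engine), a loosely coupled scheme might simply weight independent position estimates from each sensor by their uncertainties. A minimal sketch with made-up numbers:

```python
# Illustrative only: inverse-variance weighting of two independent
# scalar position estimates, as a loosely coupled fusion might do.
# All numbers are made up for the example.

def fuse(est_a, var_a, est_b, var_b):
    """Combine two scalar estimates by inverse-variance weighting."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# GNSS reports x = 10.0 m (variance 0.04 m^2);
# vision reports x = 10.2 m (variance 0.01 m^2).
pos, var = fuse(10.0, 0.04, 10.2, 0.01)
print(pos, var)  # 10.16 0.008 — the result leans toward the lower-variance sensor
```

A tightly coupled approach instead feeds the raw measurements (satellite observables, tracked image features, IMU samples) into a single estimator, which is why it degrades more gracefully when one sensor alone cannot produce a valid standalone fix.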
Camera images are used to extract salient points (visual features) that are tracked across multiple images. From subsequent observations of these features, the system computes how the observer moved between image captures. These observations are incorporated into the overall optimization problem, so the estimated relative movement strengthens the overall pose estimate. Visual sensing is especially valuable because it relies on neither a map nor satellite communication, making it an ideal technology to augment positioning and guide robots through challenging situations.
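As a toy sketch of the idea (not Fixposition's algorithm), suppose the same features are matched between two consecutive frames; the average feature displacement gives a crude estimate of the apparent motion. Real visual odometry estimates full 6-DoF motion from calibrated cameras, but the principle of inferring movement from repeated feature observations is the same:

```python
# Toy sketch, hypothetical data: estimate a planar image-space translation
# from the same visual features observed in two consecutive frames.

def estimate_translation(features_prev, features_curr):
    """Least-squares 2D translation between matched feature positions."""
    assert features_prev and len(features_prev) == len(features_curr)
    n = len(features_prev)
    dx = sum(c[0] - p[0] for p, c in zip(features_prev, features_curr)) / n
    dy = sum(c[1] - p[1] for p, c in zip(features_prev, features_curr)) / n
    return dx, dy

# Three features tracked from frame k to frame k+1 (made-up pixel coordinates)
prev = [(100.0, 50.0), (200.0, 80.0), (150.0, 120.0)]
curr = [(103.0, 49.0), (203.2, 79.1), (152.8, 118.9)]
print(estimate_translation(prev, curr))  # (3.0, -1.0): mean feature shift
```

Each such relative-motion estimate becomes one constraint in the overall optimization problem described above.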
Two multi-band receivers use navigation signals from all four Global Navigation Satellite Systems (GNSS): GPS, GLONASS, BeiDou, and Galileo. Using two spatially separated antennas, the sensor determines its absolute position as well as its orientation/heading. Real-time kinematics (RTK) technology provides centimeter-level positioning accuracy, which requires real-time corrections from a local reference station or an RTK network. The sensor continuously monitors the GNSS operation and the RTK correction data stream, assessing the quality and reliability of both with proprietary algorithms to obtain the best possible performance under all circumstances.
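The dual-antenna heading principle can be sketched in a few lines: once RTK yields precise positions for both antennas, the heading is simply the bearing of the baseline vector between them. The function below is an illustration with hypothetical local East/North coordinates, not the sensor's actual implementation:

```python
import math

# Illustrative sketch: heading from two GNSS antenna positions.
# Inputs are (east, north) offsets in meters in a local tangent frame
# (made-up values; a real system works from RTK-fixed coordinates).

def heading_deg(ant_rear, ant_front):
    """Bearing from rear to front antenna, in degrees clockwise from North."""
    d_east = ant_front[0] - ant_rear[0]
    d_north = ant_front[1] - ant_rear[1]
    return math.degrees(math.atan2(d_east, d_north)) % 360.0

# Front antenna 1 m east and 1 m north of the rear antenna
print(heading_deg((0.0, 0.0), (1.0, 1.0)))  # 45.0
```

Because the baseline between the antennas is short, centimeter-level RTK accuracy on each antenna is what makes the derived heading usable.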
Fixposition AG | Rütistrasse 14 | 8952 Schlieren | Switzerland
Fixposition AG © 2021