These challenges affect both the navigation and mapping phases of autonomous robots.
Our sensor fusion engine combines all raw sensor outputs to derive an optimal position and attitude estimate. GNSS observations, camera images, and IMU measurements are all incorporated into one optimization problem to find the most likely pose. This tightly coupled fusion of raw measurements offers several advantages over a loosely coupled combination or weighting of the individual sensors' outputs.
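As an illustration of the tightly coupled idea (not the actual Fixposition solver), the sketch below fuses hypothetical GNSS fixes and a visual-odometry displacement for two 2D poses in a single least-squares problem. All measurement values and noise levels are made up for the example; the point is that every raw measurement contributes a residual to one joint optimization instead of being filtered separately:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical measurements (illustrative values, not sensor data):
gnss_p0 = np.array([0.0, 0.0])      # GNSS fix of pose 0 (m)
gnss_p1 = np.array([1.1, 0.0])      # GNSS fix of pose 1 (m), noisy
vo_delta = np.array([1.0, 0.05])    # visual-odometry displacement 0 -> 1 (m)
sigma_gnss, sigma_vo = 0.5, 0.05    # assumed measurement std deviations

def residuals(x):
    """Stack all raw measurements as noise-weighted residuals of one problem."""
    p0, p1 = x[:2], x[2:]
    return np.concatenate([
        (p0 - gnss_p0) / sigma_gnss,        # GNSS residual, pose 0
        (p1 - gnss_p1) / sigma_gnss,        # GNSS residual, pose 1
        ((p1 - p0) - vo_delta) / sigma_vo,  # relative visual-odometry residual
    ])

sol = least_squares(residuals, x0=np.zeros(4))
p0, p1 = sol.x[:2], sol.x[2:]
```

Because the visual-odometry constraint is far less noisy than the GNSS fixes in this toy setup, the jointly estimated relative motion follows it closely while the GNSS observations anchor the absolute position.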
Camera images are used to extract salient points (visual features) that are tracked across multiple images. From subsequent observations of these features, the system computes how the observer moved between the image captures. Whenever visual features move out of the field of view, new candidates are selected and added to the set of tracked features. All observations are incorporated into the overall optimization problem, so the estimated relative motion augments the overall pose estimation. Visual sensing is especially relevant because it relies on neither a map nor satellite reception, making it an ideal technology to augment positioning and guide robots through challenging situations.
Two dual-band receivers use navigation signals from all four global navigation satellite systems (GNSS): GPS, GLONASS, BeiDou, and Galileo. Using two spatially separated antennas, the sensor determines both its absolute position and its orientation.
Real-Time Kinematic (RTK) technology is used for centimeter-level positioning accuracy. The sensor uses standard differential GNSS correction data in the RTCM 10403 version 3 format, delivered via Networked Transport of RTCM via Internet Protocol (NTRIP). The correction data can be obtained from a Virtual Reference Station (VRS) network or from a local physical base station. Optionally, Fixposition cloud services can be used to assist with data distribution.
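For illustration, here is the shape of a minimal NTRIP v1 request as a caster expects it; the mountpoint and credentials are hypothetical, and this sketch only builds the request bytes rather than opening the TCP connection a real client would use. The caster answers with `ICY 200 OK` followed by the raw RTCM 3 stream, which is then forwarded to the sensor:

```python
import base64

def ntrip_request(mountpoint, username, password):
    """Build a minimal NTRIP v1 request for an RTCM 3 correction stream."""
    credentials = base64.b64encode(f"{username}:{password}".encode()).decode()
    return (
        f"GET /{mountpoint} HTTP/1.0\r\n"
        f"User-Agent: NTRIP ExampleClient/1.0\r\n"  # NTRIP clients identify as "NTRIP <name>"
        f"Authorization: Basic {credentials}\r\n"
        "\r\n"
    ).encode()

# Hypothetical VRS mountpoint and credentials:
req = ntrip_request("VRS_RTCM3", "user", "pass")
```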
The sensor continuously monitors GNSS operation and the RTK correction data stream, assessing the quality and reliability of both with proprietary algorithms to obtain the best possible performance under all circumstances.
The most important features at a glance:
- 6D global pose, optionally with covariance
- Configurable output rate from 20 Hz to 200 Hz
- UART serial connection to the host (Fixposition ROS driver or manual parsing)
- Quad-core ARM CPU running the image-processing and sensor-fusion software stack
- Tightly coupled sensor fusion of raw sensor measurements in real time
- Powerful set of onboard sensors with hardware time synchronization (dual RTK-GNSS receiver, global-shutter CMOS camera, 6-axis IMU, magnetometer, barometer)
- Optional extension with external signals (LiDAR odometry, wheel speed)
- Online database management tools
- Post-processing of datasets for even higher accuracy by running a full optimization over the entire dataset
- Marker-based localization to improve accuracy in dedicated operating environments such as indoor parking or docking
- Easy-to-use web application for setup and monitoring, accessible via the device's WiFi access point
Fixposition AG | Rütistrasse 14 | 8952 Schlieren | Switzerland
Fixposition AG © 2021