Simplifying autonomous navigation by increasing reliability and expanding availability of precise positioning.

Vision-RTK sensor for precise positioning in GNSS-challenged areas.

Vision-RTK
Built for GNSS-degraded and GNSS-denied areas

  • Real-time fusion of RTK-GNSS, IMU and Computer Vision ensures centimeter-accurate performance.
  • Precise positioning for challenging environments such as urban canyons or below bridges and canopies.
  • The only lightweight and compact solution ready for integration with drones and autonomous robots.

Precise positioning is not available everywhere

These challenges affect both the navigation and mapping phases of autonomous robots

GNSS degrades between buildings.
GNSS fails under bridges.
VO cameras struggle in bad weather.

VO: Visual odometry is the process of estimating how a camera has moved (the distance traveled and the change in direction) from sequential camera images.
GNSS: Global Navigation Satellite System.

Our solution: sensor fusion engine, visual sensing & RTK-GNSS

Sensor fusion engine

Our sensor fusion engine combines all raw sensor outputs to derive the optimal position and attitude estimate. GNSS observations, camera images, and IMU measurements are all incorporated into one optimization problem to find the most likely pose. Some of the main benefits of a tightly coupled fusion approach over a loosely coupled combination or weighting of the individual sensors are listed below (a minimal code sketch of the idea follows the list):

  • Strengths of different sensing technologies are combined to alleviate weaknesses of the individual sensors.
  • All prior knowledge about the accuracy and the precision of sensor measurements is incorporated into the optimization.
  • Stability and robustness due to inter-sensor prediction. For example, IMU measurements can be used to predict visual features and camera observations help to form a prior for the GNSS estimation problem.
  • 6D pose output at a configurable rate of up to 200Hz.
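
As a minimal sketch of the tightly coupled idea, the snippet below puts all measurements into one weighted least-squares problem instead of averaging sensor outputs after the fact. The 2D simplification, the sensor values, and the noise figures are illustrative assumptions, not the Vision-RTK's actual engine, which operates on raw measurements and full 6D states.

```python
# Minimal sketch: tightly coupled fusion as one weighted least-squares problem
# (illustrative only; not Fixposition's actual sensor fusion engine).
import numpy as np
from scipy.optimize import least_squares

# Two unknown 2D positions p0, p1 stacked into one parameter vector.
# All measurement values below are hypothetical.
gnss_p1   = np.array([10.02, 4.98])   # absolute GNSS fix of pose 1 [m]
imu_delta = np.array([10.00, 5.00])   # IMU-integrated displacement p1 - p0 [m]
vo_delta  = np.array([ 9.95, 5.05])   # visual-odometry displacement p1 - p0 [m]
prior_p0  = np.array([ 0.00, 0.00])   # prior on the starting pose [m]

# 1-sigma uncertainties: each residual is whitened by its own sigma, so more
# precise sensors automatically receive more weight in the joint optimization.
sig_gnss, sig_imu, sig_vo, sig_prior = 0.02, 0.10, 0.05, 0.01

def residuals(x):
    p0, p1 = x[:2], x[2:]
    return np.concatenate([
        (p0 - prior_p0)         / sig_prior,  # anchor the trajectory
        (p1 - gnss_p1)          / sig_gnss,   # absolute GNSS constraint
        ((p1 - p0) - imu_delta) / sig_imu,    # inertial relative constraint
        ((p1 - p0) - vo_delta)  / sig_vo,     # visual relative constraint
    ])

sol = least_squares(residuals, x0=np.zeros(4))
print("fused p1:", sol.x[2:])  # close to the GNSS fix, refined by IMU and VO
```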
Vision-RTK's sensor fusion engine technology.
How visual sensing works.

Visual sensing

Camera images are used to extract salient points (visual features) that are tracked across multiple images. From subsequent observations of these features, the sensor computes how the observer moved between image captures. Whenever visual features leave the field of view, new candidates are selected and added to the set of tracked features. All observations are incorporated into the overall optimization problem, so the measured relative movement strengthens the overall pose estimate. Visual sensing is especially valuable because it does not rely on any map or satellite signal, which makes it an ideal technology to augment positioning and guide robots in challenging situations.
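
For a rough illustration of the feature-tracking step described above, here is a minimal OpenCV sketch of a generic Lucas-Kanade tracker. The feature counts, thresholds, and the median-flow summary are illustrative assumptions and not Fixposition's pipeline.

```python
# Minimal sketch of tracking visual features between consecutive grayscale
# frames (illustrative; a real VO front end feeds these matches into the
# overall pose optimization rather than summarizing them as one flow vector).
import cv2
import numpy as np

def track_features(prev_gray, curr_gray, prev_pts=None):
    """Track corner features from prev_gray to curr_gray; reseed when sparse."""
    if prev_pts is None or len(prev_pts) < 50:
        # Select new candidate features when too few remain in the field of view.
        prev_pts = cv2.goodFeaturesToTrack(
            prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=10)
    # Pyramidal Lucas-Kanade optical flow estimates each feature's new position.
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts, None)
    good = status.ravel() == 1
    prev_good = prev_pts[good].reshape(-1, 2)
    curr_good = curr_pts[good].reshape(-1, 2)
    # Median pixel displacement: a crude summary of how the image moved.
    flow = np.median(curr_good - prev_good, axis=0)
    return curr_good.reshape(-1, 1, 2), flow

# Usage (hypothetical frames): tracked_pts, flow = track_features(gray_t0, gray_t1)
```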

RTK-GNSS technology.

RTK-GNSS

Two dual-band receivers use navigation signals from all four Global Navigation Satellite Systems (GNSS), namely GPS, GLONASS, BeiDou, and Galileo. Using two spatially separated antennas, the sensor determines both its absolute position and its orientation.
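
To illustrate how two spatially separated antennas yield an orientation estimate, the sketch below derives a heading from two RTK-fixed antenna positions. The positions and baseline length are made-up values; the Vision-RTK performs this computation internally.

```python
# Minimal sketch: heading from two GNSS antenna positions (illustrative values).
import math

# Hypothetical RTK-fixed antenna positions in a local East-North frame [m].
ant_primary   = (2.000, 5.000)
ant_secondary = (2.000, 5.800)   # mounted 0.8 m ahead of the primary antenna

de = ant_secondary[0] - ant_primary[0]   # east component of the baseline
dn = ant_secondary[1] - ant_primary[1]   # north component of the baseline
heading_deg = math.degrees(math.atan2(de, dn)) % 360.0   # 0 deg = true north
print(f"heading: {heading_deg:.1f} deg")
```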

Real-time kinematic (RTK) technology is used for centimeter-level positioning accuracy. The sensor accepts standard RTCM 10403.3 (version 3) differential GNSS correction data. Networked Transport of RTCM via Internet Protocol (NTRIP) is used to deliver this data to the sensor. The corrections can be obtained from a Virtual Reference Station (VRS) network or from a local physical base station. Optionally, Fixposition cloud services can be used to assist with data distribution.
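
For illustration, a minimal NTRIP 2.0 client could look like the sketch below. The caster URL, mount point, and credentials are placeholders, and on the Vision-RTK the correction stream is normally configured through the device itself rather than with custom code.

```python
# Minimal sketch of an NTRIP 2.0 client pulling an RTCM v3 correction stream
# (illustrative; caster, mount point, and credentials are placeholders).
import requests

CASTER = "http://caster.example.com:2101"   # hypothetical VRS network caster
MOUNTPOINT = "EXAMPLE00"                     # hypothetical mount point

resp = requests.get(
    f"{CASTER}/{MOUNTPOINT}",
    auth=("user", "password"),               # placeholder credentials
    headers={"Ntrip-Version": "Ntrip/2.0", "User-Agent": "NTRIP pyclient"},
    stream=True,
    timeout=10,
)
resp.raise_for_status()

# The response body is a continuous binary RTCM v3 stream.
for chunk in resp.iter_content(chunk_size=1024):
    if chunk:
        pass  # e.g. forward the raw bytes to the sensor's correction input
```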

The sensor continuously monitors the GNSS operation and the RTK correction data stream. It assesses the quality and reliability of both using proprietary algorithms to obtain the best possible performance under all circumstances.

Vision-RTK product overview

See the most important features at a glance

Output

6D global pose, optionally with covariance

Configurable rate from 20Hz to 200Hz

UART serial connection to host (Fixposition ROS driver or manual parsing)
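
For hosts that parse the stream manually instead of using the Fixposition ROS driver, reading the UART output might look like the minimal pyserial sketch below. The port name, baud rate, and the assumption that messages arrive as ASCII lines are placeholders to be checked against the Fixposition documentation.

```python
# Minimal sketch of reading the sensor's UART output with pyserial
# (illustrative; port, baud rate, and message format are assumptions).
import serial

with serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1.0) as port:
    while True:
        line = port.readline().decode("ascii", errors="replace").strip()
        if line:
            print(line)  # each line would carry one pose message to parse
```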

Sensor fusion

Quad-core ARM CPU with image processing and sensor fusion software stack

Tightly coupled sensor fusion of raw sensor measurements in real time

Powerful set of onboard sensors with hardware time synchronization (dual RTK-GNSS receivers, global-shutter CMOS camera, 6-axis IMU, magnetometer, barometer)

Optional extension with external signals (LiDAR odometry, wheel speed)

Coming soon

Online database management tools

Post-processing of datasets to provide even higher accuracy by running a full optimization over the entire dataset

Marker-based localization to improve accuracy in dedicated operating environments such as indoor parking or docking

Device management

Easy-to-use web application for setup and monitoring

Access via device WiFi access point

Over-the-air updates


Subscribe to our newsletter

Subscribe to our newsletter to get notified about our product launches, events, and more.

Privacy Policy

Fixposition AG | Rütistrasse 14 | 8952 Schlieren | Switzerland
UID CHE-309.584.247
Fixposition AG © 2021