
Built for precise positioning under all conditions

Precise positioning faces many challenges
Sensor fusion
The fusion engine leverages the strengths of multiple independent sensor technologies and compensates for individual weaknesses to provide an optimal pose estimate.
Visual Odometry
RTK-GNSS
Common errors in current GNSS systems require corrections to achieve the high level of accuracy required. RTK technology corrects these errors to achieve centimeter-level positioning. NTRIP is used to deliver the correction data to the sensor; this data can be obtained from publicly available Virtual Reference Station (VRS) networks or from a local physical base station.
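As background on how an NTRIP client requests a correction stream from a caster, here is a minimal sketch of the NTRIP v1 request message. The host, mount point, and credentials are placeholders; the Vision-RTK 2 handles NTRIP internally, so this is for illustration only.

```python
import base64


def build_ntrip_request(host, mountpoint, user, password):
    """Build an NTRIP v1 client request for an RTCM correction stream.

    After sending this over a TCP connection to the caster, the caster
    replies with "ICY 200 OK" followed by raw RTCM bytes, which a client
    forwards unchanged to the GNSS receiver.
    """
    credentials = base64.b64encode(f"{user}:{password}".encode()).decode()
    request = (
        f"GET /{mountpoint} HTTP/1.0\r\n"
        f"Host: {host}\r\n"
        "User-Agent: NTRIP ExampleClient/1.0\r\n"
        f"Authorization: Basic {credentials}\r\n"
        "\r\n"
    )
    return request.encode("ascii")


# Placeholder caster details for illustration only:
req = build_ntrip_request("caster.example.com", "VRS_MOUNT", "user", "pass")
```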
Key features of the Vision-RTK 2
- Tightly coupled sensor fusion of raw sensor measurements in real time, providing an accurate global 3D pose and kinematics estimate of your robot.
- Powerful set of onboard sensors with hardware time synchronization (two RTK-GNSS receivers, a global-shutter camera, a 6-axis IMU, a magnetometer, and a barometer).
- Optional extension with external signals (e.g., wheel speed).
- Web interface for setup and monitoring.
- Web dashboard for data upload and visualization.
- Post-processing for the highest possible accuracy.

Explore your journey with us
Frequently asked questions and answers
Please visit our GitHub page for all information: https://github.com/fixposition
Evaluation stage
We will work with you to study your application, hardware and software platform, and your specific requirements.
Based on this, you can use your Evaluation kit to evaluate our positioning sensor in your application environment.
We will provide a platform where you can upload your data for review by our engineers, helping us support you.
Design-in stage
Our off-the-shelf solutions are plug-and-play thanks to our compliance with industry standard interfaces and protocols. For high-volume opportunities, we can adapt our hardware and software to your application requirements.
Once our sensor is integrated, we will work with you to optimize performance through fine-tuning.
After-sales support
After your solution goes into production, we offer continued software updates to ensure your product is up to date.
As your product matures, we would be pleased to work with you to support feature requests and your evolving needs.
The Vision-RTK 2 combines the best of global positioning (enabled by GNSS) and relative positioning (visual-inertial odometry, VIO).
We have developed state-of-the-art sensor fusion technology to overcome weaknesses in individual sensors and provide high-precision position information in all environments. Our technology removes the time-dependent drift characteristics that are typical of solutions that solely rely on Inertial Measurement Units (IMU) for dead reckoning.
We have developed a unique technique that delivers a more robust and precise solution than the typical Kalman filter based approaches used by other solutions on the market.
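To illustrate the general idea of fusing absolute and relative measurements (this is a textbook complementary-filter sketch, not Fixposition's proprietary algorithm), the following toy 1-D example shows how drifting relative odometry can be continually pulled back toward absolute GNSS fixes, and how the estimate keeps dead-reckoning during a GNSS outage:

```python
def fuse(gnss_pos, vio_delta, prev_est, gnss_weight=0.2):
    """One step of a toy 1-D complementary filter.

    vio_delta: relative displacement since the last step (drifts over time).
    gnss_pos:  absolute position fix, or None during a GNSS outage.
    """
    prediction = prev_est + vio_delta      # dead-reckoned prediction
    if gnss_pos is None:                   # outage: rely on odometry alone
        return prediction
    # Blend the prediction with the absolute fix to bound drift.
    return (1 - gnss_weight) * prediction + gnss_weight * gnss_pos
```

With each absolute fix, a fraction of the accumulated odometry drift is removed, which is why the combined estimate does not diverge the way a pure dead-reckoning solution does.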
Our solution is easy to integrate into a variety of robots and can take further sensors as input as they become available.
All forms of autonomous solutions including:
- Autonomous shuttles
- Small/medium sized robots used for delivery, patrolling, rescue, and cleaning
- Robot lawn mowers
- Agricultural robots such as tractors, harvesters, planters, and sprayers
- Small high-volume agricultural robots
Our positioning sensor has a typical lead time of 3-6 weeks.
The Vision-RTK 2 requires the following:
- An internet connection
- A third-party RTK correction data subscription (NTRIP), available in many regions of the world
- Optionally, wheel-tick sensor input
The Vision-RTK 2 has a custom message format that streams 3D position, orientation, velocity and other useful information. It can be transmitted over Ethernet (TCP/IP), Wi-Fi (TCP/IP) or serial interfaces. Details about the supported data formats can be found in the user manual.
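As an illustration of how NMEA-style ASCII sentences of the kind many GNSS sensors stream can be checksum-validated and split into fields, here is a short sketch. The `$FP,ODOMETRY,...` payload below is a made-up example; the actual message layout is specified in the user manual.

```python
from functools import reduce


def nmea_checksum(payload):
    """XOR of all characters between '$' and '*', as two uppercase hex digits."""
    return format(reduce(lambda acc, ch: acc ^ ord(ch), payload, 0), "02X")


def parse_sentence(line):
    """Split a '$...*HH' sentence into fields; return None if the checksum fails."""
    if not line.startswith("$") or "*" not in line:
        return None
    payload, checksum = line[1:].rstrip("\r\n").rsplit("*", 1)
    if nmea_checksum(payload) != checksum.upper():
        return None
    return payload.split(",")


# Hypothetical payload for illustration only; see the user manual
# for the real message definitions.
payload = "FP,ODOMETRY,1,1234.5"
line = f"${payload}*{nmea_checksum(payload)}"
fields = parse_sentence(line)
```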
The Vision-RTK 2 installation should fulfil the following:
- Be rigidly attached to the vehicle.
- The camera should be placed where it has an unobstructed view of the terrain in the direction of travel. Its view should not be dominated by featureless scenes (e.g., the sky), and obstructions that move with the vehicle (e.g., the car's hood) should be minimised (any that remain can be cropped out in the sensor settings).
- The appropriate dynamic mode in the sensor settings must be selected for the platform. These are defined in the user manual.
The Vision-RTK 2 camera is used for visual tracking. Future technologies and products are on our roadmap, stay tuned to learn more.
These metrics are calculated by evaluating the Vision-RTK 2's output against a ground-truth system. Hundreds of kilometres of testing data containing different scenarios are evaluated. We can provide the detailed method of calculation on request.
A brief explanation of the accuracy, and of position drift during GNSS outages, is available in our whitepaper.