Frequently Asked Questions
General / Pricing / Sales / Support
1. Is there any version of the product for robots that are intended for indoor environments?
Our current system focuses on outdoor applications in degraded or denied GNSS environments, with indoor functionality dependent on your travel distance requirements.
We're actively developing technologies that leverage visual feature maps to extend our capabilities in indoor environments.
2. What is the typical lead time when ordering a Vision-RTK2?
For 1 or 2 pieces of the "Starter-Kit", the lead time can be a few days to a week, depending on current availability.
Larger orders usually take 6 to 8 weeks.
1. For which IP level is Vision-RTK2 certified?
The Vision-RTK 2 has been tested to meet IP66 standards.
1. What kind of RTK correction protocol does Vision-RTK2 support?
We support RTCM (v3) over NTRIP (v1) or UART. Common providers like Swipos, QianXun, etc. are all supported.
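In practice the Vision-RTK 2's built-in NTRIP client handles the correction stream for you, but the protocol itself is simple. The sketch below shows a minimal NTRIP v1 client in Python: the client sends an HTTP/1.0-style request for a mountpoint and, after an `ICY 200 OK` reply, the caster streams raw RTCM3 frames. The host, mountpoint, and credentials are placeholders you would obtain from your correction provider (e.g. Swipos).

```python
import base64
import socket

def build_ntrip_request(mountpoint: str, user: str, password: str) -> bytes:
    """Build an NTRIP v1 request line with HTTP Basic authentication."""
    credentials = base64.b64encode(f"{user}:{password}".encode()).decode()
    return (
        f"GET /{mountpoint} HTTP/1.0\r\n"
        f"User-Agent: NTRIP ExampleClient/1.0\r\n"
        f"Authorization: Basic {credentials}\r\n"
        "\r\n"
    ).encode()

def stream_rtcm(host: str, port: int, mountpoint: str, user: str, password: str):
    """Connect to an NTRIP caster and yield chunks of the RTCM3 stream."""
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(build_ntrip_request(mountpoint, user, password))
        header = sock.recv(1024)
        if not header.startswith(b"ICY 200 OK"):
            raise RuntimeError(f"Caster refused request: {header!r}")
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            yield chunk  # forward these RTCM3 bytes to the receiver
```

This illustrates why only a network connection and one set of credentials is needed: the caster simply streams RTCM3 bytes, which the sensor consumes directly.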
2. Vision-RTK2 has two GNSS receivers, does it also need two RTK correction subscriptions?
A single RTK correction data stream is required. It is used for both GNSS receivers.
3. Will you support the State Space Representation (SSR) RTK correction format in the future?
The Vision-RTK 2 aims to provide centimetre-accurate positioning. We have had very good experience with the Observation Space Representation (OSR). We constantly review and assess the latest SSR solutions; currently, their performance does not meet Fixposition's accuracy needs. If you have an application for which you believe such a solution provides the necessary accuracy, please contact us to discuss in detail.
1. Do you have readily available driver support?
We have a ready-to-use ROS driver available. Please contact us to get the package.
2. Does Vision-RTK2 support wheel speed input?
Yes, we currently support wheel speed input via CAN, UART, TCP/IP and ROS.
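As a rough illustration of the TCP/IP path, the sketch below sends a single wheel speed sample to the sensor. Note that the `$WHEELSPEED` line format, the port, and the mm/s scaling here are hypothetical placeholders; the actual wheel speed message layout (CAN frame IDs, sentence format, units) is defined in the Fixposition Integration Manual.

```python
import socket

def encode_wheelspeed(speed_mm_s: int) -> bytes:
    """Encode one wheel speed sample as an ASCII line.
    NOTE: this sentence format is hypothetical -- consult the
    Integration Manual for the real message definition."""
    return f"$WHEELSPEED,{speed_mm_s}\r\n".encode()

def send_wheelspeed(host: str, port: int, speed_mm_s: int) -> None:
    """Send one wheel speed sample to the sensor over TCP."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(encode_wheelspeed(speed_mm_s))
```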
3. Does Vision-RTK2 support output of the internal IMU and other RAW data?
Yes, there is an option to output live IMU data, both bias-corrected and raw values, as ASCII strings at 200 Hz. The GNSS raw data is also output on the TCP interface. Image data cannot currently be streamed live, but all data can be recorded using the web interface.
For more details please consult our Integration Manual.
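Consuming the 200 Hz ASCII IMU output typically means splitting each comma-separated line into numeric fields. The sketch below shows the general pattern; the field names and ordering used here are hypothetical, and the real sentence layout is specified in the Integration Manual.

```python
def parse_imu_line(line: str) -> dict:
    """Split one comma-separated ASCII IMU record into floats.
    NOTE: the field order (timestamp, 3x accel, 3x gyro) is an
    assumption for illustration, not the documented format."""
    fields = line.strip().split(",")
    timestamp = float(fields[0])
    ax, ay, az = (float(v) for v in fields[1:4])
    gx, gy, gz = (float(v) for v in fields[4:7])
    return {"t": timestamp, "accel": (ax, ay, az), "gyro": (gx, gy, gz)}
```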
Vision / fusion related
1. How long can the localization be stable without RTK-GNSS fix or generally GNSS signals, assuming we keep moving at a constant speed, say inside a tunnel?
The Vision-RTK 2 drifts at a rate of 0.5% to 1% of the travelled distance.
2. Does your solution work in darker indoor areas and at night?
Our solution has been tested in poorly illuminated environments such as parking garages, in the evening, and in indoor-outdoor transitions including blinding sunlight during the day. All of these situations are handled by our system without major degradation. Complete darkness can lead to degradation when no GNSS is available.
3. What are the requirements to start and initialize the sensor?
The Vision-RTK 2 requires a GNSS-RTK fix for initialization. We are currently working to extend sensor start-up to GNSS-degraded environments.
4. Which kind of platforms does Vision-RTK2 support?
Vision-RTK2 provides multiple operation modes:
- Passenger Car
- Slow moving robot
5. Does the sensor upload data automatically in the background to Fixposition servers?
We do not upload any data automatically in the background, nor do we require any background connection other than the one to the NTRIP caster providing the RTK correction stream.
6. Does Vision-RTK2 need a mapping step beforehand?
No. The Vision-RTK 2 fuses GNSS, visual, and inertial measurements in real time and does not require a prior mapping step.
Testing and Performance
1. What does 0.5 - 1% drift mean? How long does it support indoor operation?
The drift is based on distance, not time. How long indoor operation is supported therefore depends on the required accuracy level. With an accepted accuracy of 1 m, the sensor can travel roughly 200 m while keeping the error below 1 m.
Connect with Fixposition:
We're hiring: https://www.fixposition.com/join-us
Follow us on LinkedIn: https://www.linkedin.com/company/fixposition
Follow us on Twitter: https://twitter.com/FixpositionAG
Subscribe to our global testing videos: https://www.youtube.com/channel/UCJGDumf_aTrWGuntxih1TIg