Why accuracy alone is the wrong question
RTLS discussions often start and end with accuracy: “Is it 30 cm or 10 cm?”
In the field, that question is incomplete. Operators care about whether
events trigger correctly and on time.
Accuracy, latency, and update rate interact to determine that outcome.
1) Accuracy: what it really represents
Accuracy describes how close a reported position is to the true position,
under specific conditions. In industrial RTLS, those conditions matter more
than the number itself.
1.1 Best-case vs worst-case accuracy
- Best-case: open space, good geometry, minimal obstruction.
- Worst-case: corners, racks, tanks, moving vehicles, partial NLOS.
Most datasheets quote best-case numbers. Acceptance tests should focus on
worst-case zones, because that is where operators lose trust first.
1.2 Accuracy distribution, not a single value
A stable RTLS has a narrow error distribution. An unstable one may have a good
average but frequent spikes. Those spikes—rather than the mean—cause false alarms
and missed events.
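To make the distribution point concrete, here is a minimal Python sketch with made-up error samples from one test zone; it shows how a reasonable-looking mean can hide the spikes that matter:

```python
import numpy as np

# Hypothetical error samples (metres) logged during a walk test in one zone.
errors_m = np.array([0.12, 0.18, 0.15, 0.95, 0.14, 0.22, 1.40, 0.17, 0.19, 0.16])

print(f"mean = {errors_m.mean():.2f} m")              # ~0.37 m: looks solid
print(f"P95  = {np.percentile(errors_m, 95):.2f} m")  # ~1.20 m: the spikes
print(f"P99  = {np.percentile(errors_m, 99):.2f} m")
# The mean suggests a stable system; the tail is what trips geofence
# false alarms and missed events.
```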
2) Latency: the invisible constraint
Latency is the time between a physical action and the system responding to it.
In safety and dispatch scenarios, latency often matters more than absolute accuracy.
2.1 Where latency comes from
- Radio transmission and ranging
- Solver computation and filtering
- Network backhaul and server processing
- Event logic and notification pipelines
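A minimal sketch of how these stages add up into an end-to-end budget; the per-stage numbers below are hypothetical and vary widely by vendor and site:

```python
# Hypothetical per-stage latency budget in milliseconds.
latency_budget_ms = {
    "radio_ranging": 40,
    "solver_and_filtering": 60,
    "network_backhaul": 25,
    "server_processing": 30,
    "event_logic_and_notification": 80,
}

end_to_end_ms = sum(latency_budget_ms.values())
print(f"end-to-end latency: {end_to_end_ms} ms")  # 235 ms in this example
# Already close to a 300 ms collision-warning budget before retries,
# queuing under load, or extra smoothing delay are counted.
```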
2.2 Latency vs accuracy trade-off
Many systems smooth position data to improve apparent accuracy.
Smoothing reduces jitter but adds delay.
For safety triggers, excessive smoothing can make the system react too late,
even if the reported position looks “clean.”
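A small sketch illustrates the trade-off, assuming a simple exponential moving average as the smoothing filter (one common choice, not necessarily what any particular RTLS uses):

```python
def ema(samples, alpha):
    """Exponential moving average: lower alpha = smoother track, more lag."""
    out, value = [], samples[0]
    for s in samples:
        value = alpha * s + (1 - alpha) * value
        out.append(value)
    return out

# A tag jumps from x = 0 m to x = 5 m, e.g. a worker stepping into a hazard zone.
positions = [0.0] * 5 + [5.0] * 30

for alpha in (0.5, 0.1):
    smoothed = ema(positions, alpha)
    # Count updates after the jump until the filtered position crosses 4 m.
    lag = next(i for i, v in enumerate(smoothed[5:]) if v >= 4.0) + 1
    print(f"alpha={alpha}: crosses 4 m after {lag} updates")
# alpha=0.5 -> 3 updates; alpha=0.1 -> 16 updates. At a 1 Hz update rate,
# the heavier filter delays a zone-entry trigger by many seconds.
```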
3) Update rate: responsiveness versus noise
Update rate defines how often new position data is produced.
Higher update rates increase responsiveness—but also amplify noise and processing load.
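One way to reason about "enough" update rate is the distance an asset can travel before the system has any chance to react; a rough sketch with hypothetical forklift numbers:

```python
def reaction_distance_m(speed_mps, update_rate_hz, latency_s):
    """Worst case: up to one full update interval plus end-to-end latency
    can pass before a position-based event can fire."""
    return speed_mps * (1.0 / update_rate_hz + latency_s)

# Hypothetical forklift at 3 m/s (~11 km/h) with 250 ms end-to-end latency.
for rate_hz in (0.5, 1, 4, 10):
    d = reaction_distance_m(3.0, rate_hz, 0.25)
    print(f"{rate_hz:>4} Hz -> up to {d:.2f} m travelled before a warning can fire")
# 0.5 Hz -> 6.75 m, 1 Hz -> 3.75 m, 4 Hz -> 1.50 m, 10 Hz -> 1.05 m:
# beyond a few Hz, latency dominates and extra updates mostly add noise and load.
```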
3.1 When higher update rate helps
- Fast-moving vehicles
- Short-range proximity detection
- High-density interaction zones
3.2 When higher update rate hurts
- Battery-powered tags with tight power budgets
- Environments with frequent NLOS events
- Systems relying on heavy filtering to stabilize output
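A rough sketch of the battery cost behind the first point above, using hypothetical tag figures (real numbers are vendor- and protocol-specific):

```python
BATTERY_MAH = 1000          # hypothetical coin/AA-class tag battery
SLEEP_CURRENT_MA = 0.005    # assumed deep-sleep draw
BLINK_CHARGE_MAH = 0.0003   # assumed charge per position update (blink)

def battery_life_days(update_rate_hz):
    blinks_per_hour = update_rate_hz * 3600
    average_draw_ma = SLEEP_CURRENT_MA + blinks_per_hour * BLINK_CHARGE_MAH
    return BATTERY_MAH / average_draw_ma / 24

for rate_hz in (0.1, 1, 10):
    print(f"{rate_hz:>4} Hz -> ~{battery_life_days(rate_hz):.0f} days")
# ~369 days at 0.1 Hz, ~38 days at 1 Hz, ~4 days at 10 Hz:
# once blinks dominate the budget, a 10x rate increase costs roughly 10x battery.
```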
4) The three-parameter interaction
Accuracy, latency, and update rate cannot be optimized independently.
Changing one shifts the operating point of the others.
| Change | Immediate effect | Hidden consequence |
|---|---|---|
| Increase update rate | Faster response | More noise, more false triggers |
| Add filtering | Smoother tracks | Higher latency |
| Tighten accuracy target | Better spatial confidence | Higher infrastructure and tuning cost |
5) Specifying performance by event, not by metric
The most reliable way to specify RTLS performance is to define
event correctness:
- Which event must trigger?
- Where must it trigger reliably?
- How late is “too late”?
For example, a collision warning may tolerate ±50 cm of positional error
but only 300 ms of latency. A mustering report may tolerate seconds of latency
but require reliable zone membership.
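A minimal sketch of what an event-level specification can look like in code; the event names, zones, and thresholds below are illustrative, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class EventSpec:
    name: str
    zone: str                    # where the event must trigger reliably
    max_position_error_m: float  # tolerated error for correct triggering
    max_latency_ms: int          # "too late" threshold, movement to notification
    min_trigger_rate: float      # share of true events that must trigger in tests

SPECS = [
    EventSpec("collision_warning", "loading_dock_aisle", 0.5, 300, 0.99),
    EventSpec("mustering_report", "assembly_point_A", 2.0, 5000, 0.999),
]

for s in SPECS:
    print(f"{s.name}: <= {s.max_position_error_m} m error, "
          f"<= {s.max_latency_ms} ms latency, in {s.zone}")
```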
6) Acceptance testing that avoids disputes
- Define worst-case test zones during design.
- Measure P95/P99 error in those zones, not just averages.
- Measure end-to-end latency from movement to event.
- Verify behavior under motion, obstruction, and load.
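A small sketch of such a check for one worst-case zone, reusing the hypothetical EventSpec from the previous sketch and logged error/latency samples:

```python
import numpy as np

def zone_passes(errors_m, latencies_ms, spec):
    """Acceptance check for one worst-case zone against an event-level spec:
    P95 error and P95 movement-to-event latency must both stay inside the spec."""
    p95_error = float(np.percentile(errors_m, 95))
    p95_latency = float(np.percentile(latencies_ms, 95))
    return p95_error <= spec.max_position_error_m and p95_latency <= spec.max_latency_ms

# Example: zone_passes(logged_errors, logged_latencies, SPECS[0])
```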
7) What operators actually trust
Operators trust systems that behave consistently.
A slightly less accurate system with predictable timing and stable behavior
will outperform a “high-accuracy” system that reacts late or inconsistently.
TL;DR
RTLS performance is not defined by accuracy alone. In real deployments, accuracy, latency, and update rate form a coupled system—optimizing one without understanding the others often makes the system worse.
For safety and operations, the correct specification is event reliability in worst-case zones, bounded by acceptable latency and a realistic update rate. Systems that look “accurate” on paper but respond too slowly or too noisily lose operator trust.
Key takeaways
- Accuracy without bounded latency is useless for safety; low latency without stability creates false alarms.
- Update rate controls responsiveness and noise—higher is not always better.
- Worst-zone behavior defines acceptance, not average performance.
- Many RTLS disputes come from unclear event-level specifications, not hardware limits.
- The right metric is “event correctness over time,” not a single cm number.
FAQ
Is higher accuracy always better for RTLS?
Not if it increases latency or instability. For many events, timely and consistent triggering matters more than marginal gains in spatial precision.
What update rate is “enough” for safety applications?
It depends on speed and distance thresholds. Fast interactions require higher rates, but only if latency and noise are controlled.
Why do some systems look accurate but fail in real operations?
Because best-case accuracy was optimized while worst-case zones, latency, and filtering effects were ignored.
Can filtering fix noisy positioning?
Filtering can stabilize output, but it always trades responsiveness for smoothness. Excessive filtering can delay critical events.
How should performance be written in contracts?
Specify event-level behavior in worst zones, including latency and acceptable error ranges, rather than a single accuracy number.