How to get geotagged pictures indoors?
What professional camera and equipment solutions can reliably provide high-accuracy indoor geotagging?
Here are the main options.
Indoor Geotagging and Mapping Technologies
Traditional GPS does not work indoors, so indoor mapping systems combine cameras with alternative positioning technologies. Solutions use combinations of LiDAR/SLAM, inertial/RTK fusion, wireless ranging (Wi‑Fi RTT, UWB), or optical tracking to tag images with position. For example, Wi‑Fi RTT (IEEE 802.11mc) on Android can achieve roughly 1–2 m accuracy indoors, while ultra-wideband (UWB) ranging can reach about 10–30 cm in line-of-sight setups, far better than Wi‑Fi or BLE. In practice, many high-end systems embed cameras in LiDAR/SLAM scanners or wearable units that self-localize with sub-decimeter precision. Common architectures include:
- LiDAR + SLAM: Handheld/backpack scanners (NavVis VLX, GeoSLAM Zeb, Leica BLK2GO) fuse multi-layer LiDAR with IMU+vision SLAM to build a 3D map and track the device (and attached cameras) in real time. These yield survey-grade accuracy (millimeter-level local precision) indoors.
- Inertial/INS + SLAM: Vehicle-mounted or backpack systems (Leica Pegasus, Applanix TIMMS) use GNSS outdoors and switch to INS/SLAM indoors. The Leica Pegasus backpack, for example, relies on GNSS for outdoor georeferencing but maintains positioning indoors via inertial navigation and SLAM, with a nominal absolute accuracy of ~5 cm.
- Photogrammetry + Wireless Ranging: Some setups attach a camera and a UWB tag to a pole or backpack. The camera takes overlapping photos while UWB anchors record ranges. In one test, a Canon G7X paired with Pozyx UWB devices achieved ~0.5 m absolute error in georeferenced coordinates, enough for decimeter-level models. UWB provides relative positioning of the camera; final geolocation can be fixed by any known reference (e.g. a brief GNSS fix at an entry point).
- Optical Vision Tracking: In film/VFX, systems like NCam or Zeiss’ markerless trackers compute the camera’s 6DOF pose by vision alone. These “camera tracking” rigs use a camera-mounted sensor or cloud-based computer vision to continuously locate the camera in space. They can deliver sub-centimeter tracking without external signals, but output pose rather than a global GPS tag.
- Hybrid Visual/Inertial SLAM: High-end mapping often uses multi-camera rigs (e.g. 360° “streetview” systems or backpack rigs) plus an inertial navigation system (INS). For instance, the Mosaic Meridian mobile mapper pairs a 74 MP 360° camera array with a precision INS and a LiDAR payload. It achieves ~2–4 cm vertical accuracy in real time. This is comparable to survey-grade systems, at much lower cost.
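At its core, the UWB-ranging approach above is trilateration: given surveyed anchor positions and measured ranges, the tag position falls out of a least-squares solve. A minimal 2D sketch (the anchor layout and coordinates are illustrative, not from any specific product):

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Estimate a 2D tag position from UWB anchor ranges.

    Linearizes the range equations against the first anchor
    (subtracting it cancels the quadratic terms) and solves the
    resulting linear system by least squares."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x0, y0 = anchors[0]
    r0 = ranges[0]
    # From (x - xi)^2 + (y - yi)^2 = ri^2 minus the first equation:
    # 2(xi - x0)x + 2(yi - y0)y = r0^2 - ri^2 + xi^2 + yi^2 - x0^2 - y0^2
    A = 2 * (anchors[1:] - anchors[0])
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - x0**2 - y0**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Example: four anchors at the corners of a 10 m x 8 m room,
# tag at (3, 2); ranges are exact here (real UWB adds ~10-30 cm noise).
anchors = [(0, 0), (10, 0), (10, 8), (0, 8)]
true_pos = np.array([3.0, 2.0])
ranges = [np.hypot(*(true_pos - a)) for a in anchors]
print(trilaterate(anchors, ranges))  # ≈ [3. 2.]
```

Production solvers additionally weight the ranges, reject non-line-of-sight outliers, and extend the same math to 3D.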
In all cases, geolocation data is recorded alongside imagery. Integrated systems typically log the pose (position + orientation) in metadata or sidecar files. For example, survey-grade RTK+IMU camera rigs attach a module to the camera hot‑shoe that logs each photo’s coordinates and attitude. Similarly, mobile mapping units embed IMU and (when available) GNSS/SLAM solutions so that each captured image automatically carries the estimated X/Y/Z in the device’s coordinate frame. In post-processing, photogrammetry software can ingest these positions (from EXIF or log files) to directly georeference the 3D model.
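If you write these positions into EXIF yourself, the main wrinkle is that the GPS tags store latitude and longitude as degree/minute/second rationals rather than decimal degrees. A minimal conversion sketch (the coordinate value is illustrative):

```python
from fractions import Fraction

def deg_to_exif_dms(value):
    """Convert a signed decimal-degree coordinate to the EXIF GPS
    form: degrees, minutes, seconds, each as a
    (numerator, denominator) rational. The hemisphere reference
    (N/S or E/W) is handled separately by the caller."""
    value = abs(value)
    deg = int(value)
    minutes_f = (value - deg) * 60
    minutes = int(minutes_f)
    seconds = round((minutes_f - minutes) * 60, 4)
    sec = Fraction(seconds).limit_denominator(10000)
    return ((deg, 1), (minutes, 1), (sec.numerator, sec.denominator))

lat = 48.858370  # example coordinate (decimal degrees)
ref = "N" if lat >= 0 else "S"
print(ref, deg_to_exif_dms(lat))
```

A tool such as exiftool (or a library like piexif) can then write the resulting rationals into the GPSLatitude/GPSLongitude tags along with the matching reference tags.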
Professional systems & use cases: Typical applications include architectural documentation, reality capture for BIM, archaeology surveys, and VFX/VR scene reconstruction. For example, Leica’s BLK2GO handheld scanner (LiDAR+SLAM) produces centimeter-accurate scans of interior as-built spaces. NavVis’s wearable VLX scanner (dual LiDAR+SLAM with 4 cameras) can rapidly scan entire buildings at ~6 mm global accuracy. Large indoor projects (malls, airports) use tripod scanners (Trimble X7, FARO Focus) with manually placed control points for registration. In film studios or volume stages, camera tracking rigs (NCam, Zeiss Ocellus) feed live pose data into VFX engines so virtual sets align perfectly with the camera.
Below is a comparison of representative camera-based mapping combinations:
| System (Camera + Positioning) | Positioning Tech | Accuracy (absolute / relative) | Integration / Use | Cost (approx) | Typical Applications |
|---|---|---|---|---|---|
| NavVis VLX (wearable mapper) | Dual 32-layer LiDAR + IMU + SLAM + 4×360° cameras | ~6 mm global accuracy (tested in 500 m² area); survey-grade local accuracy | All-in-one; operator walks through space; live feedback. Requires training. | ∼€80–150 k | Large indoor/outdoor site mapping, detailed BIM and reality capture |
| GeoSLAM Zeb Revo RT (handheld) | Single LiDAR + IMU + SLAM (optional camera) | ~6 mm global accuracy; portable, moderate range | Extremely portable; attach optional panoramic camera; fast deployment | ∼€40–60 k | Small-to-medium building scanning, archaeology, survey in tight spaces |
| Leica BLK2GO (handheld) | Single LiDAR + IMU + SLAM + 3 cameras | ~10 mm indoor SLAM accuracy; sub-cm local precision | Very lightweight; one-button capture; live view on smartphone | ∼€20–30 k | Quick interior scans, as-built inspection, real estate tours |
| Leica Pegasus Backpack | Dual LiDAR + IMU/INS + GNSS/SLAM | ~5 cm absolute (nominal) (with GNSS outdoors); ~3 cm relative | Wearable backpack; uses GNSS outside and INS/SLAM inside | ∼€100–150 k | Large 3D urban or infrastructure surveys, mine or tunnel mapping |
| Mosaic Meridian (mobile mapper) | 360° high-res camera array + LiDAR + high-grade INS | ~2–4 cm (system vertical accuracy); survey precision | Integrated backpack/mobile system; synchronized sensors; plug & play | Pricing on request (marketed as cost-effective) | Mobile mapping (indoors/outdoors), city street mapping, PNT research |
| Photogrammetry + UWB (e.g. Canon G7X + Pozyx tags) | Monocular camera + UWB anchors (with optional GNSS) | ~0.5 m absolute RMS (no GCPs); ~6–8 cm relative local | Prototype setup; requires spreading anchors; data fusion in processing | ∼€5–7 k | Low-cost site surveys, heritage/archaeology mapping where sub-meter geo is sufficient |
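Pinning a local SLAM or UWB trajectory to global coordinates, e.g. via the brief GNSS fix at an entry point mentioned earlier, amounts to a rigid transform: rotate the local offsets by the frame's heading, then translate by the surveyed reference. A minimal 2D sketch (the reference coordinates and heading are made-up values):

```python
import math

def local_to_global(local_xy, ref_local, ref_global, rotation_deg):
    """Map a point from the device's local frame to a global frame
    (e.g. UTM easting/northing) using one surveyed reference point
    and the counter-clockwise rotation from local to global axes."""
    th = math.radians(rotation_deg)
    dx = local_xy[0] - ref_local[0]
    dy = local_xy[1] - ref_local[1]
    # Rotate the local offset into the global frame, then translate.
    e = ref_global[0] + dx * math.cos(th) - dy * math.sin(th)
    n = ref_global[1] + dx * math.sin(th) + dy * math.cos(th)
    return (e, n)

# A GNSS fix at the entrance pins local (0, 0) to a surveyed
# easting/northing; the 30° rotation is an assumed survey value.
print(local_to_global((12.0, 5.0), (0.0, 0.0), (500000.0, 4649776.0), 30.0))
```

Real pipelines estimate the rotation (and often a scale) from two or more control points rather than a single fix, and solve the 3D case the same way.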
Embedding Geolocation: In practice, these systems tag images by writing X/Y/Z into photo metadata or linking images to the computed trajectory. High-end scanners automatically georeference each panorama/frame in their software. Custom setups may use smartphone or camera apps (e.g. RTKCamera by Redcatch) to stamp coordinates into EXIF. In photogrammetry, one can use a GPS/INS log (or UWB fixes) as a photo GPS log so mapping software assigns geotags during alignment. In all cases, images and their associated poses are collated so that each photo is tied to a precise location in the reconstructed model.
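Using a trajectory log as a photo GPS log, as described above, usually means matching each photo's capture timestamp against the time-stamped track and interpolating between fixes. A minimal sketch (the toy trajectory is made up; real logs also carry orientation):

```python
import bisect

def interpolate_pose(track, t):
    """Linearly interpolate (x, y, z) from a time-sorted trajectory
    log [(t, x, y, z), ...] at a photo's capture timestamp t.
    Timestamps outside the log are clamped to the nearest fix."""
    times = [p[0] for p in track]
    i = bisect.bisect_left(times, t)
    if i == 0:
        return track[0][1:]
    if i == len(track):
        return track[-1][1:]
    t0, *p0 = track[i - 1]
    t1, *p1 = track[i]
    w = (t - t0) / (t1 - t0)
    return tuple(a + w * (b - a) for a, b in zip(p0, p1))

# Toy trajectory: one fix per second; photo taken at t = 1.5 s.
track = [(0.0, 0.0, 0.0, 1.2), (1.0, 2.0, 0.5, 1.2), (2.0, 4.0, 1.0, 1.2)]
print(interpolate_pose(track, 1.5))  # (3.0, 0.75, 1.2)
```

Accurate results depend on the camera clock being synchronized with the logger; a fixed clock offset is typically calibrated once and subtracted from every photo timestamp.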
Key Takeaways: Commercial systems from Leica, Trimble, NavVis, GeoSLAM and others now offer turnkey solutions for indoor mapping with camera payloads. LiDAR-based SLAM units achieve millimeter-to-centimeter precision without GPS. UWB ranging and Wi‑Fi RTT can support decimeter-level camera geolocation when combined with photogrammetry. For VFX, optical camera-tracking systems (e.g. NCam, Zeiss) deliver pose tracking in real time. Overall, sub-meter indoor geotagging is feasible today using the above professional technologies and workflows.