Lessons for shallow underwater photogrammetry engineering
Environmental and operational constraints
- Turbidity limits: Data acquisition is limited to clear to moderately turbid water, up to roughly 50 NTU.
- Visibility requirements:
  - Optimal results require a visibility of at least one metre from the object.
  - While 0.5 metres is technically possible, it is not feasible for large-scale production.
- Lighting conditions:
  - For shallow water, natural light is preferred, as external lights can create inconsistencies.
  - Artificial lighting should be reserved for deep-water environments.
  - Data quality is best when there is no direct sun on the water surface, which avoids wave-induced light flicker.
  - Strobes produce a very intense burst of light that freezes motion and yields sharper images. Video lights are functional if they provide a technically correct exposure, but strobes generally allow for faster work.
Cameras
- DSLRs with low-noise sensors are recommended for high-quality results: they perform better in low light, exhibit less distortion, and focus more reliably.
- GoPros are cheap and very compact, but their photo quality is poor compared to DSLRs and mirrorless cameras.
- While high-resolution cameras (e.g., 45 megapixels) provide maximum detail, they create large volumes of data that can fill memory cards and storage quickly. These files can be downgraded to a lower resolution during post-processing or within the processing software to speed up processing.
- Cameras that rely on internal pop-up flashes are not ideal because they cause significant power drain on the camera battery.
- With a GoPro it is common to shoot video and extract individual frames for photogrammetric reconstruction, provided the movement is smooth and consistent: steady panning without jerky frames.
- A benefit of video is that if a frame is blurred, frames a few frames earlier or later may be usable.
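The frame-substitution idea above can be sketched with a simple sharpness score: the variance of a Laplacian filter rises with edge contrast, so when a frame is blurred, a neighbouring frame with a higher score can be used instead. A minimal sketch using `numpy` (the search window size is an illustrative assumption, not a value from the source):

```python
import numpy as np

def sharpness(frame: np.ndarray) -> float:
    """Variance of a 3x3 Laplacian response over a greyscale frame; higher = sharper."""
    f = frame.astype(float)
    # Discrete Laplacian: 4 * centre minus the 4-neighbourhood.
    lap = (4 * f[1:-1, 1:-1]
           - f[:-2, 1:-1] - f[2:, 1:-1]
           - f[1:-1, :-2] - f[1:-1, 2:])
    return float(lap.var())

def pick_sharpest(frames: list, index: int, window: int = 3) -> int:
    """Return the index of the sharpest frame within +/- `window` of `index`."""
    lo = max(0, index - window)
    hi = min(len(frames), index + window + 1)
    return max(range(lo, hi), key=lambda i: sharpness(frames[i]))
```

In practice the frames would come from a video decoder; the scoring itself needs nothing beyond the pixel arrays.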
Optical and hardware engineering
- The “banana effect”: A common distortion in which flat surfaces appear curved in the reconstruction. This effect is much more pronounced in underwater environments.
- Dome port criticality: The distortion is primarily caused by the interaction between the lens and the underwater housing. Small, generic domes cause light to refract in multiple directions; precision-engineered domes are required to keep distortion minimal.
Lenses
- Lens choice: Wide-angle lenses are preferred to capture a larger field of view, though they must allow sufficient light intake. Macro lenses are unsuitable for large subjects.
- Camera calibration: Every camera/lens/dome combination requires individual calibration parameters to account for refraction and eliminate distortion.
- Fisheye lenses have extreme distortion, but cameras store lens information in image metadata. When images are loaded, the photogrammetry software reads this lens information; if the calibration is set to fisheye, it applies pre-configured parameters that model how light entered the lens and landed on the sensor pixels. From this information the scene can be reconstructed.
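The per-combination calibration above boils down to a small set of distortion coefficients describing how light reached the sensor. A minimal sketch of the common radial (Brown–Conrady style) model with two coefficients; the `k1`/`k2` values used below are illustrative, not from any real calibration:

```python
import numpy as np

def distort(xy: np.ndarray, k1: float, k2: float) -> np.ndarray:
    """Apply radial distortion to normalised image coordinates, shape (N, 2)."""
    r2 = (xy ** 2).sum(axis=1, keepdims=True)
    return xy * (1.0 + k1 * r2 + k2 * r2 ** 2)

def undistort(xy_d: np.ndarray, k1: float, k2: float, iters: int = 20) -> np.ndarray:
    """Invert the radial model by fixed-point iteration (no closed form exists)."""
    xy = xy_d.copy()
    for _ in range(iters):
        r2 = (xy ** 2).sum(axis=1, keepdims=True)
        xy = xy_d / (1.0 + k1 * r2 + k2 * r2 ** 2)
    return xy
```

Photogrammetry packages estimate these coefficients (plus further terms) per camera/lens/dome combination; underwater, refraction through the port changes them, which is why an in-air calibration does not transfer.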
Positioning and sensor fusion
- Dual IMU integration: one above the water and one submerged, to track directions, accelerations, and small movements.
- Geodetic precision: The hardware integrates GPS RTK with Ntrip connections, achieving a horizontal (XY) accuracy of approximately 2 cm and a vertical (Z) accuracy of 3–4 cm.
- Alignment: Precise Z-coordinate tracking for every photograph allows for the seamless overlapping of above-water and underwater data sets, even in tidal environments.
Data acquisition methodology
- There is always a trade-off between desired model detail, processing time, and what users can actually view or load.
- Sharp images improve photogrammetry alignment.
- Capture at maximum resolution and downscale later during processing. Processing software can also reduce the number of points used in reconstruction, for example selecting 40,000 points from billions to speed up processing.
- Recommended overlap: 70–80%.
- Overlap must occur in both the X direction and the Y direction (vertical overlap).
- Shooting around edges and corners: capture a frame roughly every 10–15 degrees to preserve geometric detail.
- Capture mode: Timed/bracketed still shots (e.g., one photo every second) are preferred over video.
- Stabilisation: All internal camera electronic/optical stabilisation must be disabled. Stabilisation algorithms crop and shift frames, which compromises the “rough data” needed for accurate photogrammetric triangulation.
- Movement speed: Operators must move as slowly as possible to prevent motion blur.
- Video specifications: If using video for data extraction, a minimum of 4K resolution at 120–150 frames per second (fps) is required to “stop motion” and avoid blur, though this results in extremely high data volumes (e.g., 2 GB per minute of footage).
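The overlap and speed guidance above can be combined into a simple capture plan: the image footprint follows from distance and field of view, the spacing between shots from the overlap fraction, and with one photo per second the spacing is also the maximum swim speed. A sketch; the 1 m distance and 80° horizontal field of view in the usage example are illustrative assumptions:

```python
import math

def capture_plan(distance_m: float, hfov_deg: float, overlap: float,
                 interval_s: float = 1.0) -> dict:
    """Photo spacing and max operator speed for a given overlap fraction (0..1)."""
    # Footprint of one frame on the subject, from distance and horizontal FOV.
    footprint = 2 * distance_m * math.tan(math.radians(hfov_deg) / 2)
    spacing = footprint * (1 - overlap)  # forward distance between consecutive shots
    return {"footprint_m": footprint,
            "spacing_m": spacing,
            "max_speed_m_s": spacing / interval_s}
```

At 1 m from the subject with an 80° lens and 75% overlap, this gives a footprint of about 1.7 m and a maximum speed of roughly 0.4 m/s, consistent with the "move as slowly as possible" rule.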
Other notes
- Large-scale processing: Projects involve sets of 100,000 to 500,000 photos.
- Recommended hardware for serious work: 32 GB RAM minimum, dedicated GPU, multiple CPU cores, SSD hard disk.
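Those photo counts translate directly into storage planning. A back-of-the-envelope helper; the 50 MB per 45-megapixel raw file is an illustrative assumption, while the 2 GB per minute of video is the figure quoted above:

```python
def photo_storage_tb(photos: int, mb_per_photo: float = 50.0) -> float:
    """Rough raw-photo storage estimate in terabytes (assumed MB/photo)."""
    return photos * mb_per_photo / 1_000_000

def video_storage_gb(minutes: float, gb_per_minute: float = 2.0) -> float:
    """High-fps 4K video storage estimate in gigabytes."""
    return minutes * gb_per_minute
```

A 500,000-photo project at 50 MB per raw file is on the order of 25 TB before any processing output, which is why the dataset is typically downscaled early in the pipeline.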