RESEARCHERS
VITALE Alejandro Jose
articles
Title:
Mapping Topobathymetry in a Shallow Tidal Environment Using Low-Cost Technology
Author(s):
GENCHI, SIBILA A.; VITALE, ALEJANDRO J.; PERILLO, GERARDO M. E.; SEITZ, CARINA; DELRIEUX, CLAUDIO A.
Journal:
Remote Sensing
Publisher:
MDPI
References:
Place: Basel; Year: 2020; Vol. 12
Abstract:
Detailed knowledge of nearshore topography and bathymetry is required for a wide variety of purposes, including ecosystem protection, coastal management, and flood and erosion monitoring and research, among others. Both topography and bathymetry are usually studied separately; however, many scientific questions and challenges require an integrated approach. LiDAR technology is often the preferred data source for the generation of topobathymetric models, but because of its high cost, it is necessary to exploit other data sources. In this regard, the main goal of this study was to present a methodological proposal to generate a topobathymetric model, using low-cost unmanned platforms (unmanned aerial vehicle and unmanned surface vessel) in a very shallow/shallow and turbid tidal environment (Bahía Blanca estuary, Argentina). Moreover, a cross-analysis of the topobathymetric and the tide level data was conducted to provide a classification of hydrogeomorphic zones. As a main result, a continuous terrain model was built, with a spatial resolution of approximately 0.08 m (topography) and 0.50 m (bathymetry). Concerning the structure from motion-derived topography, the accuracy gave a root mean square error of 0.09 m for the vertical plane. The best interpolated bathymetry (inverse distance weighting method), which was aligned to the topography (as reference), showed a root mean square error of 0.18 m (on average) and a mean absolute error of 0.05 m. The final topobathymetric model showed an adequate representation of the terrain, making it well suited for examining many landforms. This study helps to confirm the potential for remote sensing of shallow tidal environments by demonstrating how the data source heterogeneity can be exploited.
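The inverse distance weighting interpolation mentioned in the abstract can be sketched as follows. This is a minimal illustration of the general IDW technique, not the authors' actual processing pipeline; the function name, the power parameter, and the toy sounding data are assumptions for demonstration only.

```python
import numpy as np

def idw_interpolate(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse distance weighting: each query point receives a weighted
    average of the known depths, with weights = 1 / distance**power."""
    xy_known = np.asarray(xy_known, dtype=float)
    z_known = np.asarray(z_known, dtype=float)
    xy_query = np.asarray(xy_query, dtype=float)
    # Pairwise distances, shape (n_query, n_known).
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d**power + eps)  # eps avoids division by zero at exact hits
    return (w @ z_known) / w.sum(axis=1)

# Hypothetical soundings (x, y) with depths in metres.
pts = [(0, 0), (1, 0), (0, 1), (1, 1)]
depths = [2.0, 2.4, 2.2, 2.6]
grid = [(0.5, 0.5)]
# Equidistant from all four soundings, so the result is their mean (2.3 m).
print(idw_interpolate(pts, depths, grid))
```

In practice, gridded bathymetry like this is then vertically aligned to a reference surface (here, the UAV-derived topography) before accuracy metrics such as RMSE and MAE are computed against check points.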