CCPortal
DOI: 10.1111/2041-210X.14307
Scalable semantic 3D mapping of coral reefs with deep learning
Publication Date: 2024
ISSN: 2041-210X
EISSN: 2041-2096
Start Page: 15
End Page: 5
Volume: 15  Issue: 5
Abstract: Coral reefs are among the most diverse ecosystems on our planet, and essential to the livelihood of hundreds of millions of people who depend on them for food security, income from tourism and coastal protection. Unfortunately, most coral reefs are existentially threatened by global climate change and local anthropogenic pressures. To better understand the dynamics underlying the deterioration of reefs, monitoring at high spatial and temporal resolution is key. However, conventional monitoring methods for quantifying coral cover and species abundance are limited in scale due to the extensive manual labor required. Although computer vision tools have been employed to aid in this process, in particular structure-from-motion (SfM) photogrammetry for 3D mapping and deep neural networks for image segmentation, analysis of the data products creates a bottleneck, effectively limiting their scalability. This paper presents a new paradigm for mapping underwater environments from ego-motion video, unifying 3D mapping systems that use machine learning to adapt to challenging underwater conditions with a modern approach for semantic segmentation of images. The method is exemplified on coral reefs in the northern Gulf of Aqaba, Red Sea, demonstrating high-precision 3D semantic mapping at unprecedented scale with significantly reduced labor costs: given a trained model, a 100 m video transect acquired within 5 min of diving with a cheap consumer-grade camera can be fully automatically transformed into a semantic point cloud within 5 min. We demonstrate the spatial accuracy of our method and the semantic segmentation performance (of at least 80% total accuracy), and publish a large dataset of ego-motion videos from the northern Gulf of Aqaba, along with a dataset of video frames annotated for dense semantic segmentation of benthic classes.
Our approach significantly scales up coral reef monitoring by taking a leap towards fully automatic analysis of video transects. The method advances coral reef transects by reducing labor, equipment, logistics and computing costs, which can help inform conservation policies more efficiently. The underlying computational method of learning-based structure-from-motion has broad implications for fast, low-cost mapping of underwater environments beyond coral reefs.
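The pipeline summarized in the abstract couples per-frame semantic segmentation with an SfM reconstruction to yield a labeled ("semantic") point cloud. The following minimal sketch is not the authors' implementation: it assumes a pinhole camera model and illustrates the generic fusion step, projecting 3D points into each video frame's label map and assigning each point its majority-vote class. All names and the toy data are hypothetical.

```python
# Illustrative sketch (not the paper's code): fuse per-frame segmentation
# labels onto a 3D point cloud by projecting points into each frame and
# taking a per-point majority vote over the observed class labels.
import numpy as np

def project(points, K, R, t):
    """Project Nx3 world points to pixel coordinates (pinhole model)."""
    cam = points @ R.T + t          # world frame -> camera frame
    uv = cam @ K.T                  # apply camera intrinsics
    return uv[:, :2] / uv[:, 2:3]   # perspective divide by depth

def fuse_labels(points, frames, n_classes):
    """frames: list of (K, R, t, label_map); returns per-point majority label."""
    votes = np.zeros((len(points), n_classes), dtype=int)
    for K, R, t, label_map in frames:
        uv = np.round(project(points, K, R, t)).astype(int)
        h, w = label_map.shape
        ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        idx = np.flatnonzero(ok)    # points visible in this frame
        votes[idx, label_map[uv[idx, 1], uv[idx, 0]]] += 1
    return votes.argmax(axis=1)

# Toy example: one camera at the origin looking down +z, two 3D points.
K = np.array([[100.0, 0, 32], [0, 100.0, 32], [0, 0, 1]])
R, t = np.eye(3), np.zeros(3)
label_map = np.zeros((64, 64), dtype=int)
label_map[:, 32:] = 1               # right half of the frame = class 1
points = np.array([[-0.1, 0.0, 1.0], [0.1, 0.0, 1.0]])
labels = fuse_labels(points, [(K, R, t, label_map)], n_classes=2)
print(labels)  # [0 1]: left point gets class 0, right point class 1
```

In practice the camera poses and 3D points come from the SfM stage and the label maps from the segmentation network; majority voting across many overlapping frames is what makes the per-point labels robust to single-frame segmentation errors.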
Keywords: 3D vision; artificial intelligence; computer vision; coral ecology; coral reefs; machine learning; semantic segmentation; structure from motion
Language: English
WOS Research Area: Environmental Sciences & Ecology
WOS Category: Ecology
WOS Accession Number: WOS:001184769900001
Source Journal: METHODS IN ECOLOGY AND EVOLUTION
Document Type: Journal Article
Item Identifier: http://gcip.llas.ac.cn/handle/2XKMVOVA/305058
Author Affiliations: Swiss Federal Institutes of Technology Domain; Ecole Polytechnique Federale de Lausanne; Swiss Federal Institutes of Technology Domain; Ecole Polytechnique Federale de Lausanne; University of Lausanne
Recommended Citation:
GB/T 7714
. Scalable semantic 3D mapping of coral reefs with deep learning[J],2024,15(5).
APA: (2024). Scalable semantic 3D mapping of coral reefs with deep learning. METHODS IN ECOLOGY AND EVOLUTION, 15(5).
MLA: "Scalable semantic 3D mapping of coral reefs with deep learning". METHODS IN ECOLOGY AND EVOLUTION 15.5 (2024).
Files in This Item:
No files are associated with this item.

Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.