Deep Learning-based Damage Detection of Miter Gates Using Synthetic Imagery from Computer Graphics
Abstract
Structural inspections of large, difficult-to-access infrastructure such as dams and bridges are often time-consuming, laborious, and unsafe. In the United States, federal and state agencies responsible for managing such infrastructure assets are investigating the use of unmanned aerial vehicles (UAVs) to allow for remote data acquisition. Processing the large amounts of data acquired by UAVs remains a challenging task. Over the past four years, researchers have been investigating deep learning methods for automated damage detection through image classification and, more recently, semantic segmentation, in which each pixel in the image is assigned a label. For such algorithms to work effectively, deep neural networks need to be trained on large datasets of labelled images. Generating these labels for semantic segmentation is a very tedious process, as it requires every pixel in each image to be labelled. This paper investigates the use of computer graphics to automatically generate synthetic imagery for training deep learning algorithms for vision-based damage detection using semantic segmentation. A significant advantage of this approach is the automatic generation of precise semantic labels from the information implicit in the developed graphics models. Parametric noise-based texture models are created for defects such as cracks and corrosion, and for other features such as vegetation growth and dirt. The parameterization of the texture models allows a range of different surface conditions to be generated, providing increased flexibility in data generation. To demonstrate the benefits of the proposed methodology for synthetic data generation, a virtual environment of inland navigation infrastructure, including miter gates and tainter gate dams, is created. The developed texture models are applied to the virtual environment to produce a photo-realistic model. Synthetic image data is then rendered from the developed model and used to demonstrate its efficacy for training deep learning-based semantic segmentation algorithms for damage detection.
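To illustrate the kind of parametric noise-based texture model the abstract describes, the sketch below generates a fractal value-noise field and thresholds it into a corrosion-like mask. This is a minimal illustration, not the authors' implementation: the function names (`value_noise`, `corrosion_texture`) and the `coverage` parameter are assumptions introduced here. It does show the key property claimed in the abstract: because the defect is generated procedurally, the per-pixel semantic label (the mask) comes for free.

```python
import numpy as np

def value_noise(size, grid, rng):
    """Bilinearly interpolated value noise on a size x size image."""
    coarse = rng.random((grid + 1, grid + 1))       # random values at grid corners
    xs = np.linspace(0, grid, size, endpoint=False)
    i = xs.astype(int)                              # integer cell index
    f = xs - i                                      # fractional position in cell
    f = f * f * (3 - 2 * f)                         # smoothstep easing
    top = coarse[np.ix_(i, i)] * (1 - f)[None, :] + coarse[np.ix_(i, i + 1)] * f[None, :]
    bot = coarse[np.ix_(i + 1, i)] * (1 - f)[None, :] + coarse[np.ix_(i + 1, i + 1)] * f[None, :]
    return top * (1 - f)[:, None] + bot * f[:, None]

def corrosion_texture(size=256, octaves=4, coverage=0.35, seed=0):
    """Fractal noise thresholded to a corrosion-like mask plus its label.

    'coverage' (an assumed parameter) controls the corroded surface fraction,
    demonstrating how parameterization varies the generated surface condition.
    """
    rng = np.random.default_rng(seed)
    noise = np.zeros((size, size))
    amp, grid, total = 1.0, 4, 0.0
    for _ in range(octaves):                        # sum octaves of increasing detail
        noise += amp * value_noise(size, grid, rng)
        total += amp
        amp *= 0.5
        grid *= 2
    noise /= total                                  # normalize to [0, 1)
    threshold = np.quantile(noise, 1 - coverage)
    mask = noise >= threshold                       # exact per-pixel semantic label
    return noise, mask
```

Rendering `noise` as a texture over a base material would give the visual defect, while `mask` serves directly as the segmentation ground truth, avoiding manual per-pixel annotation.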
DOI
10.12783/shm2019/32463