An Encoder-Decoder Network Trained with Multi-Branch Auxiliary Learning for Extracting Transverse Aeolian Ridge Morphological Parameters from High-Resolution Mars Imagery
Keywords: Mars exploration, Transverse aeolian ridges, Deep learning, Morphological parameters
Abstract. Transverse aeolian ridges (TARs) are among the most widely distributed yet enigmatic aeolian landforms on the surface of Mars. They hold significant research value for interpreting ancient wind fields and environments, searching for water and life, and selecting landing sites. However, accurately extracting the morphological parameters of TARs, including their edge contours and ridge lines, remains a challenge. To address this issue, this paper proposes a Multi-branch Auxiliary Training Encoder-Decoder Network (MATED-Net) for detecting the edge contours and ridge lines of TARs on Mars. Built upon the U-Net architecture, MATED-Net incorporates four auxiliary training losses to perceive features at different scales. We then introduce a lightweight attention mechanism to guide the fusion of multi-scale features. Finally, an edge tracing loss is introduced to enhance the distinction between edge pixels and easily confused surrounding pixels, thereby accurately tracking the true positions of edges. To verify the effectiveness of MATED-Net in detecting TAR contours and ridge lines, this paper constructs a dataset of TAR ridge lines based on HiRISE and HiRIC imagery. To facilitate training and testing, all images were clipped to 512 × 512 pixels and converted to the VOC dataset format, yielding a total of 1,000 images with corresponding label data. The experimental results demonstrate a precision of 0.72, a recall of 0.67, and a mean Intersection over Union (MIoU) of 0.57 for edge extraction.
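The multi-branch auxiliary training described in the abstract is closely related to deep supervision: each decoder scale emits an auxiliary prediction whose loss is added, with a weight, to the main segmentation loss. The sketch below illustrates that idea in PyTorch under stated assumptions; the module name (MiniMATEDNet), channel sizes, number of stages, and loss weights are illustrative placeholders and not the authors' released implementation, and the lightweight attention module and edge tracing loss are omitted for brevity.

```python
# Illustrative sketch (not the authors' code): a U-Net-style encoder-decoder
# with four auxiliary prediction branches, one per decoder scale, trained
# jointly with the main output, as summarized in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class MiniMATEDNet(nn.Module):
    """Hypothetical encoder-decoder with multi-branch auxiliary outputs."""

    def __init__(self, in_ch=3, num_classes=2, base=32):
        super().__init__()
        chs = [base, base * 2, base * 4, base * 8]
        self.encoders = nn.ModuleList()
        prev = in_ch
        for c in chs:
            self.encoders.append(conv_block(prev, c))
            prev = c
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(chs[-1], chs[-1] * 2)
        self.decoders = nn.ModuleList()
        self.aux_heads = nn.ModuleList()  # one auxiliary branch per decoder scale
        prev = chs[-1] * 2
        for c in reversed(chs):
            self.decoders.append(conv_block(prev + c, c))
            self.aux_heads.append(nn.Conv2d(c, num_classes, 1))
            prev = c
        self.final = nn.Conv2d(chs[0], num_classes, 1)

    def forward(self, x):
        skips = []
        for enc in self.encoders:
            x = enc(x)
            skips.append(x)
            x = self.pool(x)
        x = self.bottleneck(x)
        aux_outputs = []
        for dec, head, skip in zip(self.decoders, self.aux_heads, reversed(skips)):
            # Upsample, fuse with the skip connection, and emit an auxiliary prediction.
            x = F.interpolate(x, size=skip.shape[-2:], mode="bilinear", align_corners=False)
            x = dec(torch.cat([x, skip], dim=1))
            aux_outputs.append(head(x))
        return self.final(x), aux_outputs


def multi_branch_loss(main_out, aux_outputs, target, aux_weight=0.4):
    """Main cross-entropy loss plus weighted auxiliary losses from each branch."""
    loss = F.cross_entropy(main_out, target)
    for aux in aux_outputs:
        aux_up = F.interpolate(aux, size=target.shape[-2:], mode="bilinear", align_corners=False)
        loss = loss + aux_weight * F.cross_entropy(aux_up, target)
    return loss


if __name__ == "__main__":
    net = MiniMATEDNet()
    img = torch.randn(1, 3, 512, 512)            # a 512 x 512 tile, as in the dataset
    label = torch.randint(0, 2, (1, 512, 512))   # dummy edge / ridge-line mask
    main_out, aux_outs = net(img)
    print(multi_branch_loss(main_out, aux_outs, label))
```

The auxiliary weight of 0.4 is a common default in deep-supervision setups and is only an assumption here; the paper's actual weighting scheme, attention-guided fusion, and edge tracing loss would replace the plain cross-entropy terms above.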