INVESTIGATING LOSS FUNCTIONS FOR SEGMENTING AND DETECTING SHIPS ON SAR IMAGERY
Keywords: SAR Imagery, Ship Detection, Deep Learning, Semantic Segmentation, Loss Function
Abstract. In line with United Nations (UN) Sustainable Development Goal (SDG) 16: Peace, Justice, and Strong Institutions, this study explores ship monitoring with Synthetic Aperture Radar (SAR) for its potential economic and security applications. One method for extracting ships from SAR-derived imagery is to employ convolutional neural networks (CNNs). However, extracting small features remains a challenging task for CNNs. One way to improve performance in such cases is to use an appropriate loss function, which guides the CNN model during training. In this paper, Focal Combo (FC) loss, a recent loss function designed for extreme class imbalance, is investigated to analyze its effects when applied to ship extraction. In doing so, this paper also presents a thorough comparison of existing loss functions in their capability to segment and detect ships on SAR imagery. Using the U-Net model, our results demonstrate that FC loss yields an increase in segmentation performance of about 9% in terms of F3-score and a decrease in missed detections of about 17 ships (after post-processing) compared to cross-entropy loss. However, it also shows a significant drop in precision of about 35%, resulting in an additional 270 ships being incorrectly detected in the background. In future work, different CNN models will be tested to see whether the pattern persists, and several trials will be conducted to assess consistency.
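To make the comparison concrete, the sketch below shows one plausible NumPy formulation of the losses discussed: pixel-wise focal loss, soft Dice loss, and a combined "focal combo" term. This is a minimal illustration, not the paper's exact formulation; the particular weighting `alpha`, the focusing exponent `gamma`, and the way the Dice term is modulated are assumptions chosen for demonstration.

```python
import numpy as np


def focal_loss(p, y, gamma=2.0, eps=1e-7):
    """Pixel-wise binary focal loss.

    Down-weights easy pixels via the (1 - p_t)^gamma factor, so the
    model focuses on hard (e.g. small-ship) pixels.
    """
    p = np.clip(p, eps, 1.0 - eps)
    pt = np.where(y == 1, p, 1.0 - p)  # probability of the true class
    return float(-np.mean((1.0 - pt) ** gamma * np.log(pt)))


def dice_loss(p, y, eps=1e-7):
    """Soft Dice loss: 1 - Dice coefficient over the whole mask.

    Being a region-overlap measure, it is less sensitive to the
    foreground/background imbalance typical of ship masks.
    """
    inter = np.sum(p * y)
    return float(1.0 - (2.0 * inter + eps) / (np.sum(p) + np.sum(y) + eps))


def focal_combo_loss(p, y, alpha=0.5, gamma=2.0):
    """Illustrative combined loss: a weighted sum of the focal and Dice
    terms, with the Dice term raised to 1/gamma as a focal-style
    modulation. The exact combination in the FC loss paper may differ;
    this is a hedged sketch for intuition only.
    """
    return alpha * focal_loss(p, y, gamma) + (1.0 - alpha) * dice_loss(p, y) ** (1.0 / gamma)
```

As a sanity check, a prediction close to the ground-truth mask should score a lower combined loss than an inverted prediction, which is the behavior the training signal relies on.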