A simple rotational equivariance loss for generic convolutional segmentation networks: Preliminary results

Abstract

Segmentation convolutional neural networks (SCNNs) are now popular for the semantic segmentation (i.e., dense pixel-wise labeling) of remote sensing imagery, such as color or hyperspectral satellite imagery. One desirable property of SCNNs when applied to remote sensing problems is rotational equivariance. This property implies that the class label assigned to a particular pixel (building, road, etc.) does not change if the input imagery is rotated by an arbitrary angle. We argue that recently proposed methods for making SCNNs rotationally equivariant fall into two broad categories: easily employed methods that are somewhat ineffective, and highly effective methods that are complicated and potentially incompatible with state-of-the-art SCNN techniques. We propose a simple addition to the standard SCNN loss function that encourages the SCNN to be rotationally equivariant and that is easily added to modern SCNNs. We test the method on the Inria building labeling dataset and compare it to the popular simple approach of adding random rotational augmentations of the input imagery during training. We show that the proposed approach (i) achieves improved equivariance and (ii) yields performance improvements on average.
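The abstract does not specify the exact form of the proposed loss term. The sketch below illustrates one plausible formulation of a rotational equivariance penalty, assuming a PyTorch segmentation model, rotations restricted to multiples of 90 degrees (so torch.rot90 can be used without interpolation), and a mean-squared-error distance between prediction maps; the names equivariance_loss and the weighting factor lam are hypothetical and the paper's actual loss may differ in the rotation set and distance measure.

    import torch
    import torch.nn.functional as F

    def equivariance_loss(model, x, k=None):
        # Penalize disagreement between (a) segmenting a rotated image and
        # (b) rotating the segmentation of the original image.
        if k is None:
            k = int(torch.randint(1, 4, (1,)))       # random 90/180/270 degree rotation
        logits = model(x)                             # (N, C, H, W) prediction map
        logits_of_rotated = model(torch.rot90(x, k, dims=(2, 3)))
        rotated_logits = torch.rot90(logits, k, dims=(2, 3))
        # Mean squared difference between the two prediction maps;
        # zero exactly when the network is equivariant to this rotation.
        return F.mse_loss(logits_of_rotated, rotated_logits)

    # Hypothetical usage: add the penalty to the usual segmentation loss,
    # weighted by a tunable coefficient lam.
    # loss = F.cross_entropy(model(x), labels) + lam * equivariance_loss(model, x)

In this formulation the penalty is differentiable and can be minimized jointly with the standard segmentation loss by any modern SCNN, which is consistent with the abstract's claim that the term is easily added to existing architectures.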

DOI
10.1109/IGARSS.2019.8898722
Year
2019