A large comparison of feature-based approaches for buried target classification in forward-looking ground-penetrating radar

Abstract

Forward-looking ground-penetrating radar (FLGPR) has recently been investigated as a remote-sensing modality for buried target detection (e.g., of landmines). In this context, raw FLGPR data are beamformed into images, and computerized algorithms are then applied to automatically detect buried subsurface targets. Most existing algorithms are supervised: they are trained to discriminate between labeled target and nontarget imagery, usually based on features extracted from the imagery. A large number of features have been proposed for this purpose; however, it remains unclear which are the most effective. The first goal of this paper is to provide a comprehensive comparison of the detection performance of existing features on a large collection of FLGPR data. Fusion of the decisions resulting from processing with each feature is also considered. The second goal of this paper is to investigate two modern feature-learning approaches from the object recognition literature, the bag-of-visual-words and the Fisher vector, for FLGPR processing. The results indicate that the new feature-learning approaches lead to the best-performing FLGPR algorithms. The results also show that fusing the existing features with the new features yields no additional performance improvement.
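The abstract names the bag-of-visual-words as one of the two feature-learning approaches investigated. As a rough illustration only, the sketch below shows the generic bag-of-visual-words pipeline (k-means codebook plus codeword histogram); the descriptor dimensions, codebook size, and random data are placeholders, not the paper's actual configuration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical stand-ins for local descriptors extracted from beamformed
# FLGPR image chips; all shapes and values here are illustrative only.
train_descriptors = rng.normal(size=(500, 16))  # descriptors pooled over training chips
chip_descriptors = rng.normal(size=(40, 16))    # descriptors from one candidate chip

# 1. Learn a visual codebook by k-means clustering of the training descriptors.
codebook = KMeans(n_clusters=8, n_init=10, random_state=0).fit(train_descriptors)

# 2. Encode a chip as an L1-normalized histogram of codeword assignments;
#    this histogram is the chip's bag-of-visual-words feature vector.
assignments = codebook.predict(chip_descriptors)
bovw = np.bincount(assignments, minlength=8).astype(float)
bovw /= bovw.sum()
print(bovw.shape)  # (8,)
```

In a supervised detector of the kind the abstract describes, such histograms (from labeled target and nontarget chips) would then be fed to a classifier.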

DOI
10.1109/TGRS.2017.2751461
Year