This paper addresses the challenge of 3D object detection from a single panoramic image under severe deformation. The advent of the two-stage approach has driven significant progress in 3D object detection. However, most available methods can only localize region proposals with a single-scale network architecture, which is sensitive to deformation and distortion. To address this issue, we propose a multi-scale convolutional neural network (MSCNN) to estimate the 3D pose of an object. Specifically, the proposed MSCNN consists of three components for effectively detecting distorted objects in panoramic images: a CycleGAN that converts rectilinear images into panoramas, a fused detection framework that improves both accuracy and speed, and an adversarial spatial transformer network (ASTN) that extracts deformation features of objects in panoramic images. Additionally, we recover the 3D pose of each object using a coordinate projection and a 3D bounding box. Extensive experiments demonstrate that the proposed method achieves a 3D detection accuracy of 38.7% on high-resolution panoramic images, which is 5.2 percentage points higher than the current state-of-the-art algorithm. Moreover, detection takes only about 0.6 seconds per image, roughly six times faster than Faster R-CNN (COCO). The code will be available at https://github.com/Yanhui-He.
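The abstract itself contains no code; as a rough illustration only, the three-stage flow it describes (rectilinear-to-panorama conversion, fused detection, ASTN feature extraction, then 3D pose recovery by coordinate projection) could be sketched as below. All function names, the placeholder stage bodies, and the pinhole-style projection are hypothetical assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def rectilinear_to_panorama(image):
    # Stage 1 (placeholder for the CycleGAN): convert a rectilinear
    # image into a panoramic-style image. Here it is a pass-through.
    return image

def detect_objects(panorama):
    # Stage 2 (placeholder for the fused detection framework): return
    # 2D region proposals as (x, y, w, h) boxes with confidence scores.
    h, w = panorama.shape[:2]
    return [{"box": (w // 4, h // 4, w // 2, h // 2), "score": 0.9}]

def extract_deformation_features(panorama, proposals):
    # Stage 3 (placeholder for the ASTN): attach deformation-aware
    # feature vectors to each proposal.
    for p in proposals:
        p["features"] = np.zeros(128)
    return proposals

def recover_3d_pose(proposals, focal_length=500.0):
    # Back-project each 2D box centre to a unit 3D viewing ray via a
    # simple pinhole-style coordinate projection (an assumed stand-in
    # for the paper's projection), and report a 3D bounding box.
    poses = []
    for p in proposals:
        x, y, w, h = p["box"]
        cx, cy = x + w / 2.0, y + h / 2.0
        direction = np.array([cx, cy, focal_length], dtype=float)
        direction /= np.linalg.norm(direction)
        poses.append({"ray": direction, "box3d": (w, h, (w + h) / 2.0)})
    return poses

def mscnn_pipeline(image):
    # Chain the three components, then recover 3D pose.
    panorama = rectilinear_to_panorama(image)
    proposals = detect_objects(panorama)
    proposals = extract_deformation_features(panorama, proposals)
    return recover_3d_pose(proposals)

poses = mscnn_pipeline(np.zeros((512, 1024, 3)))
```

The sketch only fixes the data flow between stages; each placeholder would be replaced by the corresponding trained network.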
Number of pages: 10
Publication status: Published - 26 Nov 2019