#170: Semantic depth map fusion for moving vehicle detection in aerial video


M. Poostchi, H. Aliakbarpour, R. Viguier, F. Bunyak, K. Palaniappan, and G. Seetharaman

Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pp. 32-40, 2016

wami, tracking, fmv, features, motion, image analysis


Abstract

Automatic moving object detection and segmentation is a fundamental low-level task in many urban traffic surveillance applications. We develop an automatic moving vehicle detection system for aerial video based on the semantic fusion of the flux tensor trace with an altitude mask of tall structures. The trace of the flux tensor provides spatio-temporal information about moving edges, including the undesirable apparent motion of tall structures caused by parallax effects. These parallax-induced motions are filtered out by incorporating building altitude masks obtained from available dense 3D point clouds. Using a level-set based geodesic active contour framework, the coarse thresholded building depth masks are evolved into the actual building boundaries. Experiments are carried out on a cropped 2k×2k region of interest over 200 frames of Albuquerque urban aerial imagery. An average precision of 83% and recall of 76% are reported using an object-level detection performance evaluation method.
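The pipeline described in the abstract rests on two ingredients: the trace of the flux tensor as a spatio-temporal motion measure, and a building altitude mask used to suppress parallax-induced responses. The sketch below is a minimal NumPy/SciPy approximation, not the authors' implementation; the function names, the choice of Sobel and Gaussian operators, and the smoothing parameter are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def flux_tensor_trace(frames, sigma=1.5):
    """Approximate trace of the flux tensor: spatially averaged sum of
    squared temporal derivatives of the image derivatives,
    Ixt^2 + Iyt^2 + Itt^2, computed over a frame stack (T, H, W)."""
    frames = np.asarray(frames, dtype=np.float64)
    # Spatial gradients per frame (Sobel here; the paper's kernels may differ).
    Ix = np.stack([sobel(f, axis=1) for f in frames])
    Iy = np.stack([sobel(f, axis=0) for f in frames])
    # Temporal derivatives via central differences along the frame axis.
    It = np.gradient(frames, axis=0)
    Ixt = np.gradient(Ix, axis=0)
    Iyt = np.gradient(Iy, axis=0)
    Itt = np.gradient(It, axis=0)
    trace = Ixt ** 2 + Iyt ** 2 + Itt ** 2
    # Spatial averaging window (Gaussian used as a stand-in).
    return np.stack([gaussian_filter(t, sigma) for t in trace])

def fuse_with_building_mask(motion_mask, building_mask):
    """Semantic fusion step: suppress motion responses that fall on
    tall structures, where parallax causes false moving edges."""
    return motion_mask & ~building_mask
```

In the paper, the building mask comes from thresholded depth maps refined with geodesic active contours; here any boolean mask of tall-structure pixels would play the same role in the fusion step.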