Proc. SPIE Conf. Geospatial Informatics, Fusion, and Motion Video Analytics VI, 2016
Deformable and articulated objects are hard to segment and track over time. Purely motion-based techniques such as optical flow can segment motion well; however, they do not incorporate salient object features. In this work, we propose a textured object segmentation method that exploits temporal coherency in videos. We fuse color and texture features within a fast globally convex active contour method to obtain multiscale intra-frame segmentations. Long-range point trajectories computed from optical flow are then combined with the frame segmentations to propagate labels along the temporal direction. Total variation regularization is applied to obtain well-defined object boundaries, and a dual minimization scheme is used to solve the overall energy minimization. Preliminary experimental results show that we obtain dense segmentations even from sparse and noisy initial label sets.
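To illustrate the total variation regularization with dual minimization mentioned above, the sketch below implements Chambolle's classical dual projection iteration for the ROF model, min_u ||u − f||²/(2λ) + TV(u). This is a generic stand-in, not the paper's exact energy: the function names, the step size `tau`, and the scalar denoising setting are assumptions for illustration; the paper applies the same machinery to label functions rather than image intensities.

```python
import numpy as np

def grad(u):
    # Forward differences with Neumann (replicate) boundary conditions.
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # Discrete divergence, defined as the negative adjoint of grad.
    d = np.zeros_like(px)
    d[0, :] = px[0, :]
    d[1:-1, :] = px[1:-1, :] - px[:-2, :]
    d[-1, :] = -px[-2, :]
    d[:, 0] += py[:, 0]
    d[:, 1:-1] += py[:, 1:-1] - py[:, :-2]
    d[:, -1] += -py[:, -2]
    return d

def tv_denoise(f, lam=0.1, tau=0.125, n_iter=100):
    """Chambolle's dual projection iteration for TV regularization.

    Solves min_u ||u - f||^2 / (2 * lam) + TV(u) by iterating on the
    dual variable p = (px, py); tau <= 1/8 guarantees convergence.
    """
    px = np.zeros_like(f)
    py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        norm = 1.0 + tau * np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / norm
        py = (py + tau * gy) / norm
    # Primal solution recovered from the dual variable.
    return f - lam * div(px, py)
```

Running `tv_denoise` on a noisy piecewise-constant image smooths the noise while keeping the discontinuity sharp, which is the edge-preserving property that motivates TV regularization for object boundaries.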