#234: Heterogeneous multi-view information fusion: Review of 3-D reconstruction methods and a new registration with uncertainty modeling


We consider a multisensor network fusion framework for three-dimensional (3D) data registration that uses inertial virtual planes, the underlying geometric relations, and transformation-model uncertainties. We present a comprehensive review of 3D reconstruction and registration techniques in terms of the underlying geometric relations and the uncertainties associated with the registered images. Registering 3D data and reconstructing a scene from a set of multiview images is an essential goal of structure-from-motion algorithms, yet it remains challenging for applications such as surveillance, human motion and behavior modeling, virtual reality, smart rooms, healthcare, teleconferencing, games, human-robot interaction, medical imaging, and scene understanding. We propose a framework that incorporates measurement uncertainties into the registered imagery, a critical requirement for the robustness of these applications that is often not addressed. In our testbed environment, a network of sensors is used in which each physical node consists of a camera coupled with an inertial sensor (IS)/inertial measurement unit (IMU); each camera-IS node can be regarded as a hybrid sensor, or fusion-based virtual camera. The 3D scene information is registered onto a set of virtual planes defined by the IS. These virtual registrations are based on the homography computed from the 3D orientation data provided by the IS, and the uncertainty associated with each 3D point projected onto the virtual planes is modeled using statistical geometry methods. Experimental results demonstrate the feasibility and effectiveness of the proposed approach for multiview reconstruction with sensor fusion.
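The registration step described in the abstract, warping each camera image onto an IS-defined virtual plane via a homography computed from the inertial orientation, and propagating pixel uncertainty through that warp, can be sketched as below. This is a minimal illustration under stated assumptions: the intrinsic matrix `K`, the Z-Y-X roll/pitch/yaw convention, and all function names are hypothetical choices for the sketch, not the paper's actual implementation; a rotation-only (infinite) homography `K R K^-1` is assumed, and the uncertainty model is simple first-order covariance propagation, one common statistical-geometry approach.

```python
import numpy as np

# Hypothetical camera intrinsics (illustrative values, not from the paper).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def rotation_from_rpy(roll, pitch, yaw):
    """Rotation matrix from IMU roll/pitch/yaw in radians (Z-Y-X convention)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0.0], [sy,  cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

def virtual_plane_homography(K, R):
    """Rotation-only (infinite) homography K R K^-1 that warps the image
    onto the IS-levelled virtual plane."""
    return K @ R @ np.linalg.inv(K)

def warp_point(H, x):
    """Apply the homography to a pixel (u, v); returns the warped pixel."""
    p = H @ np.array([x[0], x[1], 1.0])
    return p[:2] / p[2]

def propagate_covariance(H, x, cov_x):
    """First-order propagation of a 2x2 pixel covariance through the warp:
    cov_y = J cov_x J^T, where J is the Jacobian of warp_point at x."""
    num = H @ np.array([x[0], x[1], 1.0])
    w = num[2]
    J = np.empty((2, 2))
    for i in range(2):
        for j in range(2):
            # Quotient rule on the projective division num[i] / w.
            J[i, j] = (H[i, j] * w - num[i] * H[2, j]) / w**2
    return J @ cov_x @ J.T
```

With an identity orientation the homography reduces to the identity, so a pixel and its covariance pass through unchanged; a nonzero IMU attitude tilts the warp and reshapes the projected covariance accordingly.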