Abstract:
Shader lamps can augment physical objects with projected virtual replications
using a camera-projector system, provided that the physical and the virtual
object are well registered. In the past, precise registration and tracking
have been cumbersome and intrusive processes. In this paper, we present a new
method for tracking arbitrarily shaped physical objects interactively. In
contrast to previous approaches, our system is mobile and relies solely on
the projection of the virtual replication to track the physical object and
"stick" the projection to it. Our method consists of two stages: a fast pose
initialization based on structured light patterns and a non-intrusive
frame-by-frame tracking based on features detected in the projection. In the
initialization phase, a dense point cloud of the physical object is
reconstructed and precisely matched to the virtual model so that the
projection overlays the object perfectly. During the tracking phase, a radiometrically corrected
virtual camera view based on the current pose prediction is rendered and
compared to the captured image. Matched features are triangulated, yielding a
sparse set of surface points that is robustly aligned to the virtual model.
The alignment transformation serves as input for the next pose prediction.
Quantitative experiments show that our approach can robustly track complex
objects at interactive rates.
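To make the alignment step concrete, the following is a minimal sketch of a least-squares rigid alignment (Kabsch/SVD) between triangulated surface points and corresponding model points. The function names, the use of numpy, and the simple residual-trimming loop standing in for the paper's robust alignment are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) with R @ src_i + t ~= dst_i
    (Kabsch/SVD). src, dst: (N, 3) arrays of corresponding 3D points."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

def robust_rigid_align(src, dst, trim=0.8, iters=5):
    """Crude robustness via residual trimming: repeatedly re-fit on the
    best-matching fraction of correspondences. A hypothetical stand-in for
    the paper's (unspecified) robust alignment."""
    keep = np.arange(len(src))
    for _ in range(iters):
        R, t = rigid_align(src[keep], dst[keep])
        res = np.linalg.norm(src @ R.T + t - dst, axis=1)
        keep = np.argsort(res)[: max(3, int(trim * len(src)))]
    return R, t
```

In a tracking loop of this kind, the resulting R and t would update the pose prediction used to render the next radiometrically corrected virtual camera view.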