Abstract:
In Augmented Reality (AR), visible misregistration can be caused by many
inherent error sources, such as errors in tracking, calibration, and
modeling. In this paper, we present a novel pixel-wise closed-loop
registration framework that can automatically detect and correct registration
errors using a reference model composed of the real scene model and the
desired virtual augmentations. Registration errors are corrected both in
global world space, via camera pose refinement, and in local screen space,
via pixel-wise corrections, resulting in spatially accurate and visually coherent
registration. Specifically, we present a registration-enforcing model-based
tracking approach that weights important image regions while refining the
camera pose estimates (from any conventional tracking method) to achieve
better registration, even in the presence of modeling errors. To deal with
remaining errors, which can be rigid or non-rigid, we compute the optical
flow between the camera image and the real model image rendered with the
refined pose, enabling direct screen-space pixel-wise corrections to
misregistration. The estimated flow field can be applied to improve
registration in two distinct ways: (1) forward warping of modeled
on-real-object-surface augmentations (e.g., object re-texturing) into the
camera image, so that the result retains real surface details that are not
present in the virtual object; and (2) backward warping of the camera image
into the real scene model, preserving the full use of the dense geometry
buffer (depth in particular) provided by the combined real-virtual model for
registration, leading to pixel-accurate real-virtual occlusion. We discuss the trade-offs
between, and the different use cases of, forward and backward warping with
model-based tracking, in terms of their registration properties. We
demonstrate the efficacy of our approach with both simulated and real data.
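
To make the screen-space correction stage concrete, the following is a
minimal sketch of the pixel-wise warping idea in Python with OpenCV. It is
illustrative only: OpenCV's Farnebäck dense optical flow stands in for
whatever flow estimator the paper uses, the pose-refinement stage is assumed
to have already produced the rendered real-model image, and the function and
variable names (flow_maps, pixelwise_corrections, camera_img, model_img,
augmentation) are hypothetical.

```python
import cv2
import numpy as np

def flow_maps(flow):
    """Convert a dense flow field into absolute sampling maps for cv2.remap."""
    h, w = flow.shape[:2]
    gx, gy = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    return gx + flow[..., 0], gy + flow[..., 1]

def pixelwise_corrections(camera_img, model_img, augmentation):
    """Apply flow-based screen-space corrections after pose refinement.

    camera_img   -- live camera frame (BGR)
    model_img    -- real scene model rendered with the refined camera pose (BGR)
    augmentation -- on-surface augmentation (e.g., a re-texture) rendered in
                    the same model frame (BGR); all images share one resolution
    """
    cam_gray = cv2.cvtColor(camera_img, cv2.COLOR_BGR2GRAY)
    mdl_gray = cv2.cvtColor(model_img, cv2.COLOR_BGR2GRAY)

    # Residual misregistration appears as non-zero displacement between the
    # rendered model image and the camera image (Farneback parameters:
    # pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags).
    flow_m2c = cv2.calcOpticalFlowFarneback(mdl_gray, cam_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)

    # (2) Backward warping: pull the camera image into the model frame, so
    # the model's dense geometry buffer (depth in particular) stays
    # pixel-aligned with the image content for real-virtual occlusion.
    mx, my = flow_maps(flow_m2c)
    camera_in_model = cv2.remap(camera_img, mx, my, cv2.INTER_LINEAR)

    # (1) Forward warping of the augmentation into the camera image. Here it
    # is approximated as a backward lookup with the reverse (camera-to-model)
    # flow, which avoids the holes of a true scatter-style forward warp.
    flow_c2m = cv2.calcOpticalFlowFarneback(cam_gray, mdl_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
    ax, ay = flow_maps(flow_c2m)
    augmentation_in_camera = cv2.remap(augmentation, ax, ay, cv2.INTER_LINEAR)

    return augmentation_in_camera, camera_in_model
```

The two outputs mirror the trade-off discussed above: the forward-style warp
moves the augmentation onto the observed image, while the backward warp moves
the image into the model frame so the depth buffer remains directly usable.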