Abstract:
We present initial results from a new image generation approach for
low-latency displays such as those needed in head-worn AR devices. Avoiding
the usual video interfaces, such as HDMI, we favor direct control of the
internal display technology. We illustrate our new approach with a bench-top
optical see-through AR proof-of-concept prototype that uses a Digital Light
Processing (DLP) projector whose Digital Micromirror Device (DMD) imaging
chip is directly controlled by a computer, much as random-access memory is.
We show that a perceptually continuous-tone dynamic gray-scale image can be
efficiently composed from a very rapid succession of
binary (partial) images, each calculated from the continuous-tone image
generated with the most recent tracking data. As the DMD projects only a
binary image at any moment, it cannot instantly display this latest
continuous-tone image, and conventional decomposition of a continuous-tone
image into binary time-division-multiplexed values would introduce exactly
the latency we seek to avoid. Instead, our approach maintains an estimate of the
image the user currently perceives, and at every opportunity allowed by the
control circuitry, sets each binary DMD pixel to the value that will reduce
the difference between that user-perceived image and the newly generated
image from the latest tracking data. The resulting displayed binary image is
"neither here nor there," but always approaches the moving target that is the
constantly changing desired image, even when that image changes every 50 µs.
We compare our experimental results with imagery from a conventional DLP
projector with similar internal speed, and demonstrate that AR overlays on a
moving object are more effective with this kind of low-latency display device
than with similarly fast displays driven through a conventional video interface.
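
To make the per-pixel update rule concrete, the minimal sketch below (ours,
not the authors' implementation) models the user-perceived intensity with an
assumed exponential-persistence filter of rate alpha and, at each binary
frame slot, sets each mirror to whichever state moves that modeled perception
closer to the latest target image. The function name, the persistence model,
and the value of alpha are illustrative assumptions, not details taken from
the paper.

    import numpy as np

    def next_binary_frame(perceived, target, alpha=0.1):
        # Perceived intensity if a mirror is switched on (1) or off (0)
        # for this ~50 us slot, under an assumed exponential persistence
        # model of the eye's temporal integration.
        on = (1 - alpha) * perceived + alpha * 1.0
        off = (1 - alpha) * perceived + alpha * 0.0
        # Set each binary DMD pixel to the state that reduces the
        # difference between the modeled perceived image and the newest
        # target image.
        binary = (np.abs(on - target) < np.abs(off - target)).astype(np.float32)
        # Advance the perceived-image estimate using the chosen states.
        return binary, (1 - alpha) * perceived + alpha * binary

    # Example: chase a mid-gray target. In the real system, the target
    # would be re-rendered from the latest tracking data before every
    # binary frame opportunity.
    perceived = np.zeros((4, 4), dtype=np.float32)
    target = np.full((4, 4), 0.5, dtype=np.float32)
    for _ in range(200):  # one iteration per binary frame slot
        frame, perceived = next_binary_frame(perceived, target)

Because each slot only nudges the estimate, no single binary frame matches
the target; as the abstract describes, the displayed image instead tracks
the constantly changing desired image from slot to slot.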