Date & Time : September 10 04:00 pm - 06:15 pm Location : TBA Chair : Papers :
Interactive Near-Field Illumination for Photorealistic Augmented Reality on Mobile Devices
Authors:Kai Rohmer, Wolfgang Büschel, Raimund Dachselt, Thorsten Grosch
Abstract : Mobile devices are becoming increasingly important today, especially for augmented
reality (AR) applications in which the camera of the mobile device acts like
a window into the mixed reality world. Up to now, no photorealistic
augmentation has been possible because the computational power of mobile devices
is still too weak, and even a streaming solution from a stationary PC would
add latency that considerably affects user interaction. Therefore, we
introduce a differential illumination method that allows for a consistent
illumination of the inserted virtual objects on mobile devices, avoiding a
delay. The necessary computation effort is shared between a stationary PC and
the mobile devices to make use of the capacities available on both sides. The
method is designed such that only a minimum amount of data has to be
transferred asynchronously between the stationary PC and one or multiple
mobile devices. This allows for an interactive illumination of virtual
objects with a consistent appearance under both temporally and spatially
varying real illumination conditions. To describe the complex near-field
illumination in an indoor scenario, multiple HDR video cameras are used to
capture the illumination from multiple directions. In this way, sources of
illumination can be considered that are not directly visible to the mobile
device because of occlusions and the limited field of view of built-in
cameras.
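As an illustrative aside (not the authors' implementation), the differential-rendering idea underlying such consistent illumination can be sketched in a few lines; the function and array names below are assumptions chosen for clarity.
```python
import numpy as np

def differential_composite(camera_img, with_virtual, without_virtual, virtual_mask):
    """Minimal differential-rendering composite (conceptual sketch only).

    camera_img:      real camera frame, float32 RGB in [0, 1], shape (H, W, 3)
    with_virtual:    scene rendered with the virtual object inserted
    without_virtual: the same scene rendered without the virtual object
    virtual_mask:    boolean (H, W) mask of pixels covered by the virtual object
    """
    # Illumination change caused by the virtual object (shadows, color bleeding, ...).
    delta = with_virtual - without_virtual
    # Real pixels keep the camera image plus the estimated change; virtual pixels
    # come directly from the rendering that contains the object.
    out = np.where(virtual_mask[..., None], with_virtual, camera_img + delta)
    return np.clip(out, 0.0, 1.0)
```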
Delta Voxel Cone Tracing
Author:Tobias Alexander Franke
Abstract : Mixed reality applications which must provide visual coherence between
synthetic and real objects need relighting solutions for both: synthetic
objects have to match lighting conditions of their real counterparts, while
real surfaces need to account for the change in illumination introduced by
the presence of an additional synthetic object. In this paper we present a
novel relighting solution called Delta Voxel Cone Tracing to compute both
direct shadows and first bounce mutual indirect illumination. We introduce a
voxelized, pre-filtered representation of the combined real and synthetic
surfaces together with the extracted illumination difference due to the
augmentation. In a final gathering step this representation is cone-traced
and superimposed onto both types of surfaces, adding additional light from
indirect bounces and synthetic shadows from antiradiance present in the
volume. The algorithm computes results at interactive rates, is temporally
coherent and to our knowledge provides the first real-time rasterizer
solution for mutual diffuse, glossy and perfect specular indirect reflections
between synthetic and real surfaces in mixed reality.
Importance Weighted Image Enhancement for Prosthetic Vision: An Augmentation Framework
Authors:Chris McCarthy, Nick Barnes
Abstract : Augmentations to enhance perception in prosthetic vision (also known as
bionic eyes) have the potential to improve functional outcomes significantly
for implantees. In current (and near-term) implantable electrode arrays,
resolution and dynamic range are highly constrained in comparison to the
images from modern cameras that can be head mounted. In this paper, we propose a
novel, generally applicable adaptive contrast augmentation framework for
prosthetic vision that addresses the specific perceptual needs of low
resolution and low dynamic range displays. The scheme accepts an externally
defined pixel-wise weighting of importance describing features of the image
to enhance in the output dynamic range. Our approach explicitly incorporates
the logarithmic scaling of enhancement required in human visual perception to
ensure perceivability of all contrast augmentations. It requires no
pre-existing contrast, and thus extends previous work in local contrast
enhancement to a formulation for general image augmentation. We demonstrate
the generality of our augmentation scheme for scene structure and looming
object enhancement using simulated prosthetic vision.
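As a hedged sketch of the kind of importance-driven, logarithmically scaled enhancement the abstract describes (the weighting and scaling below are placeholders, not the authors' formulation):
```python
import numpy as np

def importance_weighted_enhance(image, importance, max_gain=2.0):
    """Boost local contrast where an externally supplied importance map is high.

    image:      grayscale input, float32 in [0, 1]
    importance: pixel-wise importance weights in [0, 1]
    max_gain:   upper bound on the applied enhancement (assumed value)
    """
    # Work in the log domain so equal multiplicative steps correspond to roughly
    # equal perceived (Weber-law) contrast steps.
    log_img = np.log1p(image)
    mean = log_img.mean()
    gain = 1.0 + importance * (max_gain - 1.0)   # importance-dependent gain
    enhanced = mean + gain * (log_img - mean)
    return np.clip(np.expm1(enhanced), 0.0, 1.0)
```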
P-HRTF: Efficient Personalized HRTF Computation for High-Fidelity Spatial Sound
Abstract : Accurate rendering of 3D spatial audio for interactive virtual auditory
displays requires the use of personalized head-related transfer functions
(HRTFs). We present a new approach to compute personalized HRTFs for any
individual using a method that combines state-of-the-art image-based 3D
modeling with an efficient numerical simulation pipeline. Our 3D modeling
framework enables capture of the listener's head and torso using
consumer-grade digital cameras to estimate a high-resolution non-parametric
surface representation of the head, including the extended vicinity of the
listener's ear. We leverage sparse structure from motion and dense surface
reconstruction techniques to generate a 3D mesh. This mesh is used as input
to a numeric sound propagation solver, which uses acoustic reciprocity and
Kirchhoff surface integral representation to efficiently compute an
individual's personalized HRTF. The overall computation takes tens of minutes
on a multi-core desktop machine. We have used our approach to compute the
personalized HRTFs of a few individuals, and we present our preliminary
evaluation here. To the best of our knowledge, this is the first commodity
technique that can be used to compute personalized HRTFs in a lab or home
setting.
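Once a personalized HRTF is available as a pair of head-related impulse responses (HRIRs) for a source direction, spatializing a mono signal reduces to two convolutions; the sketch below illustrates only that final rendering step and is not part of the paper's pipeline.
```python
import numpy as np
from scipy.signal import fftconvolve

def spatialize(mono, hrir_left, hrir_right):
    """Render a mono signal binaurally with an HRIR pair for one source direction."""
    left = fftconvolve(mono, hrir_left, mode="full")
    right = fftconvolve(mono, hrir_right, mode="full")
    return np.stack([left, right], axis=-1)   # (N, 2) stereo output
```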
Visibility-Based Blending for Real-Time Applications
Abstract : There are many situations in which virtual objects are presented
half-transparently over a background in real-time applications. In such cases,
we often want to show the object with constant visibility. However, using the
conventional alpha blending, visibility of a blended object substantially
varies depending on colors, textures, and structures of the background scene.
To overcome this problem, we present a framework for blending images based on
a subjective metric of visibility. In our method, a blending parameter is
locally and adaptively optimized so that visibility of each location achieves
the targeted level. To predict visibility of an object blended by an
arbitrary parameter, we utilize one of the error visibility metrics that have
been developed for image quality assessment. In this study, we demonstrated
that the metric we used can linearly predict the visibility of a blended
pattern on various texture images, and showed that the proposed blending
methods work in practical augmented reality scenarios.
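A minimal sketch of the core idea, locally adjusting the blending parameter until a visibility measure hits a target level; the RMS-difference proxy below is only a stand-in for the error-visibility metric the paper actually uses.
```python
import numpy as np

def visibility_proxy(fg, bg, alpha):
    """Placeholder visibility measure: RMS difference between the blend and the background."""
    blend = alpha * fg + (1.0 - alpha) * bg
    return np.sqrt(np.mean((blend - bg) ** 2))

def alpha_for_target_visibility(fg_block, bg_block, target, iters=20):
    """Bisect alpha for one image block so its predicted visibility reaches the target."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if visibility_proxy(fg_block, bg_block, mid) < target:
            lo = mid   # not visible enough yet: increase alpha
        else:
            hi = mid
    return 0.5 * (lo + hi)
```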
Applications
Session :
Applications
Date & Time : September 10 02:00 pm - 03:30 pm Location : TBA Chair : Papers :
AR-IVI – Implementation of In-Vehicle Augmented Reality
Authors:Qing Rao, Tobias Tropper, Christian Grünler, Markus Hammori, Samarjit Chakraborty
Abstract : In the last three years, a number of automotive Augmented Reality (AR)
concepts and demonstrators have been presented, all looking for an
interpretation of what AR in a car may look like. In October 2013,
Mercedes-Benz exhibited to a public audience the AR In-Vehicle Infotainment
(AR-IVI) system aimed at defining an overall in-vehicle electric/electronic
(E/E) architecture for augmented reality rather than showing specific use
cases. In this paper, we explain the requirements and design decisions that
led to the system design, and we share the challenges and experiences in
developing the AR-IVI system in the prototype vehicle. Based on our
experiences, we give an outlook on future software and E/E architectural
challenges of in-vehicle augmented reality.
Thermal Touch: Thermography-Enabled Everywhere Touch Interfaces for Mobile Augmented Reality Applications
Author:Daniel Kurz
Abstract : We present an approach that makes any real object a true touch interface for
mobile Augmented Reality applications. Using infrared thermography, we detect
residual heat resulting from a warm fingertip touching the colder surface of
an object. This approach can clearly distinguish if a surface has actually
been touched, or if a finger only approached it without any physical contact,
and hence significantly less heat transfer. Once a touch has been detected in
the thermal image, we determine the corresponding 3D position on the touched
object based on visual object tracking using a visible light camera. Finally
the 3D position of the touch is used by human machine interfaces for
Augmented Reality providing natural means to interact with real and virtual
objects. The emergence of wearable computers and head-mounted displays
calls for alternatives to the touch screen, which is the primary user
interface in handheld Augmented Reality applications. Voice control and
touchpads provide a useful alternative for interacting with wearables for
certain tasks, but common interaction tasks in Augmented Reality in particular
require accurately selecting or defining 3D points on real surfaces. We propose to
enable this kind of interaction by simply touching the respective surface
with a fingertip. Based on tests with a variety of different materials and
different users, we show that our method enables intuitive interaction for
mobile Augmented Reality with most common objects.
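A hedged sketch of the detection step (thresholding residual warmth against a pre-touch reference and taking a heat-weighted centroid); the threshold and names are assumptions, not the paper's calibrated values.
```python
import numpy as np

def detect_touch(thermal_frame, reference_frame, delta_kelvin=0.5):
    """Return the (row, col) centroid of residual fingertip heat, or None.

    thermal_frame, reference_frame: temperature images in Kelvin (same shape)
    delta_kelvin: minimum warming over the pre-touch reference to count as a touch
    """
    heat = thermal_frame - reference_frame
    mask = heat > delta_kelvin
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    weights = heat[rows, cols]
    # Heat-weighted centroid of all pixels that warmed up.
    return (np.average(rows, weights=weights), np.average(cols, weights=weights))
```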
AR-Mentor: Augmented Reality Based Mentoring System
Authors:Zhiwei Zhu, Vlad Branzoi, Michael Wolverton, Louise Yarnall, Girish Acharya, Supun Samarasekera, Rakesh Kumar, Glen Murray, Nicholas Vitovitch
Abstract : AR-Mentor is a wearable, real-time Augmented Reality (AR) mentoring system
that is configured to assist in maintenance and repair tasks of complex
machinery, such as vehicles, appliances, and industrial machinery. The system
combines a wearable Optical-See-Through (OST) display device with high
precision 6-Degree-Of-Freedom (DOF) pose tracking and a virtual personal
assistant (VPA) with natural language, verbal conversational interaction,
providing guidance to the user in the form of visual, audio and locational
cues. The system is designed to be heads-up and hands-free allowing the user
to freely move about the maintenance or training environment and receive
globally aligned and context aware visual and audio instructions (animations,
symbolic icons, text, multimedia content, speech). The user can interact with
the system, ask questions and get clarifications and specific guidance for
the task at hand. A pilot application with AR-Mentor was successfully
developed to instruct a novice to perform an advanced 33-step maintenance
task on a training vehicle. The initial live training tests demonstrate that
AR-Mentor is able to help and serve as an assistant to an instructor, freeing
him or her to cover more students and to focus on higher-order teaching.
Towards Augmented Reality User Interfaces in 3D Media Production
Authors:Max Krichenbauer, Goshiro Yamamoto, Takafumi Taketomi, Christian Sandor, Hirokazu Kato
Abstract : The idea of using Augmented Reality (AR) user interfaces (UIs) to create 3D
media content, such as 3D models for movies and games, has been repeatedly
suggested over the last decade. Even though the concept is intuitively
compelling and recent technological advances have made such an application
increasingly feasible, very little progress has been made towards an actual
real-world application of AR in professional media production. To this day,
no immersive 3D UI has been commonly used by professionals for 3D computer
graphics (CG) content creation. In this paper, we are the first to publish a
requirements analysis for our target application in the professional domain.
Based on a survey that we conducted with media professionals, the analysis of
professional 3D CG software, and professional training tutorials, we identify
these requirements and put them into the context of AR UIs. From these
findings, we derive several interaction design principles that aim to address
the challenges of real-world application of AR to the production pipeline. We
implemented these in our own prototype system while receiving feedback from
media professionals. The insights gained in the survey, requirements
analysis, and user interface design are relevant for research and development
aimed at creating production methods for 3D media production.
Reconstruction and Fusion
Session :
Reconstruction and Fusion
Date & Time : September 11 02:00 pm - 03:45 pm Location : TBA Chair : Papers :
Improved Registration for Vehicular AR using Auto-Harmonization
Authors:Eric Foxlin, Thomas Calloway, Hongsheng Zhang
Abstract : This paper describes the design, development and testing of an AR system that
was developed for aerospace and ground vehicles to meet stringent accuracy
and robustness requirements. The system uses an optical see-through HMD, and
thus requires extremely low latency, high tracking accuracy and precision
alignment and calibration of all subsystems in order to avoid
mis-registration and “swim”. The paper focuses on the optical/inertial
hybrid tracking system and describes novel solutions to the challenges with
the optics, algorithms, synchronization, and alignment with the vehicle and
HMD systems. A system accuracy analysis is presented with simulation results
to predict the registration accuracy. Finally, a car test is used to create a
through-the-eyepiece video demonstrating well-registered augmentations of the
road and nearby structures while driving.
Real-Time Illumination Estimation from Faces for Coherent Rendering
Authors:Sebastian B. Knorr, Daniel Kurz
Abstract : We present a method for estimating the real-world lighting conditions within
a scene in real-time. The estimation is based on the visual appearance of a
human face in the real scene captured in a single image of a monocular
camera. In hardware setups featuring a user-facing camera, an image of the
user's face can be acquired at any time. The limited range in variations
between different human faces makes it possible to analyze their appearance
offline, and to apply the results to new faces. Our approach uses radiance
transfer functions - learned offline from a dataset of images of faces under
different known illuminations - for particular points on the human face.
Based on these functions, we recover the most plausible real-world lighting
conditions for measured reflections in a face, represented by a function
depending on incident light angle using Spherical Harmonics. The pose of the
camera relative to the face is determined by means of optical tracking, and
virtual 3D content is rendered and overlaid onto the real scene with a fixed
spatial relationship to the face. By applying the estimated lighting
conditions to the rendering of the virtual content, the augmented scene is
shaded coherently with regard to the real and virtual parts of the scene. We
show with different examples under a variety of lighting conditions, that our
approach provides plausible results, which considerably enhance the visual
realism in real-time Augmented Reality applications.
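Once radiance-transfer coefficients are known for sampled face points, recovering the Spherical Harmonics lighting from their measured intensities is a linear least-squares problem; the sketch below assumes that setup and is not the authors' exact estimator.
```python
import numpy as np

def estimate_sh_lighting(transfer, observed):
    """Recover SH lighting coefficients from observed face-point intensities.

    transfer: (N, K) matrix; row i holds the radiance-transfer coefficients of
              face point i in a K-term SH basis (e.g. K = 9 for order 2)
    observed: (N,) measured intensities of the same points in the camera image
    """
    coeffs, *_ = np.linalg.lstsq(transfer, observed, rcond=None)
    return coeffs
```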
Comprehensive Workspace Calibration for Visuo-Haptic Augmented Reality
Authors:Ulrich Eck, Frieder Pankratz, Christian Sandor, Gudrun Klinker, Hamid Laga
Abstract : Visuo-haptic augmented reality systems enable users to see and touch digital
information that is embedded in the real world. Precise co-location of
computer graphics and the haptic stylus is necessary to provide a realistic
user experience. PHANToM haptic devices are often used in such systems to
provide haptic feedback. They consist of two interlinked joints, whose angles
define the position of the haptic stylus and three sensors at the gimbal to
sense its orientation. Previous work has focused on calibration procedures
that align the haptic workspace within a global reference coordinate system
and on developing algorithms that compensate for the non-linear position error
caused by inaccuracies in the joint angle sensors. In this paper, we present
an improved workspace calibration that additionally compensates for errors in
the gimbal sensors. This enables us to also align the orientation of the
haptic stylus with high precision. To reduce the required time for
calibration and to increase the sampling coverage, we utilize time-delay
estimation to temporally align external sensor readings. This enables users
to continuously move the haptic stylus during the calibration process, as
opposed to commonly used point and hold processes. We conducted an evaluation
of the calibration procedure for visuo-haptic augmented reality setups with
two different PHANToMs and two different optical trackers. Our results show a
significant improvement of orientation alignment for both setups over the
previous state of the art calibration procedure. Improved position and
orientation accuracy results in higher fidelity visual and haptic
augmentations, which is crucial for fine-motor tasks in areas including
medical training simulators, assembly planning tools, or rapid prototyping
applications. A user friendly calibration procedure is essential for
real-world applications of VHAR.
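The time-delay estimation used to temporally align external sensor readings can be sketched as a cross-correlation of two synchronously sampled streams; this is a generic sketch, not the paper's specific procedure.
```python
import numpy as np

def estimate_delay(signal_a, signal_b, rate_hz):
    """Estimate how many seconds signal_b lags behind signal_a.

    Both signals are 1-D and uniformly sampled at rate_hz (e.g. stylus speed
    reported by the haptic device and by the optical tracker).
    """
    a = signal_a - signal_a.mean()
    b = signal_b - signal_b.mean()
    corr = np.correlate(b, a, mode="full")
    lag = np.argmax(corr) - (len(a) - 1)   # positive lag: signal_b trails signal_a
    return lag / rate_hz
```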
Recognition and reconstruction of transparent objects for Augmented Reality
Authors:Alan Francisco Torres-Gomez, Walterio Mayol-Cuevas
Abstract : Dealing with real transparent objects for AR is challenging due to their lack
of texture and visual features as well as the drastic changes in appearance
as the background, illumination and camera pose change. The few existing
methods for glass object detection usually require a carefully controlled
environment, specialized illumination hardware or ignore information from
different viewpoints. In this work, we explore the use of a learning approach
for classifying transparent objects from multiple images with the aim of both
discovering such objects and building a 3D reconstruction to support
convincing augmentations. We extract, classify and group small image patches
using a fast graph-based segmentation and employ a probabilistic formulation
for aggregating spatially consistent glass regions. We demonstrate our
approach via analysis of the performance of glass region detection and
example 3D reconstructions that allow virtual objects to interact with them.
Tracking
Session :
Tracking
Date & Time : September 11 04:00 pm - 06:00 pm Location : TBA Chair : Papers :
Pixel-Wise Closed-Loop Registration in Video-Based Augmented Reality
Abstract : In Augmented Reality (AR), visible misregistration can be caused by many
inherent error sources, such as errors in tracking, calibration, and
modeling. In this paper we present a novel pixel-wise closed-loop
registration framework that can automatically detect and correct registration
errors using a reference model comprised of the real scene model and the
desired virtual augmentations. Registration errors are corrected in both
global world space via camera pose refinement, and local screen space via
pixel-wise corrections, resulting in spatially accurate and visually coherent
registration. Specifically we present a registration-enforcing model-based
tracking approach that weights important image regions while refining the
camera pose estimates (from any conventional tracking method) to achieve
better registration, even in the case of modeling errors. To deal with
remaining errors, which can be rigid or non-rigid, we compute the optical
flow between the camera image and the real model image rendered with the
refined pose, enabling direct screen-space pixel-wise corrections to
misregistration. The estimated flow field can be applied to improve
registration in two distinct ways: (1) forward warping of modeled
on-real-object-surface augmentations (e.g., object re-texturing) into the
camera image, leading to surface details that are not present in the virtual
object; and (2) backward warping of the camera image into the real scene
model, preserving the full use of the dense geometry buffer (depth in
particular) provided by the combined real-virtual model for registration,
leading to pixel accurate real-virtual occlusion. We discuss the trade-offs
between, and different use cases of, forward and backward warping with
model-based tracking in terms of specific properties for registration. We
demonstrate the efficacy of our approach with both simulated and real data.
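The screen-space correction step can be illustrated with off-the-shelf dense optical flow and a backward warp; the Farneback flow below is a stand-in for whatever flow estimator the paper uses, and all names are assumptions.
```python
import cv2
import numpy as np

def warp_camera_into_model(model_gray, camera_gray, camera_color):
    """Backward-warp the camera frame into the rendered model view (sketch only)."""
    # Dense flow such that model(y, x) ~ camera(y + flow[..., 1], x + flow[..., 0]).
    flow = cv2.calcOpticalFlowFarneback(model_gray, camera_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = model_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    # For every model pixel, fetch the corresponding camera pixel.
    return cv2.remap(camera_color, map_x, map_y, cv2.INTER_LINEAR)
```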
Semi-Dense Visual Odometry for AR on a Smartphone
Authors:Thomas Schöps, Jakob Engel, Daniel Cremers
Abstract : We present a direct monocular visual odometry system which runs in real-time
on a smartphone. Being a direct method, it tracks and maps on the images
themselves instead of extracted features such as keypoints. New images are
tracked using direct image alignment, while geometry is represented in the
form of a semi-dense depth map. Depth is estimated by filtering over many
small-baseline, pixel-wise stereo comparisons. This leads to significantly
fewer outliers and allows us to map and use all image regions with sufficient
gradient, including edges. We show how a simple world model for AR
applications can be derived from semi-dense depth maps, and demonstrate the
practical applicability in the context of an AR application in which
simulated objects can collide with real geometry.
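The per-pixel depth filtering can be summarized, under the simplifying assumption of Gaussian inverse-depth estimates, as fusing the running estimate with each new small-baseline observation; this is a sketch, not the system's exact filter.
```python
def fuse_inverse_depth(mean, var, obs_mean, obs_var):
    """Fuse the running per-pixel inverse-depth estimate with one new stereo observation.

    All quantities are scalars (or element-wise arrays) modeling 1-D Gaussians;
    the fused variance shrinks with every observation.
    """
    fused_var = (var * obs_var) / (var + obs_var)
    fused_mean = (obs_var * mean + var * obs_mean) / (var + obs_var)
    return fused_mean, fused_var
```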
Sticky Projections - A New Approach to Interactive Shader Lamp Tracking
Authors:Christoph Resch, Peter Keitler, Gudrun Klinker
Abstract : Shader lamps can augment physical objects with projected virtual replications
using a camera-projector system, provided that the physical and virtual
object are well registered. Precise registration and tracking has been a
cumbersome and intrusive process in the past. In this paper, we present a new
method for tracking arbitrarily shaped physical objects interactively. In
contrast to previous approaches, our system is mobile and relies solely on
the projection of the virtual replication to track the physical object and
"stick" the projection to it. Our method consists of two stages, a fast pose
initialization based on structured light patterns and a non-intrusive
frame-by-frame tracking based on features detected in the projection. In the
initialization phase a dense point cloud of the physical object is
reconstructed and precisely matched to the virtual model to perfectly overlay
the projection. During the tracking phase, a radiometrically corrected
virtual camera view based on the current pose prediction is rendered and
compared to the captured image. Matched features are triangulated providing a
sparse set of surface points that is robustly aligned to the virtual model.
The alignment transformation serves as an input for the new pose prediction.
Quantitative experiments show that our approach can robustly track complex
objects at interactive rates.
Dense Planar SLAM
Authors:Renato Salas-Moreno, Ben Glocker, Paul Kelly, Andrew Davison
Abstract : Using higher-level entities during mapping has the potential to improve
camera localisation performance and give substantial perception capabilities
to real-time 3D SLAM systems. We present an efficient new real-time approach
which densely maps an environment using bounded planes and surfels extracted
from depth images (like those produced by RGB-D sensors or dense multi-view
stereo reconstruction). Our method offers the every-pixel descriptive power
of the latest dense SLAM approaches, but directly takes advantage of the
planarity of many parts of real-world scenes via a data-driven process to
regularize planar regions and represent their accurate extent
efficiently using an occupancy approach with on-line compression. Large areas
can be mapped efficiently and with useful semantic planar structure which
enables intuitive and useful AR applications such as using any wall or other
planar surface in a scene to display a user's content.
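One building block of such plane-based mapping, extracting a dominant plane from depth-derived 3D points, can be sketched with RANSAC; the paper's data-driven detection, surfels and occupancy compression go well beyond this.
```python
import numpy as np

def ransac_plane(points, iters=200, threshold=0.01, seed=0):
    """Fit a dominant plane n.x + d = 0 to an (N, 3) point cloud; return (n, d), inlier mask."""
    rng = np.random.default_rng(seed)
    best_inliers, best_plane = None, None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:            # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n.dot(sample[0])
        inliers = np.abs(points @ n + d) < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane, best_inliers
```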
Real-time Deformation, Registration and Tracking of Solids Based on Physical Simulation
Authors:Ibai Leizea, Hugo Álvarez, Iker Aguinaga, Diego Borro
Abstract : This paper proposes a novel approach to registering deformations of 3D
non-rigid objects for Augmented Reality applications. Our prototype is able
to handle different types of objects in real-time regardless of their
geometry and appearance (with and without texture) with the support of an
RGB-D camera. During an automatic offline stage, the model is processed in
order to extract the data that serves as input for a physics-based
simulation. Using its output, the deformations of the model are estimated by
considering the simulated behaviour as a constraint. Furthermore, our
framework incorporates a tracking method based on templates in order to
detect the object in the scene and continuously update the camera pose
without any user intervention. Therefore, it is a complete solution that
extends from tracking to deformation formulation for either textured or
untextured objects regardless of their geometrical shape. Our proposal
focuses on providing visually correct results at a low computational cost.
Experiments with real and synthetic data demonstrate the visual accuracy and
the performance of our approach.
UI
Session :
UI
Date & Time : September 11 11:00 am - 12:45 pm Location : TBA Chair : Papers :
Grasp-Shell vs Gesture-Speech: A comparison of direct and indirect natural interaction techniques in Augmented Reality
Authors:Thammathip Piumsomboon, David Altimira, Hyungon Kim, Adrian Clark, Gun Lee, Mark Billinghurst
Abstract : In order for natural interaction in Augmented Reality (AR) to become widely
adopted, the techniques used need to be shown to support precise interaction,
and the gestures used proven to be easy to understand and perform. Recent
research has explored free-hand gesture interaction with AR interfaces, but
there have been few formal evaluations conducted with such systems. In this
paper we introduce and evaluate two natural interaction techniques: the
free-hand gesture based Grasp-Shell, which provides direct physical
manipulation of virtual content; and the multi-modal Gesture-Speech, which
combines speech and gesture for indirect natural interaction. These
techniques support object selection, 6 degree of freedom movement, uniform
scaling, as well as physics-based interaction such as pushing and flinging.
We conducted a study evaluating and comparing Grasp-Shell and Gesture-Speech
for fundamental manipulation tasks. The results show that Grasp-Shell
outperforms Gesture-Speech in both efficiency and user preference for
translation and rotation tasks, while Gesture-Speech is better for uniform
scaling. They could be good complementary interaction methods in a
physics-enabled AR environment, as this combination potentially provides both
control and interactivity in one interface. We conclude by discussing
implications and future directions of this research.
Improving Co-presence with Augmented Visual Communication Cues for Sharing Experience through Video Conference
Authors:Seungwon Kim, Gun Lee, Nobuchika Sakata, Mark Billinghurst
Abstract : Video conferencing is becoming more widely used in areas other than
face-to-face conversation, such as sharing real world experience with remote
friends or family. In this paper we explore how adding augmented visual
communication cues can improve the experience of sharing remote task space
and collaborating together. We developed a prototype system that allows users
to share live video view of their task space taken on a Head Mounted Display
(HMD) or Handheld Display (HHD), and communicate through not only voice but
also using augmented pointer or annotations drawn on the shared view. To
explore the effect of having such an interface for remote collaboration, we
conducted a user study comparing three video-conferencing conditions with
different combination of communication cues: (1) voice only, (2) voice +
pointer, and (3) voice + annotation. The participants used our remote
collaboration system to share a parallel experience of puzzle solving in the
user study, and we found that adding augmented visual cues significantly
improved the sense of being together. The pointer was the additional cue that
users preferred most for the parallel experience, and we observed distinct
patterns of user behavior during the remote collaboration.
A Study of Depth Perception in Hand-Held Augmented Reality using Autostereoscopic Displays
Authors:Matthias Berning, Daniel Kleinert, Till Riedel, Michael Beigl
Abstract : Displaying three-dimensional content on a flat display is bound to reduce the
impression of depth, particularly for mobile video see-through augmented
reality. Several applications in this domain can benefit from accurate depth
perception, especially if there are contradictory depth cues, such as occlusion
in an x-ray visualization. The use of stereoscopy for this effect is already
prevalent in head-mounted displays, but there is little research on the
applicability for hand-held augmented reality. We have implemented such a
prototype using an off-the-shelf smartphone equipped with a stereo camera and
an autostereoscopic display. We designed and conducted an extensive user
study to explore the effects of stereoscopic hand-held augmented reality on
depth perception. The results show that in this scenario depth judgment is
mostly influenced by monoscopic depth cues, but our system can improve
positioning accuracy in challenging scenes.
Measurements of Live Actor Motion in Mixed Reality Interaction
Authors:Gregory Hough, Ian Williams, Cham Athwal
Abstract : This paper presents a method for measuring the magnitude and impact of errors
in mixed reality interactions. We define the errors as measurements of hand
placement accuracy and consistency within bimanual movement of an interactive
virtual object. First, a study is presented which illustrates the amount of
variability between the hands and the mean distance of the hands from the
surfaces of a common virtual object. The results allow a discussion of the
most significant factors which should be considered in the context of
developing realistic mixed reality interaction systems. The degree of error
was found to be independent of interaction speed, whilst the size of virtual
object and the position of the hands are significant. Second, a further study
illustrates how perceptible these errors are to a third person viewer of the
interaction (e.g. an audience member). We found that interaction errors
arising from the overestimation of an object surface affected the visual
credibility for the viewer considerably more than an underestimation of the
object. This work is presented within the application of a real-time
Interactive Virtual Television Studio, which offers convincing real-time
interaction for live TV production. We believe the results and methodology
presented here could also be applied for designing, implementing and
assessing interaction quality in many other Mixed Reality applications.
Layout and HMD-VST
Session :
Layout and HMD-VST
Date & Time : September 12 01:00 pm - 03:30 pm Location : TBA Chair : Papers :
Creating Automatically Aligned Consensus Realities for AR Videoconferencing
Authors:Nicolas Lehment, Daniel Merget, Gerhard Rigoll
Abstract : This paper presents an AR videoconferencing approach merging two remote rooms
into a shared workspace. Such bilateral AR telepresence inherently suffers
from breaks in immersion stemming from the different physical layouts of
participating spaces. As a remedy, we develop an automatic alignment scheme
which ensures that participants share a maximum of common features in their
physical surroundings. The system optimizes alignment with regard to initial
user position, free shared floor space, camera positioning and other factors.
Thus we can reduce discrepancies between different room and furniture layouts
without actually modifying the rooms themselves. A description and discussion
of our alignment scheme is given along with an exemplary implementation on
real-world datasets.
FLARE: Fast Layout for Augmented Reality Applications
Abstract : Creating a layout for an augmented reality (AR) application which embeds
virtual objects in a physical environment is difficult as it must adapt to
any physical space. We propose a rule-based framework for generating object
layouts for AR applications. Under our framework, the developer of an AR
application specifies a set of rules (constraints) which enforce
self-consistency (rules regarding the inter-relationships of application
components) and scene-consistency (application components are consistent with
the physical environment they are placed in). When a user enters a new
environment, we create, in real-time, a layout for the application, which is
consistent with the defined constraints (as much as possible). We find the
optimal configurations for each object by solving a constraint-satisfaction
problem. Our stochastic move making algorithm is domain-aware, and allows us
to efficiently converge to a solution for most rule-sets. In the paper we
demonstrate several augmented reality applications that automatically adapt
to different rooms and changing circumstances in each room.
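The rule-based layout search can be sketched as a stochastic, annealing-style minimization of rule penalties over candidate placements; the cost terms and move strategy below are placeholders, not FLARE's domain-aware moves.
```python
import random

def optimize_layout(objects, candidate_poses, rules, iters=2000, temperature=1.0):
    """Assign each object one of its candidate poses so that rule penalties are minimized.

    objects:         list of application components
    candidate_poses: dict mapping each object to its admissible poses in the room
    rules:           list of functions layout -> penalty (0 means satisfied)
    """
    layout = {o: random.choice(candidate_poses[o]) for o in objects}
    cost = sum(rule(layout) for rule in rules)
    for step in range(iters):
        obj = random.choice(objects)                    # pick one component to move
        proposal = dict(layout)
        proposal[obj] = random.choice(candidate_poses[obj])
        new_cost = sum(rule(proposal) for rule in rules)
        t = temperature * (1.0 - step / iters)          # simple cooling schedule
        if new_cost < cost or random.random() < t:      # occasionally accept worse moves
            layout, cost = proposal, new_cost
    return layout, cost
```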
Presence and Discernability in Conventional and Non-Photorealistic Immersive Augmented Reality
Authors:William Steptoe, Simon Julier, Anthony Steed
Abstract : Non-photorealistic rendering (NPR) has been shown to be a powerful way to
enhance both visual coherence and immersion in augmented reality (AR).
However, it has only been evaluated in idealized pre-rendered scenarios with
handheld AR devices. In this paper we investigate the use of NPR in an
immersive, stereoscopic, wide field-of-view head-mounted video see-through AR
display. This is a demanding scenario, which introduces many real-world
effects including latency, tracking failures, optical artifacts and
mismatches in lighting. We present the AR-Rift, a low-cost video see-through
AR system using an Oculus Rift and consumer webcams. We investigate the
themes of consistency and immersion as measures of psychophysical
non-mediation. An experiment measures discernability and presence in three
visual modes: conventional (unprocessed video and graphics), stylized
(edge-enhancement) and virtualized (edge-enhancement and color extraction).
The stylized mode results in chance-level discernability judgments,
indicating successful integration of virtual content to form a visually
coherent scene. Conventional and virtualized rendering bias judgments towards
correct and incorrect, respectively. Presence, as it may apply to immersive AR
and measured both behaviorally and subjectively, is seen to be similarly high
across all three conditions.
WeARHand: Head-Worn, RGB-D Camera-Based, Bare-Hand User Interface with Visually Enhanced Depth Perception
Authors:Taejin Ha, Steven Feiner, Woontack Woo
Abstract : We introduce WeARHand, which allows a user to manipulate virtual 3D objects
with a bare hand in a wearable augmented reality (AR) environment. Our method
uses no environmentally tethered tracking devices and localizes a pair of
near-range and far-range RGB-D cameras mounted on a head-worn display and a
moving bare hand in 3D space by exploiting depth input data. Depth perception
is enhanced through egocentric visual feedback, including a semi-transparent
proxy hand. We implement a virtual hand interaction technique and feedback
approaches, and evaluate their performance and usability. The proposed method
can apply to many 3D interaction scenarios using hands in a wearable AR
environment, such as AR information browsing, maintenance, design, and games.
HMD-Optical
Session :
HMD-Optical
Date & Time : September 12 11:45 am - 12:45 pm Location : TBA Chair : Papers :
Performance and Sensitivity Analysis of INDICA: INteraction-free DIsplay CAlibration for Optical See-Through Head-Mounted Displays
Authors:Yuta Itoh, Gudrun Klinker
Abstract : An issue in AR applications with an Optical See-Through Head-Mounted Display
(OST-HMD) is to correctly project 3D information to the current viewpoint of
the user. Manual calibration methods give the projection as a black box which
explains observed 2D-3D relationships well (Fig. 1). Recently, we have
proposed an INteraction-free DIsplay CAlibration method (INDICA) for OST-HMDs,
utilizing camera-based eye tracking [7]. It reformulates the projection in two
ways: a black box with an actual eye model (Recycle Setup), and a combination
of an explicit display model and an eye model (Full Setup). Although we have
shown the former performs more stably than a repeated SPAAM calibration, we
could not yet prove whether the same holds for the Full Setup. More
importantly, it is still unclear how the error in the calibration parameters
affects the final results. Thus, users cannot know how accurately they
need to estimate each parameter in practice. We provide: (1) the fact that
the Full Setup performs as accurately as the Recycle Setup under a
marker-based display calibration, (2) an error sensitivity analysis for both
SPAAM and INDICA over the on-/offline parameters, and (3) an investigation of
the theoretical sensitivity on an OST-HMD justified by the real measurements.
Analysing the Effects of a Wide Field of View Augmented Reality Display on Search Performance in Divided Attention Tasks
Authors:Naohiro Kishishita, Kiyoshi Kiyokawa, Ernst Kruijff, Jason Orlosky, Tomohiro Mashita, Haruo Takemura
Abstract : A wide field of view augmented reality display is a special type of head-worn
device that enables users to view augmentations in the peripheral visual
field. However, the actual effects of a wide field of view display on the
perception of augmentations have not been widely studied. To improve our
understanding of this type of display when conducting divided attention
search tasks, we conducted an in-depth experiment testing two view management
methods, in-view and in-situ labelling. With in-view labelling, search target
annotations appear on the display border with a corresponding leader line,
whereas in-situ annotations appear without a leader line, as if they are
affixed to the referenced objects in the environment. Results show that
target discovery rates consistently drop with in-view labelling and increase
with in-situ labelling as display angle approaches 100 degrees of field of
view. Past this point, the performances of the two view management methods
begin to converge, suggesting equivalent discovery rates at approximately 130
degrees of field of view. Results also indicate that users exhibited lower
discovery rates for targets appearing in peripheral vision, and that there is
little impact of field of view on response time and mental workload.
SmartColor: Real-Time Color Correction and Contrast for Optical See-Through Head-Mounted Displays
Authors:Juan David Hincapié-Ramos, Levko Ivanchuk, Srikanth Kirshnamachari Sridharan, Pourang Irani
Abstract : Users of optical see-through head-mounted displays (OHMD) perceive color as a
blend of the display color and the background. Color-blending is a major
usability challenge as it leads to loss of color encodings and poor text
legibility. Color correction aims at mitigating color blending by producing
an alternative color which, when blended with the background, more closely
approaches the color originally intended. To date, approaches to color
correction do not yield optimal results or do not work in real-time. This
paper makes two contributions. First, we present QuickCorrection, a real-time
color correction algorithm based on display profiles. We describe the
algorithm, measure its accuracy and analyze two implementations for the
OpenGL graphics pipeline. Second, we present SmartColor, a middleware for
color management of user-interface components in OHMD. SmartColor uses color
correction to provide three management strategies: correction, contrast, and
show-up-on-contrast. Correction determines the alternate color which best
preserves the original color. Contrast determines the color which best
guarantees text legibility while preserving as much of the original hue as
possible. Show-up-on-contrast makes a component visible when a related
component does not have enough contrast to be legible. We describe SmartColor’s
architecture and illustrate the color strategies for various types of display
content.
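On an additive optical see-through display the perceived color is roughly the display color plus the background, so the simplest possible correction subtracts the background and clamps; this naive sketch only motivates the problem and is far cruder than the display-profile-based QuickCorrection.
```python
import numpy as np

def naive_color_correction(target_rgb, background_rgb):
    """Pick a display color so that display + background approximates the target.

    Colors are linear RGB in [0, 1]. Targets darker than the background cannot be
    reproduced on an additive display, which is exactly where the clamp bites.
    """
    target = np.asarray(target_rgb, dtype=np.float32)
    background = np.asarray(background_rgb, dtype=np.float32)
    display = np.clip(target - background, 0.0, 1.0)
    perceived = np.clip(display + background, 0.0, 1.0)   # what the user would see
    return display, perceived
```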
Minimizing Latency for Augmented Reality Displays: Frames Considered Harmful
Authors:Feng Zheng, Turner Whitted, Anselmo Lastra, Peter Lincoln, Andrei State, Andrew Maimone, Henry Fuchs
Abstract : We present initial results from a new image generation approach for
low-latency displays such as those needed in head-worn AR devices. Avoiding
the usual video interfaces, such as HDMI, we favor direct control of the
internal display technology. We illustrate our new approach with a bench-top
optical see-through AR proof-of-concept prototype that uses a Digital Light
Processing (DLP) projector whose Digital Micromirror Device (DMD) imaging
chip is directly controlled by a computer, similar to the way random access
memory is controlled. We show that a perceptually-continuous-tone dynamic
gray-scale image can be efficiently composed from a very rapid succession of
binary (partial) images, each calculated from the continuous-tone image
generated with the most recent tracking data. As the DMD projects only a
binary image at any moment, it cannot instantly display this latest
continuous-tone image, and conventional decomposition of a continuous-tone
image into binary time-division-multiplexed values would induce just the
latency we seek to avoid. Instead, our approach maintains an estimate of the
image the user currently perceives, and at every opportunity allowed by the
control circuitry, sets each binary DMD pixel to the value that will reduce
the difference between that user-perceived image and the newly generated
image from the latest tracking data. The resulting displayed binary image is
"neither here nor there," but always approaches the moving target that is the
constantly changing desired image, even when that image changes every 50µs.
We compare our experimental results with imagery from a conventional DLP
projector with similar internal speed, and demonstrate that AR overlays on a
moving object are more effective with this kind of low-latency display device
than with displays of similar speed that use a conventional video interface.
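The per-update rule described in the abstract can be sketched as follows: keep a running estimate of the gray level the user perceives and switch each binary mirror to whichever state moves that estimate toward the newest target image. The integration model and constants below are assumptions for illustration only.
```python
import numpy as np

def next_binary_frame(perceived, target, blend=0.05):
    """One conceptual DMD update step.

    perceived: current estimate of the perceived gray level, float32 in [0, 1]
    target:    newest continuous-tone image rendered from the latest tracking data
    blend:     assumed contribution of a single rapid binary frame to perception
    """
    # Turn a mirror on wherever the perceived image is still darker than the target.
    binary = (target > perceived).astype(np.float32)
    # Model the eye's temporal integration of rapid binary frames as an exponential average.
    perceived = (1.0 - blend) * perceived + blend * binary
    return binary, perceived
```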
Medical
Session :
Medical
Date & Time : September 12 04:00 pm - 05:30 pm Location : TBA Chair : Papers :
Abstract : This paper proposes an efficient method to capture and augment highly elastic
objects from a single view. 3D shape recovery from a monocular video sequence
is an underconstrained problem and many approaches have been proposed to
enforce constraints and resolve the ambiguities. State-of-the-art solutions
enforce smoothness or geometric constraints, consider specific deformation
properties such as inextensibility, or resort to shading constraints.
However, few of them can properly handle large elastic deformations. We
propose in this paper a real-time method which makes use of a mechanical
model and is able to handle highly elastic objects. Our method is formulated
as an energy minimization problem accounting for a non-linear elastic model
constrained by external image points acquired from a monocular camera. This
avoids the need to formulate restrictive assumptions and specific constraint
terms in the minimization. The only parameter involved in the
method is the Young’s modulus but we show in experiments that a rough
estimate of the Young’s modulus is sufficient to obtain a good
reconstruction. Our method is compared to existing techniques with
experiments conducted on computer-generated and real data that show the
effectiveness of our approach. Experiments in the context of minimally
invasive liver surgery are also provided.
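Under assumed notation (not taken from the paper), the energy minimization described above can be summarized as an elastic term, parameterized by the Young's modulus, plus a penalty tying tracked model points to their observed image positions:
```latex
% u     : displacement field of the mechanical model
% x_i(u): 3D model points after deformation, p_i: observed 2D image points
% \Pi   : camera projection, W: non-linear elastic strain energy (Young's modulus E)
\min_{u}\; W_{\mathrm{elastic}}(u; E)\;+\;\lambda \sum_{i}\bigl\lVert \Pi\bigl(x_i(u)\bigr)-p_i\bigr\rVert^{2}
```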
Improved Interventional X-ray Appearance
Authors:Xiang Wang, Christian Schulte zu Berge, Stefanie Demirci, Pascal Fallavollita, Nassir Navab
Abstract : Depth cues are an essential part of navigation and device positioning tasks
during clinical interventions. Yet, many minimally-invasive procedures, such
as catheterizations, are usually performed under X-ray guidance only
depicting a 2D projection of the anatomy, which lacks depth information.
Previous attempts to integrate pre-operative 3D data of the patient by
registering these to intra-operative data have led to virtual 3D renderings
independent of the original X-ray appearance and planar 2D color overlays
(e.g. roadmaps). A major drawback of these solutions is that X-ray attenuation
values are completely neglected in the 3D renderings, while depth perception
is not incorporated into the 2D roadmaps. This paper presents a novel
technique for enhancing the depth perception of interventional X-ray images
while preserving the original attenuation
appearance. Starting from patient-specific pre-operative 3D data, our method
relies on GPU ray casting to compute a colored depth map, which assigns a
predefined color to the first incidence of gradient magnitude value above a
predefined threshold along the ray. The colored depth map values are
carefully integrated into the X-Ray image while maintaining its original
grayscale intensities. The presented method was tested and analysed for three
relevant clinical scenarios covering different anatomical aspects and
targeting different levels of interventional expertise. Results demonstrate
that improving depth perception of X-ray images has the potential to lead to
safer and more efficient clinical interventions.
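The ray-casting rule for the colored depth map, recording the first sample along each ray whose gradient magnitude exceeds a threshold, can be sketched on the CPU as below; the GPU implementation and clinical parameters are the paper's, while the code and constants here are illustrative.
```python
import numpy as np

def first_gradient_hit(grad_mag, origin, direction, step=0.5, threshold=50.0, max_steps=2000):
    """Depth (in step units) of the first strong-gradient sample along a ray, or None.

    grad_mag:  precomputed gradient-magnitude volume (e.g. from np.gradient on the CT/MRI)
    origin:    ray start in voxel coordinates, shape (3,)
    direction: unit ray direction, shape (3,)
    """
    pos = np.asarray(origin, dtype=np.float32)
    direction = np.asarray(direction, dtype=np.float32)
    for i in range(max_steps):
        idx = tuple(np.round(pos).astype(int))
        if any(c < 0 or c >= s for c, s in zip(idx, grad_mag.shape)):
            return None                      # ray left the volume without a hit
        if grad_mag[idx] > threshold:
            return i * step                  # depth of the first incidence
        pos = pos + step * direction
    return None
```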
Computer-Assisted Laparoscopic Myomectomy by Augmenting the Uterus with Pre-operative MRI Data
Authors:Toby Collins, Daniel Pizarro, Adrien Bartoli, Nicolas Bourdel
Abstract : An active research objective in Computer Assisted Intervention (CAI) is to
develop guidance systems to aid surgical teams in laparoscopic Minimally
Invasive Surgery (MIS) using Augmented Reality (AR). This involves
registering and fusing additional data from other modalities and overlaying
it onto the laparoscopic video in real time. We present the first AR-based
image guidance system for assisted myoma localisation in uterine
laparosurgery. This involves a framework for semi-automatically registering a
pre-operative Magnetic Resonance Image (MRI) to the laparoscopic video with a
deformable model. Although there have been several previous works involving
other organs, this is the first to tackle the uterus. Furthermore, whereas
previous works perform registration between one or two laparoscopic images
(which come from a stereo laparoscope) we show how to solve the problem using
many images (e.g. 20 or more), and show that this can dramatically improve
registration. Also unlike previous works, we show how to integrate occluding
contours as registration cues. These cues provide powerful registration
constraints and should be used wherever possible. We present retrospective
qualitative results on a patient with two myomas and quantitative
semi-synthetic results. Our multi-image framework is quite general and could
be adapted to improve the registration of other organs with other modalities,
such as CT.
MASH'D: Theory and Evaluation
Session :
MASH'D: Theory and Evaluation
Date & Time : September 12 02:00 pm - 03:30 pm Location : TBA Chair : Papers :
nARratives of augmented worlds
Authors:Roy Shilkrot, Nick Montfort, Pattie Maes
Abstract : This paper presents an examination of augmented reality (AR) as a rising form
of interactive narrative that combines computer-generated elements with
reality, fictional with non-fictional objects, in the same immersive
experience. Based on contemporary theory in narratology, we propose to view
this blending of reality worlds as a metalepsis, a transgression of reality
and fiction boundaries, and argue that authors could benefit from using
existing conventions of narration to emphasize the transgressed boundaries,
as is done in other media. Our contribution is three-fold: first, we analyze
the inherent connection between narrative, immersion, interactivity,
fictionality and AR using narrative theory; second, we comparatively
survey actual works in AR narratives from the past 15 years based on these
elements from the theory. Lastly, we postulate a future for AR narratives
through the perspective of the advancing technologies of both interactive
narratives and AR.
A Theory of Meaning for Mixed Reality Walking Tours
Author:Evan Barba
Abstract : In the broadest sense, Mixed and Augmented Reality experiences mix sensory
and conceptual elements both externally in the world and in the minds of
their users. The question of how participants in these experiences derive
meaning from these hybrid realities is important for both analysis and
design. By focusing on MAR cultural heritage walking tours, this paper
develops a theory of meaning-making based on the aboriginal walkabout that
accounts for both physical and conceptual experience. Through an interweaving
of concepts from anthropology, architecture, design, cognitive science and
MAR itself, I demonstrate that this theory is compatible with known principles
of brain function and human behavior and thus it serves as a more general
theory of meaning-making applicable beyond the MAR walking tours from which
it was derived.
Can mobile augmented reality systems assist in portion estimation? A user study.
Authors:Thomas Stütz, Radomir Dinic, Michael Domhardt, Simon Ginzinger
Abstract : Accurate assessment of nutrition information is an important part in the
prevention and treatment of a multitude of diseases, but remains a
challenging task. We present a novel mobile augmented reality application,
which assists users in the nutrition assessment of their meals. Using the
real-time camera image as a guide, the user overlays a 3D form of the food.
Additionally the user selects the food type. The corresponding nutrition
information is automatically computed. Thus accurate volume estimation is
required for accurate nutrition information assessment. This work presents an
evaluation of our mobile augmented reality approaches for portion estimation
and offers a comparison to conventional portion estimation approaches. The
comparison is performed on the basis of a user study (n=28). The quality of
nutrition assessment is measured by the error in energy units. In the
evaluation, one of our mobile augmented reality approaches significantly
outperforms all other methods. Additionally, we present results
on the efficiency and effectiveness of the approaches.
Evaluating Controls for a Point and Shoot Mobile Game: Augmented Reality, Tilt and Touch
Authors:Asier Marzo, Benoît Bossavit, Martin Hachet
Abstract : Controls based on Augmented Reality (AR), Tilt and Touch have been evaluated
in a point and shoot game for mobile devices. A user study (n=12) was
conducted to compare the three controls in terms of player experience and
accuracy. Tilt and AR controls provided more enjoyment, immersion and
accuracy to the players than Touch. Nonetheless, Touch caused fewer nuisances
and was playable in more varied situations. Despite the current technical
limitations, we suggest incorporating AR controls into mobile games that can
support them. Nowadays, AR controls can be implemented on handheld devices
as easily as the more established Tilt and Touch controls. However, this
study is the first to compare the three, and thus its findings could be of
interest to game developers.
MASH'D: AR Interaction and Creativity
Session :
MASH'D: AR Interaction and Creativity
Date & Time : September 12 11:00 am - 12:30 pm Location : TBA Chair : Papers :
Effects of Mobile AR-Enabled Interactions on Retention and Transfer for Learning in Art Museum Contexts
Authors:Weiquan Lu, Linh-Chi Nguyen, Teong Leong Chuah, Ellen Yi-Luen Do
Abstract : In this paper, we describe an experiment to study the effect of mobile
Augmented Reality (AR) on learning in art museum contexts. We created six
original paintings and placed them in a mini art museum. We then created an
AR application on the iPad to enable the artist to visually augment each
painting by introducing animation. We then measured the ability of the
visitors to remember the appearance of the paintings after 24 hours, as well
as their ability to objectify the paintings. Experiment results show that
while AR does improve retention and transfer of such art information, the
benefits of AR are mediated by other factors such as interference from other
elements of the exhibition, as well as subjects' own prior art experience and
training. The use of AR may also produce unexpected benefits, such as
providing users with a new perspective of the artwork, as well as increasing
their curiosity and encouraging them to experiment with the technology. Such
benefits may potentially improve the chances for learning and analytical
activities to take place.
AR PETITE THEATER: Augmented Reality Storybook for Supporting Children's Empathy
Authors:Kyungwon Gil, Jimin Rhim, Taejin Ha, Young Yim Doh, Woontack Woo
Abstract : In this paper, we present AR Petite Theater, a storybook that enables
role-play using augmented reality (AR) technology. It provides an opportunity
for children to learn the ability of empathy through interactive reading
experience by thinking and speaking in accordance with the character’s role
of the story. In general, empathy is one of the most important elements for
children to make friends at school and to expand their social relations. In
particular, it is crucial for early school-age children who have difficulties
in getting along with friends due to their egocentric perspective. Through
the experiment with 24 six-year-old children, we measured children’s
role-playing participation and perspective taking state. As a result, more
empathic behaviors were revealed in the AR group. Children in the AR
condition were more actively involved in role-playing and showed less
unrelated perspectives than children in the non-AR condition. Therefore, we
verified that AR Petite Theater had the potential of expanding children’s
ability to empathize with others.
Integrating Augmented Reality to Enhance Expression, Interaction & Collaboration in Live Performances: A Ballet Dance Case Study
Authors:Alexis Clay, Gaël Domenger, Julien Conan, Axel Domenger, Nadine Couture
Abstract : The democratization of high-end, affordable and off-the-shelf sensors and
displays triggered an explosion in the exploration of interaction and
projection in arts. Although mostly witnessed in interactive artistic
installations (e.g. museums and exhibitions), performing arts also explore
such technologies, using interaction and augmented reality as part of the
performance. Such works often emerge from collaborations between artists and
scientists. Although the two fields may appear to be opposites, we advocate
that both can greatly benefit from this type of collaboration. Since 2006 the
authors of this paper (from a research laboratory and a national ballet
company) have collaborated on augmenting a ballet performance using a
dancer’s movements for interaction. We focus on large productions using
high-end motion capture and projection systems to allow dancers to interact
with virtual elements on an augmented stage in front of several hundred
people. To achieve this, we introduce an ‘augmented reality engineer’,
whose role is to design the augmented reality systems and interactions
according to a show’s aesthetic and choreographic message, and to control
them during the performance alongside the light and sound technicians. Our
latest production, Debussy3.0, is an augmented ballet based on La Mer by Claude
Debussy, featuring body interactions by one of the dancers and backstage
interactions by the augmented reality engineer. For the first time, we
explored 3D stereoscopy as a display technique for augmented reality and
interaction in real-time on stage. The show was presented at Biarritz Casino
in December 2013 in front of around 700 people. In this paper, we present the
Debussy3.0 augmented ballet both as a result of the use of augmented reality
in performing arts and as a guiding thread to provide feedback on
arts-science collaboration. First, we will describe how the ballet was
constructed aesthetically, technically and in its choreography. We will
discuss and provide feedback on the use of motion capture and stereoscopy
techniques in a live show and will then broaden the scope of discussion,
providing feedback on art-science collaboration, the traps and benefits for
both parties, and the positive repercussions it can bring to a laboratory
when working on industrial projects.
AIBLE: An Inquiry-Based Augmented Reality Environment for Teaching Astronomical Phenomena
Authors:Stéphanie Fleck, Gilles Simon, J.M. Christian Bastien
Abstract : We present an inquiry-based augmented reality (AR) learning environment
(AIBLE) designed for teaching basic astronomical phenomena in elementary
classrooms (children 8 to 11 years old). The novelty of this environment lies
in the combination of both Inquiry Based Sciences Education and didactics
principles with AR features. This environment was user tested by 69 pupils in
order to assess its impact on learning. The main results indicate that AIBLE
provides new opportunities for the identification of learners’ problem
solving strategies.
VAL: Visually Augmented Laser cutting to enhance and support creativity
Abstract : Laser cutters are increasingly relevant in many user contexts and have
become an essential tool for model building and prototyping. While providing
precision and flexibility, these tools are typically suited to expert staff
in industrial settings. VAL (Visually Augmented Laser cutting) proposes a
novel system utilizing spatial augmented reality techniques to provide visual
augmentation directly on the work surface. VAL involves projection of the
user’s model prior to and during laser cutting, providing key benefits
including reduced idle time, fewer errors, and support for new
creative practices. We interview and observe laser cutter users to identify
issues and concerns in the shared work context of a design school and
describe the design process for our prototype, which aims to address these
problems and unmet needs. Initial evaluation suggests VAL reduces complexity
and raises user confidence. Our findings extend research on adapting new use
contexts and creative practices with industrial fabrication tools.