Date & Time : Wednesday, October 02, 01:30 pm - 04:30 pm
Location : TBA
Posters :
Acceleration Methods for Radiance Transfer in Photorealistic Augmented Reality
Organizers:
Lukas Gruber, Pradeep Sen, Tobias Höllerer, Dieter Schmalstieg
Description:
Radiance transfer computation from unknown real-world environments is an
intrinsic task in probe-less photometric registration for photorealistic
augmented reality, which affects both the accuracy of the real-world light
estimation and the quality of the rendering. We discuss acceleration methods
that can reduce the overall ray-tracing costs for computing the radiance
transfer for photometric registration in order to free up resources for more
advanced augmented reality lighting. We also present metrics for systematic
evaluation.
An Outdoor Ground Truth Evaluation Dataset for Sensor-Aided Visual Handheld Camera Localization
Organizers:
Daniel Kurz, Peter Georg Meier, Alexander Plopski, Gudrun Klinker
Description:
We introduce the first publicly available test dataset for outdoor handheld
camera localization comprising over 45,000 real camera images of an urban
environment captured under natural camera motions and different illumination
settings. For all these images, the dataset contains not only readings of the
sensors attached to the camera, but also ground truth information on the
geometry and texture of the environment and the full 6DoF ground truth camera
pose. This poster describes the extensive process of creating this
comprehensive dataset that we have made available to the public. We hope this
not only enables researchers to objectively evaluate their camera
localization and tracking algorithms and frameworks on realistic data but
also stimulates further research.
AR in the Library: A Pilot Study of Multi-Target Acquisition Usability
Organizers:
Bo Brinkman, Stacy Brinkman
Description:
Libraries use call numbers to organize their books and enable patrons to
locate them. To keep the books in order, library workers conduct a
time-consuming and tedious task called "shelf-reading": workers look at the
call number on the spine of each book, one at a time, to make sure the books
are in the correct places. ShelvAR is an augmented reality shelf-reading
system for smartphones that reduces time spent, increases accuracy, and
produces an inventory of the books on the shelves as a byproduct.
Shelf-reading requires rapid acquisition of many targets (books).
Unlike many target acquisition tasks considered in the AR literature, the
user is not trying to select a single target from among many. Instead, the
user is trying to scan all of the targets, and must be able to easily
double-check that none were missed. Our goal is to explore the usability of
augmented reality applications for this type of "multiple target acquisition"
task. We present the results of a pilot study on the effectiveness of
ShelvAR. We demonstrate that, when using ShelvAR, individuals with no library
experience are as fast and accurate at the shelf-reading task as experienced
library workers.
Are HMDs the Better HUDs?
Organizers:
Felix Lauber, Andreas Butz
Description:
Head-mounted displays (HMDs) have the potential to overcome some of the
technological limitations of currently existing automotive head-up displays
(HUDs), such as the limited field of view and the restrictive boundaries of
the windshield. In an initial study we evaluated the use of HMDs in cars by
means of a typical HUD visualization, using a HUD as baseline output
technology. We found no significant differences in terms of driving
performance, physical discomfort, or visual distraction. User statements
revealed several advantages and drawbacks of the two output technologies
beyond technological maturity and ergonomics. We hope these results will
inspire researchers and application developers, and might even lead to novel
HMD visualization approaches.
Augmented Reality Driving Supported by Vehicular Ad Hoc Networking
Organizers:
Michel Ferreira, Pedro Gomes, Michelle Kruger Silvéria, Fausto Vieira
Description:
The confined space of a car and the configuration of its controls and
displays towards the driver offer significant advantages for Augmented
Reality (AR) systems in terms of the immersion level provided to the user. In
addition, the inherent mobility and virtually unlimited power autonomy
transform cars into perfect mobile computing platforms. However, the limited
network connectivity that is currently available in automobiles leads to the
design of Advanced Driver Assistance Systems (ADAS) that create AR objects
based only on the information generated by on-board sensors, stored maps and
databases, and eventually high-latency online content for Internet-enabled
vehicles. By combining the new paradigm of Vehicular Ad Hoc Networking
(VANET) with AR human-machine interfaces, we show that it is possible to
design novel cooperative ADAS that base the creation of AR content on
information collected from neighbouring vehicles or roadside infrastructure.
We provide a prototype implementation of a visual AR system that can
significantly improve the driving experience.
Augmenting Markerless Complex 3D Objects By Combining Geometrical and Color Edge Information
Organizers:
Antoine Petit, Eric Marchand, Keyvan Kanani
Description:
This paper presents a method for augmenting a markerless 3D object with a
complex shape. It relies on a model-based tracker which
takes advantage of GPU acceleration and 3D rendering in order to handle the
complete 3D model, whose sharp edges are efficiently extracted. In the pose
estimation step, we propose to robustly combine geometrical and color
edge-based features in the nonlinear minimization process, and to integrate
multiple hypotheses in the geometrical edge-based registration phase. Our
tracking method shows promising results for augmented reality applications,
with a Kinect-based reconstructed 3D model.
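As a rough illustration of the kind of robust combined objective such a
model-based tracker minimizes (our own notation and weighting, not the
authors' exact formulation), the pose q can be estimated as

    \hat{q} = \arg\min_{q} \sum_i \rho\big(r^{geo}_i(q)\big) + \lambda \sum_j \rho\big(r^{col}_j(q)\big)

where r^{geo}_i(q) are point-to-edge distances for the geometrical edge
features, r^{col}_j(q) are the corresponding residuals for the color edge
features, \rho is a robust M-estimator (e.g., Tukey's) that down-weights
outliers, and \lambda balances the two feature types.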
Content First - A Concept for Industrial Augmented Reality Maintenance Applications Using Mobile Devices
Organizers:
Timo Engelke, Jens Keil, Sabine Webel, Pavel Rojtberg, Folker Wientapper, Ulrich Bockholt
Description:
Although AR has a long history in the area of maintenance and service-support
in industry, there still is a lack of lightweight, yet practical solutions
for handheld AR systems in everyday workflows. Attempts to support complex
maintenance tasks with AR still miss reliable tracking techniques, simple
ways to be integrated into existing maintenance environments, and practical
authoring solutions, which minimize costs for specialized content generation.
We present a general, customisable application framework that employs AR and
VR techniques to support technicians in their daily tasks. In contrast to
other systems, we do not aim to replace existing support systems such as
traditional manuals. Instead, we integrate well-known AR techniques and novel
presentation techniques with existing instruction media. To this end,
practical authoring solutions are crucial; hence we present an application
development system based on web standards such as HTML, CSS, and X3D.
Design of an AR Marker for Cylindrical Surfaces
Organizers:
Asahi Suzuki, Yoshitsugu Manabe, Noriko Yata
Description:
This paper proposes an augmented reality marker that can be robustly detected
even on a cylindrical surface. The marker enables the surface normal
estimation of a cylindrical object to realize the presentation of appropriate
virtual information on the object. Conventional markers are difficult to
detect, and yield inaccurate surface normals, when the marker is occluded or
distorted in the image. Furthermore, it is difficult to
identify a feature on a cylindrical object on which to position a marker.
These problems are resolved by relying on the characteristic that a line
parallel to the central axis of the cylinder maintains linearity. In
addition, the surface normal is calculated by estimating the object's shape
using transformation matrices.
Diminished Reality Considering Background Structures
Organizers:
Norihiko Kawai, Tomokazu Sato, Naokazu Yokoya
Description:
This paper proposes a new diminished reality method for 3D scenes considering
background structures. Most conventional methods using image inpainting
assume that the background around a target object is almost planar. In this
study, the background structure is approximated by a combination of local
planes; perspective distortion of the texture is corrected and the search
area is limited, improving the quality of image inpainting. The temporal
coherence of the texture is preserved using the estimated structures and the
camera pose estimated by Visual-SLAM.
Fast and Automatic City-Scale Environment Modeling for an Accurate 6DOF Vehicle Localization
Organizers:
Dorra Larnaout, Vincent Gay-Bellile, Steve Bourgeois, Benjamin Labbe, Michel Dhome
Description:
To provide high quality augmented reality service in a car navigation system,
accurate 6DoF localization is required. To ensure such accuracy, most current
vision-based solutions rely on an off-line large-scale modeling of the
environment. While existing solutions require expensive equipment and/or
prohibitive computation time, we propose in this paper a complete framework
that automatically builds an accurate city-scale database using only a
standard camera, a GPS, and a Geographic Information System (GIS). As
illustrated in the experiments, only a few minutes are required to model
large-scale environments. The resulting databases can then be used by a
localization algorithm for high-quality Augmented Reality experiences.
Further Stabilization of a Microlens-Array-Based Fiducial Marker
Organizers:
Hideyuki Tanaka, Yasushi Sumi, Yoshio Matsumoto
Description:
Fiducial markers (AR/visual markers) are still useful tools in many AR/MR
applications, but conventional markers have two fundamental problems in
orientation estimation. One is degradation of orientation accuracy under
frontal observation. The other is "pose ambiguity", where the estimated
orientation flips back and forth between two values. We previously developed
a novel marker, "ArrayMark", which uses a microlens array and solves the
former problem. Here we propose a practical solution to the latter problem by
improving the ArrayMark: we attach an additional reference point to detect
invalid estimates, and correct the marker orientation by inverting the zenith
angle of the line of sight. The improved marker enables stable pose
estimation from a single image without any filtering techniques, and the
method is applicable to conventional markers as well. We demonstrate the
effectiveness of this improvement on the pose-ambiguity problem.
Geometric Registration for Zoomable Camera Using Epipolar Constraint and Pre-calibrated Intrinsic Camera Parameter Change
Organizers:
Takafumi Taketomi, Kazuya Okada, Goshiro Yamamoto, Jun Miyazaki, Hirokazu Kato
Description:
In general, video see-through augmented reality (AR) cannot change the camera
zoom magnification because of the difficulty of dealing with changes in
intrinsic camera parameters. To enable camera zooming in AR, we propose a
novel method that simultaneously estimates intrinsic and extrinsic camera
parameters within an energy minimization framework. Our method is composed of
an offline and an online stage. In the offline stage, the change in intrinsic
camera parameters as a function of zoom value is calibrated. In the online
stage, intrinsic and extrinsic camera parameters are then estimated within
the energy minimization framework. Our method adds two energy terms to the
conventional marker-based camera parameter estimation: one penalizes
reprojection errors based on the epipolar constraint, and the other enforces
continuity of zoom values over time. Using this energy function, our method
can estimate accurate intrinsic and extrinsic camera parameters; in an
experiment, we confirmed that it achieves accurate camera parameter
estimation during camera zooming.
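To make the structure of such an objective concrete, here is a minimal sketch
of an energy of this form (our own notation and weights; the paper's exact
terms may differ):

    E(R, t, f) = E_{marker}(R, t, f) + \lambda_1 E_{epi}(R, t, f) + \lambda_2 (f - f_{prev})^2

where (R, t) is the camera pose, f the zoom-dependent focal length obtained
from the offline calibration, E_{marker} the conventional marker reprojection
error, E_{epi} a reprojection error of feature correspondences with respect
to the epipolar geometry of consecutive frames, and the last term enforces
temporal continuity of the zoom value.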
Giving Mobile Devices a SIXTH Sense: Introducing the SIXTH Middleware for Augmented Reality Applications
Organizers:
Abraham Campbell, Levent Görgü, David Lillis, Barnard Kroon, Dominic Carr, Gregory M.P. O'Hare
Description:
With the increasing availability of sensors within smartphones and within the
world at large, a question arises about how this sensor data can be leveraged
by Augmented Reality (AR) devices. AR devices have traditionally been limited
by the capability of a given device's unique set of sensors. Connecting
sensors from multiple devices using a Sensor Web could address this problem.
By leveraging this Sensor Web, existing AR environments could be improved and
new scenarios made possible, including devices that previously could not have
been used as part of an AR environment. This paper proposes the use of SIXTH:
a middleware designed to generate a Sensor Web, which allows a device to
leverage heterogeneous external sensors within its environment to facilitate
the creation of richer AR experiences. We present a worst-case scenario in
which the chosen device is a see-through, Android-based head-mounted display
with no access to sensors; through a Sensor Web created with SIXTH, it is
transformed into an AR device that can sense its environment.
In-Situ Interactive Modeling Using a Single-Point Laser Rangefinder Coupled with a New Hybrid Orientation Tracker
Organizers:
Christel Léonet, Gilles Simon, Marie-Odile Berger
In-Situ Lighting and Reflectance Estimation for Indoor AR Systems
Organizers:
Tomohiro Mashita, Hiroyuki Yasuhara, Alexander Plopski, Kiyoshi Kiyokawa, Haruo Takemura
Description:
We introduce an in-situ lighting and reflectance estimation method that does
not require specific light probes and/or preliminary scanning. Our method
uses images taken from multiple viewpoints while data accumulation and
lighting and reflectance estimations run in the background of the primary AR
system. As a result, our method requires little manual effort for image
collection, because it consists primarily of image processing and
optimization. Lighting directions and initial optimization values are first
estimated via image processing; the full parameters are then obtained by
optimizing over the differences between real images. The system always uses
the current best parameters, because parameter estimation and input-image
updates run independently.
Interactive Exploration of Augmented Aerial Scenes with Free-Viewpoint Image Generation from Pre-Rendered Images
Organizers:
Fumio Okura, Masayuki Kanbara, Naokazu Yokoya
Description:
This paper proposes a framework that supports the photorealistic
superimposition of virtual objects onto real scenes obtained by
free-viewpoint image generation, which enables users to freely change their
viewpoints in a virtualized real world constructed from pre-recorded images.
The framework combines the offline rendering of virtual objects and
the free-viewpoint image generation to take advantage of the higher quality
of offline rendering without the additional computational cost of online
processing; i.e., it incurs only the cost of the online free-viewpoint image
generation, which is simplified by pre-generating structured viewpoints.
Based on the proposed framework, we develop a practical application that
superimposes lost buildings of a historical relic into a virtualized
environment using omnidirectional images captured from the sky, thereby
allowing users to change their viewpoint on a two-dimensional 400 m x 400 m
plane using viewpoints arranged on a 20 m x 20 m grid.
Kinect for Interactive AR Anatomy Learning
Organizers:
Meng Ma, Pascal Fallavollita, Tobias Blum, Ulrich Eck, Christian Sandor, Simon Weidert, Jens Waschke, Nassir Navab
Description:
Anatomy education is a challenging but crucial element in training medical
professionals, and also in the general education of pupils. Our research
group has previously developed a prototype of an Augmented Reality (AR) magic
mirror which allows intuitive visualization of realistic anatomical
information on the user. However, the current overlay is imprecise as the
magic mirror depends on the skeleton output from Kinect. These imprecisions
affect the quality of education and learning. Hence, together with clinicians
we have defined bone landmarks which users can touch easily on their body
while standing in front of the sensor. We demonstrate that these landmarks
allow the proper deformation of medical data within the magic mirror and onto
the human body, resulting in a more precise augmentation.
KITE: Platform for Mobile Augmented Reality Gaming and Interaction using Magnetic Tracking and Depth Sensing
Organizers:
Thammathip Piumsomboon, Adrian Clark, Mark Billinghurst
Description:
In this paper, we describe KITE, a mobile Augmented Reality (AR) platform
that uses a magnetic tracker and a depth sensor to support games and
interaction development of a kind typically only available on desktop
systems. We achieve this using off-the-shelf hardware and efficient software
that can be easily assembled and executed. We demonstrate four possible
modalities based
on hand input to provide a platform that game and interaction designers can
use to explore new possibilities for gaming in AR.
Panoramic Mapping on a Mobile Phone GPU
Organizers:
Georg Reinisch, Clemens Arth, Dieter Schmalstieg
Description:
Creating panoramic images in real time is an expensive operation for mobile
devices. This paper focuses on the mapping of individual pixels into the
panoramic image, one of the most time-consuming parts. The pixel-mapping
process is moved from the Central Processing Unit (CPU) to the Graphics
Processing Unit (GPU): because the projected pixels are independent of one
another, OpenGL shaders can perform this operation very efficiently. We
propose a shader-based mapping approach and compare it with an existing
solution. The application is implemented for Android phones and runs fluently
on current-generation devices.
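As a rough illustration of the per-pixel computation being moved to the GPU,
here is a minimal CPU-side sketch in Python, assuming a pure-rotation camera
model and an equirectangular panorama (the paper's actual projection model
may differ):

    import numpy as np

    def map_pixel_to_panorama(u, v, K, R, pano_w, pano_h):
        # Back-project the camera pixel (u, v) to a viewing ray.
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
        # Rotate the ray into the fixed panorama frame (pure-rotation model).
        d = R @ ray
        d /= np.linalg.norm(d)
        # Convert the direction to longitude/latitude angles.
        lon = np.arctan2(d[0], d[2])   # in [-pi, pi]
        lat = np.arcsin(d[1])          # in [-pi/2, pi/2]
        # Scale the angles to panorama pixel coordinates.
        x = (lon / (2.0 * np.pi) + 0.5) * pano_w
        y = (lat / np.pi + 0.5) * pano_h
        return x, y

Because each mapped pixel depends only on its own ray, the computation
parallelizes trivially, which is exactly the independence property that makes
a fragment-shader implementation efficient.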
Passive Deformable Haptic Glove to Support 3D Interactions in Mobile Augmented Reality Environments
Organizers:
Thuong N Hoang, Ross T Smith, Bruce H Thomas
Description:
We present a passive deformable haptic (PDH) glove to enhance mobile
immersive augmented reality manipulation with a sense of computer-captured
touch, responding to objects in the physical environment. We extend our
existing pinch glove design with a Digital Foam sensor, placed under the palm
of the hand. The novel glove input device supports a range of
touch-activated, precise, direct manipulation modeling techniques with
tactile feedback including hole-punching, trench cutting, and chamfer
creation. The PDH glove helps improve a user's task performance time,
decrease error rate and erroneous hand movements, and reduce fatigue.
Photo-Shoot Localization of a Mobile Camera Based on Registered Frame Data of Virtualized Reality Models
Organizers:
Koji Makita, Jun Nishida, Tomoya Ishikawa, Takashi Okuma, Masakatsu Kourogi, Thomas Vincent, Laurence Nigay, Jun Yamashita, Hideaki Kuzuoka, Takeshi Kurata
Description:
This paper presents a study of a method for estimating the position and
orientation of a photo-shoot in indoor environments for augmented reality
applications. Our proposed localization method is based on registered frame
data of virtualized reality models, which are photos with known photo-shoot
positions and orientations, and depth data. Because registered frame data are
a by-product of the modeling process, no additional work is needed to create
them specifically for localization. In the method, a photo taken by a mobile
camera is compared against the registered frame data for localization. Since
registered frame data are linked with photo-shoot position, orientation, and
depth data, the 3D coordinates of each pixel in a registered frame are
available. We conducted comparative experiments employing five estimation
techniques.
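As a sketch of the per-pixel lifting this relies on (our own notation,
assuming a pinhole camera with intrinsics K and a camera-to-world pose
(R, t); the authors' exact formulation may differ):

    import numpy as np

    def pixel_to_world(u, v, depth, K, R, t):
        # Back-project pixel (u, v) to a 3D point in the camera frame
        # at the given metric depth.
        x_cam = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
        # Transform into world coordinates using the registered
        # photo-shoot pose: X_world = R @ X_cam + t.
        return R @ x_cam + t

Pixels lifted this way turn 2D matches between the query photo and a
registered frame into 2D-3D correspondences that a standard pose solver can
use.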
Poor Man's SimulCam: Real-Time And Effortless MatchMoving
Organizers:
Markéta Dubská, István Szentandrási, Michal Zachariáš, Adam Herout
Description:
In this article, we propose an instant matchmoving solution for green screen.
It uses a recent technique of planar uniform marker fields. Marker fields are
an extension of planar markers used in augmented reality, offering better
reliability and performance suitable for our task: tolerance to occlusion,
speed of detection, and use of arbitrary low-contrast colors. We show that
marker fields of shades of green (or blue or other color) can be used to
obtain an instant and effortless camera pose estimation. We provide exemplar
applications of the presented technique: virtual camera/simulcam and live
storyboarding or shot prototyping. The matchmoving technique based on marker
fields of shades of green is very computationally efficient: our measurements
show that matchmoving preview and live storyboard editing and recording can
be easily done on today's ultramobile devices. Our
technique is thus available to anyone at low cost and with easy setup,
opening space for new levels of filmmakers' creative expression.
Psychophysical Exploration of Stereoscopic Pseudo-Transparency
Organizers:
Mai Otsuki, Paul Milgram
Description:
We report an experiment related to perceiving (virtual) objects in the
vicinity of (real) surfaces when using stereoscopic augmented reality
displays. In particular, our goal was to explore the effect of various visual
surface features on both perception of object location and perception of
surface transparency. Surface features were varied using random dot patterns
on a simulated real object surface: we manipulated dot size, dot density, and
whether objects placed behind the surface were partially occluded by it.
See-through Window vs. Magic Mirror: A Comparison in Supporting Visual-Motor Tasks
Organizers:
Zhen Bai, Alan Blackwell
Description:
There are two alternative display metaphors for Augmented Reality (AR)
screens: a see-through window or a magic mirror. Commonly used by
task-support AR applications, the see-through display has not been compared
with the mirror display in terms of users' task performance, even though the
'mirror' hardware is more accessible to general users. We conducted a novel
experiment to compare participants' performance when following object
rotation cues with the two display metaphors. Results show that participants'
overall performance under the mirror view was comparable to
the see-through view, which indicates that the augmented mirror display may
be a promising alternative to the window display for AR applications which
guide moderately complex three-dimensional manipulations with physical
objects.
Study of Augmented Gesture Communication Cues and View Sharing in Remote Collaboration
Organizers:
Seungwon Kim, Gun Lee, Nobuchika Sakata, Andreas Duenser, Elina Vartiainen, Mark Billinghurst
Description:
In this research, we explore how different types of augmented gesture
communication cues can be used under different view sharing techniques in a
remote collaboration system. In a pilot study, we compared four conditions:
(1) Pointers on Still Image, (2) Pointers on Live Video, (3) Annotation on
Still Image, and (4) Annotation on Live Video. The study yielded three
findings. First, users collaborate more efficiently using annotation
cues than pointer cues for communicating object position and orientation
information. Second, live video becomes more important when quick feedback is
needed. Third, the type of gesture cue has more influence on performance and
user preference than the type of view sharing method.
Subtle Cueing for Visual Search in Head-Tracked Head Worn Displays
Organizers:
Weiquan Lu, Dan Feng, Steven Feiner, Qi Zhao, Henry Been-Lirn Duh
Description:
Goal-oriented visual search in augmented reality can be facilitated by using
visual cues to call attention to a target. However, traditional use of
explicit cues can degrade visual search performance due to scene distortion,
occlusion and addition of visual clutter. In contrast, Subtle Cueing has been
previously proposed as an alternative to explicit cueing, but little is known
about how well it works for head-tracked head worn displays (HWDs). We
investigated the effect of Subtle Cueing for head-tracked HWDs, using visual
search research methods in simulated augmented reality environments. Our user
study found that Subtle Cueing improves visual search
performance, and serves as a feasible cueing mechanism for AR environments
using HWDs.
Third Person Perspective Augmented Reality for High Accuracy Applications
Organizers:
Stéphane Côté, Philippe Trudel
Description:
We propose a six-degree-of-freedom augmentation system aimed at meeting the
high accuracy requirements of engineering tasks. A stationary panoramic video
camera captures a stream that is augmented by a portable computer. A handheld
tablet device located in the same area broadcasts its instantaneous
orientation, and receives the augmented view in the corresponding
orientation, in real time. The panoramic camera can also be moved to other
locations and simultaneously tracked by the system, providing 6 degrees of
freedom augmentation. This gives the user a third person perspective
augmentation, which is very precise and potentially more accurate than
handheld augmentation.
Towards Intelligent View Management: A Study of Manual Text Placement Tendencies in Mobile Environments Using Video See-through Displays
Organizers:
Jason Orlosky, Kiyoshi Kiyokawa, Haruo Takemura
Description:
When viewing content in a see-through head-mounted display (HMD), displaying
readable information is still difficult when text is overlaid onto a changing
background or a lit surface. Moving text or content to a more
appropriate place on the screen through automation or intelligent algorithms
is one viable solution to this kind of issue. However, many of these
algorithms fail to act as a human would when placing text in a more
appropriate location in real time. In order to improve these text and view
management algorithms, we report the results and analysis of an experiment
designed to evaluate user tendencies when placing virtual text in the real
world through an HMD. In the experiment, 20 users manually overlaid text in
real time onto 4 different videos taken from the first-person perspective of
a pedestrian. We find that users tend to place overlaid text in locations
near the center of the viewing field,
gravitating towards a point just below the horizon. Common locations for text
overlay such as walls, shaded areas, and pavement are classified and
discussed.
User Attention Oriented Augmented Reality on Documents with Document Dependent Dynamic Overlay
Organizers:
Takumi Toyama, Wakana Suzuki, Andreas Dengel, Koichi Kise
Description:
When we read a document of any kind (scientific papers, novels, etc.), we
often encounter situations where the document alone gives too little
information to comprehend what the author(s) would like to convey. In this
paper, we demonstrate how the combination of a wearable eye tracker, a
see-through head-mounted display (HMD) and an image-based document retrieval
engine enhances people's reading experiences. Using our proposed system, the
reader can receive supportive information in the see-through HMD when
desired. A wearable eye tracker and a document retrieval engine are used to
detect which line of the document the reader is reading. We propose a method
to detect the reader's attention on a word in the document being read, in
order to present information at a preferable moment. Furthermore, we also
propose a
method to project a point in the document to a point on the HMD screen by
calculating the pose of the document in the camera image. This projection
enables the system to dynamically overlay the information in an augmented
view on the line being read. The results from the user study and the
experiments show the potential of the proposed system in a practical use
case.
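As a rough illustration of one way such a document-to-screen projection can
be computed for a locally planar page (a minimal sketch using OpenCV and a
plane homography; the helper and its inputs are our own illustration, not the
authors' implementation):

    import numpy as np
    import cv2

    def project_document_point(pt_doc, doc_pts, cam_pts):
        # Fit a homography from matched keypoints:
        # document image -> camera image.
        H, _ = cv2.findHomography(doc_pts, cam_pts, cv2.RANSAC, 5.0)
        # Map the document point through the homography.
        src = np.array([[pt_doc]], dtype=np.float32)   # shape (1, 1, 2)
        dst = cv2.perspectiveTransform(src, H)
        return tuple(dst[0, 0])                        # (x, y) in camera image

A separate display calibration would then map camera-image coordinates to
see-through HMD screen coordinates.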
User Awareness of Tracking Uncertainties in AR Navigation Scenarios
Organizers:
Frieder Pankratz, Andreas Dippon, Tayfur Coskun, Gudrun Klinker
Description:
Current Augmented Reality navigation applications for pedestrians usually do
not visualize tracking errors. However, tracking uncertainties can accumulate
so that the user is presented with a distorted impression of navigation
accuracy. To increase the awareness of users about potential imperfections of
the tracking at a given time, we alter the visualization of the navigation
system. We developed four combined visualization and error visualization
concepts and conducted a pilot study in a controlled Mixed Reality
environment. We found that, while error visualization has the potential to
improve AR navigation systems, it is difficult to find suitable
visualizations that are correctly understood by users.
Social Program