Session : Works In Progress Talks
Date & Time : October 04 01:30 pm - 03:30 pm
Location : H2-02 Hetzel Building Main Lecture Theatre
Chair :
Papers :
Using a HHD with a HMD for Mobile AR Interaction
Authors: Rahul Budhiraja, Gun A. Lee, Mark Billinghurst
Abstract :
Mobile Augmented Reality (AR) applications are typically deployed either on head mounted displays (HMD) or handheld displays (HHD). This paper explores novel interaction techniques for a combined HHD-HMD hybrid system that builds on the strengths of each type of device. We use the HMD for viewing AR content and a touch screen HHD for interacting with the content. A prototype system was developed and a user study was conducted comparing four interaction techniques for selection tasks.
3D Interactions with a Passive Deformable Haptic Glove
Authors: Thuong N. Hoang, Ross Smith, Bruce H. Thomas
Abstract :
This paper explores enhancing mobile immersive augmented reality
manipulations by providing a sense of computer-captured touch through the use
of a passive deformable haptic glove that responds to objects in the physical
environment. The glove extends our existing pinch glove design with a Digital
Foam sensor that is placed under the palm of the hand. The novel glove input
device supports a range of touch-activated, precise, direct manipulation
modeling techniques with tactile feedback including hole cutting, trench
cutting, and chamfer creation. A user evaluation study comparing an image
plane approach to our passive deformable haptic glove showed that the glove
improves a user’s task performance time, decreases error rate and erroneous
hand movements, and reduces fatigue.
Ego- and Exocentric interaction for mobile AR conferencing
Authors: Timo Bleeker, Gun Lee, Mark Billinghurst
Abstract :
In this research we explore how a handheld display (HHD) can be used to
provide input into an Augmented Reality (AR) conferencing application shown
on a head mounted display (HMD). Although AR has successfully been used for
many collaborative applications, there has been little research on using HHD
and HMD together to enhance remote conferencing. This research investigates
two different HHD interfaces and methods for supporting file sharing in an AR
conferencing application. A formal evaluation compared four different
conditions and found that an exocentric view combined with visual cues for
requesting content produced the best performance. The results were used to
create a set of basic design guidelines for future research and application
development.
CARMa: Content Augmented Reality Marker
Authors: Mohammed Hossny, Mustafa Hossny, Saeid Nahavandi
Abstract :
Current marker-based augmented reality (AR) rendering has demonstrated
good results for online and special-purpose applications such as
computer-assisted tasks and virtual training. However, it fails to deliver a
solution for offline and generic applications such as augmented books,
newspapers, and scientific articles. These applications feature so many
markers that they impose a serious challenge on the recognition module. This
paper proposes a novel design for augmented reality markers. The proposed
marker design employs multi-view orthographic projection to derive dense
depth maps and relies on splat rendering for visualisation. The main
objective is to interpret the marker rather than recognise it. The proposed
marker design stores six depth map projections of the 3D model, along with
their coloured textures, in the marker.
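The marker payload described above maps naturally onto a simple container.
The following is a minimal, purely illustrative sketch; the field names,
array shapes, and the choice of a Python dataclass are assumptions, not the
authors' encoding:

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class CARMaPayload:
        # Six orthographic depth maps of the 3D model (one per face of
        # the bounding cube), each an (H, W) array.
        depth_maps: list[np.ndarray]
        # The matching colour textures, each an (H, W, 3) array.
        textures: list[np.ndarray]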
Psychophysical Exploration of Stereoscopic Pseudo-Transparency
Authors: Mai Otsuki, Paul Milgram
Abstract :
We report a two-part experiment related to perceiving (virtual) objects in
the vicinity of (real) surfaces when using stereoscopic augmented reality
displays. In particular, our goal was to explore the effect of various visual
surface features on both perception of object location and perception of
surface transparency. Surface features were varied using random dot patterns
on a simulated real object surface, by manipulating dot size, dot density,
and whether or not objects placed behind the surface were partially occluded
by it.
Adapting Ray Tracing to Spatial Augmented Reality
Authors: Markus Broecker, Bruce Thomas, Ross Smith
Abstract :
Ray tracing is an elegant and intuitive image generation method. The
introduction of GPU-accelerated ray tracing and corresponding software
frameworks makes this rendering technique a viable option for Augmented
Reality applications. Spatial Augmented Reality employs projectors to
illuminate physical models and is used in fields that require photorealism,
such as design and prototyping. Ray tracing can be used to great effect in
this Augmented Reality environment to create scenes of high visual fidelity.
However, the peculiarities of SAR systems require that core ray tracing
algorithms be adapted to this new rendering environment. This paper
highlights the problems involved in using ray tracing in a SAR environment
and provides solutions to overcome them. In particular, the following issues
are addressed: ray generation, hybrid rendering and view-dependent rendering.
Augmented Reality Image Generation with Virtualized Real Objects Using View-dependent Texture and Geometry
Authors: Yuta Nakashima, Yusuke Uno, Norihiko Kawai, Tomokazu Sato, Naokazu Yokoya
Abstract :
Augmented reality (AR) images with virtualized real objects can be used for
various applications. However, such AR image generation requires hand-crafted
3D models of those objects, which are usually not available. This paper
proposes a view-dependent texture (VDT)- and view-dependent geometry
(VDG)-based method for generating high quality AR images, which uses 3D
models automatically reconstructed from multiple images. Since the quality of
reconstructed 3D models is usually insufficient, the proposed method inflates
the objects in the depth map as VDG to repair chipped object boundaries and
assigns a color to each pixel based on VDT to reproduce the detail of the
objects. Background pixel exposure due to inflation is suppressed by the use
of the foreground region extracted from the input images. Our experimental
results have demonstrated that the proposed method can successfully reduce
these visual artifacts.
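The boundary-repair step described above amounts to growing object
silhouettes in the depth map before texturing. A minimal sketch of one way
such an inflation could be done, using grey-scale dilation (the radius and
the depth encoding are assumptions, not the authors' method):

    from scipy.ndimage import grey_dilation

    def inflate_depth(depth, radius=3):
        # Assumes foreground objects carry larger values (e.g. inverse
        # depth), so dilation grows their silhouettes outward and covers
        # chipped boundaries of an imperfect reconstruction.
        size = 2 * radius + 1
        return grey_dilation(depth, size=(size, size))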
View Management for Driver Assistance in an HMD
Authors: Felix Lauber, Andreas Butz
Abstract :
Head-mounted displays (HMDs) have the potential to overcome some of the technological limitations of currently existing automotive head-up displays (HUDs), such as the limited field of view and the restrictive boundaries of the windshield. However, in a formative study, we identified other, partially known problems with HMDs regarding content stability and occlusion. As a counter-measure we propose a novel layout mechanism for HMD visualization, which, on the one hand, benefits from the unique characteristics of HMDs and, on the other, combines the advantages of head-stabilized and cockpit-stabilized content. By subdividing the HMD’s field of view into different slots to which the content is dynamically assigned depending on the user’s head rotation, we ensure that the driver’s vision is effectively augmented in every possible direction.
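As a purely illustrative sketch of the slot idea, content could be
reassigned whenever head yaw crosses a threshold; the slot layout, yaw
thresholds, and content names below are assumptions, not the paper's layout
mechanism:

    def assign_slots(head_yaw_deg):
        # Map content items to field-of-view slots based on head yaw,
        # keeping critical content in the slot facing the driving scene.
        if head_yaw_deg < -15.0:      # driver looks left
            return {"speed": "right", "warning": "center"}
        if head_yaw_deg > 15.0:       # driver looks right
            return {"speed": "left", "warning": "center"}
        return {"speed": "center", "warning": "top"}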
SIXTH middleware for Sensor Web enabled AR Applications
Authors: Abraham G. Campbell, Levent Görgü, Barnard Kroon, Dominic Carr, David Lillis, Gregory M.P. O’Hare
Abstract :
We increasingly live in a world where sensors have become truly ubiquitous in
nature. Many of these sensors are an integral part of devices such as
smartphones, which contain sufficient sensors to allow for their use as
Augmented Reality (AR) devices. This AR experience is limited by the
precision and functionality of an individual device's sensors and its
capacity to process the sensor data into a usable form. This paper discusses
current work on a mobile version of the SIXTH middleware, which allows for
the creation of Sensor Web enabled AR applications, including the creation of
a sensor web spanning Android and non-Android devices. This has led to
several small demonstrators, which are discussed in this work-in-progress
paper. Future work aims to integrate additional devices so as to explore new
abilities, such as leveraging additional properties of those devices.
Motion Capturing Empowered Interaction with a Virtual Agent in an Augmented Reality Environment
Authors: Ionut Damian, Felix Kistler, Mohammad Obaid, René Bühling, Mark Billinghurst, Elisabeth André
Abstract :
We present an Augmented Reality (AR) system where we immerse the user's whole
body in the virtual scene using a motion capturing (MoCap) suit. The goal is
to allow for seamless interaction with the virtual content within the AR
environment. We describe an evaluation study of a prototype application
featuring an interactive scenario with a virtual agent. The scenario contains
two conditions: in one, the agent has access to the full tracking data of the
MoCap suit and therefore is aware of the exact actions of the user, while in
the second condition, the agent does not get this information. We then report
and discuss the differences we were able to detect regarding the users'
perception of the interaction with the agent and give future research
directions.
Blur with Depth: A Depth Cue Method Based on Blur Effect in Augmented Reality
Authors: Xueting Lin, Takefumi Ogawa
Abstract :
In this paper, a depth cue method based on blur effect in augmented reality
is proposed. Unlike previous approaches, the proposed method offers
an algorithm which estimates the blur effect in the whole scene based on the
spatial information in the real world and the intrinsic parameters of the
camera. We implemented a prototype using the proposed method and conducted
two user tests on how the users might perceive the blur effect rendered by
different blurring methods. The test settings are introduced and the results
are discussed. The test results show that our blur estimation method is
acceptable for moving virtual objects. We also find that users might prefer
a stronger blur contrast than blur that is strictly consistent with the
background.
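The abstract does not give the blur estimation formula. As a rough,
illustrative sketch of how a blur level could be derived from camera
intrinsics and object depth, a thin-lens circle-of-confusion model might
look as follows (all parameter values, names, and the sigma scaling are
assumptions, not the authors' algorithm):

    def coc_diameter_px(obj_dist_m, focus_dist_m, focal_len_mm=35.0,
                        f_number=2.8, pixel_pitch_mm=0.005):
        # Thin-lens circle-of-confusion diameter, in pixels, for an
        # object at obj_dist_m with the camera focused at focus_dist_m.
        # The default intrinsics are placeholder assumptions.
        f = focal_len_mm / 1000.0       # focal length in metres
        aperture = f / f_number         # aperture diameter in metres
        s, sf = obj_dist_m, focus_dist_m
        coc_m = aperture * f * abs(s - sf) / (s * (sf - f))
        return coc_m / (pixel_pitch_mm / 1000.0)

    # A Gaussian blur applied to the virtual object could then use a
    # sigma proportional to the circle of confusion, e.g.:
    sigma = 0.5 * coc_diameter_px(obj_dist_m=3.0, focus_dist_m=1.5)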
A Pilot Study for Augmented Reality Supported Procedure Guidance to Operate Payload Racks On-Board the International Space Station
Authors: Daniela Markov-Vetter, Oliver Staadt
Abstract :
We present our current state in developing and testing of Augmented Reality
supported spaceflight procedures for intra-vehicular payload activities. Our
vision is to enable the ground team and the flight crew to easily author and
operate AR guidelines without programming or AR knowledge. For visualization
of the procedural instructions using an HMD, 3D registered visual aids are
overlaid onto the payload model and operated by additional voice control.
Embedded informational resources (e.g., images and videos) are provided
through a mobile tangible user interface. In a pilot study performed at the
ESA European Astronaut Centre by application domain experts, we evaluated
performance, workload and acceptance by comparing our AR system with the
conventional method of displaying PDF documents of the procedure.
Comparing Pointing and Drawing for Remote Collaboration
Authors: Seungwon Kim, Gun Lee, Nobuchika Sakata, Elina Vartiainen, Mark Billinghurst
Abstract :
In this research, we explore using pointing and drawing in a remote
collaboration system. Our application allows a local user with a tablet to
communicate with a remote expert on a desktop computer. We compared
performance in four conditions: (1) Pointers on Still Image, (2) Pointers on
Live Video, (3) Annotation on Still Image, and (4) Annotation on Live Video.
We found that using drawing annotations requires fewer inputs on the expert
side and imposes less cognitive load on the local worker side. In a
follow-on study we compared conditions (2) and (4) using a more complicated
task. We found that pointing input requires good verbal
communication to be effective and that drawing annotations need to be erased
after completing each step of a task.
Region-based tracking using sequences of relevance measures
Authors: Sandy Martedi, Bruce Thomas, Hideo Saito
Abstract :
We present the preliminary results of our proposal: a region-based detection
and tracking method for arbitrary shapes. The method is designed to be
robust against orientation and scale changes as well as occlusions. In this
work, we study the effectiveness of sequences of shape descriptors for
matching purposes. We detect and track surfaces by matching sequences of
descriptors, so-called relevance measures, with their correspondences in the
database. First, we extract stable shapes as detection targets using the
Maximally Stable Extremal Regions (MSER) method. Keypoints on the stable
shapes are then extracted by simplifying the outlines of the stable regions.
The relevance measures, each computed from three keypoints, are then
composed into sequences that serve as descriptors. During runtime, the
sequences of relevance measures are extracted from the captured image and
matched with those in the database. When a particular region is matched with
one in the database, the orientation of the region is estimated and virtual
annotations can be superimposed. We apply this approach in an interactive
task support system that helps users create paper craft objects.
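The abstract does not spell out how a relevance measure is derived from a
keypoint triplet. One common definition, from Latecki and Lakämper's
discrete curve evolution, is sketched below as an assumption; the paper may
use a different formulation:

    import math

    def relevance(p_prev, p, p_next):
        # Relevance of keypoint p given its outline neighbours: the
        # turning angle at p weighted by the two segment lengths.
        l1 = math.hypot(p[0] - p_prev[0], p[1] - p_prev[1])
        l2 = math.hypot(p_next[0] - p[0], p_next[1] - p[1])
        a1 = math.atan2(p[1] - p_prev[1], p[0] - p_prev[0])
        a2 = math.atan2(p_next[1] - p[1], p_next[0] - p[0])
        beta = abs((a2 - a1 + math.pi) % (2 * math.pi) - math.pi)
        return beta * l1 * l2 / (l1 + l2)

    def descriptor(keypoints):
        # Sequence of relevance measures along a closed outline.
        n = len(keypoints)
        return [relevance(keypoints[i - 1], keypoints[i],
                          keypoints[(i + 1) % n]) for i in range(n)]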
Consider Your Clutter: Perception of Virtual Object Motion in AR
Authors: Vicente Ferrer, Yifan Yang, Alejandro Perdomo, John Quarles
Abstract :
Background motion and visual clutter are present in almost all augmented
reality applications. However, there is minimal prior work that has
investigated the effects that background motion and clutter (e.g., a busy
city street) can have on the perception of virtual object motion in augmented
reality. To investigate these issues, we conducted an experiment in which
participants’ perceptions of changes in overlaid virtual object velocity
were evaluated on a black background and a high clutter/motion background.
Results offer insights into the impact that background clutter and motion
have on perception in augmented reality.
Bare Hand Natural Interaction with Augmented Objects
Authors: Lucas Silva Figueiredo, Jorge Lindoso, Rafael Roberto, Veronica Teichrieb, Ronaldo Ferreira dos Anjos Filho, Edvar Vilar Neto, Manoela Silva
Abstract :
In this work in progress we address the problem of interacting with
augmented objects. A bare hand tracking technique is developed which,
combined with gesture recognition heuristics, enables interaction with
augmented objects in an intuitive way. The tracking algorithm uses a
flock-of-features approach that tracks both hands in real time. Interaction
occurs through the execution of grasp and release gestures. Physics
simulation and photorealistic rendering are added to the pipeline. This way,
the tool provides more coherent feedback, making the virtual objects look
and respond more like real ones. The pipeline was tested through specific
tasks designed to analyze its performance regarding ease of use, precision,
and response time.
A Projected Augmented Reality System for Remote Collaboration
Authors: Matthew Tait, Tony Tsai, Nobuchika Sakata, Mark Billinghurst, Elina Vartiainen
Abstract :
This paper describes an AR system for remote collaboration using a captured
3D model of the local user’s scene. In the system a remote user can
manipulate the scene independently of the view of the local user and add AR
annotations that appear projected into the real world. Results from a pilot
study and the design of a further full study are presented.
Towards Object Based Manipulation in Remote Guidance
Authors: Dulitha Ranatunga, Matt Adcock, David Feng, Bruce Thomas
Abstract :
This paper presents a method for using object-based manipulation and spatial
augmented reality for the purpose of remote guidance. Previous remote
guidance methods have typically not made use of any semantic information
about the physical properties of the environment and require the helper and
worker to provide context. Our new prototype system introduces a level of
abstraction to the remote expert, allowing them to directly specify the
object movements required of a local worker. We use 3D tracking to create a
hidden virtual reality scene, mirroring the real world, with which the remote
expert interacts while viewing a camera feed of the physical workspace. The
intended manipulations are then rendered to the local worker using Spatial
Augmented Reality (SAR). We report on the implementation of a functional
prototype that demonstrates an instance of this approach. We anticipate that
techniques such as the one we present will allow more efficient collaborative
remote guidance in a range of physical tasks.
Tangible Interaction Techniques To Support Asynchronous Collaboration
Authors: Andrew Irlitti, Stewart Von Itzstein, Leila Alem, Bruce Thomas
Abstract :
Industrial uses of Augmented Reality (AR) are growing; however, these uses
are consistently fashioned with an emphasis on consumption, delivering
additional information to workers to assist them in completing their jobs. A
promising alternative is to allow user data creation during the actual
process by the worker performing their duties. This not only allows
spatially located annotations to be produced, but also allows an AR scene to
be developed in situ and in real time. Tangible markers offer a physical
interface while also creating physical containers to allow for fluent
interactions. This form factor allows both attached and detached annotations,
whilst allowing the creation of an AR scene during the process. This
annotated scene will allow asynchronous collaboration to be conducted between
multiple stakeholders, both locally and remotely. In this paper we discuss
our reasoning behind such an approach, and present the current work on our
prototype created to test and validate our proposition.
Markerless 3D Gesture-based Interaction for Handheld Augmented Reality Interfaces
Authors: Huidong Bai, Lei Gao, Jihad El-Sana, Mark Billinghurst
Abstract :
Conventional 2D touch-based interaction methods for handheld Augmented
Reality (AR) cannot provide intuitive 3D interaction due to a lack of natural
gesture input with real-time depth information. The goal of this research is
to develop a natural interaction technique for manipulating virtual objects
in 3D space on handheld AR devices. We present a novel method that is based
on identifying the positions and movements of the user's fingertips, and
mapping these gestures onto corresponding manipulations of the virtual
objects in the AR scene. We conducted a user study to evaluate this method by
comparing it with a common touch-based interface under different AR
scenarios. The results indicate that although our method takes more time, it
is more natural and enjoyable to use.