References:
Combining Multiple Depth Cameras and Projectors for Interactions On, Above, and Between Surfaces by Andrew D. Wilson and Hrvoje Benko.
Published in UIST '10: Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology.
Author Bio:
Andrew Wilson got his undergraduate degree from Cornell University and his PhD from MIT.
Hrvoje Benko is a researcher in the Adaptive Systems and Interaction group at Microsoft Research.
Summary:
Hypothesis:
How can touch-style interaction and visualization be extended beyond flat surfaces into the depth of the room around the user?
Methods:
The prototype was displayed at a convention for three days, during which the authors observed many users and their interactions with the device.
Results:
Six people seemed to be the cap for simultaneous users. With more users, problems arose, such as people occluding one another so that the cameras could not recognize their gestures. Holding virtual objects also proved difficult for some users.
Contents:
LightSpace attempts to expand existing touch-interaction technology into a three-dimensional environment. "Everything is a surface" is a recurring theme, along with treating the entire room as a computer and noting that the body can, in fact, be a display. 2D images can be projected into the 3D space and interacted with by users.
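The core plumbing behind this kind of system is calibrating the depth cameras and projectors into one coordinate frame, so a 3D point the camera sees can be lit up by the right projector pixel. As a rough illustration (not the authors' actual code or calibration data), here is a minimal pinhole-model sketch, with all matrix values being made-up placeholders:

```python
import numpy as np

# Hypothetical projector intrinsics: focal lengths and principal point.
# Real systems would obtain these from a calibration procedure.
K = np.array([[1400.0,    0.0, 640.0],
              [   0.0, 1400.0, 360.0],
              [   0.0,    0.0,   1.0]])

# Hypothetical extrinsics: rotation and translation taking points from
# the depth camera's frame to the projector's frame.
R = np.eye(3)                  # assume aligned axes for simplicity
t = np.array([0.2, 0.0, 0.0])  # projector 20 cm to the camera's right

def project(point_cam):
    """Map a 3D point (meters, camera frame) to projector pixel coords."""
    p = R @ point_cam + t      # move the point into the projector frame
    u, v, w = K @ p            # apply the pinhole projection
    return u / w, v / w        # perspective divide -> pixel coordinates

# A point 2 m in front of the camera maps to a pixel offset from center
# by the projector's horizontal displacement.
print(project(np.array([0.0, 0.0, 2.0])))  # → (780.0, 360.0)
```

With every camera and projector calibrated this way, a surface detected anywhere in the room (a wall, a table, or a user's hand) can receive correctly positioned imagery, which is what makes "everything is a surface" workable.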
Discussion:
I thought this was really neat. Any type of display where you interact directly with computer-created objects has always been fascinating to me. I believe the system they created was, for the most part, successful: they were able to create an environment in which people could successfully interact with objects. The fact that more users means more problems with obscured cameras doesn't seem to be that big of an issue to me; it can be solved by simply adding more cameras, repositioning existing ones, and things of that nature. Overall, I would love to see something like this become more heavily researched.