Saturday, October 29, 2011

Reading #24: Gesture avatar: a technique for operating mobile user interfaces using gestures

References:
Gesture avatar: a technique for operating mobile user interfaces using gestures by Hao Lu and Yang Li.  
Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.

Author Bios:
Yang Li is a research scientist working for Google.
Hao Lu is a graduate student at the University of Washington.

Summary:
Hypothesis:
That users can draw gestures that act as avatars, enabling them to provide precise touch input to small targets on the screen.

Methods:
The authors developed a system called Gesture Avatar, which lets the user draw an avatar that is bound to an object on the screen. The first test they conducted had users select one letter out of a series by drawing an avatar. The second had users select a target on the screen. Both tests were also conducted with the user walking while attempting them.

Results:
Gesture Avatar was slower than its competitor, Shift, for large target sizes, but much faster for smaller sizes. Both systems sped up as target size increased, but Shift sped up at a faster rate.

Contents:
In this paper the researchers test the Gesture Avatar system. They have users select small targets on a mobile device's screen by drawing their own avatar and selecting through it.

Discussion:
I feel like this could be really useful. I have often had problems selecting small regions of my phone's screen, and something like this could really come in handy. It seems much more intuitive and easy than the current method of having to zoom in to select a link or object.

Sunday, October 23, 2011

Reading #23: User-defined Motion Gestures for Mobile Interaction

References
User-defined Motion Gestures for Mobile Interaction by Jaime Ruiz, Yang Li, Edward Lank.  

Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.


Author Bios:
Jaime Ruiz is a doctoral student at the University of Waterloo.
Yang Li is a research scientist at Google.
Edward Lank is an assistant professor at the University of Waterloo.


Summary:
Hypothesis:
That certain sets of motion gestures will be more natural for users than others.


Methods:
The subjects were asked to design their own sets of gestures to accomplish tasks. The researchers analyzed these gestures and selected several of them. The selected gestures were then given back to the subjects, who were asked to perform a set of tasks with them while the results were recorded.


Results:
Many of the subjects designed gestures that were the same, or at least very similar, for the same task. These gestures usually mimicked some real life motion associated with the task. Opposite tasks were performed in opposite directions, etc. 


Contents:
The paper starts by having the subjects design gestures to complete tasks. Several of these are then selected by the researchers. They then return these gestures to the subjects and ask them to perform tasks with them, recording the results.


Discussion:
This paper is basically why I am taking this class. Mobile devices are the current big thing, and work like this will greatly increase our productivity with them. Being able to create more user-friendly and intuitive gestures will help unveil many new uses for mobile devices and help streamline the uses we already have for them.

Reading #22: Mid-air Pan-and-Zoom on Wall-sized Displays

References
Mid-air pan-and-zoom on wall-sized displays by Mathieu Nancel, Julie Wagner, Emmanuel Pietriga, Olivier Chapuis, Wendy Mackay.  

Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.


Author Bios:
Mathieu Nancel is a PhD student at Université Paris-Sud XI.
Julie Wagner is a student at the In Situ lab in Paris.
Emmanuel Pietriga is a researcher for INRIA.
Olivier Chapuis is a researcher at LRI.
Wendy Mackay is a research director at INRIA.


Summary:
Hypothesis:
That, on large touch displays, two-handed gestures would be faster than one-handed gestures, smaller gestures would be preferred over larger ones, fingers would provide more accuracy than a whole hand, and circular motions would be preferred.


Methods:
The authors built a large touch display and tested different gestures on it. The pan-and-zoom test required users to move through several sets of rings while zooming in and out and panning. Variables such as target distance were varied throughout the experiment.


Results:
They found that two hands were in fact faster than one, although users seemed to prefer one-handed gestures. Linear and one-dimensional path control were faster than their counterparts, two-dimensional and circular control, which users said were hard to use.


Contents:
The paper records the results of a study in which subjects used different forms of touch input to operate a large wall-sized display. Results were measured by performance time as well as user feedback.


Discussion:
I really like touch displays, so I thought this paper was really neat. I hope to one day have a wall-sized touch display in my home, so I feel like this research is pretty relevant. Honestly though, I was surprised by how different it is to interact with such a large-scale touch display rather than, say, your cell phone or tablet. There are a ton of new challenges that I would not have thought of, and I found that really fascinating.

Thursday, October 20, 2011

Reading #21: Human model evaluation in interactive supervised learning

References
Human model evaluation in interactive supervised learning by Rebecca Fiebrink, Perry R. Cook, and Daniel Trueman. Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.


Author Bios:
Rebecca Fiebrink is an assistant professor in Computer Science at Princeton University. 
Perry R. Cook is a professor at Princeton University.
Daniel Trueman is a professional musician.


Summary:
Hypothesis:
Finding the criteria for a model that is most important to a user will help develop better interactive machine learning systems.


Methods:
Several subjects were studied while they worked with a machine learning system called the Wekinator, which lets subjects train the system on given input, usually gestures. The authors created three studies to test the system. The first had several composers use the system for a set period of time and then give feedback to the researchers. The second involved students from their 1st to 4th years, who were told to create two interfaces on the machine, one interaction based and the other duration based. The final study involved a cellist and teaching the machine to correctly track and record the motions of the bow.


Results:
Participants from the first study complained about the controls, commenting that they were confusing to use and not intuitive. Cross-validation was a feature used only in the latter two studies, and in those it was indicated to be of high importance. Even so, participants in all studies used direct evaluation rather than cross-validation most of the time.


Contents:
Here the authors observe how people interact with a machine learning system.  They discuss the different ways in which people work with the system, and which ones are most effective and why.


Discussion:
An interesting paper. I think something like this could definitely have cool applications down the line. Anything having to do with machine learning is something that I really feel can make a huge impact. Being able to delegate tasks to a machine could lead to some cool developments in machines assisting humans.

Tuesday, October 18, 2011

Reading #18: Biofeedback Game Design

References:
Biofeedback Game Design: Using Direct and Indirect Physiological Control to Enhance Game Interaction by Lennart E. Nacke, Michael Kalyn, Calvin Lough, and Regan L. Mandryk.


Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.


Author Bios:
Lennart E. Nacke is a professor at UOIT and holds a PhD in game development.
Michael Kalyn is a graduate student at the University of Saskatchewan.
Calvin Lough is a student at the University of Saskatchewan.
Regan L. Mandryk is an assistant professor at the University of Saskatchewan.


Summary:
Hypothesis:
That physiological input can help enhance a user's gaming experience and control.


Methods:
The first step toward testing users' interaction with a game is to create one, so the authors started by building a simple side-scrolling platformer and adding sensors to the equation. The sensors tested eye movement, electrical muscle activation, skin response, heart rate, breathing rate, and body temperature. The authors designed two new methods of input in addition to the standard Xbox controller.


Results:
Subjects actually preferred the two experimental control methods over the Xbox controller used by the control group. They stated that they liked best the sensors they could consciously control.


Contents:
The authors seek to establish a new gaming paradigm that uses physiological interaction with the user. They attempt to find which sensors are preferable to users, as well as how users react to learning new controls versus using ones that already exist.


Discussion:
As an avid video game player, I found this interesting. The ability to have another method of interacting with a game is intriguing, but I really don't feel like it would ever become anything more than a gimmick. I know personally that when I'm playing a video game I'm pretty focused on what I am doing. It just seems to me that any interference on the part of a sensor would ruin a lot of the immersion in the game and be kind of disruptive overall.

Reading #16: Classroom-Based Assistive Technology

References:
Classroom-Based Assistive Technology:  Collective Use of Interactive Visual Schedules by Students with Autism by Meg Cramer, Sen H. Hirano, Monica Tentori, Michael T. Yeganyan, Gillian R. Hayes.  Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.


Author Bios:
Meg Cramer is a graduate student at UC Irvine.
Sen Hirano is also a graduate student at UC Irvine.
Monica Tentori is a professor of computer science at UABC Mexico.
Michael T. Yeganyan is an Informatics researcher at UC Irvine.
Gillian R. Hayes is a professor in Informatics at UC Irvine.


Summary:
Hypothesis:
That vSked can offer advantages over other currently existing technologies in assisting students with autism.


Methods:
The first step was to ask teachers their opinion of the system; it was difficult to interview the actual students, as they did not have the best communication skills. The effectiveness of the whole system was judged on consistency, predictability, teacher awareness, and behavior. The system lets a student pick a goal or reward and work toward it. As students progress, teachers award them tokens that go toward achieving that goal or reward.


Results:
Overall the results were very positive. Student focus increased quite a bit, and their need for teacher assistance declined. Images that closely resembled the subject matter really helped students understand the content. The self-updating schedule helped students better keep track of their activities. Students also collaborated with each other, trying to see what other students were doing with their devices.


Contents:
The paper introduces a new technology called vSked, which aims to help students with autism have less trouble in a learning environment. The paper describes the various tests and results over a 5-week period. It also discusses the feedback from the teachers, which was overwhelmingly positive.


Discussion:
I thought this paper was really awesome. One of my favorite topics in computer science is using technology to help people with disabilities. The applications of a lot of the touch screen and computer technologies were, I think, really positive. They don't really develop any new technologies in the paper per se, but they adapt technologies that currently exist into a new form of device that hasn't really been seen before.

Monday, October 17, 2011

Reading #20 : The Aligned Rank Transform

References:
The Aligned Rank Transform for Nonparametric Factorial Analyses Using Only Anova Procedures by Jacob O. Wobbrock, Leah Findlater, Darren Gergle, and James J. Higgins.

Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.


Author Bios:
Jacob Wobbrock is an associate professor in the department of computer science at the University of Washington.
Leah Findlater is a researcher for the Information School.
Darren Gergle is an associate professor at Northwestern University.
James Higgins is a professor at Kansas State University.


Summary:
Hypothesis:
That the Aligned Rank Transform can be used to better analyze data from nonparametric tests using only standard ANOVA procedures.


Methods:
The four stages of the Aligned Rank Transform are as follows: computing residuals, computing estimated effects, computing aligned responses, and performing a full-factorial ANOVA on the ranked aligned responses.
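Those four stages read like a recipe, so here is a minimal, hypothetical Python sketch of the alignment-and-ranking steps for the main effect of one factor in a two-factor design. The function name, data layout, and simplified tie handling are my own, not from the paper; the actual method produces an aligned-and-ranked response for every main effect and interaction and then runs a standard full-factorial ANOVA on each set of ranks.

```python
# Hypothetical sketch of the Aligned Rank Transform's alignment stages,
# here aligning for the main effect of factor A in a two-factor design.
# Data layout and names are illustrative, not the paper's tooling.

def art_align_and_rank(data):
    """data maps (a_level, b_level) -> list of responses.
    Returns ranks of responses aligned for the main effect of A."""
    all_y = [y for ys in data.values() for y in ys]
    grand = sum(all_y) / len(all_y)

    # Stage 1: residuals are computed against the cell means,
    # which strips out *all* effects from each response.
    cell_mean = {cell: sum(ys) / len(ys) for cell, ys in data.items()}

    # Stage 2: estimated main effect of each level of factor A
    # (marginal mean minus grand mean).
    a_mean = {}
    for a in {a for a, _ in data}:
        vals = [y for (aa, _), ys in data.items() if aa == a for y in ys]
        a_mean[a] = sum(vals) / len(vals)

    # Stage 3: aligned response = residual + effect of interest only.
    aligned = []
    for (a, b), ys in data.items():
        for y in ys:
            residual = y - cell_mean[(a, b)]
            aligned.append(residual + (a_mean[a] - grand))

    # Rank the aligned responses (this sketch breaks ties by position;
    # the real procedure assigns averaged ranks to ties). Stage 4 would
    # then run a standard full-factorial ANOVA on these ranks.
    order = sorted(range(len(aligned)), key=lambda i: aligned[i])
    ranks = [0.0] * len(aligned)
    for r, i in enumerate(order, start=1):
        ranks[i] = float(r)
    return ranks
```

On toy data where factor A dominates the responses, all the ranks for A's high level end up above those for its low level, because the alignment removes every effect except A's before ranking.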


Results:
In the study the researchers attempted to show that the ART system was superior by redoing several studies that had already been conducted. The first case showed how ART uncovers interaction effects, the second showed how ART can free analysts from the distributional assumptions of ANOVA, and the final one demonstrated nonparametric testing.


Discussion:
This paper was really uninteresting to me. While I feel like what was accomplished in this work was probably useful, I really could not get into it. The material was too dense, and its applications to the real world were not really evident. Overall it was just hard for me to see how and why this would be useful, though I'm sure it will be.

Reading #19 : Reflexivity in Digital Anthropology

References:
Reflexivity in Digital Anthropology by Jennifer A. Rode.  



Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.


Author Bio:
Jennifer Rode is an assistant professor at Drexel's School of Information.


Summary:
Hypothesis:
That anthropologists can apply ethnographies to HCI and contribute to the field.


Methods:
There wasn't really an experiment per se; the paper is more of a discussion of hypothetical situations.


Results:
The author basically says that rather than observing technology, anthropologists can observe the role that technology plays in society. The author describes several styles of writing an ethnography: positivist, reflexive, realist, confessional, and impressionistic.


Content:
Dr. Rode discusses the various ways that ethnography can be applied to the field of computing. She also discusses the various styles of writing and their advantages and disadvantages. Finally she ties it together with examples of how ethnographies have been used in HCI.


Discussion:
This paper was horribly boring. When reading about experiments, it is at least interesting to observe the results and the process by which the experiment was conducted. This was simply a very long argument for ethnographies, which presented very little empirical data. Overall I don't feel like I took anything away from this paper.

Thursday, October 6, 2011

Reading #17 : Privacy Risks Emerging from the Adoption of Innocuous Wearable Sensors in the Mobile Environment

References:
Privacy Risks Emerging from the Adoption of Innocuous Wearable Sensors in the Mobile Environment by Andrew Raij, Santosh Kumar, Animikh Ghosh, and Mani Srivastava.   Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.


Author Bios:
Andrew Raij is a postdoctoral researcher at the University of Memphis.
Santosh Kumar is a professor at the University of Memphis.
Animikh Ghosh is a research associate at Infosys Labs.
Mani Srivastava is a professor at UCLA.


Summary:
Hypothesis:
That wearing sensors on one's body can, in fact, be a privacy concern.


Methods:
The authors had two groups, both of which filled out a survey beforehand. The groups were then observed for several days and shown some results of those observations. One group had no sensor; the other was asked to wear their sensors for 3 days. At the end of the experiment the subjects were asked to fill out another survey about their feelings on the subject.


Results:
People, understandably, care least about privacy for data that is not relevant to them. The group being watched also showed more privacy concerns than the group that wasn't. People who wore the sensors said they were afraid of others knowing where they were at any given time, and had concerns about with whom the collected data would be shared.


Contents:
The authors in this paper address the need for increased privacy. They set up an experiment with groups of subjects to test how users feel when they are tracked by a sensor, and observe how this makes users feel and how they adapt to it.


Discussion:
I think this paper is really relevant as the world in general gets smaller. It is more difficult to keep things private now than it was even ten years ago. Everything is done online, which means a lot of personal information is out there for someone to potentially take. This is a neat extrapolation of that onto other types of private information, such as location. Overall a very relevant paper.

Tuesday, October 4, 2011

Gang Leader For a Day

I honestly really enjoyed this book. This is so far from anything I would ever do that it is fascinating to me. The way he described the various ethical dilemmas was very interesting. It really put into perspective how things like the way you treat people or conduct your research can be serious violations of ethics. While this is an extreme example, it is still a valid one. Seeing the author grow throughout the book really helped me empathize with his situation. In learning about the community, it was interesting to hear how tight-knit they are. When the author messed something up or did something wrong, it got around quickly, and the community members were very protective.

Reading #15: Madgets: Actuating Widgets on Interactive Tabletops

References:
Madgets: Actuating Widgets on Interactive Tabletops by Malte Weiss, Florian Schwarz, Simon Jakubowski, and Jan Borchers.  


Published in UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology.


Author Bios:
Malte Weiss is a PhD student at Media Computing Group.
Florian Schwarz is an assistant professor at the University of Pennsylvania.
Simon Jakubowski is a Research Scientist at AlphaFix.
Jan Borchers is a professor at Aachen University.


Summary:
Hypothesis:
Tabletops that use Madgets allow for much easier interaction and actuation control.


Methods:
Madgets seek to be lightweight while also being easy to use. Sensing is done via visual tracking, and the widget controls are transparent. The computer tracks all the elements of the system and finds paths to move them between positions.


Results:
Overall it was successful. It allowed for widgets to be easily built and mapped onto the system. It allowed for rapid change of a system and for easier experimentation. 


Contents:
A description of the project follows the introduction. All of the elements of the project are described, both physical and computational, and the paper then explains how all the pieces fit together.


Discussion:
I didn't really care for this article. It just didn't strike me as something that would really catch my interest as the next big thing. While it is a neat idea for building prototypes of various systems, I feel like that could be accomplished in a different way that would be more effective.

Reading #14 : Tesla Touch: Electrovibration for Touch Surfaces

References:
Tesla Touch: Electrovibration for Touch Surfaces by Olivier Bau, Ivan Poupyrev, Ali Israr, Chris Harrison.  


Published in UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology.


Author Bios:
Olivier Bau has a PhD in Computer Science.
Ivan Poupyrev is a Researcher at Walt Disney Research.
Ali Israr is a Researcher at Walt Disney Research.
Chris Harrison is a PhD student at Carnegie Mellon.


Summary
Hypothesis:
Electrovibration can offer a better method of touch interaction than tactile interfaces currently used.


Method:
Subjects were asked to touch, then describe and answer questions about, Tesla Touch surfaces. The study measured detection thresholds as well as frequency and amplitude discrimination, and intensities were also varied.


Results:
Higher-frequency stimulation was perceived as smoother by the subjects (described as feeling like wood). Amplitude interacts with frequency: turning up the amplitude at high frequencies increased the perceived 'smoothness'.


Content:
The researchers were attempting to find ways to get better tactile feedback from a surface. They recruited test subjects and had them feel various surfaces, testing ranges of amplitude and frequency. They recorded the ranges they observed and how the sensations differed, and also studied the minimum and maximum frequencies at which users could feel a difference.


Discussion:
A neat paper overall. I like all the things that they are doing with touch surfaces. It seems that this could be useful in the future when touch screen devices become more advanced. The ability to feel different textures could offer up a whole new world of possibilities on what a mobile or touch screen device could be capable of.