Tuesday, November 22, 2011

Reading #32: Taking Advice from Intelligent Systems: The Double-Edged Sword of Explanations

References:
Taking Advice from Intelligent Systems: The Double-Edged Sword of Explanations by Kate Ehrlich, Susanna Kirk, John Patterson, Jamie Rasmussen, Steven Ross, and Daniel Gruen, all of IBM Research.

This paper was presented at IUI 2011.

Summary:
Hypothesis:
Users make different choices when advice from an intelligent system is present.

Methods:
The first step was to create an intelligent system, known as Network Intrusion Management Benefiting from Learned Expertise, or NIMBLE. The researchers recruited participants who had three years of experience in the cybersecurity field. They then completed several timed trials on the system. NIMBLE offered advice along the way, and the users' actions were observed.

Results:
It was observed that there is a correlation between the availability of the system's advice and the correctness of users' answers, but the effect depended on whether the advice itself was correct. If the system presented the users with a selection of answers, all of which were wrong, the users would still most likely go with the system's suggestion.
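
As a rough illustration of the kind of analysis this implies (my own sketch, not the authors' code, and the field names are invented), one could split the trials by whether the system's suggestion was correct and compare how often users went along with it:

    # Sketch: acceptance rate of system advice, split by whether the advice
    # was correct. Toy records in Python; not NIMBLE's data format.
    trials = [
        {"advice_correct": True,  "followed_advice": True},
        {"advice_correct": True,  "followed_advice": True},
        {"advice_correct": False, "followed_advice": True},
        {"advice_correct": False, "followed_advice": False},
    ]

    def acceptance_rate(records):
        return sum(r["followed_advice"] for r in records) / len(records)

    good = [t for t in trials if t["advice_correct"]]
    bad = [t for t in trials if not t["advice_correct"]]
    print("followed correct advice:   %.0f%%" % (100 * acceptance_rate(good)))
    print("followed incorrect advice: %.0f%%" % (100 * acceptance_rate(bad)))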

Contents:
The researchers built an intelligent system that offers advice during tasks. The behavior of the users and their interactions with the system were observed and recorded.

Discussion:
I didn't really find this paper interesting. I don't think that knowing whether users accept a system's advice is all that useful, although I'm sure it actually is. The greater purpose of the work didn't really stand out to me, so I found it hard to maintain interest.

Reading #31: Identifying emotional states using keystroke dynamics

References:
Identifying emotional states using keystroke dynamics by Clayton Epp, Michael Lippold, and Regan L. Mandryk.  



Presented at CHI '11, the 2011 annual conference on Human Factors in Computing Systems.


Author Bios:
Clayton Epp is currently a software engineer for a private consulting company.
Michael Lippold is a master's student at the University of Saskatchewan.
Regan L. Mandryk  is an Assistant Professor at the University of Saskatchewan.


Summary:
Hypothesis:
Keystroke dynamics can tell a lot about a person's emotional state.


Methods:
Keystrokes were recorded from the users, and then a questionnaire was administered. The data gathered in the survey and the users' keystrokes were compared to determine whether there was any relationship between the two.


Results:
The researchers found they could classify users' emotional states from keystrokes with roughly 80% accuracy. Keystroke delay and duration were observed as features in addition to keystroke order.
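
To make the timing features concrete, here is a small sketch (my own, not the authors' pipeline) that turns raw key events into the two measurements mentioned above: how long each key is held down and the delay between consecutive key presses. The event data is made up.

    # Each event is (key, press_time_s, release_time_s); values are invented.
    events = [("h", 0.00, 0.09), ("e", 0.14, 0.21), ("l", 0.30, 0.36),
              ("l", 0.41, 0.48), ("o", 0.55, 0.63)]

    durations = [release - press for _, press, release in events]   # key held down
    delays = [events[i + 1][1] - events[i][1] for i in range(len(events) - 1)]

    features = {
        "mean_duration": sum(durations) / len(durations),
        "mean_delay": sum(delays) / len(delays),
    }
    # In the study, features like these (paired with the questionnaire's
    # self-reported emotional state) would be fed to a classifier.
    print(features)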


Contents:
Here the researchers attempt to discern whether a user's emotional state can be derived from keystroke information. They perform various typing tests and then administer a survey to collect data on the user's emotional state.


Discussion:
This has some really cool potential applications. You could use the information to create a program that changes based on how angry, sad, or bored you are. This would, I think, create a whole new user experience and could have a very positive impact on the computer industry.

Reading #30: Life "modes" in social media

References:
Life "modes" in social media by Fatih Kursat Ozenc and Shelly D. Farnham.  



Presented at CHI '11, the 2011 annual conference on Human Factors in Computing Systems.


Author Bios:
Fatih Kursat Ozenc is a professor at Carnegie Mellon University.
Shelly D. Farnham is currently a researcher at Microsoft Research.


Summary:
Hypothesis:
People organize their lives based on various forms of social interaction.


Methods:
The first step was to interview each participant in the study for two hours. They were asked to draw pictures of various aspects of their social lives. This was intended to help the participants visually represent the way they interact with others, and patterns in the drawings were observed.


Results:
Most users drew their lives in the form of a 'social meme' map, with themselves at the center and various circles enclosing them. The researchers also noted that the closer someone was to the participant, the more communication channels existed between them.


Contents:
Here the researchers are studying the different ways that people classify their social lives. The subjects were interviewed and asked to visually represent their social lives, and then commonalities were looked for.


Discussion:
A pretty cool paper; I think social networking is something that needs to be explored. With Facebook, social networking is kind of the hot thing right now, and any advance in that field would affect a lot of people.

Reading #29: Usable Gestures for Blind People: Understanding Preference and Performance

References:
Usable gestures for blind people: understanding preference and performance by Shaun K. Kane, Jacob O. Wobbrock, and Richard E. Ladner
 

Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.


Author Bios:
Shaun K. Kane is currently an Assistant Professor at the University of Maryland.
Jacob O. Wobbrock is currently an Associate Professor at the University of Washington.
Richard E. Ladner is currently a Professor at the University of Washington.


Summary:
Hypothesis:
That devices can be optimized to accommodate the different needs of blind people.


Methods:
Both blind and sighted people created gestures based on descriptions given by the proctor, with each person making two gestures. The second study dealt with performance: both the sighted and blind participants were asked to execute a series of tasks and rated how easy various types of actions were.


Results:
On average, the blind participants' gestures had more strokes than the sighted participants'. The screen edge also saw more use in the gestures of the blind participants. Performance turned out to be pretty much the same between the two groups. Blind participants also tended to prefer multi-touch gestures as well as larger gestures.
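
As a purely illustrative sketch (not the authors' analysis code), two of the properties reported here, stroke count and use of the screen edge, could be computed from recorded gestures like this; the screen size, edge threshold, and sample gesture are my own assumptions:

    # A gesture is a list of strokes; a stroke is a list of (x, y) points.
    SCREEN_W, SCREEN_H, EDGE_PX = 480, 800, 40

    gesture = [
        [(10, 100), (10, 300), (12, 500)],     # stroke hugging the left edge
        [(240, 390), (260, 410), (280, 430)],  # stroke in the middle
    ]

    def stroke_count(g):
        return len(g)

    def uses_edge(g):
        """True if any point falls within EDGE_PX of a screen edge."""
        return any(x < EDGE_PX or y < EDGE_PX or
                   x > SCREEN_W - EDGE_PX or y > SCREEN_H - EDGE_PX
                   for stroke in g for x, y in stroke)

    print(stroke_count(gesture), uses_edge(gesture))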


Contents:
The researchers attempt to explore the possibility that blind users have different needs than sighted users. They do this through two tests. One has the blind and sighted participants create gestures to perform tasks and observes the difference between them. The second tested the differences in performance between sighted and blind subjects.


Discussion:
This was a neat paper. I think the best areas of computer science are the ones using technology to help people who have disabilities. Being able to create programs and devices made for people who would otherwise have difficulty with them is one of the best applications of computer science I can think of.

Tuesday, November 15, 2011

Reading #27: Sensing cognitive multitasking for a brain-based adaptive user interface

References:
Sensing cognitive multitasking for a brain-based adaptive user interface by Erin Treacy Solovey, Francine Lalooses, Krysta Chauncey, Douglas Weaver, Margarita Parasi, Matthias Scheutz, Angelo Sassaroli, Sergio Fantini, Paul Schermerhorn, Audrey Girouard, and Robert J.K. Jacob.

Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.


Author Bios:
Erin Treacy Solovey, Francine Lalooses, Krysta Chauncey, Douglas Weaver, Margarita Parasi, Matthias Scheutz, and Robert J.K. Jacob are all at Tufts University.
Angelo Sassaroli is studying Biomedical Engineering at Tufts University.
Sergio Fantini is studying Biomedical Engineering at Tufts University.
Paul Schermerhorn is studying Computer Science at Indiana University.
Audrey Girouard is at Queen's University in its School of Computing.



Summary:
Hypothesis:
That it is possible to detect and adapt to changes in multitasking by a user.


Methods:
The researchers classified three different types of multitasking: branching, dual-task, and delay. These are basically the same categories used in previous multitasking papers. They set a task that the subjects were required to accomplish with the help of a robot, and ran several different kinds of trials, each designed to elicit a different form of multitasking.


Results:
A lot of interesting data regarding levels of hemoglobin in the brain was collected. Ultimately they determined that different types of multitasking can in fact be detected, which means that they can also be adapted to.
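
As a toy sketch of what "detecting" a multitasking type could look like (my own illustration; the paper uses fNIRS hemoglobin signals and a real machine-learning classifier, not this), one could compare a simple feature of a signal window against per-condition centroids:

    # Nearest-centroid guess at the multitasking condition from the mean of a
    # window of (fabricated) oxygenated-hemoglobin readings.
    training = {
        "branching": [[0.8, 0.9, 1.0], [0.9, 1.1, 1.0]],
        "dual-task": [[0.4, 0.5, 0.5], [0.5, 0.4, 0.6]],
        "delay":     [[0.1, 0.2, 0.1], [0.2, 0.1, 0.2]],
    }

    def window_mean(window):
        return sum(window) / len(window)

    centroids = {label: sum(map(window_mean, windows)) / len(windows)
                 for label, windows in training.items()}

    def classify(window):
        m = window_mean(window)
        return min(centroids, key=lambda label: abs(centroids[label] - m))

    print(classify([0.85, 0.95, 1.05]))  # -> "branching" with this toy data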


Contents:
This paper describes the different ways that a person can juggle multiple tasks at once. Several participants were given a task to perform with a robot and were observed. The hemoglobin levels in their brains were recorded at given points and the results were analyzed.


Discussion:
A really neat paper with some cool applications. This type of thing could really revolutionize the way we write programs. Being able to adapt to different situations and tailor a program directly to those needs could, I think, really do wonders for efficiency.

Reading #26: Embodiment in brain-computer interaction

References:
Embodiment in brain-computer interaction by Kenton O'Hara, Abigail Sellen, and Richard Harper.

Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.



Author Bios:
Kenton O'Hara is a senior researcher at Microsoft Research.
Abigail Sellen is a Principal Researcher at Microsoft Research.
Richard Harper is a Principal Researcher at Microsoft Research.


Summary:
Hypothesis:
That studying the whole body's interaction with computers can better help us understand key aspects of brain-computer interaction.


Methods:
This paper brings back the Mindflex game. The subjects took the game home for a week to play in a setting that could act as a control. The researchers analyzed these sessions and used them to explain results involving gestures and body language. The four groups consisted of four members each and were chosen by a 'team captain'.


Results:
Several results were recorded. Body orientation was found to play a large part in the game. Changing body position was pretty strongly correlated with attempting various tasks. It was also found that people believed that concentrating on moving the ball in different ways helped them move it. Spectator participation played a part as well, either positive or negative depending on whether the spectator was encouraging or discouraging success.


Contents:
Here we identify a need to better understand the way that brains and computers interact. This is tested with the Mindflex game, which is sent home with several people; the way they play and interact with it is observed and recorded, and several patterns emerge.


Discussion:
A neat paper, but I felt it was just a rehash of some of the earlier papers. No new technologies were developed for this paper; it simply observed the Mindflex game in a different way than the previous paper did. While this could be helpful, I didn't personally find it interesting.

Thursday, November 10, 2011

Reading #28: Experimental Analysis of Touch-Screen Gesture Designs in Mobile Environments

References:
Experimental Analysis of Touch-Screen Gesture Designs in Mobile Environments by Andrew Bragdon, Eugene Nelson, Yang Li and Ken Hinckley


Paper presented at CHI 2011.


Author Bios:
Andrew Bragdon is a PhD computer science student at Brown University.
Eugene Nelson is a professor at Brown University.
Yang Li is a senior researcher at Google Research.
Ken Hinckley is a principal researcher at Microsoft Research.



Summary:
Hypothesis:
That gesture techniques used during various distracting activities such as walking can be more effective than the current soft-button paradigm. 


Methods:
The first tests they did were with activating gesture mode on the mobile device. They tried a series of methods, such as a rocker button on the side of the phone and a large soft button placed on the screen. Users found both of these difficult to use while looking away from the phone. The final solution was to mount a hard button at the top of the phone; the distinct feel and location of the button made it easy for users to locate and press without looking down. This button was needed only for free-drawn gestures: bezel gestures activate gesture mode when the user begins a gesture at the edge of the screen. They also tested different kinds of gestures, paths and marks. To compare these gestures, they had users interact with a device while doing various activities and under different levels of distraction: sitting and walking with moderate distraction, and sitting with an attention-saturating task.
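
The dispatch idea behind bezel-activated gestures is simple enough to sketch. This is my own minimal illustration (the pixel threshold and function names are assumptions, not the paper's code): a touch that begins within a narrow band at the screen edge enters gesture mode, while any other touch is handled normally.

    SCREEN_W, SCREEN_H, BEZEL_PX = 480, 800, 20

    def starts_on_bezel(x, y):
        return (x < BEZEL_PX or y < BEZEL_PX or
                x > SCREEN_W - BEZEL_PX or y > SCREEN_H - BEZEL_PX)

    def on_touch_down(x, y):
        if starts_on_bezel(x, y):
            return "enter gesture mode"   # following motion is read as a mark or path
        return "normal touch handling"    # soft buttons, scrolling, etc.

    print(on_touch_down(5, 400))    # near the left edge -> gesture mode
    print(on_touch_down(240, 400))  # middle of the screen -> normal handling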


Results:
The surprising result in this paper was that users not only preferred gestures when looking away from the phone, but half the users also preferred them when using the phone normally. Specifically, when looking away from the phone, users preferred bezel gestures, citing as one reason the elimination of the button, which they saw as "an extra step." Under distracting conditions, bezel gestures significantly outperformed soft buttons.


Contents:
The researchers began by testing the various gesture methods in a controlled environment. They came to the conclusion that users preferred bezel gestures as their gesture method. They next tested the users' ability to perform tasks while looking away from the phone, under distracting conditions, and while walking and sitting; bezel gestures were widely preferred in this instance. In the attention-saturating test, users were asked to sit and perform a task with the device while being actively distracted from that task. Bezel gestures, once again, outperformed soft buttons.


Discussion:
This paper was really pretty interesting. Mobile devices are designed to be used on the move, so it makes sense to research methods to do this more effectively. Being able to do tasks on a mobile device without having to devote your full attention to it could greatly increase a user's productivity.

Tuesday, November 8, 2011

Reading #25: Twitinfo: aggregating and visualizing microblogs for event exploration

References:
Twitinfo: aggregating and visualizing microblogs for event exploration by Adam Marcus, Michael S. Bernstein, Osama Badar, David R. Karger, Samuel Madden, and Robert C. Miller.


Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.





Author Bios:
Michael Bernstein is a graduate student at MIT.
Osama Badar is a member of CSAIL at MIT.
David Karger is also a member of CSAIL.
Samuel Madden is an associate professor at MIT.
Robert Miller is an associate professor at MIT.


Summary:
Hypothesis:
TwitInfo can provide a useful tool for summarizing and searching Twitter.


Methods:
The subjects used TwitInfo to research several different events via tweets. The researchers gathered information about which interface elements were useful and which weren't. The system was able to judge whether the overall opinion on a topic was positive or negative.
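
The positive-versus-negative judgment can be illustrated with a toy aggregation step (my own sketch, not TwitInfo's actual pipeline, which classifies each tweet first; here the per-tweet labels are simply given):

    tweets = [
        ("what a goal!", "positive"),
        ("terrible refereeing", "negative"),
        ("best match of the season", "positive"),
        ("we got robbed", "negative"),
        ("unbelievable finish", "positive"),
    ]

    positive = sum(1 for _, label in tweets if label == "positive")
    total = len(tweets)
    print("overall opinion: %.0f%% positive (%d of %d tweets)"
          % (100 * positive / total, positive, total))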


Results:
Overall the system was a success: TwitInfo organized most things correctly. It was easily able to keep track of events in sporting matches, and almost all participants were able to construct the story of an event based on the information from the feeds.


Contents:
This article studies TwitInfo, a platform for reconstructing events based on tweets about them. It discusses the different trials the subjects went through to test the system and records the results.


Discussion:
I really don't get Twitter, but this is a pretty neat application of it. Being able to essentially make a timeline of people's general ramblings about arbitrary events could really be useful when attempting to keep up with a currently occurring story such as a sports match.

Saturday, October 29, 2011

Reading #24: Gesture avatar: a technique for operating mobile user interfaces using gestures

References:
Gesture avatar: a technique for operating mobile user interfaces using gestures by Hao Lu and Yang Li.  
Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.

Author Bios:
Yang Li is a research scientist working for Google.
Hao Lu is a graduate student at the University of Washington.

Summary:
Hypothesis:
That users can create their own avatars to enable them to better provide precise touch input to small areas of the screen.

Methods:
The authors developed a system called Gesture Avatar. It allows the user to draw an avatar that is bound to an object on the screen. The first test they conducted had users select one letter out of a series by drawing an avatar; the second had users select a target on the screen. Both tests were also conducted while the users were walking.
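
The selection idea can be sketched roughly as follows (my own simplification, not the authors' implementation): assume the drawn shape has already been recognized as a letter, then pick the on-screen target whose label matches that letter and is closest to where the avatar was drawn.

    import math

    # (label, (x, y)) pairs for small on-screen targets; values are invented.
    targets = [("a", (40, 100)), ("b", (60, 105)), ("a", (300, 500))]

    def select_target(recognized_letter, gesture_center):
        candidates = [(label, pos) for label, pos in targets
                      if label == recognized_letter]
        if not candidates:
            return None
        return min(candidates, key=lambda t: math.dist(t[1], gesture_center))

    # A big "a" drawn near the top-left picks the nearby tiny "a" target.
    print(select_target("a", (80, 120)))  # -> ("a", (40, 100))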

Results:
Gesture Avatar was slower than its competitor, Shift, for large target sizes, but much faster for smaller ones. Both systems got faster as target size increased, but Shift improved at a faster rate.

Contents:
In this paper the researchers test the Gesture Avatar system. They have users attempt to select small targets on a mobile device's screen by drawing an avatar and selecting through it.

Discussion:
I feel like this could be really useful. I have often had problems selecting small regions of my phone's screen, and something like this could really come in handy. It seems much more intuitive and easier than the current method of having to zoom in to select the link or object.

Sunday, October 23, 2011

Reading #23: User-defined Motion Gestures for Mobile Interaction

References
User-defined Motion Gestures for Mobile Interaction by Jaime Ruiz, Yang Li, Edward Lank.  

Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.


Author Bios:
Jaime Ruiz is a doctoral student at the University of Waterloo.
Yang Li is a research scientist at Google.
Edward Lank is an assistant professor at the University of Waterloo.


Summary:
Hypothesis:
That certain sets of gestures will be better for a user than others.


Methods:
The subjects were asked to design their own set of gestures to accomplish tasks. The researchers then analyzed these gestures and selected several. These were then given back to the subjects along with a set of tasks to perform, and the results were recorded.


Results:
Many of the subjects designed gestures that were the same, or at least very similar, for the same task. These gestures usually mimicked some real life motion associated with the task. Opposite tasks were performed in opposite directions, etc. 
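
Gesture-elicitation studies often quantify this kind of convergence with an agreement score: for each task, group identical proposals and sum the squared fractions. I'm not certain this exact metric is what the authors report, so treat the sketch below as a general illustration with invented gesture labels.

    from collections import Counter

    proposals = ["shake", "shake", "shake", "flip", "shake", "flip"]  # one per participant

    def agreement(props):
        n = len(props)
        return sum((count / n) ** 2 for count in Counter(props).values())

    print(agreement(proposals))  # 1.0 would mean everyone proposed the same gesture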


Contents:
The paper starts by having the subjects design gestures to complete tasks. Several of these are then selected by the researchers. They then return these gestures to the subjects and ask them to perform tasks with them, recording the results.


Discussion:
This paper is basically why I am taking this class. Mobile devices are the current big thing, and work like this will greatly increase our productivity with them. Being able to create more user-friendly and intuitive gestures will help unveil many new uses for mobile devices and help streamline the uses we already have for them.

Reading #22: Mid-air Pan-and-Zoom on Wall-sized Displays

References
Mid-air pan-and-zoom on wall-sized displays by Mathieu Nancel, Julie Wagner, Emmanuel Pietriga, Olivier Chapuis, Wendy Mackay.  

Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.


Author Bios:
Mathieu Nancel is a PhD student at Université Paris-Sud XI.
Julie Wagner is a student at the insitu lab in Paris.
Emmanuel Pietriga is a researcher for INRIA.
Olivier Chapuis is a researcher at LRI.
Wendy Mackay is a research director at INRIA.


Summary:
Hypothesis:
That, on wall-sized displays, two-handed gestures would be faster than one-handed gestures, smaller gestures would be preferred over larger ones, fingers would provide more accuracy than a whole hand, and circular motions would be preferred.


Methods:
The authors built a wall-sized display and tested different mid-air gestures with it. The panning and zooming task required users to move through several sets of rings while zooming in and out and panning. Factors such as target distance were varied throughout the experiment.


Results:
They found that two hands were in fact faster than one, although users seemed to prefer one-handed gestures. Linear and one-dimensional path control were faster than their counterparts, 2D and circular control, which users said were hard to use.
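
To make the "linear, one-dimensional control" idea concrete, a toy mapping from displacement along a strip (say, a thumb sliding on a handheld touch surface) to a multiplicative zoom factor might look like this; the gain and clamping range are my own choices, not the authors' parameters:

    import math

    GAIN = 0.01                      # zoom roughly doubles per 70 units of displacement
    MIN_ZOOM, MAX_ZOOM = 0.25, 64.0

    def zoom_from_displacement(current_zoom, displacement):
        new_zoom = current_zoom * math.exp(GAIN * displacement)
        return max(MIN_ZOOM, min(MAX_ZOOM, new_zoom))

    print(zoom_from_displacement(1.0, 70))   # slide forward  -> zoom in (~2x)
    print(zoom_from_displacement(1.0, -70))  # slide backward -> zoom out (~0.5x)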


Contents:
The paper recorded the results of studying subjects using different forms of gestural input to control a wall-sized display. The results were based on task completion time as well as user feedback.


Discussion:
I really like large interactive displays, so I thought this paper was really neat. I hope to one day have a wall-sized display in my home, so I feel like this research is pretty relevant. Honestly, though, I was surprised by how different it is to interact with such a large-scale display rather than, say, your cell phone or tablet. There are a ton of new challenges that I would not have thought of, and I found that really fascinating.

Thursday, October 20, 2011

Reading #21: Human model evaluation in interactive supervised learning

References
Human model evaluation in interactive supervised learning by Rebecca Fiebrink, Perry R. Cook, and Daniel Trueman. Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems


Author Bios:
Rebecca Fiebrink is an assistant professor in Computer Science at Princeton University. 
Perry R. Cook is a professor at Princeton University.
Daniel Trueman is a professional musician.


Summary:
Hypothesis:
Finding the criteria for a model that is most important to a user will help develop better interactive machine learning systems.


Methods:
Several subjects were studied while they did interactive machine learning work with a system called the Wekinator, which lets subjects train the system on certain input, usually gestures. The authors created three different studies to test this system. The first had several composers use the system for a set period of time and then give feedback to the researchers. The second experiment involved students from 1st to 4th year, who were told to create two interfaces on the machine, one interaction based and the other duration based. The final experiment involved a cellist teaching the machine to track and record the motions of the bow correctly.


Results:
Participants from the first study complained about the controls, commenting that they were confusing to use and not intuitive. Cross-validation was only available in the latter two studies, and in those it was indicated to be of high importance, but participants from all studies used direct evaluation rather than cross-validation most of the time.
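
The difference between cross-validation and "direct" evaluation (train, then immediately try the model on fresh input you just performed) can be sketched with a toy nearest-neighbour example; this is my own illustration, not the Wekinator's code:

    # Made-up 2-D gesture features with labels.
    training = [((0.1, 0.2), "up"), ((0.2, 0.1), "up"),
                ((0.8, 0.9), "down"), ((0.9, 0.8), "down")]

    def nearest_label(point, examples):
        return min(examples,
                   key=lambda e: (e[0][0] - point[0]) ** 2
                               + (e[0][1] - point[1]) ** 2)[1]

    # Cross-validation: hold each example out in turn and predict it from the rest.
    correct = sum(nearest_label(x, training[:i] + training[i + 1:]) == y
                  for i, (x, y) in enumerate(training))
    print("leave-one-out accuracy:", correct / len(training))

    # "Direct" evaluation: train on everything, then try a brand-new gesture.
    print("new gesture classified as:", nearest_label((0.15, 0.25), training))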


Contents:
Here the authors observe how people interact with a machine learning system.  They discuss the different ways in which people work with the system, and which ones are most effective and why.


Discussion:
An interesting paper. I think something like this could definitely have cool applications down the line. Anything having to do with machine learning is something that I really feel can make a huge impact. Being able to delegate tasks to a machine could lead to some cool developments in machines assisting humans.

Tuesday, October 18, 2011

Reading #18: Biofeedback Game Design

References:
Biofeedback Game Design: Using Direct and Indirect Physiological Control to Enhance Game Interaction by Lennart E. Nacke, Michael Kalyn, Calvin Lough, and Regan L. Mandryk.


Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.


Author Bios:
Lennart E. Nacke is a professor at UOIT and holds a PhD in game development.
Michael Kalyn is a graduate student at the University of Saskatchewan.
Calvin Lough is a student at the University of Saskatchewan.
Regan L. Mandryk is an assistant professor at the University of Saskatchewan.


Summary:
Hypothesis:
That physiological input can help to enhance a user's gaming experience and control.


Methods:
The first step toward testing users' interaction with a game is to create one, so the authors started by creating a simple side-scrolling platformer and adding sensors to the equation. The sensors they used measured eye movement, electrical activation of muscles, skin response, heart rate, breathing rate, and body temperature. They designed two new methods of input in addition to the standard Xbox controller.
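
As a purely illustrative sketch of indirect physiological control (the sensor, its resting range, and the flame-length mapping are my assumptions, not the mappings used in the paper), a raw reading can be normalized against a calibrated range and then scaled onto a game parameter:

    def normalize(value, resting_min, resting_max):
        span = resting_max - resting_min
        x = (value - resting_min) / span if span else 0.0
        return max(0.0, min(1.0, x))        # clamp to [0, 1]

    def flame_length(breathing_rate_bpm):
        # Faster breathing -> longer flamethrower reach, between 1 and 5 tiles.
        level = normalize(breathing_rate_bpm, resting_min=10, resting_max=30)
        return 1 + 4 * level

    print(flame_length(12))  # calm player    -> short flame
    print(flame_length(28))  # excited player -> long flame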


Results:
Subjects actually preferred the two new control methods over the standard Xbox controller used in the control condition. They stated that they liked best the sensors they could directly control.


Contents:
The authors seek to establish a new gaming paradigm that uses physiological interaction with the user. They attempt to find which sensors users prefer, as well as how users react to having to learn new controls rather than using ones that already exist.


Discussion:
As an avid video game player, I found this interesting. The ability to have another method of interacting with a game is intriguing, but I really don't feel like it would ever become anything more than a gimmick. I know that personally, when I'm playing a video game, I'm pretty focused on what I am doing. It just seems to me that any type of interference on the part of a sensor would ruin a lot of the immersion in the game and be kind of disruptive overall.

Reading #16: Classroom-Based Assistive Technology

References:
Classroom-Based Assistive Technology:  Collective Use of Interactive Visual Schedules by Students with Autism by Meg Cramer, Sen H. Hirano, Monica Tentori, Michael T. Yeganyan, Gillian R. Hayes.  Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.


Author Bios:
Meg Cramer is a graduate student at UC Irvine.
Sen Hirano is also a graduate student at UC Irvine.
Monica Tentori is a professor of computer science at UABC Mexico.
Michael T. Yeganyan is an Informatics researcher at UC Irvine.
Gillian R. Hayes is a professor in Informatics at UC Irvine.


Summary:
Hypothesis:
vSked can offer advantages in assisting students with autism over other currently existing technologies.


Methods:
The first step was to ask teachers their opinion of the system. It was difficult to interview the actual students, as they did not have the best communication skills. The effectiveness of the whole system was judged on consistency, predictability, teacher awareness, and behavior. The system lets a student pick a goal or reward and work towards it; as the students progress, the teachers award them tokens which go towards achieving that goal or reward.


Results:
Overall the results were very positive. The students' focus increased quite a bit and their need for teacher assistance declined. Images that closely resembled the subject matter really helped students understand the content. The self-updating schedule helped students better keep track of their activities. Students also collaborated with each other, trying to see what other students were doing with their devices.


Contents:
The paper introduces a new technology called vSked, which aims to help students with autism have less trouble in a learning environment. The paper describes the various tests and results gathered over a 5-week period. It also discusses the feedback from the teachers, which is overwhelmingly positive.


Discussion:
I thought this paper was really awesome. One of my favorite topics in computer science is using technology to help people with disabilities. The application of the touch screen and computer technologies was, I think, really positive. They don't really develop any new technologies in the paper per se, but they adapt technologies that currently exist into a new form of device that hasn't really been seen before.

Monday, October 17, 2011

Reading #20 : The Aligned Rank Transform

References:
The Aligned Rank Transform for Nonparametric Factorial Analyses Using Only Anova Procedures by Jacob O. Wobbrock, Leah Findlater, Darren Gergle, and James J. Higgins.

Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.


Author Bios:
Jacob Wobbrock is an associate professor in the Information School at the University of Washington.
Leah Findlater is a researcher for the Information School.
Darren Gergle is an associate professor at Northwestern University.
James Higgins is a professor at Kansas State University.


Summary:
Hypothesis:
That the Aligned Rank Transform can be used to analyze nonparametric data, including interaction effects, using only standard ANOVA procedures.


Methods:
The Aligned Rank Transform proceeds in a few steps: computing residuals, computing estimated effects, computing the aligned responses, assigning averaged ranks to the aligned responses, and then performing a full-factorial ANOVA on the ranks.
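
A rough sketch of those steps for a two-factor design, aligning for the A x B interaction (my own toy code with equal cell sizes and fabricated data, not the authors' ARTool):

    from statistics import mean

    # (factor_a, factor_b, response)
    data = [("low", "soft", 3.1), ("low", "soft", 2.9),
            ("low", "bezel", 4.0), ("low", "bezel", 4.4),
            ("high", "soft", 2.0), ("high", "soft", 2.2),
            ("high", "bezel", 4.8), ("high", "bezel", 5.0)]

    grand = mean(y for _, _, y in data)
    mean_a = {a: mean(y for x, _, y in data if x == a) for a, _, _ in data}
    mean_b = {b: mean(y for _, x, y in data if x == b) for _, b, _ in data}
    cell = {(a, b): mean(y for x1, x2, y in data if (x1, x2) == (a, b))
            for a, b, _ in data}

    aligned = []
    for a, b, y in data:
        residual = y - cell[(a, b)]                               # strip all effects
        effect_ab = cell[(a, b)] - mean_a[a] - mean_b[b] + grand  # keep only A x B
        aligned.append(residual + effect_ab)

    # Assign averaged ranks (ties share the mean of their rank positions).
    order = sorted(range(len(aligned)), key=lambda i: aligned[i])
    ranks = [0.0] * len(aligned)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and aligned[order[j + 1]] == aligned[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1   # 1-based average rank
        i = j + 1

    # 'ranks' would now go into an ordinary full-factorial ANOVA, and only the
    # A x B effect from that ANOVA is interpreted.
    print(ranks)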


Results:
In the study the researchers attempted to show that the ART approach was superior by redoing several studies that had already been conducted. The first case showed how ART uncovers interaction effects, the second showed how ART can free analysts from the distributional assumptions of ANOVA, and the final one demonstrated nonparametric testing.


Discussion:
This paper was really uninteresting. While I feel like what was accomplished in this work is probably useful, I really could not get into it. The material was too dense and its applications to the real world were not really evident. Overall it was just hard for me to see how and why this would be useful, though I'm sure it will be.

Reading #19 : Reflexivity in Digital Anthropology

References:
Reflexivity in Digital Anthropology by Jennifer A. Rode.  



Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.


Author Bio:
Jennifer Rode is an assistant professor at Drexel's School of Information.


Summary:
Hypothesis:
That anthropologists can apply ethnographies to HCI and contribute to the field.


Methods:
There wasn't really an experiment per se; it was more of a discussion of hypothetical situations.


Results:
The author basically says that rather than observing technology itself, anthropologists can observe the role that technology plays in society. The author also describes several styles of writing an ethnography: positivist, reflexive, realist, confessional, and impressionistic.


Content:
Dr. Rode talks about the various ways that ethnography can be applied to the field of computing. She also discusses the various styles of writing and their advantages and disadvantages. Finally she ties it together with examples of how ethnographies have been used in HCI.


Discussion:
This paper was horribly boring. When reading about experiments, it is at least interesting to observe their results and the process by which the experiment was conducted. This was simply a very long argument for ethnographies, which presented very little empirical data. Overall I don't feel like I took anything away from this paper.