Tuesday, November 22, 2011

Reading #32: Taking Advice from Intelligent Systems, The Double-Edged Sword of Explanations

References:
Taking Advice from Intelligent Systems: The Double-Edged Sword of Explanations by Kate Ehrlich, Susanna Kirk, John Patterson, Jamie Rasmussen, Steven Ross, and Daniel Gruen, all of IBM Research.

This paper was presented at IUI 2011.

Summary:
Hypothesis:
Users make different choices if advice from an intelligent system is present.

Methods:
The researchers first built an intelligent system called NIMBLE (Network Intrusion Management Benefiting from Learned Expertise). They recruited participants with three years of experience in the cybersecurity field, who then completed several timed trials on the system. NIMBLE offered advice during the trials, and the users' actions were observed.

Results:
There was a correlation between the availability of the system's advice and the correctness of the users' answers, but the effect depended on whether the advice itself was correct. When the system presented the users with a selection of answers, all of which were wrong, the users would still most likely go with the system's suggestion.
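
To make that kind of result concrete, here is a minimal Python sketch, with made-up trial data and field names rather than anything from the paper, of how you might tabulate how often users follow the system's suggestion depending on whether that suggestion was correct.

```python
from collections import defaultdict

# Hypothetical trial log: each entry records whether NIMBLE's advice was
# correct and whether the participant went with the system's suggestion.
trials = [
    {"advice_correct": True,  "followed_advice": True},
    {"advice_correct": True,  "followed_advice": True},
    {"advice_correct": False, "followed_advice": True},
    {"advice_correct": False, "followed_advice": False},
]

def acceptance_rates(trials):
    """Fraction of trials in which the advice was followed, split by
    whether that advice was actually correct."""
    counts = defaultdict(lambda: [0, 0])  # label -> [followed, total]
    for t in trials:
        label = "correct advice" if t["advice_correct"] else "incorrect advice"
        counts[label][0] += int(t["followed_advice"])
        counts[label][1] += 1
    return {label: followed / total for label, (followed, total) in counts.items()}

print(acceptance_rates(trials))
# e.g. {'correct advice': 1.0, 'incorrect advice': 0.5}
```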

Contents:
The researchers built an intelligent system that offers assistance during tasks. The users' behavior and their interaction with the system were observed and recorded.

Discussion:
I didn't really find this paper interesting. I don't think that knowing whether users accept a system's help is really that useful, although I'm sure it actually is. The greater purpose of the study did not really stick out to me, so I found it hard to maintain interest.

Reading #31: Identifying emotional states using keystroke dynamics

References:
Identifying emotional states using keystroke dynamics by Clayton Epp, Michael Lippold, and Regan L. Mandryk.  



Presented at CHI '11, the 2011 annual conference on Human Factors in Computing Systems.


Author Bios:
Clayton Epp is currently a software engineer for a private consulting company.
Michael Lippold is a master's student at the University of Saskatchewan.
Regan L. Mandryk  is an Assistant Professor at the University of Saskatchewan.


Summary:
Hypothesis:
Keystroke dynamics can tell a lot about a person's emotional state.


Methods:
Keystrokes were recorded from the users, and then a questionnaire was administered to them. The data gathered in the survey and the users' keystrokes were analyzed to determine whether there was any relationship between the two.


Results:
It was discovered that the researchers could estimate the users' emotional states from their keystrokes with roughly 80% accuracy. Keystroke delay and duration were observed as factors in addition to keystroke order.
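
As a rough illustration of the timing features mentioned above, here is a small Python sketch that computes key duration (down-to-up dwell) and delay (up-to-next-down latency) statistics from a keystroke log; the event format and feature names are my own assumptions, not the paper's.

```python
# Hypothetical keystroke log: (key, key_down_time, key_up_time) in seconds.
def keystroke_features(events):
    """Compute simple timing features: key duration (down-to-up dwell)
    and delay (up-to-next-down latency)."""
    durations = [up - down for _, down, up in events]
    delays = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return {
        "mean_duration": mean(durations),
        "mean_delay": mean(delays),
        "num_keys": len(events),
    }

sample = [("h", 0.00, 0.09), ("i", 0.21, 0.29), (" ", 0.45, 0.52)]
print(keystroke_features(sample))
```

Feature vectors like these, paired with the emotional-state labels from the questionnaire, could then be handed to any standard classifier.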


Contents:
Here the researchers attempt to discern whether a user's emotional state can be derived from keystroke information. They perform various typing tests and then administer a survey to collect data on the users' emotional states.


Discussion:
This has some really cool potential applications. You could use the information to create a program that would change based on how angry, sad, or bored you were. This would, I think, create a whole new user experience and could have a very positive impact on the computer industry.

Reading #30: Life "modes" in social media

References:
Life "modes" in social media by Fatih Kursat Ozenc and Shelly D. Farnham.  



Presented at CHI '11, the 2011 annual conference on Human Factors in Computing Systems.


Author Bios:
Fatih Kursat Ozenc is a professor at Carnegie Mellon University.
Shelly D. Farnham is currently a researcher at Microsoft Research.


Summary:
Hypothesis:
People organize their lives based on various forms of social interaction.


Methods:
The first step was to interview each participant in the study for two hours. The participants were asked to draw pictures of various aspects of their social lives. This was intended to help them visually represent the way they interact with others, and patterns in the drawings were observed.


Results:
Most users drew their life in the form of a 'social meme' map, with themselves at the center and various circles enclosing them. The researchers also noted that the closer two people were, the more communication channels existed between them.


Contents:
Here the researchers study the different ways that people organize their social lives. The subjects were interviewed and asked to visually represent their social lives, and then the researchers looked for commonalities.


Discussion:
A pretty cool paper; I think social networking is something that needs to be explored. With Facebook, social networking is kind of the hot thing right now, and any advance in that field would affect a lot of people.

Reading #29: Usable Gestures for Blind People, Understanding Preference and Performance

References:
Usable gestures for blind people: understanding preference and performance by Shaun K. Kane, Jacob O. Wobbrock, and Richard E. Ladner
 

Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.


Author Bios:
Shaun K. Kane is currently an Assistant Professor at the University of Maryland.
Jacob O. Wobbrock is currently an Associate Professor at the University of Washington.
Richard E. Ladner is currently a Professor at the University of Washington.


Summary:
Hypothesis:
That devices can be optimized to accommodate the different needs of blind people.


Methods:
In the first study, both blind and sighted people had to create gestures based on a description given by the proctor; each person made two gestures. The second study dealt with performance: both the sighted and blind participants were asked to execute a series of tasks and rated the ease of various types of actions.


Results:
On average, the blind participants' gestures had more strokes than the sighted participants'. The screen edge also saw more use in the gestures of the blind participants. Performance turned out to be roughly the same. Blind participants also tended to prefer multi-touch gestures as well as larger gestures.
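
Here is a hedged Python sketch of two of the simple gesture measures mentioned above, stroke count and edge use; the gesture representation and the edge margin are illustrative assumptions rather than anything taken from the paper.

```python
# Hypothetical gesture representation: a list of strokes, each stroke a
# list of (x, y) touch points on the screen.
def gesture_metrics(strokes, screen_w, screen_h, edge_margin=20):
    """Report the stroke count and whether any point falls near an edge."""
    def near_edge(x, y):
        return (x < edge_margin or y < edge_margin or
                x > screen_w - edge_margin or y > screen_h - edge_margin)
    uses_edge = any(near_edge(x, y) for stroke in strokes for x, y in stroke)
    return {"stroke_count": len(strokes), "uses_edge": uses_edge}

# A two-stroke gesture that starts at the left edge of a 480x800 screen.
example = [[(5, 400), (60, 400), (120, 400)], [(200, 100), (200, 300)]]
print(gesture_metrics(example, screen_w=480, screen_h=800))
```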


Contents:
The researchers explore the possibility that blind users have different needs than sighted users. They do this through two tests. One had the blind and sighted participants create gestures to perform tasks and observed the differences between them. The second tested the differences in performance between sighted and blind subjects.


Discussion:
This was a neat paper. I think one of the best areas of computer science is the one that uses technology to help people who have disabilities. Being able to create programs and devices made to help people who would otherwise have difficulty with them is one of the best applications of computer science I can think of.

Tuesday, November 15, 2011

Reading #27: Sensing cognitive multitasking for a brain-based adaptive user interface

References:
Sensing cognitive multitasking for a brain-based adaptive user interface by Erin Treacy Solovey, Francine Lalooses, Krysta Chauncey, Douglas Weaver, Margarita Parasi, Matthias Scheutz, Angelo Sassaroli, Sergio Fantini, Paul Schermerhorn, Audrey Girouard, and Robert J.K. Jacob.


Author Bios:
Erin Treacy Solovey, Francine Lalooses, Krysta Chauncey, Douglas Weaver, Margarita Parasi, Matthias Scheutz, and Robert J.K. Jacob are all at Tufts University.
Angelo Sassaroli is studying Biomedical Engineering at Tufts University.
Sergio Fantini is studying Biomedical Engineering at Tufts University.
Paul Schermerhorn is studying Computer Science at Indiana University.
Audrey Girouard studies at Queen's University in its School of Computing.



Summary:
Hypothesis:
That it is possible to detect and adapt to changes in multitasking by a user.


Methods:
The researchers classified three different types of multitasking: branching, dual-task, and delay. These are basically the same categories used in previous multitasking papers. They set a task that the subjects were required to accomplish with the help of a robot, and ran several different kinds of trials, each designed to test a different form of multitasking.


Results:
A lot of interesting data regarding levels of hemoglobin in the brain was collected. Ultimately they determined that different types of multitasking can in fact be detected, which means that they can also be adapted to.
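
As a rough sketch of the general approach (not the authors' actual pipeline), the hemoglobin time series could be sliced into windows and reduced to simple features, which a classifier trained on labeled branching, dual-task, and delay trials would then consume. The readings, window size, and feature choices below are all illustrative assumptions.

```python
# Hypothetical hemoglobin readings sampled over time from one channel.
def window_features(signal, window_size):
    """Slice the signal into non-overlapping windows and compute a mean
    and a crude slope for each; these per-window features are what a
    trained classifier would consume."""
    feats = []
    for start in range(0, len(signal) - window_size + 1, window_size):
        w = signal[start:start + window_size]
        feats.append({
            "mean": sum(w) / len(w),
            "slope": (w[-1] - w[0]) / (len(w) - 1),
        })
    return feats

readings = [0.10, 0.12, 0.15, 0.14, 0.18, 0.21, 0.22, 0.25]
print(window_features(readings, window_size=4))
```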


Contents:
This paper described the different ways a human can attempt to manage multiple tasks at once. Several participants were given a task to perform with a robot and were observed. The hemoglobin levels in their brains were recorded at given points, and the results were analyzed.


Discussion:
A really neat paper with some cool applications. This type of thing could really revolutionize the ways that we write programs. Being able to adapt to different situations and tailor a program directly to those needs could, I think, really do wonders for efficiency.

Reading #26: Embodiment in brain-computer interaction

References:
Embodiment in brain-computer interaction by Kenton O’Hara, Abigail Sellen, Richard Harper.  



Author Bios:
Kenton O'Hara is a senior researcher at Microsoft Research.
Abigail Sellen is a Principal Researcher at Microsoft Research.
Richard Harper is a Principal Researcher at Microsoft Research.


Summary:
Hypothesis:
That studying the whole body's interaction with computers can help us better understand key aspects of brain-computer interaction.


Methods:
This paper brings back the Mindflex game. The various subjects took the game home for a week to play in a setting that could act as a control. The researchers analyzed these sessions and used them to explain results based on gestures and body language. The four groups consisted of four members each and were chosen by a 'team captain'.


Results:
Several results were recorded. Body orientation was found to play a large part in the game. Changing body position was pretty strongly correlated with attempting to do various tasks. It was also found that people believed that concentrating on moving the ball in different ways helped them move it. Spectator participation played a part as well, positive or negative depending on whether the spectator was encouraging or discouraging success.


Contents:
Here the researchers identify a need to better understand the way that brains and computers interact. They test this with the Mindflex game, which is sent home with several people; the way they play and interact with it is observed and recorded. Several patterns emerged.


Discussion:
A neat paper, but I felt it was just a rehash of some of the earlier papers. No new technologies were developed for this paper; it simply observed the Mindflex game in a different way than the previous paper did. While this could be helpful, I really didn't personally find it interesting.

Thursday, November 10, 2011

Reading #28: Experimental Analysis of Touch-Screen Gesture Designs in Mobile Environments

References:
Experimental Analysis of Touch-Screen Gesture Designs in Mobile Environments by Andrew Bragdon, Eugene Nelson, Yang Li and Ken Hinckley


Paper presented at CHI 2011.


Author Bios:
Andrew Bragdon is a computer science PhD student at Brown University.
Eugene Nelson is a professor at Brown University.
Yang Li is a senior researcher at Google Research.
Ken Hinckley is a principal researcher at Microsoft Research.



Summary:
Hypothesis:
That gesture techniques used during various distracting activities such as walking can be more effective than the current soft-button paradigm. 


Methods:
The first tests they did were with activating gesture mode on the mobile device. They tried a series of methods, such as a rocker button on the side of the phone and a large soft button placed on the screen; users found both of these difficult to use while looking away from the phone. The final solution was to mount a hard button at the top of the phone. The distinct feel and location of the button made it easy for users to locate and press without looking down. This button was needed only for free-drawn gestures; bezel gestures activate gesture mode when the user begins a gesture at the edge of the screen. They also tested different kinds of gestures, paths and marks. To test these gestures, they had users interact with a device while doing various activities and under different levels of distraction: sitting and walking with moderate distraction, and sitting with attention-saturating tasks.
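
To make the bezel idea concrete, here is a small Python sketch of the activation decision; the margin value, screen size, and function name are my own illustrative assumptions, not details from the paper.

```python
# Illustrative values only; the paper does not specify these numbers.
BEZEL_MARGIN = 10  # pixels from any screen edge

def starts_gesture_mode(touch_x, touch_y, screen_w, screen_h,
                        hard_button_held=False):
    """Decide whether a touch-down should enter gesture mode."""
    on_bezel = (touch_x <= BEZEL_MARGIN or touch_y <= BEZEL_MARGIN or
                touch_x >= screen_w - BEZEL_MARGIN or
                touch_y >= screen_h - BEZEL_MARGIN)
    # Free-drawn gestures still require the dedicated hard button;
    # bezel-initiated gestures do not.
    return on_bezel or hard_button_held

print(starts_gesture_mode(3, 240, screen_w=480, screen_h=800))         # True
print(starts_gesture_mode(200, 400, screen_w=480, screen_h=800))       # False
print(starts_gesture_mode(200, 400, 480, 800, hard_button_held=True))  # True
```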


Results:
The surprising result in this paper was that users not only preferred gestures when looking away from the phone, but half the users also preferred them when using the phone normally. Specifically, when looking away from the phone, users preferred bezel gestures, citing as one reason the elimination of the button, which they saw as "an extra step." Under distracting conditions, bezel gestures significantly outperformed soft buttons.


Contents:
The researchers began by testing the various gesture methods in a controlled environment. They came to the conclusion that users preferred bezel gestures as their gesture method. They next tested the users' ability to perform tasks while looking away from the phone, under distracting conditions, and while walking and sitting. Bezel gestures were widely preferred in this instance. In the attention-saturating test, users were asked to sit and perform a task with the device while being actively distracted from that task. Bezel gestures, once again, outperformed soft buttons.


Discussion:
This paper was really pretty interesting. Mobile devices are designed to be used on the move, so it makes sense to research methods to do this more effectively. Being able to do tasks on a mobile device without having to devote your full attention to it could greatly increase a user's productivity.