Thursday, September 29, 2011

Reading #13: Combining Multiple Depth Cameras and Projectors for Interactions On, Above, and Between Surfaces

References:
Combining Multiple Depth Cameras and Projectors for Interactions On, Above, and Between Surfaces by Andrew D. Wilson and Hrvoje Benko.  


Published in UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology.


Author Bio:
Andrew Wilson got his undergraduate degree from Cornell University and his PhD from MIT. 
Hrvoje Benko is a researcher in the Adaptive Systems and Interaction group at Microsoft Research.


Summary:
Hypothesis:
Can multiple depth cameras and projectors be combined to support interactions on, above, and between surfaces, extending touch-style interaction into the space of an entire room?


Methods:
The prototype was demonstrated at a convention for three days, during which the authors observed many users and their interactions with the device.


Results:
Six people seemed to be the practical cap on simultaneous users. With more users, people blocked each other from the cameras so that gestures could not be recognized. Holding virtual objects also proved difficult for some users.


Contents:
LightSpace attempts to expand existing touch-interaction technology into a three-dimensional environment. "Everything is a surface" is a recurring theme, along with treating the entire room as a computer and noting that the body can, in fact, be a display. 2D images can be projected onto surfaces and bodies in the 3D space and interacted with by the users.
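
The geometric core of this idea is mapping depth-camera pixels into room coordinates and then deciding which virtual surface a hand or object is near. Below is a minimal sketch of that mapping, not the authors' code; the intrinsics, the camera-to-room matrix, and the thresholds are all assumptions for illustration.

```python
# A minimal sketch (not the authors' code) of the core LightSpace idea:
# back-project a depth-camera pixel into room coordinates, then test
# whether that 3D point lies near a virtual "surface".
# All names, calibration values, and thresholds below are assumptions.

import numpy as np

# Assumed depth-camera intrinsics (focal lengths and principal point).
FX, FY, CX, CY = 580.0, 580.0, 320.0, 240.0

# Assumed 4x4 extrinsic matrix mapping camera coordinates to room coordinates.
CAM_TO_ROOM = np.eye(4)

def depth_pixel_to_room(u, v, depth_m):
    """Back-project pixel (u, v) with depth in meters into room coordinates."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    p_cam = np.array([x, y, depth_m, 1.0])
    return (CAM_TO_ROOM @ p_cam)[:3]

def on_tabletop(p_room, table_z=0.75, tol=0.02):
    """Treat any point within `tol` meters of the table plane as 'touching'."""
    return abs(p_room[2] - table_z) < tol

# Example: a hand seen at pixel (400, 260) with 1.2 m of depth.
point = depth_pixel_to_room(400, 260, 1.2)
print(point, on_tabletop(point))
```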


Discussion:
I thought this was really neat. Any type of display where you interact directly with computer-created objects has always been fascinating to me. I believe the system they created was, for the most part, successful: they were able to create an environment in which people could successfully interact with objects. The fact that more users means more problems with obscured cameras doesn't seem like that big of an issue to me; it can be solved by simply adding more cameras, repositioning existing ones, things of that nature. Overall I would love to see something like this become more heavily researched.

Tuesday, September 27, 2011

Reading #12: Enabling Beyond-Surface Interactions for Interactive Surface with an Invisible Projection

References:
Enabling Beyond-Surface Interactions for Interactive Surface with an Invisible Projection by Li-Wei Chan, Hsiang-Tao Wu, Hui-Shan Kao, Ju-Chun Ko, Home-Ru Lin, Mike Y. Chen, Jane Hsu, Yi-Ping Hung.  

Published in the UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology.

Author Bios:
Li-Wei Chan is a PhD student at National Taiwan University.
Hsiang-Tao Wu, Hui-Shan Kao, and Home-Ru Lin are students at National Taiwan University.
Ju-Chun Ko is also a PhD student at National Taiwan University.
Mike Y. Chen, Jane Hsu, and Yi-Ping Hung are professors at National Taiwan University.

Summary:
Hypothesis:
By using infrared communication, it is possible to perform interactions beyond the surface of a display.

Methods:
To test this, a custom table was built containing a light projector, a layer of glass, and many infrared cameras. Another layer on top of that, a kind of projection layer, allowed the cleanest interaction between the user and the device.

Results:
Users tended to gravitate toward the static objects. They used the infrared system mainly to perform smaller operations such as selection. It became a way for them to better view an object but was incomplete as far as viewing 3D objects is concerned, and many users said they wished the light would function more like a mouse.

Contents:
The first step was for the researchers to actually build their device. They upgraded a DLP projector into an IR projector and placed their projection layer above the glass layer on the surface. The diffused-illumination method was used in tandem with the touch input method. They had three different devices, the i-m-lamp, i-m-camera, and i-m-flashlight, that worked together to create the image displayed.
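
To make the diffused-illumination part concrete, here is a hedged sketch of how touch points are typically extracted from an IR camera image in such setups: fingers on the surface reflect IR back to the cameras as bright blobs. This is my own illustration, not the paper's implementation; the frame source, threshold, and minimum blob size are assumptions.

```python
# A hedged sketch of diffused-illumination touch sensing: bright blobs in the
# IR image above a threshold are treated as touch points.

import numpy as np
from scipy import ndimage

def detect_touches(ir_frame, threshold=200, min_pixels=30):
    """Return (row, col) centroids of bright blobs in an 8-bit IR frame."""
    bright = ir_frame > threshold                 # candidate touch pixels
    labels, count = ndimage.label(bright)         # connected components
    touches = []
    for i in range(1, count + 1):
        ys, xs = np.where(labels == i)
        if len(ys) >= min_pixels:                 # ignore sensor noise
            touches.append((ys.mean(), xs.mean()))
    return touches

# Example with a synthetic frame containing one bright "finger" blob.
frame = np.zeros((480, 640), dtype=np.uint8)
frame[100:115, 200:215] = 255
print(detect_touches(frame))
```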

Discussion:
This was a neat paper. I think that ultimately, while they did not fully succeed in their attempt to create this 3D surface, they did make some important breakthroughs. As most devices are moving toward touch screens, this type of thing could have huge applications in the future. The ability to interact with a 3D object would open up whole new worlds of what can be done on a touch screen device.



Reading #11: Multitoe

References: Multitoe: High-Precision Interaction with Back-Projected Floors Based on High-Resolution Multi-Touch Input by Thomas Augsten, Konstantin Kaefer, Rene Meusel, Caroline Fetzer, Dorian Kanitz, Thomas Stoff, Torsten Becker, Christian Holz, and Patrick Baudisch.  


Published in UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology.


Author Bio:
Thomas Augsten is currently working on a masters degree at the University of Potsdam.
Konstantin Kaefer is also working on a masters from the University of Potsdam.
Christian Holz is working to receive his PhD from the University of Potsdam.
Patrick Baudisch is a professor in Computer Science at the Hasso Plattner Institute.



Rene Meusel, Caroline Fetzer, Dorian Kanitz, Thomas Stoff, and Torsten Becker are all students at the Hasso Plattner Institute.

Summary:
Hypothesis:
By using a larger surface and incorporating the feet as a way to interact, we can overcome size limitations on touch screens.
Methods:
First, data was collected on how users use their feet: buttons were pressed with the feet and various actions were performed as a result. Next, users were asked to choose, from a grid of buttons, which one should be pressed based on the current position of the foot. Responses to the location of the hotspot varied, and its placement caused some issues. Finally, users typed words with their feet on various keyboards.
Results:
Ultimately four different techniques stood out as the best ways of activating a button: tapping, stomping, jumping, and double tapping, with jumping being the most successful. Next, most users (18 of 20) felt that the arch of the foot should be included in selecting. Users disagreed about the position of the hotspot. And finally, we saw a decrease in typing accuracy as the size of the buttons on the keyboard decreased.
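
To give a flavor of how a system might tell some of these activation techniques apart, here is a toy sketch that labels foot-contact events as taps, double taps, or stomps from their pressure and timing. It is not from the paper, covers only a subset of the techniques, and all thresholds are invented for illustration.

```python
# A hedged sketch of distinguishing button-activation strategies from raw
# foot-contact events. Pressure and timing thresholds are assumptions.

from dataclasses import dataclass

@dataclass
class Contact:
    start: float          # seconds
    end: float            # seconds
    peak_pressure: float  # arbitrary sensor units

def classify(contacts, stomp_pressure=0.8, tap_max_s=0.25, double_gap_s=0.4):
    """Label a sequence of contacts as 'stomp', 'double tap', or 'tap'."""
    labels = []
    i = 0
    while i < len(contacts):
        c = contacts[i]
        duration = c.end - c.start
        nxt = contacts[i + 1] if i + 1 < len(contacts) else None
        if c.peak_pressure >= stomp_pressure:
            labels.append("stomp")
        elif (duration <= tap_max_s and nxt
              and nxt.start - c.end <= double_gap_s):
            labels.append("double tap")
            i += 1                      # consume the second tap
        else:
            labels.append("tap")
        i += 1
    return labels

print(classify([Contact(0.0, 0.1, 0.3), Contact(0.3, 0.4, 0.3)]))
# -> ['double tap']
```
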
Contents:
This paper attempts to solve some problems that exist with touch screen surfaces, such as restricted size. By incorporating a larger surface area and using feet as a basis for gestures, we see some possible ways to solve this problem. We also see that different users use their feet in different ways, which leads to additional problems.
Discussion:
Ultimately this paper was uninteresting to me. I don't think it really explored anything that groundbreaking, especially considering that we just read a paper about foot gestures and touch surfaces. This doesn't really, to me, have any particularly interesting applications. Because much of touch screen use, and computing in general, seems to be moving to an 'on-the-go' type of paradigm, the use of a foot gesture system just doesn't seem as practical.

Thursday, September 22, 2011

Paper Reading #10 : Sensing foot gestures from the pocket

References:
Sensing foot gestures from the pocket by Jeremy Scott, David Dearman, Koji Yatani, and Khai N. Truong. 


Published in the UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology.

Author Bio:
Jeremy Scott studied Pharmacy and Toxicology at the University of Western Ontario.
David Dearman is currently in the process of getting a PhD in Computer Science from the University of Toronto.
Koji Yatani is a CS PhD student at the University of Toronto focusing on CHI.
Khai Truong is an assistant professor in computer science at the University of Toronto.


Summary:
Hypothesis:
Foot motion can be sensed and used as a form of input to a computer, even by a device carried in the pocket.


Methods:
Cameras were used to capture the various movements of subjects' feet. Gestures tested included:
  • Dorsiflexion: four targets placed between 10° and 40° inclusive
  • Plantar flexion: six targets placed between 10° and 60° inclusive
  • Heel & toe rotation: 21 targets (each), with 9 internal rotation targets placed between -10° and -90° inclusive, and 12 external rotation targets placed between 10° and 120° inclusive
Participants used a mouse to respond to prompts and to indicate the beginning and end of each foot motion. (A toy sketch of selecting a target by measured angle follows below.)
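
As a small illustration of how a recorded angle can be scored against the discrete targets listed above, here is a sketch that picks the closest target and reports the angular error. It is mine, not the study software; the target spacing mirrors the description, but exact values in the study may differ.

```python
# A hedged sketch of target selection by foot-rotation angle.

def make_targets(start, stop, count):
    """Evenly spaced angular targets from start to stop, inclusive."""
    step = (stop - start) / (count - 1)
    return [start + i * step for i in range(count)]

# 12 external heel-rotation targets between 10 and 120 degrees inclusive.
EXTERNAL_ROTATION = make_targets(10.0, 120.0, 12)

def select_target(measured_angle, targets=EXTERNAL_ROTATION):
    """Return (chosen_target, absolute_error_in_degrees)."""
    chosen = min(targets, key=lambda t: abs(t - measured_angle))
    return chosen, abs(chosen - measured_angle)

print(select_target(47.0))   # -> (50.0, 3.0)
```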

Results:
The side of the foot had the greatest accuracy, followed by the front and finally the back. Targets located at the center of the foot's range of motion could be selected much more quickly than those at the edges, and users seemed to prefer motions involving the heel.

Contents:
This study used subjects to explore ways that gesture recognition can be applied to the feet. The approach was then ported to a mobile app that the user could carry around on a mobile device. One of the bigger issues was that it was difficult to distinguish between motions that were meant to be made and those made by accident.

Discussion:
This was a pretty neat article. We have read a lot about gesture recognition, but I personally would never have thought to apply it to feet. I think this could have neat applications for people who are handicapped, or without arms or hands, to perform gestures. Using foot recognition would be a great way to let these users still interact well with various gesture-related devices. I would be interested to see how this information is applied in the future.

Paper Reading #9: Jogging Over a Distance Between Europe and Australia

References:
Jogging Over a Distance Between Europe and Australia by Florian Mueller, Frank Vetere, Martin Gibbs, Darren Edge, Stefan Agamanolis, Jennifer Sheridan. 

Published in the UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology.

Authors:
Florian Mueller is a researcher at Stanford University.
Frank Vetere is a senior lecturer at the University of Melbourne.
Martin Gibbs is also a lecturer at the University of Melbourne.
Darren Edge is a researcher in the field of CHI for Microsoft, obtaining his degree from Cambridge.
Stefan Agamanolis is the director of a research institute at Akron Children's Hospital.
Jennifer Sheridan is a senior consultant and director of user experiences at BigDog Interactive.

Summary:
Hypothesis:
Adding a social aspect and the ability to communicate over distances to jogging could make it more enjoyable.

Methods:
The subjects jogged for 25 to 45 minutes while communicating with a friend. They were then interviewed and asked various open-ended questions to describe their experience.

Results:
Ultimately this was deemed a success. Participants noted that they were more motivated to try for better results when they knew they were competing with the person on the other end of the line. The positional audio and heart-rate monitor greatly helped it feel like an experience where you were actually running with a friend.
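
As a toy illustration of the spatialized-audio idea, here is a sketch that places the partner's voice ahead of or behind the listener based on how hard each runner is working relative to their own target heart rate. This is my own sketch, not the authors' system, and the scaling constants are assumptions.

```python
# A hedged sketch of heart-rate-driven audio positioning.

def effort(current_hr, target_hr):
    """Effort as a fraction of the runner's personal target heart rate."""
    return current_hr / target_hr

def partner_audio_offset(my_hr, my_target, partner_hr, partner_target,
                         max_offset_m=5.0):
    """Positive = partner sounds ahead of me, negative = behind."""
    diff = effort(partner_hr, partner_target) - effort(my_hr, my_target)
    return max(-max_offset_m, min(max_offset_m, diff * 20.0))

# I am at 150 bpm against a 160 bpm target; my partner is at 165 against 160.
print(partner_audio_offset(150, 160, 165, 160))   # partner sounds ahead
```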

Contents:
This paper attempted to explore the social aspects of running. The writers saw that physically running with another person is a great way to motivate oneself, and they attempted to replicate that over distance. Overall they explored the different ways that running can create bonds between people. The paper focused mainly on making running more enjoyable, not on making it easier or more effective.

Discussion:
As someone who has just, in the last few months, started running consistently, I can say that I really don't personally believe this would help at all. I have never felt the urge to run along with someone, nor have I ever felt that when I did run with another person it was any more enjoyable. While this is a neat concept, I don't believe it has any practical application. It seemed to me to be a kind of "do it because we can" type of thing rather than an attempt to develop something that was actually useful.

Thursday, September 15, 2011

Ethnography Report #0

Quidditch

Preconceptions:
So I must admit that Quidditch is not something I have ever really thought about. I had one friend who graduated last semester who used to play all the time, but honestly I would never really listen. Going into it, I guess I believed that for whatever reason the people playing it would not be the kinds of people who usually play sports; that they would be unathletic or something of that nature. I have no idea why I thought that, as it makes no sense, but regardless that's just what my mind chose to believe. It turns out that is completely wrong. The sport, and it is a sport, requires a TON of physical activity. It is a lot of running coupled with pseudo-tackling. It's basically a combination of soccer and dodgeball, only on brooms. This leads into my next assumption, which was that it was not a rough game. This is definitely not true. The two girls I talked to most about the club and the game were sitting out due to injuries received after only about 15 minutes of playing. That was rather unexpected for me. I also thought that it was much less of a big deal than it is. There is a ton of intercollegiate play that goes on, and a huge tournament (the World Cup) is held every year in New York. Overall I am now simply open to the fact that it is a real sport and the people who play it take it very seriously.

Paper Reading #8: Gesture Search: A tool for fast mobile data access

Reference Information
Gesture Search: A tool for fast mobile data access by Yang Li.  

Published in the UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology.

Author Bio:
Yang Li is a researcher at Google who was previously at the University of Washington. He earned his PhD in computer science in China.


Summary:
Hypothesis:
That letting users search by drawing gestures on a touch surface can be much more effective than existing GUI-based methods of data access; specifically, GUI-oriented touch input should show less variation than gestures, which makes the two kinds of input distinguishable and lets gestures be recognized reliably.


Methods:
Because this is a comparison against currently existing technologies, the first step was to collect data on both. He had users perform various GUI interactions as well as gesture actions and then compared the results. He then deployed the tool on Android in an attempt to gather more data from a wider pool of users, as well as to see how it transitioned to a mobile device.


Results:
The results were, overall, pretty much what the paper expected. The GUI interactions did far better than the gesture ones, although there were several aspects that, if expanded on, could let Gesture Search evolve to become better. It was also interesting that the data showed 84% of searches involved only one gesture, and 98% involved two gestures or fewer. This shows that searches could be done pretty easily, with minimal input from the users.


Contents:
Gesture Search is a way for users to better interact with touch devices, both mobile devices and something like a Surface. It attempts to use gestures to gain information from the user rather than the typical GUI interactions we currently use today. Ultimately this is meant, I believe, to be a type of text input system: it would be great to essentially write out what you're trying to find rather than having to type it out. The tests with Android users showed that, while this is a cool concept, it is still not as well developed as the current GUI system.
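
To make the search idea concrete, here is a hedged sketch (not Yang Li's implementation) of ranking data items against handwritten-gesture recognition output: each drawn stroke yields a few candidate letters with confidences, and items are scored by how well their leading letters match. The recognizer output, names, and scoring rule are invented.

```python
# A hedged sketch of gesture-based search over a contact list.

# Faked recognizer output for two gestures: letter -> confidence.
gesture_candidates = [
    {"a": 0.7, "o": 0.2, "q": 0.1},   # first stroke
    {"n": 0.6, "m": 0.3, "h": 0.1},   # second stroke
]

contacts = ["Anna Smith", "Andrew Wilson", "Amy Chen", "Omar Nelson"]

def score(item, candidates):
    """Product of the confidences of the item's leading letters."""
    s = 1.0
    word = item.lower().replace(" ", "")
    for i, letter_probs in enumerate(candidates):
        if i >= len(word):
            return 0.0
        s *= letter_probs.get(word[i], 0.0)
    return s

ranked = sorted(contacts, key=lambda c: score(c, gesture_candidates),
                reverse=True)
print(ranked[:2])   # items starting with "an" rank first
```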


Discussion:
As an Android user, I could see how this software could be very useful. Many tasks on smartphones currently require a lot of input or typing, and having a way to simplify that process would be a huge advantage. It is about getting as much from your input as possible and using it most effectively. While the current state of smartphone UI seems pretty solid, something like this is pretty out of the box and has the potential to become more widely used.

Paper Reading #7: Performance Optimizations of Virtual Keyboards for Stroke-Based Text Entry on a Touch-Based Tabletop

Reference Information:
Performance Optimizations of Virtual Keyboards for Stroke-Based Text Entry on a Touch-Based Tabletop by Jochen Rick. 

Published in the UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology.

Author Bio:
Jochen Rick currently works at Saarland University as a faculty member. He has a PhD in computer science which he obtained from Georgia Tech.

Summary:
Hypothesis:
The layout of the keyboard plays a huge role in the effectiveness and speed of user input on a touch screen device.

Methods:
There were 8 test subjects who performed various tasks, such as drawing through various points, in order to help Dr. Rick obtain data on where to best place letters.

Results:
As expected, the QWERTY keyboard underperformed. This is expected because the layout was not designed with Swype-style input in mind. His Hexagon OSK, Square OSK, and OPTI II keyboard layouts provided much better input speeds.

Contents:
Obviously the purpose of this paper was to correct deficiencies in currently existing keyboard layouts. While the QWERTY keyboard has been great for 'normal' typing, it has some problems in a mobile or touch screen environment. He documents his data from the tests and explains how all of that information is applied to an algorithm that creates a more effective layout for stroke-based entry.
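
As a simplified picture of the kind of cost a layout optimizer minimizes, here is a sketch that measures how far a finger travels when stroking a word as one continuous path through its letters on two layouts. This is my illustration, not the paper's model; the key coordinates are rough stand-ins, not real layout measurements.

```python
# A hedged sketch of comparing layouts by stroke-path length.

import math

# Toy key centers (x, y) for a few letters on two hypothetical layouts.
QWERTY_LIKE = {"t": (4, 0), "h": (5, 1), "e": (2, 0)}
COMPACT =     {"t": (0, 0), "h": (1, 0), "e": (0, 1)}

def stroke_length(word, layout):
    """Total straight-line distance of the stroke path through the word."""
    total = 0.0
    for a, b in zip(word, word[1:]):
        (x1, y1), (x2, y2) = layout[a], layout[b]
        total += math.hypot(x2 - x1, y2 - y1)
    return total

for name, layout in [("qwerty-like", QWERTY_LIKE), ("compact", COMPACT)]:
    print(name, round(stroke_length("the", layout), 2))
# Shorter total path suggests faster stroke-based entry for that layout.
```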

Discussion:
This article was awesome. I use Swype on my Android phone currently and have always thought that the layout of the keyboard was not at all conducive to easily entering text. A new layout would do wonders for quick and accurate text entry. A new arrangement of letters also means it could be easier to resolve what a word is supposed to be, rather than simply narrowing it down to 4 or 5 possibilities, which is what currently happens. Overall a very interesting read, and hopefully something that will see more development in the future.

Monday, September 12, 2011

TurKit: Human computation algorithms on Mechanical Turk

Reference Information:
TurKit: human computation algorithms on mechanical turk by Greg Little, Lydia B. Chilton, Max Goldman, and Robert C. Miller.

Published in the UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology.

Author Bios:
Greg Little works in the CSAIL lab at MIT.
Lydia Chilton is a graduate student at the University of Washington.
Max Goldman is a graduate student at MIT who has also studied at the Israel Institute of Technology.
Robert Miller works in the EECS department at MIT as an associate professor.

Summary:
Hypothesis:
That TurKit will expand on the human computation platform Mechanical Turk and make improvements to it.

Methods:
The scripts were the first thing the paper discussed. TurKit extends JavaScript and allows programming against the MTurk platform. It also allows for crash-and-rerun programming, which helps in the debugging process. Next the authors discuss the interface: the main interface is online and is how the user communicates with the system.
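
Here is a hedged Python sketch of the crash-and-rerun idea TurKit is built on (TurKit itself extends JavaScript): results of expensive, nondeterministic steps are memoized to disk, so a rerun after a crash replays recorded results instead of reposting work. The filenames and the fake task function are my own assumptions.

```python
# A hedged sketch of crash-and-rerun memoization.

import json, os

DB_PATH = "memo.json"
_memo = json.load(open(DB_PATH)) if os.path.exists(DB_PATH) else {}

def once(key, expensive_step):
    """Run expensive_step only the first time; replay its result afterwards."""
    if key not in _memo:
        _memo[key] = expensive_step()
        with open(DB_PATH, "w") as f:   # persist before continuing
            json.dump(_memo, f)
    return _memo[key]

def post_hit_and_wait(prompt):
    # Stand-in for posting a task to MTurk and blocking on the answer.
    return input(f"(pretend MTurk worker) {prompt} ")

# Iterative improvement loop: each completed round survives a crash and rerun.
text = "a blurry sentence"
for i in range(3):
    text = once(f"improve-{i}", lambda: post_hit_and_wait(f"Improve: {text!r}"))
print(text)
```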

Results:
Many different things were discovered in testing. Paragraph length played a big part in how people made improvements. The text recognition was a huge success, with near-perfect end results. TurKit also seems effective for psychophysics experimentation, since calls to MTurk can be embedded within a larger application. Much of the success of this experiment is due to the crash-and-rerun programming model. It made it much easier for users to write scripts, but it had its flaws: things such as unclear details hindered several users.

Contents:
TurKit is a toolkit designed to expand on human computation services. It provides a way to write and test scripts that can stand up to the long periods of time they need to run. It also adds support for incremental programming and print-line debugging. Overall it positively expands on Mechanical Turk.

Discussion:
This was a relatively uninteresting paper to me. While I think what they did was neat, overall it didn't grab my attention. The improvement they made with their crash-and-rerun programming method seemed like an obvious addition to me, and while the actual implementation of it might have been impressive, the premise was still obvious. One of the things I did find interesting was the testing of blurry text recognition. The fact that after several iterations it became so accurate was really neat, and I think it probably has some kind of application to reCAPTCHA images or something of that nature.

Thursday, September 8, 2011

A Framework for Robust and Flexible Handling of Inputs with Uncertainty

Reference Information:
A Framework for Robust and Flexible Handling of Inputs with Uncertainty by Julia Schwarz, Scott E. Hudson, Jennifer Mankoff, and Andrew D. Wilson.
Presented at UIST '10, October 3-6, 2010, New York, New York, USA.



Authors:
Julia Schwarz is a PhD student in Carnegie Mellon's CHI lab.
Scott Hudson is a professor in Carnegie Mellon's CHI lab.
Jennifer Mankoff is an associate professor in Carnegie Mellon's CHI lab.
Andrew Wilson is a senior researcher at Microsoft Research.


Summary:
Hypothesis:
That a framework can be created to handle new, uncertain inputs more effectively and in an easily manipulated fashion.


Methods:
The authors demonstrated their framework through six different studies. The first three were designed to test ambiguity about what a user is interacting with via touch. The next two tried to create smarter text entry, and the final one an improved UI for the motor impaired.


Results:
The selection of buttons was a success. The users were able to easily adapt to the new interface, and any ambiguities in their interactions were resolved by the system. The next set of tests showed that very little extra work would be needed to implement something like that text input. The final test showed a great decrease in the number of errors compared to the original.


Contents:
One of the big focuses of this paper is uncertainty, and how current systems do not manage it effectively or correctly. This is demonstrated by the development of a system that seeks to manage uncertainty more effectively, along with several accompanying tests that show it does. It does this by keeping a list of the possible interpretations of each input and setting up a scoring system to determine how to correctly resolve the conflicts that arise.
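
Here is a hedged sketch (my own, far simpler than the paper's framework) of that core idea: keep several weighted interpretations of an uncertain touch alive, and only commit to an action once one interpretation is clearly the most likely. The button geometry and decision threshold are made up.

```python
# A hedged sketch of scoring and resolving uncertain touch input.

import math

buttons = {"save": (100, 50), "delete": (140, 50)}   # button centers

def interpretations(touch_xy, sigma=20.0):
    """Score each button by a Gaussian falloff from the touch point."""
    tx, ty = touch_xy
    scores = {name: math.exp(-((tx - x) ** 2 + (ty - y) ** 2) / (2 * sigma ** 2))
              for name, (x, y) in buttons.items()}
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

def resolve(touch_xy, commit_threshold=0.8):
    probs = interpretations(touch_xy)
    best = max(probs, key=probs.get)
    if probs[best] >= commit_threshold:
        return f"fire '{best}'"
    return f"ambiguous: {probs}"          # defer, ask, or wait for more input

print(resolve((105, 52)))   # close to 'save' -> commits
print(resolve((120, 50)))   # midway -> remains ambiguous
```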


Discussion:
I think this is a great paper. In particular, I believe the test on helping the motor impaired is a great example of the real-life applications this type of system could have. While I do believe the paper was held back by the fact that it only looked at six different tests, I am convinced that if it were further elaborated and researched this would be a great piece of technology. It really gave good insight into how errors in input are handled in touchscreen environments and the problems that exist with those errors. Overall, I would like to read further about the research done on this particular piece of technology.

Gestalt

Authors:
Kayur Patel is a computer science PhD student at the University of Washington.
Naomi Bancroft is a computer science undergraduate at the University of Washington.
Steven M. Drucker is a Microsoft researcher.
James Fogarty is an assistant professor at the University of Washington.
Andrew Ko is also an assistant professor at the University of Washington.
James Landay is a professor of computer science at the University of Washington.

Summary:
Hypothesis:
The authors believe they can create a system that applies machine learning in a way that is different from traditional programming: it focuses on the learned behavior of a machine rather than on describing a program's behavior the way traditional programming does.

Methods:
The subjects had to create and run scripts that connected to MATLAB and retrieved data from it. The two main tasks focused on in the paper were sentiment analysis of movie reviews and gesture recognition (deciphering a gesture mark). Because one of the big focuses of the paper was debugging, errors were also hidden within the code, and the subjects were supposed to find and remove them.

Results:
The Gestalt environment provided a much easier way to locate and fix bugs than the MATLAB baseline. Participants found it much easier to work with the visualization and scripting features.

Contents:
This environment is designed to support machine learning development. It supports the implementation of a classification pipeline as well as analysis of the data moving through that pipeline. Tasks such as sentiment analysis and gesture recognition were a huge part of showing this platform to be an improvement over the baseline, which is great because these are two areas in which it has the most practical application in the real world.
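
To illustrate what "implementing a classification pipeline plus analyzing the data moving through it" can look like, here is a toy sketch (not Gestalt itself) where each stage's intermediate result is kept so a developer can see where a misclassification crept in. The word lists and examples are invented.

```python
# A toy, inspectable sentiment-classification pipeline.

POSITIVE = {"great", "enjoyed", "love"}
NEGATIVE = {"boring", "awful", "hate"}

def featurize(review):
    tokens = review.lower().split()
    return {"pos": sum(t in POSITIVE for t in tokens),
            "neg": sum(t in NEGATIVE for t in tokens)}

def classify(features):
    return "positive" if features["pos"] >= features["neg"] else "negative"

def pipeline(review):
    """Run the pipeline but keep every intermediate artifact for inspection."""
    features = featurize(review)
    label = classify(features)
    return {"raw": review, "features": features, "label": label}

for review in ["I enjoyed this great movie", "boring and awful"]:
    print(pipeline(review))   # inspecting 'features' explains each 'label'
```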

Discussion:
These researchers attempted, and succeeded at, creating a system that makes machine learning debugging less tedious. Because such a large number of subjects preferred Gestalt over the baseline, I believe they succeeded in their goal. They also have a great new resource for the fields of sentiment analysis and gesture recognition. I think a more expansive paper on this system would be a great thing to read; that way I could get a better idea of some of the more specific details that went into creating the system and not be limited to simple bug testing.

Tuesday, September 6, 2011

Pen + Touch = New Tools

Authors:
Ken Hinckley: Microsoft Researcher
Koji Yatani: Graduate student of University of Toronto
Michel Pahud: Microsoft Researcher
Nicole Coddington: Senior designer at HTC, previously with Microsoft
Jenny Rodenhouse: Works on Microsoft XBox
Andy Wilson: Microsoft Researcher
Hrvoje Benko: Researcher from Microsoft
Bill Buxton: Microsoft Researcher

Summary:
Hypothesis:
The test here was to see whether users would react positively to interacting with a screen using both a pen and touch. This would allow for many more possible features and ways to manipulate a touch screen interface.

Methods:
Subjects were asked to paste storyboard clippings into a notebook to make a final storyboard. These actions were observed, and many of the methods used in this process were taken into account when creating the project, which was implemented on a Microsoft Surface.

Results:
Many unexpected issues were found. One of the main ones was the way users would hold clippings in the non-dominant hand, or in the same hand as the pen. This presented some difficulty in how to best replicate that process in an intuitive way. Initially the users were confused about how to do certain tasks, but once shown, they seemed to pick them up easily. Stapling was a very popular feature. Tucking the pen under the hand is something that did not translate well to the Surface; users had problems deciding where best to place the pen when it was not actively being used.

Contents:
The researchers in this paper attempted to combine pen and touch commands, which allowed them to explore many different options for touch commands. It was also a good study of how users respond to being required to use commands and motions that are foreign to them, and how quickly they adapt. Because the system was designed from observations of people working with physical notebooks, it very strongly resembles a sheet of paper in how it operates. Users can move between pages, cut out sections of a page, or even 'staple' a page so that it cannot be turned.
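
Here is a rough sketch (mine, not the authors' system) of the division of labor this kind of interface explores: the pen writes, touch manipulates, and pen plus touch together produce new tools. The event names and the particular combinations are assumptions.

```python
# A hedged sketch of routing pen and touch input to different tools.

def handle_input(pen_down, touch_points, held_object=None):
    """Decide which tool the current pen/touch combination maps to."""
    if pen_down and held_object and touch_points:
        # e.g. holding a clipping with a finger while stroking it with the pen
        return f"cut '{held_object}' along the pen stroke"
    if pen_down and not touch_points:
        return "ink: write or draw with the pen"
    if touch_points and not pen_down:
        if len(touch_points) >= 2:
            return "pinch: zoom or rotate the page"
        return "drag: move object under the finger"
    return "idle"

print(handle_input(pen_down=True, touch_points=[]))
print(handle_input(pen_down=False, touch_points=[(10, 20), (40, 60)]))
print(handle_input(pen_down=True, touch_points=[(10, 20)], held_object="photo"))
```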

Discussion:
Here we find a great example of innovation in the use of touch interfaces. The authors attempted to recreate, digitally, a medium of information collection that has been used through the ages, and several things were observed. First, there are great benefits to being able to use a piece of technology like this: many things such as erasing and moving sections of text become much easier. Organizing notes and being space-efficient were also much easier to accomplish on the platform, and I believe this could one day be common on tablet PCs. If this type of technology were to take off, then we would have much better availability of online notes from professors. We would also be able to collaborate on problems and projects much more easily if this were adapted to, say, Google Docs-style simultaneous editing. Many different possibilities exist for this project, and hopefully it will become more common as the years go on.

Hands-On Math

Reference Information:
Hands-On Math: A page-based multi-touch and pen desktop for technical work and problem solving by Robert Zeleznik, Andrew Bragdon, Ferdi Adeputra, and Hsu-Sheng Ko.

Authors:
All of the authors are associated with Brown University.

Summary:
Hypothesis:
Can we use computer algebra tools to help users learn and interact more efficiently?

Methods:
Undergraduates from Brown were recruited. They used the interface, which runs on a Microsoft Surface, to perform calculations like derivatives and graphing, as well as various techniques for manipulating individual pages. They also tested gestures: under-the-rock menus, which only appear when necessary, and various pen gestures, such as the one needed to delete a page, were all things the users tried.

Results:
Overall the feedback was good. Participants indicated that they would be interested in using technology like this, but only if it came in a portable form such as a tablet. Many students would attempt to create their own way of performing certain tasks before being shown a new, more effective way; they would then easily pick up completing the task that way. One of the biggest advantages was the ability to manipulate math problems. Many students liked the idea of a step-by-step learning process when doing their calculations.

Contents:
Hands-On Math attempts to combine the conveniences of paper and pen with some of the technologies available to us through computer algebra systems (CAS). This allows a user to get answers more quickly than they would otherwise while remaining easy to use and user friendly. The user can interact with either a whiteboard, which simply has an open area for drawing or manipulating text and equations, or a paged system, which lets the user be more effective with space management and organization. The pen tool is combined with various gestures in an attempt to create user-friendly and intuitive motions and controls.
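
Here is a small sketch of the kind of computer-algebra backend such a system could call into. It uses SymPy rather than whatever CAS Hands-On Math actually uses, so treat it as an illustration of the idea, not the paper's code.

```python
# A hedged sketch of a symbolic, step-by-step calculation backend.

import sympy as sp

x = sp.symbols("x")
expr = x**3 + 2 * x

derivative = sp.diff(expr, x)          # symbolic derivative: 3*x**2 + 2
at_two = derivative.subs(x, 2)         # evaluate at x = 2 -> 14

# A page-based UI could show each of these as a separate worked step.
print(expr, "->", derivative, "->", at_two)
```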


Discussion:
This is a really great piece of technology. One of the biggest barriers in the past to doing things such as writing or solving equations on a virtual surface has been a feeling of awkwardness: the delay from input, or the lack of space. All of these are factors contributing to the continued use of pen and paper over electronic note-taking platforms. This project could help eliminate these flaws and provide a new system for taking notes and doing mathematical calculations. Many different levels of users could benefit from this system. Older students would be able to take advantage of more advanced features such as plotting graphs and making tables, while younger students would be able to more easily interact with things they are learning about via pictures or videos.

Thursday, September 1, 2011

Imaginary Interfaces

Reference Information:
Imaginary Interfaces by: Sean Gustafson, Daniel Bierwirth and Patrick Baudisch


Authors:
All the authors currently conduct research at the Hasso Plattner Institute in Germany.


Summary:
  • Hypothesis: This paper has one quantitative hypothesis: that "participants would perform fewer Graffiti recognition errors than reported by Ni and Baudisch." The main point is that the user does not need a screen in order to interact with a device. Therefore we could have imaginary screens that represent the screen to the user, which would allow for interaction without any physical device being touched, and we could do this with a relatively low error rate.
  • Methods: They constructed various imaginary interfaces and recruited students and people from off campus. The three chosen tests were Graffiti, Repeated Drawing, and Multi-Stroke Drawing. While the initial test setup was not enough to correctly capture user input gestures, after adjusting various things they were able to analyze their results. The tests involved things such as a user needing to draw Graffiti characters or find points on a grid. Each test lined up very well with the hypothesis.
  • Results: Ultimately the team concluded that they were pretty close to correct with their hypotheses. They had about a 5.5% error rate in recognizing user input, much lower than other methods. Fingertips were concluded to be the most accurate locations to use, and there is a very large difference between the rotation and stay conditions in the experiment.
  • Contents: The removal of a screen entirely from the computer system creates some unique issues. When you do not have a tactile surface on which to conduct your input and output, many different issues arise; environmental factors as well as other users could very negatively impact this type of interface. Ultimately this aims to create a technology that is not only useful but also very unobtrusive to the user on a daily basis. (A rough sketch of the screen-less coordinate idea appears below.)
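
The sketch below shows the geometric core of a screen-less interface: project the tracked fingertip onto a 2D plane whose origin and axes are set by the user's other hand, yielding "imaginary screen" coordinates a gesture recognizer can consume. This is my own illustration, not the paper's code; the tracking data is faked and the frame vectors are assumptions.

```python
# A hedged sketch of mapping mid-air fingertip positions into a 2D frame.

import numpy as np

def imaginary_coords(fingertip, origin, x_axis, y_axis):
    """Express a 3D fingertip position in the 2D frame (origin, x_axis, y_axis)."""
    x_axis = x_axis / np.linalg.norm(x_axis)
    y_axis = y_axis / np.linalg.norm(y_axis)
    rel = np.asarray(fingertip, float) - np.asarray(origin, float)
    return float(rel @ x_axis), float(rel @ y_axis)   # drop the off-plane part

# Faked tracker data: the non-dominant hand defines the frame in mid-air.
origin = [0.2, 1.1, 0.5]
x_axis = np.array([1.0, 0.0, 0.0])    # along the thumb, say
y_axis = np.array([0.0, 1.0, 0.0])    # along the index finger, say

stroke = [[0.25, 1.15, 0.52], [0.30, 1.20, 0.49]]   # fingertip samples
print([imaginary_coords(p, origin, x_axis, y_axis) for p in stroke])
# -> 2D points a Graffiti-style recognizer could consume
```
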
Discussion:
This whole article was amazing. The concept of being able to interact with a device without physically touching it is something that has been immortalized in science fiction in various forms for years. While this is still seemingly in its early developmental stages, it could have some great uses in the future. One of the faults, I believe, is that at this time this is a far less practical system than the current use of surfaces to interact. While this may even evolve to become cheaper and more practical than what we use currently, will it evolve at a rate at which it can outpace surface interaction becoming even cheaper and more convenient? It seems like a race of sorts. We also lose that tactile feedback that users seem to love so much. Without the ability to alert the user of something via a vibration or something of that sort, we would have to resort to various visual cues, far less effective than something tactile. Ultimately this is going to come down to getting a device that can consistently receive and correctly interpret user input, as well as being cheap enough to distribute to the masses, should this be applied to some kind of mobile device.

On Computers

Reference Information:
Minds, Brains, and Programs by John R. Searle
Aristotle, On Plants, in The Revised Oxford Translation, edited by Jonathan Barnes, Volume Two.


Author Bio:
John Searle is a professor of philosophy at the University of California, Berkeley, and was born in Denver, Colorado. He attended both the University of Wisconsin-Madison and Oxford for his education. He joined Berkeley in 1959 and was the first tenured professor to join the Free Speech Movement. His early works involved "Speech Acts" and studying what creates the rules of languages. He later moved on to studying artificial intelligence and analyzing what consciousness would mean for a machine.


Summary:

  • Hypothesis: That computers do not have the capability to 'learn' or 'understand' a task or skill.
  • Methods: Using his analogy of the Chinese Room, Dr. Searle creates for us an environment where we, the reader, along with many different pieces of paper, take the part of the computer processor. By analyzing the process by which we would translate incoming Chinese symbols into English output, or vice versa, we are left to determine whether this is enough to consider ourselves as 'knowing' Chinese.
  • Results: He is able to construct a solid argument against a computer's understanding of a task.
  • Contents: Dr. Searle argues that, although you can construct a system where you put in an input and consistently get correct output, that does not necessarily mean the system understands the task.
Discussion:

This article was a very interesting read. The point was extremely well presented in a very solid analogy, and I completely agree with Searle and really enjoyed the way he presented his argument. Whatever happens, ultimately computers are created by humans; they will always simply be following instructions that we give them. The concept of imparting sentience upon another being is something outside the realm of artificial intelligence, I think. It has less to do with receiving input and producing output correctly, and more to do with creating a whole environment for information to be received and interpreted the way the human brain does, which is a very abstract concept. If we were to assemble a human body and give it instructions, would that still be a machine? Or would it be a person? These types of questions begin to crop up, and the line between independent thought and instruction sets gets blurred. While we can use our current approach to simulate understanding and learning, it is not truly either of those things. One of the big weaknesses in this paper, I believe, was that it did not address future advances in technology and where that would leave us in terms of an intelligent machine. He simply touches on the fact that with our current technologies, we cannot create a learning machine. While fine for the purpose of his argument, it would have strengthened it to provide some kind of counterexample of a hypothetical machine that really does understand tasks. Artificial intelligence is the next biggest field in computer science, and so much can be accomplished by teaching a computer to learn and understand a task. This paper should have huge impacts on how people approach making AI in the future.