Monday, September 12, 2011

TurKit: Human Computation Algorithms on Mechanical Turk

Reference Information:
TurKit: Human Computation Algorithms on Mechanical Turk, by Greg Little, Lydia B. Chilton, Max Goldman, and Robert C. Miller.

Published in UIST '10: Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology.

Author Bios:
Greg Little works in the CSAIL lab at MIT.
Lydia Chilton is a graduate student at the University of Washington.
Max Goldman is a graduate student at MIT who has also studied at the Israel Institute of Technology.
Robert Miller is an associate professor in the EECS department at MIT.

Summary:
Hypothesis:
That TurKit will build on Mechanical Turk, the human computation platform, and improve how human computation algorithms are written and run on it.

 Methods:
The paper first discusses scripting. TurKit extends JavaScript with an API for programming against the MTurk platform, and it supports crash-and-rerun programming, which helps in the debugging process because a script can simply be rerun from the beginning after a failure. The paper then describes the interface: the main interface is a web-based one through which the user writes scripts and communicates with the toolkit.
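To make the crash-and-rerun idea concrete, here is a minimal sketch of how such a model can work. This is my own illustration, not TurKit's actual API: the once() wrapper, the trace file, and the postTask/getAnswer helpers are hypothetical stand-ins for the real calls a TurKit script would make to MTurk.

var fs = require("fs");

var DB_FILE = "trace.json";                       // hypothetical persistent trace
var trace = fs.existsSync(DB_FILE)
  ? JSON.parse(fs.readFileSync(DB_FILE, "utf8"))
  : [];
var step = 0;

// once(fn): the first time this step is reached, run fn and record its result;
// on every later rerun of the script, replay the recorded result instead of
// repeating the (expensive, nondeterministic, or human) work.
function once(fn) {
  if (step < trace.length) return trace[step++];
  var result = fn();
  trace[step++] = result;
  fs.writeFileSync(DB_FILE, JSON.stringify(trace));
  return result;
}

// Local stand-in for posting a HIT to Mechanical Turk.
function postTask(question) {
  console.log("posting task:", question);
  return "task-" + Math.floor(Math.random() * 1e6);
}

// Local stand-in for polling MTurk for a worker's answer (pretend not done yet).
function getAnswer(taskId) {
  return null;
}

var taskId = once(function () { return postTask("Describe this image"); });
var answer = getAnswer(taskId);
if (answer === null) {
  // Crash on purpose: rerun the script later, and the memoized postTask above
  // will not be repeated because its result is replayed from the trace.
  throw new Error("waiting for a worker; rerun this script later");
}
console.log("answer:", answer);

The point is that rerunning the whole script from the top is cheap, because every completed step is replayed from the trace instead of being redone.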

Results:
Several things came out of the testing. Paragraph length played a big part in how workers made improvements to the iteratively written text. The blurry text recognition task was a huge success, with near-perfect end results after several iterations. TurKit also proved useful for psychophysics experimentation, since the calls to MTurk can be embedded within a larger application. Much of the success of these experiments is due to the crash-and-rerun programming model: it made it much easier for users to write scripts, but it had its flaws, and unclear details of the model hindered several users.
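The iterative writing and text recognition results come from a simple improve-then-vote loop. Below is a hedged sketch of that control flow; improveText and vote are hypothetical local stand-ins for the human tasks TurKit would post to MTurk, not the paper's actual code.

// Rough shape of the iterative text-improvement process: each round asks one
// worker to improve the current paragraph, then asks others to vote on whether
// the new version is actually better.

function improveText(text) {
  // Placeholder for a human "please improve this paragraph" task.
  return text + " [improved]";
}

function vote(question, options) {
  // Placeholder for a human vote; here it always prefers the newer version.
  return options[1];
}

var text = "A first rough description of the image.";
for (var i = 0; i < 6; i++) {
  var candidate = improveText(text);
  var winner = vote("Which paragraph is better?", [text, candidate]);
  if (winner === candidate) text = candidate;   // keep only accepted improvements
}
console.log(text);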

Contents:
TurKit is a toolkit designed to expand on human computation systems. It provides a way to write and test scripts that can stand up to the long periods of time they need to run, and it adds support for incremental programming and print-line debugging. Overall it is a positive extension of Mechanical Turk.
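Here is a quick sketch of what incremental programming and print-line debugging mean in practice under this model. All names (once, askWorker, the fake trace) are my own hypothetical stand-ins, not TurKit's actual API.

// Steps already recorded in the trace replay instantly, so code appended to a
// finished script only posts the genuinely new work when the file is rerun.

var recorded = ["Boston", "Seattle"];   // pretend trace left by earlier runs
var step = 0;

// Simplified once(): replay a recorded result if one exists, otherwise do the
// work (stubbed below) and record it.
function once(fn) {
  if (step < recorded.length) return recorded[step++];
  var result = fn();
  recorded[step++] = result;
  return result;
}

function askWorker(question) {          // stand-in for posting a real MTurk task
  console.log("posting new task:", question);
  return "Boston";
}

var a = once(function () { return askWorker("Name a city"); });        // replayed
var b = once(function () { return askWorker("Name another city"); });  // replayed

// Print-line debugging: this line was added after the first runs, and the rerun
// reaches it immediately because a and b come straight from the trace.
console.log("so far:", a, b);

// Newly appended step: the only call that actually posts work on this rerun.
var c = once(function () {
  return askWorker("Which city is larger: " + a + " or " + b + "?");
});
console.log("final answer:", c);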

Discussion:
This was a relatively uninteresting paper to me. While I think what they did was neat, overall it didn't grab my attention. The improvement they made with their crash-and-rerun programming method seemed like an obvious addition to me, and while the actual implementation of it might have been impressive, the premise was still obvious. One of the things I did find interesting was the testing of blurry text recognition. The fact that it became so accurate after several iterations was really neat, and I think it probably has some kind of application to reCAPTCHA images or something of that nature.
