Artificial artificial intelligence – our experience with Mechanical Turk

So Amazon Web Services (AWS) has a pretty neat service called the Mechanical Turk. The name is inspired by the original Mechanical Turk, a chess-playing “robot” that impressed crowds in the late 18th century while a human chess master was hidden inside. Amazon’s service doesn’t hide the fact that it’s powered by humans, but the end result is the same – humans performing tasks in lieu of a computer.
The task we needed “turked” was transcribing the text labels that sat on top of the overlays on one particularly large image. The end goal was a Google Maps product, so we already had the overlays loaded into a Google Maps UI. To recap:

  • We had a Google Maps application which loaded an image layer with about 3000 discrete overlays.
  • We had lat/lng coordinates for all of these overlays.
  • Each of these images had a text label that uniquely described it.
  • We wanted the Turks to transcribe these labels.

Next, we began to investigate how the Mechanical Turk service actually works. The process is delightfully simple. The idea is to break the tasks out into discrete and reproducible actions called human intelligence tasks (HITs). This pattern aligned well with our problem because all of the overlays were independent and we had coordinates for each of them. Amazon displays each of these HITs inside a template that the “requester” constructs.

The templates are HTML pages (Amazon allows JavaScript) that plug in variables for each HIT when the tasks go live. They also capture the data you want to save from the task – in our case, the labels. Designing templates is pretty straightforward, and the JavaScript support is a nice touch.

For our task, we embedded Google Maps with our image overlaid and used HIT variables to display 3 markers per task. This is where we hit our first and only snag. Because Google Maps API keys are registered to a specific site, our HIT template kept generating JavaScript errors since it was being served from unpredictable domain names. We didn’t think this was a huge deal, so we plowed ahead as planned.
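For the curious, here’s a rough sketch of what a HIT template along these lines might look like. The ${...} placeholders are Mechanical Turk’s variable syntax; the field names, the coordinate variable names, and the API key are made up for illustration – this is not our actual template:

<!-- Google Maps API v2 – the key has to be registered for the site serving the page -->
<script src="http://maps.google.com/maps?file=api&amp;v=2&amp;key=YOUR_API_KEY" type="text/javascript"></script>

<div id="map" style="width: 500px; height: 400px;"></div>

<p>Type the label you see under the marker:</p>
<input type="text" name="label_1" />

<script type="text/javascript">
  // ${lat_1} and ${lng_1} are HIT variables that Mechanical Turk substitutes
  // from the uploaded data before the page is shown to a worker.
  var map = new GMap2(document.getElementById("map"));
  map.setCenter(new GLatLng(${lat_1}, ${lng_1}), 16);
  map.addOverlay(new GMarker(new GLatLng(${lat_1}, ${lng_1})));
</script>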
We ran the first set of HITs with a 2 cent/HIT payment and a minimum approval rating of 95%.

The results were less than stellar – within an hour or so we received this email:

A strange error message pops up for your/these HITS.
“The Google Maps API key used on this web site was registered for a different web site. You can generate a new key for this web site at http://code.google.com/apis/maps/.”
Also, the mouseover that displays at screen bottom is “javascript: void (0)”…and no label appears below the marker either.  I do NOT have any javascript blockers operational, so no problem there.
Also, I would appreciate it if you NOT deny me credit for this HIT, as the technical error message was not my fault and not explained or warned in the instructions.
I would like to do several, possibly many of these HITS, if you can tell me what to do to overcome this problem.
Thanks,
[REDACTED]

People were obviously not getting the instructions and were scared away by the JS error and the fear of having a HIT rejected. One of the problems with the service is that you can’t modify your HIT template once a batch has been published – you have to cancel the batch and restart after your edits. At this point we decided to cancel the run and rewrite the instructions. With the new instructions people seemed to “get it” and were generally more willing to forgive the JS error.

At the end of the run, the Turks’ accuracy on transcribing the labels was around 90%. We got about 25 man-hours of work accomplished in about 42 hours of actual time, at a cost of about $25. All told, we were really impressed with the service as well as the Turks themselves. We definitely recommend the service for any discrete and repeatable tasks.

Things we learned:

  • Be EXTREMELY clear in your instructions – try to be as unambiguous as possible.
  • The Turks live and die by their approval rating so be nice (we just accepted every task).
  • Unexpected popups and JavaScript errors probably scare people away so try to avoid them.
  • Obviously higher reward rates are going to attract more Turks – something to take into account.

We’ve also recently become fascinated by other things that could be tried out or experimented with using the Turks. Notably, it seems like an ideal venue for experimenting with some game theory topics.

Timelapse Twitter+Election map

This is an update to our Twitter+Election ’08 mashup that was over at Setfive Election HQ.

Well, as everyone saw last Tuesday night, Obama won the election by a pretty significant margin and has already taken steps to announce his transition agenda.

Anyway, at the end of our run we had captured 11,021 tweets, with the breakdown being 2,501 for McCain and 8,520 for Obama. Since we had been generating maps all day, we decided to take snapshots at 5-minute intervals so that we could watch the progression of the map. The timelapse map is embedded below:

We hope everyone had a good election experience – we had a lot of fun building this mashup. Now to find the next big thing…

Guesstimating the election with Twitter

We were sitting around tonight and decided to whip something together that leverages Twitter to get some real-time election information.

It is ugly and open to bias, but we’re hoping it might show something interesting.

We’re also planning to take snapshots of the map and assemble a time lapse for Wednesday.

See the map live at: http://election.setfive.com

Update at 4:40 EST:

So we’ve captured about 6000 tweets and the map is basically all blue. Just to clarify – we never intended this to be a serious visualization or estimation of how the election is progressing. The project was solely meant to be a fun peek at how information spreads across Twitter.

Anyway, a couple of people have been asking about our methodology, so I’ll try and explain a bit.

We are using the Twitter Search API to run searches that we thought would indicate that someone just voted, or intends to vote, for either John McCain or Barack Obama. Next, we apply some heuristics to the tweets to make sure they really are “just voted” tweets. If a tweet passes the heuristics, we record it for the corresponding candidate and also store its “from_user_id” to ensure a single user can’t blow up the vote totals.
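To give a flavor of what that loop looks like, here’s a simplified sketch. The search phrase, the heuristic regexes, and the function name are purely illustrative (our real checks are messier); the search.twitter.com/search.json endpoint and the from_user_id field come from Twitter’s Search API:

// Simplified sketch: poll the Search API for "just voted" style tweets and tally them.
// $seenUsers is persisted between runs so a single user only ever counts once.
function tallyVotes($query, $candidate, array &$seenUsers, array &$totals)
{
    $url = 'http://search.twitter.com/search.json?q=' . urlencode($query);
    $response = json_decode(file_get_contents($url), true);

    foreach ($response['results'] as $tweet) {
        // Heuristic: require an explicit "just voted for" / "voting for" phrase...
        if (!preg_match('/\b(just voted|voting) for\b/i', $tweet['text'])) {
            continue;
        }
        // ...and throw out obvious negations.
        if (preg_match('/\b(not|never|won\'t)\b/i', $tweet['text'])) {
            continue;
        }
        // One vote per user, keyed on from_user_id.
        if (isset($seenUsers[$tweet['from_user_id']])) {
            continue;
        }
        $seenUsers[$tweet['from_user_id']] = true;

        if (!isset($totals[$candidate])) {
            $totals[$candidate] = 0;
        }
        $totals[$candidate]++;
    }
}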

In order to geolocate a user we are using the twittervision API. I get the impression that the twittervision API just scrapes user profiles, but I can’t verify this. We probably could have avoided using their API and just scraped the profiles ourselves, but one less thing to deal with at 4am is always good.

The graph colors are calculated by taking the difference between the red and blue vote totals (positive when red is larger) and expressing it as a fraction of the total number of votes for that state:

// Total number of tweets recorded for this state
$totalScore = $state['redscore'] + $state['bluescore'];
// Signed margin as a fraction of the total: positive leans red (McCain), negative leans blue (Obama)
$percentage = ($state['redscore'] - $state['bluescore']) / $totalScore;
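The post stops there, so purely as an illustration of how that number could drive a color, here’s one way to turn the signed margin into a red or blue shade. The function name and the color ramp are ours, not part of the original mashup (it also guards against a state with no tweets):

// Illustrative only: map the signed margin onto a red or blue hex color.
function stateColor(array $state)
{
    $total = $state['redscore'] + $state['bluescore'];
    if ($total == 0) {
        return '#ffffff'; // no tweets recorded for this state
    }
    $margin = ($state['redscore'] - $state['bluescore']) / $total; // -1.0 .. 1.0
    $intensity = (int) round(abs($margin) * 255);
    return $margin >= 0
        ? sprintf('#%02x0000', $intensity)   // red lead: McCain
        : sprintf('#0000%02x', $intensity);  // blue lead: Obama
}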

Anyway, there are definitely other entertaining things to do with Twitter – we just haven’t thought of them yet. – Ashish

Joins, for better and for worse

Recently I’ve been having one hell of a time with SQL JOINs. It isn’t that I don’t understand how to use them, it’s that they’ve come back to haunt me in two completely opposite situations.

First: an application I had written a while ago – one that was supposed to stay small and very simple but kept getting enlarged and became very big – was running slow. I upgraded the client to our faster server, believing the problem was simply that our server meant for static sites couldn’t handle it. While that did greatly reduce the load time of the site (to roughly 1/6 of what it was), it was still slow for me. I looked into it and found that, after all the alterations we had made to the application, one page was now running 1,600 queries.

Before you think poorly of our development abilities and our code efficiency, let me put everything in perspective. The project started as a very simple CRUD-only database that we did basically for free for a client who was a friend. Shortly after, it kept being expanded, and since the friend could only afford minimal costs, he asked us to just “hack” it together as a prototype. Well, about a month of development later, we had possibly one of the most hacked-together prototypes ever for him. He never wanted to redo it from scratch so we could architect it correctly, due to the cost. As a prototype it worked fine and was very fast. However, the prototype turned into a beta test for the friend. He added much more information to the database, and the inefficient queries began to show. Now the application is loaded with data, and the inefficient queries are terrible. We have told him that if he plans to keep using it, we should rewrite it from scratch, since many other cool features could be added.

So back to the problem. We attempted to add JOINs to the application, which should cut the number of queries to at least 1/10th of what it was. However, when I added the LEFT JOINs via Propel’s Criteria, I found that it was adding the joins twice. The reason they needed to be LEFT JOINs is that the foreign keys were not required, so an INNER JOIN would have dropped the rows where the foreign key was null.
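For reference, this is roughly the shape of the Criteria code in question. The table and column names are hypothetical, but addJoin() with Criteria::LEFT_JOIN is the standard Propel 1.x way to ask for a LEFT JOIN:

// Hypothetical schema: an order table with an optional (nullable) customer_id foreign key.
$c = new Criteria();
// LEFT JOIN so orders without a customer still come back in the result set.
$c->addJoin(OrderPeer::CUSTOMER_ID, CustomerPeer::ID, Criteria::LEFT_JOIN);
$orders = OrderPeer::doSelect($c);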