jQuery: binding DOM events to objects

A few days ago I was working on a project for a client that involved allowing a non-technical end user to build arbitrarily complex boolean queries using a UI. The user could click, configure, and drag/drop queries to build expressions like (A AND B OR C) AND (Z OR X). They would also need the ability to edit the pieces in-line, toggle ANDs to ORs, and so on.

Anyway, this isn't too difficult to represent with a data structure, but the complexity was going to be in tying the UI to the data structure with callbacks and events. Usually, I would have used a single $("a").click() handler to handle all the edit, delete, and configuration clicks in the UI, but that quickly devolves into a disastrous mess of if statements and hasClass() checks.

Hoping to avoid this, I started thinking about cleaner ways to implement this and realized that if I bound events directly to the individual objects they were going to affect, the resulting code would be much cleaner. The UI elements were going to be dynamically generated each time the data structure was updated anyway, so binding the $.click() events on the <a> tags to their corresponding objects wouldn't be too much extra work.

All in all, things worked out pretty well. The final code is much easier to follow and there isn’t a gigantic if() block which is impossible to trace.

I can’t share the actual implementation I used but I threw together an example which outlines the technique. Check out the JSFiddle at http://jsfiddle.net/fN46q/4/

Looking at the code, the Board object is an Array that in turn contains Piece objects to form a 3×3 grid.

The Piece objects each supply a render() function to display themselves and a click() function which handles their $.click() events.

The key line which makes this work is:

   $(td).find("a").bind("click", {piece: this[i][j]}, this[i][j].click);

The second parameter adds the object to the event object that jQuery passes to the handler (it shows up as event.data). More info is available in the $.bind() documentation, but looking at the click() function in the Piece object you can see where it comes into play:

 click: function( e ){
    // e.data is the object we passed to bind(), so e.data.piece is the Piece that was clicked
    e.data.piece.val = !e.data.piece.val;
    e.data.piece.board.render();
    // returning false stops the default <a> behavior
    return false;
 }

So effectively, the data structure is now linked to the DOM.
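Putting the pieces together, here's a rough sketch of the overall pattern. This is simplified from the fiddle, so the names, markup, and the "#board" container are assumptions rather than the original code:

// A Piece keeps a back-reference to its Board so it can ask for a re-render.
function Piece(board) {
    this.board = board;
    this.val = false;
}

Piece.prototype = {
    // "this" isn't the Piece inside a jQuery handler, so we go through e.data instead.
    click: function (e) {
        e.data.piece.val = !e.data.piece.val;
        e.data.piece.board.render();
        return false;
    },
    render: function () {
        return $("<a href='#'>" + (this.val ? "X" : "O") + "</a>");
    }
};

function Board(size) {
    this.grid = [];
    for (var i = 0; i < size; i++) {
        this.grid[i] = [];
        for (var j = 0; j < size; j++) {
            this.grid[i][j] = new Piece(this);
        }
    }
}

Board.prototype.render = function () {
    var table = $("<table>");
    for (var i = 0; i < this.grid.length; i++) {
        var tr = $("<tr>");
        for (var j = 0; j < this.grid[i].length; j++) {
            var td = $("<td>").append(this.grid[i][j].render());
            // The key line: stash the Piece in the event data so its click()
            // handler can find it later via e.data.piece.
            $(td).find("a").bind("click", { piece: this.grid[i][j] }, this.grid[i][j].click);
            tr.append(td);
        }
        table.append(tr);
    }
    $("#board").empty().append(table);
};

// Kick things off once the DOM is ready.
$(function () {
    new Board(3).render();
});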

As always, thoughts, comments, and feedback are welcome!

Getting started with Hadoop, Hive, and Sqoop

I apologize for the buzzword-heavy title, but it was the best I could do. I couldn't find a good quick start explaining how to get going with Hive, so I thought I'd share my experiences.

Anyway, a client of ours came to us needing to analyze a dataset of roughly 200 million rows covering 6 months, currently growing at about 10 million rows a week and accelerating. From a reporting standpoint, they were looking to run aggregate counts and group bys over the data and then display the results on charts. They were also looking to select subsets of the data and use them later – basically SELECT * FROM table WHERE x AND y AND z.

Obviously, doing the calculations in real time was out of the question, so we knew we were looking for a solution that would be easy to use, support the necessary requirements, and scale predictably with the increasing rate of data generation.

On the surface, MySQL looks like a decent approach but it presents a couple of issues pretty quickly:

  • In order for the SUM, GROUP BY, and COUNT queries to be at all useful, the MySQL tables would have to be heavily indexed. Unfortunately, due to the write-heavy workload of the app, this would mean copying the data into a separate, indexed MySQL database before running any reports.
  • Even with indexes, MySQL was pretty awful at selecting subsets of the data from a performance perspective.
  • And probably the biggest issue with MySQL is that it doesn’t scale linearly in the sense that if the data is growing at 500 million rows a week you can’t simply “throw more hardware” at it and be done with it.

With requirements in hand we hit the Internet and finally arrived at Hive running on top of Hadoop. From our perspective, this stack fit our requirements nicely: it doesn’t rely on keeping a second “reporting” MySQL database available, it handles both the sum/count/group by reporting and selecting subsets, and, probably most importantly, it will allow us, at least in the near term, to scale with the increasing rate of data generation.

Per Wikipedia,

“Apache Hadoop is a software framework that supports data-intensive distributed applications under a free license. It enables applications to work with thousands of nodes and petabytes of data. Hadoop was inspired by Google’s MapReduce and Google File System (GFS) papers.”

To grossly oversimplify, Hadoop provides a framework that lets you break a data-intensive task into discrete pieces, run those pieces in a distributed fashion, and then combine the results to complete the task. The quintessential example of a task that can be parallelized this way is sorting a *really* big list, since the list can be sorted in pieces and the sorted pieces merged at the end (see merge sort).

The second piece of the tool chain is Hive. Again via Wikipedia,

“Apache Hive is a data warehouse infrastructure built on top of Hadoop for providing data summarization, query, and analysis.”

Basically, Hive is a tool that leverages the Hadoop framework to provide reporting and query capabilities using a syntax similar to SQL.
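For example, a query like this one (the table and column names are made up for illustration) gets compiled into MapReduce jobs and run across the cluster:

SELECT status, COUNT(*) AS total
FROM events
GROUP BY status;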

That just leaves Sqoop, the app with a funny name and no Wikipedia entry. Sqoop was originally developed by Cloudera and basically serves as an import tool for Hadoop. For my purposes, it allowed me to easily import the data from my MySQL database into Hadoop’s HDFS so I could use it in Hive.

The rest of this post walks you through setting up Hadoop+Hive and analyzing some MySQL data.

Now that you know the players, let's figure out what we're actually trying to do.

  1. We want to start a Hadoop cluster to use Hive on.
  2. Load our data from a MySQL database into this Hadoop cluster.
  3. Use Hive to run some reports on this data.
  4. Warehouse the results of this data in MySQL so we can graph it (not that exciting).

Starting the cluster

There's actually one more tool you'll need to get this to work – Apache Whirr. Whirr is really cool: it lets you automatically start cluster services (Hadoop, Voldemort, etc.) on a handful of cloud platforms (AWS, Rackspace, etc.).

NOTE: We exclusively use AWS for our hosting so everything described here is specific to AWS.

First, download the latest copy of Whirr – http://www.fightrice.com/mirrors/apache//incubator/whirr/ – to your local machine. Whirr should work everywhere, but these directions will match up with Linux/OS X the best.

The first thing you’ll need is a Whirr configuration file describing the cluster you want to build. Create a file called hadoop.properties and paste in the following:

whirr.cluster-name=hadoop
whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,2 hadoop-datanode+hadoop-tasktracker
whirr.hadoop-install-function=install_cdh_hadoop
whirr.hadoop-configure-function=configure_cdh_hadoop

whirr.provider=aws-ec2
whirr.identity=[REPLACE THIS WITH YOUR AWS ID]
whirr.credential=[REPLACE THIS WITH YOUR AWS SECRET]

whirr.hardware-id=m1.large
whirr.image-id=us-east-1/ami-68ad5201
whirr.location-id=us-east-1

whirr.private-key-file=~/.ssh/id_rsa
whirr.public-key-file=~/.ssh/id_rsa.pub

There isn't a ton going on in the file, but you'll need to switch out the whirr.identity and whirr.credential lines for your AWS credentials. Also, double-check that the SSH key paths are accurate for your machine.

The next step is to actually launch the cluster. To do this, run the following command – double-checking that the path to your hadoop.properties file is accurate:

./bin/whirr launch-cluster --config hadoop.properties

Give it a few minutes; you'll see a bunch of debug info scrolling across your terminal and hopefully a success message once it's done. At this point, you'll have a fully built Hadoop cluster with the 3 nodes described in your properties file (1 hadoop-namenode+hadoop-jobtracker, 2 hadoop-datanode+hadoop-tasktracker).

You can see all your nodes by checking out the instances file in your Whirr cluster directory:

cat ~/.whirr/hadoop/instances

Prepping and loading the cluster

Now that the cluster is up, you’ll need to prep it and then load your data with Sqoop.

One of the most irritating "gotchas" I stumbled across was that Whirr only adds the firewall rules necessary for Hadoop itself to its AWS security group.

Before you do anything else, open your EC2 control panel and modify the new Whirr security group (#jcloud-something) so that all of your nodes can connect to each other on port 3306 (MySQL).

The next step is to install mysql-client across the entire cluster since Sqoop uses mysqldump to get at your data. You could manually ssh into every machine but Whirr provides a convenient “run-script” command to do just that.

Create a file called "prepCluster.sh" and put "sudo apt-get -q -y install mysql-client" in it. Then make sure the paths are right and run:

./bin/whirr run-script --script prepCluster.sh --config hadoop.properties

Once it's done, you'll see the apt-get output from all of your nodes as they download the MySQL client.

The next step is to install mysql-server, hive, and sqoop on the jobtracker. Doing this is pretty straightforward: look at the ~/.whirr/hadoop/instances file from above and copy the namenode/jobtracker hostname.

Next, SSH into that machine using your current username as the username. Once you're in, just run the following to install everything:

sudo apt-get -q -y install mysql-server
sudo apt-get -q -y install sqoop
sudo apt-get -q -y install hadoop-hive
sudo apt-get -q -y install screen

NOTE: You'll also need the MySQL Connector/J library so that Sqoop can connect to MySQL. Download it here and place it in "/usr/lib/sqoop/lib/".
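Assuming you downloaded the Connector/J archive onto the jobtracker, copying the jar into place looks something like this (the exact filename depends on the version you grabbed):

sudo cp mysql-connector-java-*-bin.jar /usr/lib/sqoop/lib/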

Once everything is done installing, you'll most likely want to move your MySQL data directories from their default location onto the /mnt partition since it's much larger. Check out this article for a good walkthrough. Don't forget to update AppArmor or MySQL won't start. Once MySQL is set up, load the data you want to crunch into it.

Now you'll need to use Sqoop to load the data from the MySQL database into Hadoop's HDFS. While logged into the jobtracker node, run the following, swapping out the placeholders and the username/password for your own:

sqoop-import-all-tables --connect jdbc:mysql://[IP of your jobtracker]/[your db_name] --username root --password root --verbose --hive-import

Once it completes, Sqoop will have copied all your MySQL data into Hadoop’s HDFS file system and initialized Hive for you.

Crunching the data

Run “hive” on the jobtracker and you’ll be ready to start crunching your data.

Check out the Hive language manual for more info on exactly what queries you can write.

Once you’ve narrowed down how to write your queries, you can use Hive’s “INSERT OVERWRITE LOCAL DIRECTORY” command to output the results of your query into a local directory.
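As a rough sketch (again, the table and column names are placeholders), a daily count report dumped to a local directory on the jobtracker would look something like this:

INSERT OVERWRITE LOCAL DIRECTORY '/tmp/daily_counts'
SELECT to_date(created_at), COUNT(*)
FROM events
GROUP BY to_date(created_at);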

Then the next step would be to tar up those results and use scp to copy them back to your local machine to analyze or warehouse.
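Something along these lines works – the paths and hostname placeholder are just examples:

# On the jobtracker: compress the query output.
tar -czf daily_counts.tar.gz /tmp/daily_counts

# From your local machine: pull the archive down.
scp [your username]@[jobtracker hostname]:daily_counts.tar.gz .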

Shutting it down

The final thing you’ll need to do is shut down the cluster. Whirr makes it pretty easy:

./bin/whirr destroy-cluster --config hadoop.properties

Give it a few minutes and Whirr will shut down the cluster and clean up the EC2 security group as well.

Anyway, hope this walkthrough proves useful for someone. As always, feedback, questions, and comments are all more than welcome.

Upload directly to S3 with SWFUpload

I was working on an application earlier today that required allowing a user to upload a large file (several hundred MB) which would eventually be stored on Amazon S3. After reviewing the requirements, I realized it made sense to just upload the file directly to S3 instead of having to first stage the file on a server and then use PHP to push the file to S3.

Amazon has a nice walk through of using a plain HTML form to upload a file directly to S3 here.

I had already been using SWFUpload to upload files to the server, so I decided to look into using it to upload directly to S3. After some head banging, I finally got it to work – here's the quick and dirty.

  1. Download SWFUpload 2.5
  2. Get SWFUpload ready to use in your project. Copy the SWF file somewhere accessible and include their swfupload.js Javascript file. More info here
  3. Set up an S3 bucket. You’ll need to set the policy to allow uploads from your own user (it’s the default).
  4. Place a crossdomain.xml file in the root of your S3 bucket. This file “authorizes” flash player to upload files into this host. The content of the file is below.
  5. Initialize the SWFUpload object (example below).
  6. Before beginning the upload, you need to set the appropriate postParams in the SWFUpload object. This is really the “magic” of this process. Example is below.
  7. Start the upload with startUpload()

That's it! It's pretty straightforward once you have things going. As an FYI, you can put SWFUpload into "debug" mode by adding debug: true as a property to the initialization object. You can also inspect the responses from Amazon by using a packet sniffer like Wireshark.

crossdomain.xml
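A wide-open version of the file looks like the following – this is the standard permissive policy, so tighten the domain attribute for anything serious:

<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
    <allow-access-from domain="*" secure="false" />
</cross-domain-policy>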

You probably want to make this file a little less permissive. More details here. Also note that there are differences in how various versions of Flash Player interpret this file.

Initialize SWFUpload
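An initialization along these lines is a reasonable starting point; the bucket name, element IDs, and handlers below are placeholders, and the exact settings available depend on your SWFUpload version:

var swfu = new SWFUpload({
    // Path to the SWF that ships with SWFUpload.
    flash_url: "/swfupload/swfupload.swf",

    // POST directly to the S3 bucket (placeholder bucket name).
    upload_url: "http://YOUR_BUCKET.s3.amazonaws.com/",

    // S3 expects the file field to be named "file".
    file_post_name: "file",

    file_size_limit: "500 MB",
    file_types: "*.*",

    // UI hooks -- these element IDs are placeholders.
    button_placeholder_id: "uploadButton",
    button_width: 120,
    button_height: 30,

    // Turn on verbose logging while testing.
    debug: true,

    // Event handlers (wire these up however your app needs).
    upload_success_handler: function (file, serverData) {
        console.log("Uploaded " + file.name);
    },
    upload_error_handler: function (file, errorCode, message) {
        console.log("Upload failed: " + message);
    }
});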

Set SWFUpload postParams

The HMAC signature MUST be calculated on the server because it is generated with your S3 secret key, and you MUST keep that value secret in order to maintain the security of your S3 buckets. I’m using Don Schonknecht’s S3 PHP library to calculate the HMAC signatures, but you could just as easily do it in straight PHP.
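Conceptually, the post params just mirror the fields from Amazon's HTML form example, with the policy document and signature generated server-side. A rough sketch (the key prefix, ACL, and variable names are placeholders):

// These two values are generated server-side (e.g. by the PHP that renders the
// page): the policy is the base64-encoded policy JSON, and the signature is an
// HMAC-SHA1 of that policy made with your S3 secret key.
var policyDocument = "BASE64_ENCODED_POLICY_FROM_SERVER";
var s3Signature = "HMAC_SHA1_SIGNATURE_FROM_SERVER";

swfu.setPostParams({
    "key": "uploads/${filename}",          // ${filename} is expanded by S3
    "AWSAccessKeyId": "YOUR_AWS_ACCESS_KEY_ID",
    "acl": "private",
    "success_action_status": "201",
    "policy": policyDocument,
    "signature": s3Signature
});

// Kick off the upload once the params are in place.
swfu.startUpload();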

Drupal 7: Batch insert nodes with Drush

Well D7 has been out for a little while now and we finally got a chance to use it on a site this week.

Anyway, this site is one of the heavier Drupal sites we've done and it involved loading ~200+ nodes of data just to set things up. This presented two problems: how to batch load data, and how to create nodes of a custom content type with several custom fields.

The Drupal module documentation has example code for adding a node with drupal_execute here, but it doesn't deal with how to set custom fields on your content type. On top of this, drupal_execute has been renamed to drupal_form_submit in Drupal 7 and the function signature has changed a bit.

Anyway, I dug around a bit and finally managed to get this working. You'll obviously need Drush installed for the following code to work, but you could rip it out and use it outside of a Drush command. I was looking to replicate Symfony's "load-data" task so that I could seed my Drupal database with nodes at any point, which is why I chose to make this a Drush command.

Here’s what you need:

– You’ll need a module to hold the Drush task. I used Module Builder to generate my scaffolding.
– Create a file named [modulename].drush.inc in your module directory
– The code for [modulename].drush.inc follows below (replace “cm” with the name of your module).
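A minimal version of that file looks something like the following – the content type, field names, and seed data are placeholders you'd swap for your own:

<?php

/**
 * Implements hook_drush_command().
 */
function cm_drush_command() {
  $items = array();
  $items['load-data'] = array(
    'description' => 'Batch insert seed nodes.',
    'bootstrap' => DRUSH_BOOTSTRAP_DRUPAL_FULL,
  );
  return $items;
}

/**
 * Callback for "drush load-data".
 */
function drush_cm_load_data() {
  // The node form lives in node.pages.inc, which isn't loaded by default.
  module_load_include('inc', 'node', 'node.pages');

  // Placeholder seed data -- swap in your own content type and fields.
  $rows = array(
    array('title' => 'First node', 'field_my_text' => 'Hello'),
    array('title' => 'Second node', 'field_my_text' => 'World'),
  );

  foreach ($rows as $row) {
    // Build a bare node object for the form to work against.
    $node = new stdClass();
    $node->type = 'my_content_type';
    node_object_prepare($node);

    // Fill out the node form the same way a user would.
    $form_state = array();
    $form_state['values']['title'] = $row['title'];
    $form_state['values']['status'] = 1;
    // Custom fields are keyed by language, then delta, then column.
    $form_state['values']['field_my_text'][LANGUAGE_NONE][0]['value'] = $row['field_my_text'];
    $form_state['values']['op'] = t('Save');

    // drupal_form_submit() is D7's replacement for drupal_execute().
    drupal_form_submit('my_content_type_node_form', $form_state, $node);

    if ($errors = form_get_errors()) {
      drush_log(print_r($errors, TRUE), 'error');
    }
    else {
      drush_log(dt('Created node: @title', array('@title' => $row['title'])), 'ok');
    }
  }
}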

That's about it.

With Firebug, it’s really easy to see the field names and the values that you can set by just looking at a form to create whatever type of node you want.

drupal_form_submit can also be used to “submit” any other type of form in Drupal.

An open question is how to “fill out” an ImageField field via the command line since nothing is actually going to be uploaded.