tru.ly releases first free age verification service

Last week was a big week over at tru.ly!

We launched our lightweight social verification service that allows partner sites to verify that their visitors are 18+ or 21+. We’re hoping that this will replace current age verification solutions and allow sites to safely show and monetize 18+ and 21+ content.

Check out the awesome write-up The Next Web did (thanks, Courtney!).

Anyway, there’s a live demo at https://tru.ly/social-api-demo/ and the demo video is below.

Fixing blank CCK Location fields in Views

Recently, we inherited a Drupal 6 site from a client of ours and ran into a pretty irritating bug with the Location module.

The site had been configured to let users create profiles using Node Profile, along with the Location module so users could enter their street addresses.

Anyway, the issue was that when we created a View that included Location fields, the fields always rendered as blank even though we had confirmed there was data in the database. A bit of poking around led to this issue.

It turns out that, due to an optimization in CCK or Views, the tables that hold the location field data don’t get JOINed in when the view is executed. Unfortunately, the patch provided on the issue doesn’t work with the latest 3.x release of the Location module.

The fix that worked for us is #14 (copied below):

/**
 * Preprocess hook for location().
 *
 * If the location data hasn't been loaded (only the lid is present), load the
 * full location record and re-run the default preprocessing so the fields render.
 */
function yourtheme_preprocess_location(&$variables) {
  if (!isset($variables['location']['name']) && isset($variables['location']['lid'])) {
    // Load the full location record by lid and merge it into the variables.
    $variables['location'] = array_merge($variables['location'], location_load_location($variables['location']['lid']));
    // Re-run the default preprocessing now that the data is available.
    template_preprocess_location($variables);
  }
}

You’ll need to add the above snippet to the template.php file in your theme and change the function name to match the theme you’re using. The function preprocesses the location fields to pull in the missing data so that the View renders properly. Depending on your setup, you may also need to clear Drupal’s cached data so the new preprocess function gets picked up.

Anyway, enough blogging – it’s football time.

Getting started with Hadoop, Hive, and Sqoop

I apologize for the buzzword-heavy title, but it was the best I could do. I couldn’t find a good quick start explaining how to get up and running with Hive, so I thought I’d share my experiences.

Anyway, a client of ours came to us needing to analyze a dataset of roughly 200 million rows collected over 6 months, currently growing by about 10 million rows a week and accelerating. From a reporting standpoint, they were looking to run aggregate counts and GROUP BYs over the data and then display the results on charts. They were also looking to select subsets of the data and use them later – basically SELECT * FROM table WHERE x AND y AND z.

Obviously, doing the calculations in real time was out of the question, so we knew we were looking for a solution that would be easy to use, meet those requirements, and scale predictably as the rate of data generation increases.

On the surface, MySQL looks like a decent approach but it presents a couple of issues pretty quickly:

  • In order for the SUM, GROUP BY, and COUNT queries to be at all useful, the MySQL tables would have to be heavily indexed. Unfortunately, due to the write-heavy workload of the app, this would mean copying the data into a separate, indexed MySQL database before running any reports.
  • Even with indexes, MySQL’s performance when selecting subsets of the data was pretty awful.
  • And probably the biggest issue with MySQL is that it doesn’t scale linearly, in the sense that if the data ends up growing at 500 million rows a week you can’t simply “throw more hardware” at it and be done with it.

With requirements in hand, we hit the Internet and finally arrived at Hive running on top of Hadoop. Per Wikipedia,

“Apache Hadoop is a software framework that supports data-intensive distributed applications under a free license. It enables applications to work with thousands of nodes and petabytes of data. Hadoop was inspired by Google’s MapReduce and Google File System (GFS) papers.”

From our perspective, this stack fits our requirements nicely: it doesn’t rely on keeping a second “reporting” MySQL database available, it handles both the sum/count/group by reports and selecting subsets, and, probably most importantly, it will let us scale with the increasing rate of data generation, at least in the near term.

To grossly oversimplify, Hadoop provides a framework that lets you break a data-intensive task into discrete pieces, run those pieces in a distributed fashion, and then combine the results of the completed task. The quintessential example of a task that can be parallelized in this fashion is sorting a *really* big list, since the list can be sorted in pieces and the pieces merged together at the end. See merge sort.

The second piece of the tool chain is Hive. Again via Wikipedia,

“Apache Hive is a data warehouse infrastructure built on top of Hadoop for providing data summarization, query, and analysis.”

Basically, Hive is a tool that leverages the Hadoop framework to provide reporting and query capabilities using a syntax similar to SQL.
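
For instance, the kind of aggregate report mentioned earlier ends up looking a lot like plain SQL – something along these lines, with made-up table and column names:

SELECT event_type, COUNT(*) AS total
FROM events
GROUP BY event_type;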

That just leaves Sqoop, the app with a funny name and no Wikipedia entry. Sqoop was originally developed by Cloudera and basically serves as an import tool for Hadoop. For my purposes, it allowed me to easily import the data from my MySQL database into Hadoop’s HDFS so I could use it in Hive.

The rest of this post walks you through setting up Hadoop+Hive and analyzing some MySQL data.

Now that you know the players, let’s figure out what we’re actually trying to do.

  1. Start a Hadoop cluster to run Hive on.
  2. Load our data from a MySQL database into this Hadoop cluster.
  3. Use Hive to run some reports on this data.
  4. Warehouse the results of this data in MySQL so we can graph it (not that exciting).

Starting the cluster

There’s actually one more tool you’ll need to get this to work – Apache Whirr. Whirr is really cool: it lets you automatically start cluster services (Hadoop, Voldemort, etc.) on a handful of cloud platforms (AWS, Rackspace, etc.).

NOTE: We exclusively use AWS for our hosting so everything described here is specific to AWS.

First, download the latest copy of Whirr – http://www.fightrice.com/mirrors/apache//incubator/whirr/ – to your local machine. Whirr should work everywhere, but these directions will match up against Linux/OS X the best.

The first thing you’ll need is a Whirr configuration file describing the cluster you want to build. Create a file called hadoop.properties and paste in the following:

whirr.cluster-name=hadoop
whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,2 hadoop-datanode+hadoop-tasktracker
whirr.hadoop-install-function=install_cdh_hadoop
whirr.hadoop-configure-function=configure_cdh_hadoop

whirr.provider=aws-ec2
whirr.identity=[REPLACE THIS WITH YOUR AWS ID]
whirr.credential=[REPLACE THIS WITH YOUR AWS SECRET]

whirr.hardware-id=m1.large
whirr.image-id=us-east-1/ami-68ad5201
whirr.location-id=us-east-1

whirr.private-key-file=~/.ssh/id_rsa
whirr.public-key-file=~/.ssh/id_rsa.pub

There isn’t a ton going on in the file, but you’ll need to swap in your AWS credentials on the identity and credential lines. Also, double-check that the ssh key paths are accurate for your account.

The next step is to actually launch the cluster. To do this, run the following command – double-check that the path to your hadoop.properties file is accurate:

./bin/whirr launch-cluster --config hadoop.properties

Just give it a few minutes; you’ll see a bunch of debug info scrolling across your terminal and hopefully a success message once it’s done. At this point, you’ll have a fully built Hadoop cluster with 3 nodes as described in your properties file (1 hadoop-namenode+hadoop-jobtracker, 2 hadoop-datanode+hadoop-tasktracker).

You can see all your nodes by checking out your Whirr cluster directory.

cat ~/.whirr/hadoop/instances

Prepping and loading the cluster

Now that the cluster is up, you’ll need to prep it and then load your data with Sqoop.

One of the most irritating “gotchas” I stumbled across is that Whirr only adds the firewall rules necessary for Hadoop to its AWS security group.

Before you do anything else, open your EC2 control panel and modify the new Whirr security group (#jcloud-something) so that all of your nodes can connect to each other on port 3306 (MySQL).

The next step is to install mysql-client across the entire cluster since Sqoop uses mysqldump to get at your data. You could manually ssh into every machine but Whirr provides a convenient “run-script” command to do just that.

Create a file called “prepCluster.sh” containing this single line:

sudo apt-get -q -y install mysql-client

Then make sure the paths are right and run:

./bin/whirr run-script --script prepCluster.sh --config hadoop.properties

Once it’s done, you’ll see the apt-get output from all of your nodes as they download and install the MySQL client.

The next step is to install mysql-server, hive, and sqoop on the jobtracker. Doing this is pretty straightforward: look at the .whirr/hadoop/instances file from above and copy the hostname of the namenode/jobtracker machine (they run on the same node in this setup).

Next, ssh in to that machine using your current username as the username. Once you’re in, just run the following to install everything:

sudo apt-get -q -y install sqoop
sudo apt-get -q -y install hadoop-hive
sudo apt-get -q -y install mysql-server
sudo apt-get -q -y install screen

NOTE: You’ll also need the MySQL Connector/J library so that Sqoop can connect to MySQL. Download it and place it in “/usr/lib/sqoop/lib/”.

Once everything is done installing, you’ll most likely want to move your MySQL data directories from their default location onto the /mnt partition, since it’s much larger. Check out this article for a good walkthrough. Don’t forget to update AppArmor or MySQL won’t start. Once MySQL is set up, load the data you want to crunch.
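
If your source data is already in a MySQL dump, that can be as simple as something like the following – the database and dump file names here are just placeholders:

mysql -u root -p -e "CREATE DATABASE your_db"
mysql -u root -p your_db < your_data_dump.sql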

Now, you’ll need to use Sqoop to load the data from the MySQL database into Hadoop’s HDFS. While logged into the jobtracker node, you can just run the following. You’ll need to swap out the placeholders in the command and change the username/password.

sqoop-import-all-tables --connect jdbc:mysql://[IP of your jobtracker]/[your db_name] --username root --password root --verbose --hive-import

Once it completes, Sqoop will have copied all your MySQL data into Hadoop’s HDFS file system and initialized Hive for you.

Crunching the data

Run “hive” on the jobtracker and you’ll be ready to start crunching your data.

Check out the Hive language manual for more info on exactly what queries you can write.

Once you’ve narrowed down how to write your queries, you can use Hive’s “INSERT OVERWRITE LOCAL DIRECTORY” command to output the results of your query into a local directory.
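
For example, something like the following would dump an aggregate report into a directory on the jobtracker (again, the table, columns, and output path are made up for illustration):

INSERT OVERWRITE LOCAL DIRECTORY '/tmp/event_report'
SELECT event_type, COUNT(*)
FROM events
GROUP BY event_type;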

Then, the next step would be to tar up these results and use scp to copy them back to your local machine to analyze or warehouse.
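
Something along these lines does the trick – the hostname and paths are placeholders:

# on the jobtracker
tar -czf event_report.tar.gz /tmp/event_report

# then from your local machine
scp [your username]@[jobtracker hostname]:event_report.tar.gz .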

Shutting it down

The final thing you’ll need to do is shut down the cluster. Whirr makes it pretty easy:

./bin/whirr destroy-cluster --config hadoop.properties

Give it a few minutes and Whirr will shut down the cluster and clean up the EC2 security group as well.

Anyway, I hope this walkthrough proves useful for someone. As always, feedback, questions, and comments are all more than welcome.

Adding a task/command in Symfony2

I recently took the Symfony2 plunge and started working on a little fun side project (more on that later).

Anyway, this particular project involves sending out daily text messages using the rather awesome Twilio API, so I decided to use a Symfony2 task for this. The documentation on how to actually add your own task is a bit sparse, so I figured I’d share.

The process is actually pretty straightforward:

  1. In your bundle, create a directory named “Command” (without the quotes).
  2. Create a class that extends ContainerAwareCommand.
  3. Add a protected configure method – “protected function configure()” – where you set the name of your task and add any options or arguments you might need.
  4. Add a protected execute method – “protected function execute(InputInterface $input, OutputInterface $output)” – to actually do whatever needs to be done.
  5. That’s it! Now you can run app/console and you’ll see your task.

Here’s what the code ends up looking like:
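
This is a minimal sketch of that structure rather than a drop-in implementation – the Acme\DemoBundle namespace, the demo:send-texts name, and the Twilio bits are placeholders you’d adapt to your own bundle:

<?php

namespace Acme\DemoBundle\Command;

use Symfony\Bundle\FrameworkBundle\Command\ContainerAwareCommand;
use Symfony\Component\Console\Input\InputArgument;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;

class SendDailyTextsCommand extends ContainerAwareCommand
{
    protected function configure()
    {
        // This is where the task gets its name and any options/arguments.
        $this
            ->setName('demo:send-texts')
            ->setDescription('Sends the daily text messages')
            ->addArgument('limit', InputArgument::OPTIONAL, 'Maximum number of messages to send');
    }

    protected function execute(InputInterface $input, OutputInterface $output)
    {
        $limit = $input->getArgument('limit');

        // Grab whatever you need from the container (e.g. a Twilio client
        // service you've defined) and do the actual work here.
        // $twilio = $this->getContainer()->get('my_twilio_service');

        $output->writeln('Daily texts sent' . ($limit ? " (limit: $limit)" : ''));
    }
}

With that file in your bundle’s Command directory, running app/console should list demo:send-texts, and app/console demo:send-texts will execute it.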

Running Java apps from the crontab

Earlier this week I was completely dumbfounded by a PHP script that launched a Java app: it seemed to work fine when run from the command line but kept failing when run from cron.

The Java app in question was “ec2-describe-group” out of the Amazon EC2 API Tools package.  Basically, the ec2-describe-group tool hits the EC2 API and returns information about your account’s currently configured security groups.

The issue I was having was that when the PHP script was launched from cron, ec2-describe-group would keep returning an empty string, but when the script was launched from the CLI, ec2-describe-group behaved normally.

After some poking around, I found this StackOverflow post, which points out that most of the environment variables your shell has aren’t available in a cron job.

With that in mind, I tried adding JAVA_HOME as well as EC2_HOME to my crontab. Doing this is pretty straightforward: just add these two lines above any of your scheduled jobs:

EC2_HOME=/opt/ec2-api-tools-1.3.36506
JAVA_HOME=/etc/java-config-2/current-system-vm

Unfortunately, this still didn’t resolve the issue. On a whim, I decided to check what type of file ec2-describe-group actually is and discovered that it’s a Bash script, not a Java JAR. Looking at the script, it’s actually just executing “EC2_HOME/bin/ec2-cmd DescribeGroups”, but it relies on other environment variables that my cron didn’t have.

For simplicity’s sake, I decided to just switch the PHP script to run ec2-cmd directly, and finally everything started working as expected.
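
In PHP terms, the change boiled down to something like this sketch – the paths match the crontab entries above, and the exact command string is illustrative rather than the original script:

<?php
// Call ec2-cmd directly with an explicit environment instead of relying on
// the ec2-describe-group wrapper finding one in the (mostly empty) cron environment.
$ec2Home  = '/opt/ec2-api-tools-1.3.36506';
$javaHome = '/etc/java-config-2/current-system-vm';

$output = shell_exec("EC2_HOME=$ec2Home JAVA_HOME=$javaHome $ec2Home/bin/ec2-cmd DescribeGroups 2>&1");

echo $output;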