
(Note: This is a guest post from our friends at Panoply)

Cloud-based data services are all the rage these days for many good reasons, and AWS (Amazon Web Services) is the current king of cloud-based data service providers, as this analysis carried out by Stack Overflow indicates.

Two popular AWS cloud computing services for data analytics and BI are Amazon Redshift and Amazon Athena, both of which help you turn your data into actionable insights that drive better decision making. However, with a dizzying amount of information available on both services, it’s a challenge to know what to look for when choosing a cloud-based data service to meet your needs.

In this post, you’ll get a broad overview of cloud-based data warehousing, and you’ll come to understand the main differences between Amazon Redshift and Amazon Athena (also see this post by Panoply on the subject).

When you’re finished reading, you’ll know whether Athena or Redshift is the better choice for your needs. The comparison can also teach you what to look for, in more general terms, when considering any cloud-based data solution currently available.

A Data Warehouse in the Cloud

Traditional on-premises data warehouses are used for analyzing an organization’s historical data in one unified repository, pulling data from many different source systems, such as operational databases. Physical data warehouses are complex and expensive to build and maintain, though.

Cloud-based data warehouse services offer a much cheaper and easier way to use a data warehouse without needing any physical resources on site. Cloud-based providers host the necessary physical resources “in the cloud” while you simply pay for using the service.

Some examples of data warehouses in the cloud are:

  • Amazon Redshift—in Redshift, you provision and manage resources much as you would in a traditional data warehouse, but without hosting any physical computing resources on-site—AWS hosts them in the cloud instead as “clusters”. You simply connect your operational data sources to the cloud and move the data into Redshift for use with analytics and BI tools. Redshift uses a customized version of PostgreSQL.
  • Amazon Athena—Athena provides an interactive query service that makes it possible to directly analyze data stored in Amazon S3, which is the cloud storage service provided by AWS. You query data in Athena with standard ANSI SQL. In contrast to Redshift, Athena takes a serverless approach to data warehousing because you don’t need to provision resources or manage the infrastructure.
  • Google BigQuery—BigQuery is a serverless cloud-based data analytics platform that enables querying of very large read-only datasets, and it works in conjunction with Google Cloud Storage.
  • Azure SQL Data Warehouse—this Microsoft offering is a scalable and fully managed cloud-based data warehouse service.

You could write a book comparing all four of these services, so we’re going to home in on Amazon Redshift and Amazon Athena below.

Amazon Athena vs. Amazon Redshift – Feature Comparison

  • Initialization Time: Amazon Athena is the clear winner here because you can immediately begin querying data stored on Amazon S3. Redshift, on the other hand, requires you to prepare a cluster of computing resources and load data into the tables you create.
  • User-defined Functions: Amazon Redshift supports user-defined functions (UDFs), which are procedures that accept parameters, perform an action, and return the result of that action as a value. Amazon Athena has no support for UDFs.
  • Data Types: Amazon Athena supports more complex data types, such as arrays, maps, and structs, while Redshift has no support for such complex data types.
  • Performance: For basic table scans and small aggregations, Amazon Athena outperforms Redshift. However, for complex joins and larger aggregations, Redshift is a better option.
  • Cost: Athena’s cost is based on the amount of data scanned in each query, which means it’s important to compress and partition data. Since Amazon Athena queries data on S3, the total cost of S3 data storage combined with Athena query costs gives the full price. Redshift’s cost depends on the type of cloud instances used to build your cluster, and whether you want to pay as you use (on demand) or commit to a certain term of usage (reserved instances).

Athena’s cost is $5 per terabyte of data scanned, while Redshift’s hourly costs range from $0.250 to $4.800 per hour for a DC instance, and $0.850 to $6.800 per hour for a DS instance.
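To make the cost point concrete, here’s a minimal sketch of how partitioning cuts Athena’s scan costs. The bucket path, schema, and table name are hypothetical; the DDL follows the Hive-style syntax Athena uses:

    -- Hypothetical orders table on S3, partitioned by year/month
    CREATE EXTERNAL TABLE orders (
      order_id STRING,
      amount DOUBLE
    )
    PARTITIONED BY (year STRING, month STRING)
    STORED AS PARQUET
    LOCATION 's3://example-bucket/orders/';

    -- Load the partition metadata, then filter on the partition columns;
    -- Athena only scans (and bills for) the partitions it actually reads
    MSCK REPAIR TABLE orders;
    SELECT SUM(amount) FROM orders WHERE year = '2017' AND month = '08';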

AWS Redshift Spectrum, Athena, S3

Redshift Spectrum is a powerful feature that enables Redshift to query data directly from S3. With Spectrum you can create a read-only external table, with its data located in a specified S3 path, and immediately begin querying that data without inserting it into Redshift. You can also join external tables with tables that already reside in Redshift.
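As a rough sketch of what that looks like in practice (the schema, IAM role, S3 path, and table names below are hypothetical):

    -- Register an external schema backed by an external data catalog
    CREATE EXTERNAL SCHEMA spectrum
    FROM DATA CATALOG DATABASE 'spectrumdb'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftSpectrumRole';

    -- Define a read-only external table whose data lives on S3...
    CREATE EXTERNAL TABLE spectrum.sales (
      customer_id INTEGER,
      amount DECIMAL(8,2)
    )
    STORED AS PARQUET
    LOCATION 's3://example-bucket/sales/';

    -- ...and join it against a table that already resides in Redshift
    SELECT c.name, SUM(s.amount)
    FROM spectrum.sales s
    JOIN customers c ON c.id = s.customer_id
    GROUP BY c.name;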

Querying data in S3: sounds familiar, right? That’s because Amazon Athena performs a similar function—it’s an S3 querying service. It’s important to note, however, that Spectrum is not an integration between Redshift and Athena—Redshift queries the relevant data on its own from S3 without the help of Athena.

If you are already an Amazon Redshift user, it makes sense to opt for Spectrum over Athena because of the convenience. However, if you aren’t currently using Redshift, it’s best to choose Athena over Spectrum because your investment in computing resources might go underutilized in Redshift. For your current analytics needs, Athena is likely to do the job—you can always invest in a Redshift+Spectrum combination later on when it’s needed to handle lots of data.

Athena vs. Redshift – Which Should You Choose?

There is no widespread consensus on whether Amazon Athena is better than Redshift or vice versa—both services suit different uses.

  • Amazon Athena is much quicker and easier to set up than Redshift, and this querying service outperforms Redshift on all basic table scans and small aggregations. The accessibility of Athena makes it better suited to running quick ad hoc queries.
  • For complex joins, larger aggregations, and very large datasets, Redshift is better due to its computational capacity and highly scalable infrastructure.
  • The conclusion, therefore, is that Redshift is likely the cloud-based data warehouse platform of choice if you have lots of data and many complex queries to run. Amazon Athena is the recommended cloud-based service for analyzing your data in all other cases, and it’s suitable for small- to medium-sized organizations.

Closing Thoughts

Cloud-based data warehouses are quickly replacing traditional on-premises data warehouses because of their convenience, lower cost, and scalability.

Amazon Athena and Amazon Redshift take differing approaches to cloud-based data analytics services—Redshift requires resource provisioning and infrastructural management while Athena abstracts operational management away from users and allows direct querying of data stored on Amazon S3.

Redshift Spectrum provides separation of storage and compute in Redshift by allowing you to directly query data in S3, similar to Athena. Spectrum is useful if you already use Redshift, but you shouldn’t base your decision between Athena and Redshift on the Spectrum feature.

The relevant comparison between Amazon Athena and Redshift relates to how they perform, what they cost, which tools they support, their usability, their accessibility, supported data types, and user-defined functions. You should base your final decision between Redshift and Athena on these factors, prioritizing the aspects of each service that matter most to your particular business. Maybe you prefer Athena’s effortless accessibility? Or maybe you’d rather have the control and scalability you get with Redshift?

When weighing up any potential cloud-based data warehouse, always consider the above factors, instead of just choosing the most affordable solution.

Posted In: General

Despite how important they are, MySQL indexes are a bit of a dark art. Sure, everyone knows indexes are important, but details on how they’re implemented and when they’ll be used are hard to come by. Beyond regular indexes, MySQL’s composite indexes are especially opaque with regard to how and when they’ll be used. As the name suggests, a composite index is an index constructed across two or more columns, versus a regular index on a single column. So when might a composite index come in handy? Let’s take a look!

We’ll look at a table “client_order” that captures some fictional orders from our fictional clients:
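The original table definition was embedded as a gist; a minimal sketch, assuming columns consistent with the queries below, would be something like:

    -- Hypothetical reconstruction of the client_order table
    CREATE TABLE client_order (
      id INT UNSIGNED NOT NULL AUTO_INCREMENT,
      client_id INT UNSIGNED NOT NULL,
      amount DECIMAL(10, 2) NOT NULL,
      created_at DATETIME NOT NULL,
      PRIMARY KEY (id)
    );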

And we’ll fill it up with 5 million fictional orders with dates spanning the last 10 years. You can grab the data from https://setfive-misc.s3.amazonaws.com/client_order.sql.gz if you want to follow along locally.

To get started, let’s figure out the total amount spent for a couple of clients:
https://gist.github.com/adatta02/f675b2c7b0659ab960d791b44ee02861
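Roughly, the query in that gist looks like this (the client IDs are arbitrary):

    SELECT client_id, SUM(amount)
    FROM client_order
    WHERE client_id IN (42, 512)
    GROUP BY client_id;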

~1.5 seconds to calculate the sums, and according to the EXPLAIN, MySQL had to use a temporary table and a filesort. Will an index help here? Let’s add one and find out.
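A plain single-column index on client_id does the trick here (the index name is arbitrary):

    ALTER TABLE client_order ADD INDEX idx_client_id (client_id);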

~0.2 seconds, and looking at the EXPLAIN, we’ve cut the number of rows MySQL has to examine down to 424. Much better. OK great, but now what if we’re only interested in data from Christmas Eve 2016?

(Note: Details on why we’re querying with full timestamps below)
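Roughly, it’s the same query with a timestamp range added (again, hypothetical client IDs):

    SELECT client_id, SUM(amount)
    FROM client_order
    WHERE client_id IN (42, 512)
      AND created_at BETWEEN '2016-12-24 00:00:00' AND '2016-12-24 23:59:59'
    GROUP BY client_id;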

As you can see, MySQL is still using the client_id index, but we’re still left scanning 281,308 rows even though only 335 are actually relevant to us. So how do we fix this? Enter the composite index! Let’s add one on (client_id, created_at) and see if it helps our query:
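Adding the composite index looks like this (the index name is arbitrary):

    ALTER TABLE client_order ADD INDEX idx_client_created (client_id, created_at);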

It helps, but we’re clearly still looking at a lot more rows than we need. So what gives? It turns out the order of the columns in a composite index is critically important, since it dictates how MySQL assembles the b-tree for the index. Let’s flip the order of our index and try again:
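With the column order reversed:

    ALTER TABLE client_order DROP INDEX idx_client_created;
    ALTER TABLE client_order ADD INDEX idx_created_client (created_at, client_id);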


And there you go! MySQL only has to look at 1,360 rows, as expected.

So what’s up with having to query with full timestamps vs. just using DATE(created_at)? It turns out MySQL can’t use a datetime index when you apply a function to the column you’re querying on. And beyond that, even certain ranges cause MySQL to pass over indexes that would work fine:
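For example, the first query below can’t use a plain index on created_at at all because of the DATE() call, and with wide ranges like the second one the optimizer may decide a full scan is cheaper and skip the index entirely:

    -- Wrapping the column in a function prevents index use
    SELECT COUNT(*) FROM client_order WHERE DATE(created_at) = '2016-12-24';

    -- A range covering a large slice of the table may get a full scan
    -- even though an index on created_at exists
    SELECT COUNT(*) FROM client_order WHERE created_at >= '2015-01-01 00:00:00';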

Which leads to the unintuitive conclusion that if you actually need to implement any sort of aggregation by day, you’d be better off adding a “date” column calculated from “created_at” and indexing on that:
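A sketch of that approach (on MySQL 5.7+ you could also use a stored generated column instead of the manual backfill):

    -- Add a plain DATE column, backfill it, and index it
    ALTER TABLE client_order ADD COLUMN created_date DATE;
    UPDATE client_order SET created_date = DATE(created_at);
    ALTER TABLE client_order ADD INDEX idx_created_date (created_date);

    -- Day-level aggregation can now use the index directly
    SELECT created_date, SUM(amount)
    FROM client_order
    GROUP BY created_date;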

Anyway, as always, comments and feedback are welcome!

Posted In: Big Data


When Amazon Web Services rolled out their version 4 signature, we started seeing sporadic errors on a few projects when we created pre-authenticated links to S3 resources with a relative timestamp. Tracking down the errors wasn’t easy: they seemed to occur rarely, while executing the exact same code. Our code simply got a pre-authenticated URL that would expire in 7 days, the max duration V4 signatures are allowed to be valid for. The error we’d get was “The expiration date of a signature version 4 presigned URL must be less than one week”. Weird, since we kept passing in “7 days” as the expiration time. After the error occurred a couple of times over a few weeks, I decided to look into it.

The code throwing the error was located right in the SignatureV4 class. The error is thrown when the end timestamp minus the start timestamp for the signature is greater than a week. The way the timestamps were generated went something like this:

  1. Generate the start timestamp as current time for the signature assuming one is not passed.
  2. Do a few other quick things not related to this problem.
  3. Do a check to ensure that the end minus the start timestamp is less than a week in seconds.

So a rough example of the above steps in straight PHP, for a “7 days” expiration, would be as follows:
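This isn’t the SDK’s actual code, just a bare-bones illustration of the ordering:

    <?php
    // 1. Generate the start timestamp first (truncated to whole seconds)...
    $start = time();

    // 2. ...do a few other quick things...

    // 3. ...then resolve the relative expiration and validate the duration.
    $end = strtotime('+7 days');
    if ($end - $start > 604800) { // 604,800 seconds in a week
        throw new \InvalidArgumentException(
            'The expiration date of a signature version 4 presigned URL '
            . 'must be less than one week'
        );
    }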

Straightforward enough, right? The problem arises when a second “rolls over” between generating `$start` and performing the end timestamp check. For example, say you generate `$start` at `2017-08-20 12:01:01.999999`, which gets assigned the timestamp `2017-08-20 12:01:01`. If the 7-day check then occurs at `2017-08-27 12:01:02.0000`, it’ll throw an exception, because the duration between the start and the end is now actually 604,801 seconds. It turns out triggering this error is easier than you’d think. Run this script locally:
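A tight loop like this will do it; it throws as soon as a second boundary happens to fall between the two calls:

    <?php
    // Keep generating start/end pairs until the clock rolls over a second
    // between the two calls, producing a 604,801-second duration
    while (true) {
        $start = time();
        $end = strtotime('+7 days');
        if ($end - $start > 604800) {
            throw new \Exception("Duration was " . ($end - $start) . " seconds");
        }
    }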

That will most likely throw an exception within a few seconds of running.

After I figured out the error, the next step was to submit an issue to make sure I wasn’t misunderstanding how the library should be used. The simplest fix for me was to generate the end expiration timestamp before generating the start timestamp. After I made the PR, Kevin S. from AWS pointed out that while this fixed the problem, the duration still wasn’t guaranteed to always be the same for the same relative time period. For example, if you created 1,000 presigned URLs, all with “+7 days” as the valid period, some might be 604,800 seconds in duration, others 604,799. This isn’t a huge problem, but Kevin made a great point: we could solve it by locking the relative end timestamp to the start timestamp. After adding that to the PR, it was accepted. As of release 3.32.4 the fix is now included in the SDK.

Posted In: Amazon AWS, PHP


I was recently out with a friend of mine who mentioned that he was having a tough time scraping some data off a website. After a few drinks we arrived at a barter: if I could scrape the data, he’d buy me some single malt scotch, which seemed like a great deal for me. I assumed I’d make a couple of HTTP requests, parse some HTML, grab the data, and dump it into a CSV. In the worst case, I imagined having to write some custom code to log in to a web app and maybe persist some cookies. And then I got started.

As it turned out, this site was running one of the most sophisticated anti-scraping/anti-robot packages I’ve ever encountered. In a regular browser session everything looked normal, but after a half dozen or so programmatic HTTP requests I started running into their anti-robot software. After poking around a bit, I found the blocks they were deploying were a mix of:

  • Whitelisted User Agents – Following a few requests from PHP cURL the site started blocking requests from my IP that didn’t include a “regular” user agent.
  • Requiring cookies and Javascript – I thought this was actually really clever. After a couple of requests, the site started quietly loading an intermediate page that required your browser to run Javascript to set a cookie and then complete a POST request to a URL that included a nonce in order to view a page. To a regular user this was fairly transparent, since it happened so quickly, but it obviously trips up a programmatic HTTP client.
  • Soft IP rate limits – After a couple of dozen requests from my IP I started receiving “Solve this captcha” pages in order to view the target content.

Taken all together, it’s a pretty sophisticated setup for what’s effectively a niche social networking site. With the “requires Javascript” requirement, I decided to explore using Electron for this project. And it turns out, it’s a perfect fit. For a quick primer, Electron is an open source project from GitHub that enables developers to build cross-platform desktop applications by merging nodejs and Chromium. Developers end up writing Javascript that can leverage the nodejs ecosystem while also using Chromium’s browser internals to render windows and widgets. Electron helps in this use case because it provides a full browser that’s scriptable and has access to node’s system-level modules. For completeness, you could implement all of this in a Chrome extension, but in my experience extensions have more complicated non-privileged-to-privileged communication and lack access to node, so you can’t just fire off a “fs.writeFileSync” to persist your results.

With a full browser environment, we now need to tackle the IP restrictions that cause captchas to appear. At face value, like most people, I assumed solving captchas with OCR magic would be easier than getting new IPs after a couple of requests, but it turns out that’s not true. There weren’t any usable “captcha solvers” on npm, so I decided to pursue the IP angle. The idea would be to grab a new IP address after a few requests, to avoid having to solve a captcha, which would require human intervention. Following some research, I found out that it’s possible to use Tor as a SOCKS proxy from a third-party application. So concretely, we can launch a Tor circuit and then push our Electron HTTP requests through Tor to get a different IP address than that of your normal Internet connection.

Ok, enough talk, show me some code!

I set up a test “target page” at http://code.setfive.com/scraper_demo/ which randomly shows either “content you want” or a “please solve this captcha” prompt. The github repository at https://github.com/adatta02/electron-scraper-skeleton has all the goodies: a runnable Electron application. The money file is injected.js, which looks like:
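The real implementation lives in the repo; roughly, it does something like this (the strings it matches and the IPC channel names here are illustrative):

    // injected.js runs inside the target page: it looks for the content
    // or a captcha and reports back to the main process over IPC
    const { ipcRenderer } = require('electron');

    window.addEventListener('DOMContentLoaded', () => {
      const text = document.body.innerText;

      if (text.indexOf('please solve this captcha') !== -1) {
        // A human needs to step in, so have the main process play a "ding!"
        ipcRenderer.send('captcha-found');
      } else if (text.indexOf('content you want') !== -1) {
        // Got what we came for: ship it back to be persisted
        ipcRenderer.send('content-found', text);
      }
    });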

To run that locally, you’ll need to do the usual “npm install” and then also run a Tor instance if you want to get a new IP address on every request. The way it’s implemented, it’ll detect the “content you want” and also alert you when there’s a captcha by playing a “ding!” sound. To launch, first start Tor and let it connect. Then you should be able to run:
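The exact start script is defined in the repo’s package.json; presumably something like:

    npm install
    npm start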

Once it loads, you’ll see the test page in what looks like a Chrome window with a devtools instance. As it refreshes, you’ll notice that the IP address displayed for you keeps updating. One “gotcha” is that by default Tor will only get a new IP address each time it opens a circuit, so you’ll notice that I run “killall” after each request, which closes the Tor circuit and forces it to reopen.

And that’s about it. Using Tor with the skeleton you should be able to build a scraper that presents a new IP frequently, scrapes data, and conveniently notifies you if human input is required.

As always, questions and comments are welcome!

Posted In: Javascript


It has become a bi-weekly ritual. The professor spent too much time on the course material again and is left mumbling through a complex project description during the 11th hour of class. All the while, you’re off somewhere else. As you sling your backpack over your shoulder, you catch the only words you’ll need to hear: “You can download the syllabus along with the source code from the CS department’s website,” they say. Great! You hustle back to your study location of choice, open your laptop, and extract the project files. After the obligatory knuckle crack, you look down at the method stubs spelled out for you. “All I have to do is fill in these functions?” you think to yourself. And as you’re getting familiar with the project structure, a couple of flicks of the scroll wheel reveal hundreds, sometimes thousands, of lines of unexplained boilerplate code.

You eventually finish up the assignment and push it to the CS department’s server for grading. Without fail, someone raises their hand during the next class asking the instructor if they could explain what some of that boilerplate code was for, at which point the student is usually told to refer to the language documentation to figure it out for themselves. And for the most part, this makes perfect sense. After all, you’re there to learn about some of the more complex topics in computer science, not to write setter and getter methods all day. That’s what your data structures class was for.

But I would like to share with you the first few months of my experience as a Jr. Software Engineer and compare it to my time as an undergraduate student. You might be not-so-surprised to hear I have spent more time writing code similar to the boilerplate stuff mentioned above than I have perfecting the space and time complexity of my pioneering solution to The Traveling Salesman problem.

As an undergraduate student, I was an ace at avoiding merge conflicts in repositories where I was the only contributor. I could even run a build script with the best of ‘em. Nobody ever really told me how to use version control systems to manage a collaborative project with tens of thousands of lines of code strewn across a mess of files and directories. And if, for some reason, those same build scripts broke or a merge conflict popped up on a group project? Well, I was pretty much at the mercy of Stack Overflow.

At Setfive, when I was tasked with setting up a relational database schema for my first real project, I wasn’t really sure where to begin. There was no syllabus to refer to and no professor to schedule office hours with. While I was aware of technologies such as MySQL and NodeJS, I had never really written a query, so I certainly didn’t know the difference between an inner and an outer join. And while coordinating all those AJAX calls and setting up the Symfony bundle configs was a little confusing at first, I think I’m starting to learn how to apply my undergraduate education to these real-world projects.

So far, I have found that industry-level programming helps hone a much more practical skill set than academic programming. Don’t get me wrong, I learned a ton in college, and I know the concepts taught are not only important to a fundamental understanding of the field of computer science, but also have profound and meaningful applications elsewhere, such as in operating systems, machine learning, and so on. But when I look back on the things I have learned in such a short period of time over these past few months, it gets me excited for the road ahead. I owe an enormous thanks to Setfive for bringing me on as an entry-level software developer and advising me with patience.

Posted In: General