(Note: This originally appeared on Codeburst)
So what are a couple of these features? We'll be looking at code from Setfive's CloudWatch Autowatch, an AWS cloud monitoring tool written in TypeScript. Since TypeScript is evolving so quickly, it's worth noting that these examples were run against version 2.4.2. Anyway, enough talk, let's code!
If you have any experience with Java you've probably encountered the dreaded NullPointerException when you tried to dereference a variable holding a "null" value. It's certainly a pain and has been derided as a "billion dollar mistake" by its creator. To tackle NullPointerException bugs, TypeScript 2.0 introduced the concept of non-nullable types. It's "opt-in" via the "strictNullChecks" compiler flag, which you can set in your tsconfig.json:
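Enabling the flag is just a tsconfig.json entry (shown here for reference):

```json
{
  "compilerOptions": {
    "strictNullChecks": true
  }
}
```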
Consider this simple sample:
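The original sample isn't reproduced in this archive (the file name and line/column in the error below refer to it), but a minimal sketch along these lines, with illustrative names, shows the same behavior:

```typescript
// partyguests.ts -- illustrative reconstruction, not the original sample
function getPartyGuest(): string {
    if (Math.random() > 0.5) {
        // flagged by the compiler once strictNullChecks is on
        return null;
    }
    return "Ashish";
}

const guests: string[] = [];
// Without strictNullChecks this compiles, but it blows up at runtime
// whenever getPartyGuest() returns null.
guests.push(getPartyGuest().toUpperCase());
console.log(guests);
```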
By default, the TypeScript compiler will compile that code since "null" is assignable to "string", but you'll get a runtime error about half the time you run it. Now, if we set "strictNullChecks: true" and run the compiler we'll get an error:
partyguests.ts(5,9): error TS2322: Type ‘null’ is not assignable to type ‘string’.
This is because the compiler can infer that at least one code path in the function produces a null, which is no longer compatible with the declared type. An example of this in the Autowatch code is the set of checks ensuring that PutMetricAlarmInput instances aren't created with null dimensions, at line 462 for example.
Most programming languages with a Hindley–Milner type system have some functionality to perform a “pattern match” over a type. In practice, that allows a developer to make decisions about what to do with a set of objects based on the concrete type vs. their abstract signatures. For example, in Scala:
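The post's original Scala snippet isn't included here; a small sketch of the idea (the types and names are illustrative) would be:

```scala
// Pattern matching over a sealed type hierarchy
sealed trait Notification
case class Email(to: String, body: String) extends Notification
case class SMS(number: String, body: String) extends Notification

def describe(n: Notification): String = n match {
  case Email(to, _)   => s"Email to $to"
  case SMS(number, _) => s"SMS to $number"
  // Because Notification is sealed, the compiler warns if a case is missing.
}
```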
However, with TypeScript’s “never” type it’s possible to have the compiler guarantee an exhaustive match for us. We could do something like the following:
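The original snippet isn't reproduced here either; a sketch of the approach, with illustrative type names that match the discussion below, could look like:

```typescript
// match2.ts -- illustrative reconstruction, not the original snippet
interface Email { kind: "email"; to: string; }
interface SMS { kind: "sms"; number: string; }

type MyNotification = Email | SMS;

function assertNever(x: never): never {
    throw new Error("Unexpected notification: " + JSON.stringify(x));
}

function describe(n: MyNotification): string {
    if (n.kind === "email") {
        return "Email to " + n.to;
    } else if (n.kind === "sms") {
        return "SMS to " + n.number;
    } else {
        // If every member of MyNotification is handled above, `n` narrows to
        // `never` here and this call type checks.
        return assertNever(n);
    }
}
```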
The important part is the call to "assertNever", which the compiler will error on if it detects that the call is a reachable code path. We can confirm this by adding "BulkMessage" to the "MyNotification" type without handling it in the if:
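Continuing the sketch above, that change would look something like:

```typescript
interface BulkMessage { kind: "bulk"; to: string[]; }

// BulkMessage is now part of the union (replacing the earlier declaration),
// but describe() above doesn't handle it.
type MyNotification = Email | SMS | BulkMessage;
```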
If you run the TypeScript compiler against that you’ll get an error highlighting that your if isn’t exhaustive since it’s hitting the “never”:
match2.ts(19,24): error TS2345: Argument of type ‘BulkMessage’ is not assignable to parameter of type ‘never’.
It’s certainly not as elegant as the Scala example but it does the job. You can see a real example in Autowatch starting at line 168 where we used it to guarantee exhaustive matching on the available AWS services.
Marking a class property "readonly" signals to the compiler that code shouldn't be able to modify the value after initialization. Although "readonly" may sound similar to marking a property as "private", it actually enhances the type system in important ways. First, "readonly" properties make it possible to more faithfully represent immutable data. For example, an HTTP request has a "url" which will never change after the request has started. Second, by marking properties as "readonly" as opposed to private, you're still able to return literal objects with matching properties.
Let’s look at an example:
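The original example isn't included here; a minimal sketch (the class shape and values are illustrative) is:

```typescript
// readonly.ts -- illustrative reconstruction, not the original example
class HttpRequest {
    constructor(readonly url: string, readonly method: string) {}
}

const req = new HttpRequest("https://setfive.com", "GET");
req.url = "https://example.com"; // rejected: url is read-only
```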
If you run that through the TypeScript compiler you’ll get an error advising that the property is readonly:
readonly.ts(6,5): error TS2540: Cannot assign to ‘url’ because it is a constant or a read-only property.
Now, if you try to mark the url property as private and create a literal HttpRequest, you'll get an error:
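Continuing the sketch, with "private" instead of "readonly" a structurally identical object literal no longer satisfies the type:

```typescript
class HttpRequest {
    constructor(private url: string, private method: string) {}
}

// Rejected by the compiler: 'url' is private in HttpRequest but not in the
// literal's type, so the literal isn't assignable to HttpRequest.
const req: HttpRequest = { url: "https://setfive.com", method: "GET" };
```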
But if you switch it back to “readonly” it’ll work as expected.
You can see real world usage of this in Autowatch, where we marked properties in our Config class as readonly since they should never change once the object has been constructed.
Well, that's three pretty cool features of the TypeScript type system that should help you be more productive and write better code. If you found these interesting, there are several other type-related features that landed in the 2.3+ versions of TypeScript that are worth checking out.
(Note: This is a guest post from our friends at Panoply)
Cloud-based data services are all the rage these days for many good reasons, and AWS (Amazon Web Services) is the current king of cloud-based data service providers, as this analysis carried out by StackOverflow indicates.
Two popular AWS cloud computing services for data analytics and BI are Amazon Redshift and Amazon Athena, both of which are useful for delivering actionable insights that drive better decision making from your data. However, with a dizzying amount of information available on both services, it’s a challenge to recognize what to look out for when choosing a cloud-based data service to meet your needs.
In this post, you’ll get a broad overview of cloud-based data warehousing, and you’ll come to understand the main differences between Amazon Redshift and Amazon Athena (also see this post by Panoply on the subject).
When you’re finished reading, you’ll know which service you should choose between Athena and Redshift. The comparison can also teach you what to look for in more general terms when considering any cloud-based data solution currently available.
Traditional on-premise data warehouses are used for analyzing an organization’s historical data in one unified repository, pulling data from many different source systems, such as operational databases. Physical data warehouses are complex and expensive to build and maintain, though.
Cloud-based data warehouse services offer a much cheaper and easier way to use a data warehouse without needing any physical resources on site. Cloud-based providers host the necessary physical resources “in the cloud” while you simply pay for using the service.
Some examples of data warehouses in the cloud are:
You could write a book comparing all four of these services, so we’re going to hone in on both Amazon Redshift and Amazon Athena below.
Athena's cost is $5 per terabyte of data scanned, while Redshift's costs range from $0.250 to $4.800 per hour for a DC instance and from $0.850 to $6.800 per hour for a DS instance.
Redshift Spectrum is a powerful feature that enables data querying in Redshift directly from S3. With Spectrum you can create a read-only external table, with its data located in a specified S3 path, and immediately begin querying that data without inserting it into Redshift. You can also join the external tables with tables that already reside on Redshift.
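As a rough illustration of that workflow (the schema, table, IAM role, and S3 path below are placeholders rather than anything prescribed by the post):

```sql
-- Register an external schema backed by the data catalog
CREATE EXTERNAL SCHEMA spectrum_demo
FROM DATA CATALOG DATABASE 'demo_db'
IAM_ROLE 'arn:aws:iam::123456789012:role/MySpectrumRole'
CREATE EXTERNAL DATABASE IF NOT EXISTS;

-- A read-only external table whose data lives in S3
CREATE EXTERNAL TABLE spectrum_demo.page_views (
    user_id   INT,
    url       VARCHAR(256),
    viewed_at TIMESTAMP
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION 's3://my-example-bucket/page_views/';

-- External tables can be joined against ordinary Redshift tables
SELECT u.email, COUNT(*) AS views
FROM spectrum_demo.page_views pv
JOIN users u ON u.id = pv.user_id
GROUP BY u.email;
```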
Querying data in S3: sounds familiar, right? That’s because Amazon Athena performs a similar function—it’s an S3 querying service. It’s important to note, however, that Spectrum is not an integration between Redshift and Athena—Redshift queries the relevant data on its own from S3 without the help of Athena.
If you are already an Amazon Redshift user, it makes sense to opt for Spectrum over Athena because of the convenience. However, if you aren’t currently using Redshift, it’s best to choose Athena over Spectrum because your investment in computing resources might go underutilized in Redshift. For your current analytics needs, Athena is likely to do the job—you can always invest in a Redshift+Spectrum combination later on when it’s needed to handle lots of data.
There is no widespread consensus on whether Amazon Athena is better than Redshift or vice versa—both services suit different uses.
Cloud-based data warehouses are quickly replacing traditional on-premise data warehouses because of their convenience, lower cost, and scalability.
Amazon Athena and Amazon Redshift take differing approaches to cloud-based data analytics services—Redshift requires resource provisioning and infrastructural management while Athena abstracts operational management away from users and allows direct querying of data stored on Amazon S3.
Redshift Spectrum provides separation of storage and compute in Redshift by allowing you to directly query data in S3, similar to Athena. Spectrum is useful if you already use Redshift, but you shouldn't base your decision between Athena and Redshift on the Spectrum feature.
The relevant comparison between Amazon Athena and Redshift relates to how they perform, what they cost, which tools they support, their usability, their accessibility, supported data types, and user-defined functions. You should base your final decision between Redshift and Athena on these factors, prioritizing the aspects that matter most for your particular business. Maybe you prefer Athena's effortless accessibility? Or maybe you'd rather have the control and scalability you get with Redshift?
When weighing up any potential cloud-based data warehouse, always consider the above factors, instead of just choosing the most affordable solution.
Despite how important they are, MySQL indexes are a bit of a dark art. Sure, everyone knows indexes are important, but details on how they're implemented and when they'll actually be used are hard to come by. Beyond regular indexes, MySQL's composite indexes are especially opaque in regards to how and when they'll be used. As the name suggests, a composite index is an index constructed across two or more columns, versus a regular index on a single column. So when might a composite index come in handy? Let's take a look!
We’ll look at a table “client_order” that captures some fictional orders from our fictional clients:
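The original DDL isn't reproduced here; a sketch of a table with the columns the post references (the other columns are illustrative) would be:

```sql
-- Illustrative reconstruction of the client_order table
CREATE TABLE client_order (
    id         INT UNSIGNED NOT NULL AUTO_INCREMENT,
    client_id  INT UNSIGNED NOT NULL,
    amount     DECIMAL(10, 2) NOT NULL,
    created_at DATETIME NOT NULL,
    PRIMARY KEY (id)
) ENGINE=InnoDB;
```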
And we’ll fill it up with 5 million fictional orders with dates spanning the last 10 years. You can grab the data from https://setfive-misc.s3.amazonaws.com/client_order.sql.gz if you want to follow along locally.
To get started, let’s figure out the total amount spent for a couple of clients:
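The original query isn't shown in this archive; it was presumably something along these lines (the client ids are arbitrary):

```sql
-- Run the query for timing and EXPLAIN it to see the execution plan
SELECT client_id, SUM(amount) AS total_spent
FROM client_order
WHERE client_id IN (42, 512)
GROUP BY client_id;
```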
~1.5 seconds to calculate the sums, and according to the EXPLAIN MySQL had to use a temporary table and a filesort. Will an index help here? Let's add one and find out:
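Presumably something along these lines (the index name is illustrative), followed by re-running the query above:

```sql
ALTER TABLE client_order ADD INDEX idx_client_id (client_id);
```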
~0.2 seconds and looking at the EXPLAIN we’ve cut down the number of rows MySQL has to look at to 424, much better. OK great, but now what if we’re only interested in looking at data from Christmas Eve in 2016?
(Note: Details on why we’re querying with full timestamps below)
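The query was presumably along these lines, filtering on the raw timestamp range rather than on DATE(created_at):

```sql
SELECT client_id, SUM(amount) AS total_spent
FROM client_order
WHERE client_id IN (42, 512)
  AND created_at BETWEEN '2016-12-24 00:00:00' AND '2016-12-24 23:59:59'
GROUP BY client_id;
```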
As you can see, MySQL is still using the client_id index, but we're left scanning 281,308 rows even though only 335 are actually relevant to us. So how do we fix this? Enter the composite index! Let's add one on (client_id, created_at) and see if it helps our query:
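Roughly (the index name is illustrative):

```sql
ALTER TABLE client_order ADD INDEX idx_client_created (client_id, created_at);
```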
It helps, but we're clearly still looking at a lot more rows than we need. So what gives? It turns out the order of the columns in a composite index is critically important, since that dictates how MySQL assembles the B-tree for the index. Let's flip the order of our index and try again:
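Again roughly:

```sql
ALTER TABLE client_order
    DROP INDEX idx_client_created,
    ADD INDEX idx_created_client (created_at, client_id);
```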
And there you go! MySQL only has to look at 1360 rows as expected.
So what’s up with having to query with the full timestamps vs. just using DATE(created_at)? It turns out MySQL can’t use datetime indexes when you apply functions to the column you’re querying on. And beyond that, even certain ranges cause MySQL to not select indexes that would work fine:
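For example (a sketch rather than the post's original EXPLAIN output):

```sql
-- Wrapping the indexed column in a function means the index on created_at
-- can't be used for this predicate:
EXPLAIN SELECT client_id, SUM(amount)
FROM client_order
WHERE DATE(created_at) = '2016-12-24'
GROUP BY client_id;

-- And for a wide enough range the optimizer may estimate that so many rows
-- match that it skips the index and scans instead:
EXPLAIN SELECT client_id, SUM(amount)
FROM client_order
WHERE created_at BETWEEN '2010-01-01 00:00:00' AND '2016-12-31 23:59:59'
GROUP BY client_id;
```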
This leads to the unintuitive conclusion that if you actually need to implement any sort of aggregation by day, you'd be better off adding a "date" column calculated from "created_at" and indexing on that:
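One way to do that (the column and index names are illustrative) is:

```sql
ALTER TABLE client_order ADD COLUMN created_on DATE;
UPDATE client_order SET created_on = DATE(created_at);
ALTER TABLE client_order ADD INDEX idx_created_on (created_on);

-- Day-level aggregations can now use the index directly:
SELECT created_on, SUM(amount)
FROM client_order
GROUP BY created_on;
```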
Anyway, as always comments and feedback welcome!
When Amazon Web Services rolled out their version 4 signature we started seeing sporadic errors on a few projects when we created pre-authenticated links to S3 resources with a relative timestamp. Tracking down the errors wasn't easy: they occurred only rarely while executing the same exact code. Our code simply requested a pre-authenticated URL that would expire in 7 days, the max duration V4 signatures are allowed to be valid. The error we'd get was "The expiration date of a signature version 4 presigned URL must be less than one week". Weird, since we kept passing in "7 days" as the expiration time. After the error occurred a couple of times over a few weeks I decided to look into it.
The code throwing the error was located right in the SignatureV4 class. The error is thrown when the end timestamp minus the start timestamp for the signature is greater than a week. Looking through the way the timestamps were generated, it went roughly like this: first, the start timestamp is captured from the current time; next, the relative expiration (for example "+7 days") is resolved into an end timestamp, again based on the current time; finally, an exception is thrown if the end minus the start exceeds one week (604,800 seconds).
So a rough example, in straight PHP, of the above steps for a '7 days' expiration would be as follows:
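Something along these lines (an illustration of the steps, not the SDK's actual code):

```php
<?php
// 1) capture the start timestamp ("now")
$start = new \DateTime();

// ... other signing work happens in between ...

// 2) resolve the relative expiration against "now" again
$end = new \DateTime('+7 days');

// 3) reject signatures that are valid for more than a week
if ($end->getTimestamp() - $start->getTimestamp() > 604800) {
    throw new \InvalidArgumentException(
        'The expiration date of a signature version 4 presigned URL must be less than one week'
    );
}
```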
Straightforward enough, right? The problem arises when a second "rolls over" between generating `$start` and the end timestamp check. For example, say you generate `$start` at `2017-08-20 12:01:01.999999`, which gets assigned the timestamp `2017-08-20 12:01:01`. If the 7 day check then occurs at `2017-08-27 12:01:02.000000`, it'll throw an exception since the duration between the start and the end is now actually 604,801 seconds. It turns out triggering this error is easier than you'd think. Run this script locally:
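The original script isn't reproduced here; a sketch of a reproduction using the AWS SDK for PHP v3 (the bucket and key are placeholders, credentials are assumed to be available in the environment, and the behavior assumes an affected SDK version) would be:

```php
<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$client = new S3Client(['region' => 'us-east-1', 'version' => 'latest']);

while (true) {
    $command = $client->getCommand('GetObject', [
        'Bucket' => 'my-example-bucket',
        'Key'    => 'some/object.txt',
    ]);

    // On affected SDK versions this eventually throws once a second ticks over
    // between generating the start timestamp and checking the expiration.
    $request = $client->createPresignedRequest($command, '+7 days');
    echo (string) $request->getUri(), "\n";
}
```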
That will most likely throw an exception within a few seconds of running.
After I figured out the error, the next step was to submit an issue to make sure I wasn't misunderstanding how the library should be used. The simplest fix for me was to generate the end expiration timestamp before generating the start timestamp. After I made the PR, Kevin S. from AWS pointed out that while this fixed the problem, the duration still wasn't guaranteed to always be the same for the same relative time period. For example, if you created 1000 presigned URLs all with '+7 days' as the valid period, some might be 604,800 seconds in duration and others 604,799. This isn't a huge problem, but Kevin made a great point that we could solve it by locking the relative timestamp for the end to the start timestamp. After adding that to the PR it was accepted. As of release 3.32.4 the fix is now included in the SDK.
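Conceptually, the accepted approach looks something like this (a sketch of the idea, not the SDK's actual code):

```php
<?php
// Resolve the relative expiration against the start timestamp itself, so the
// duration is always exactly the requested period.
$start = new \DateTimeImmutable();
$end   = $start->modify('+7 days');

$duration = $end->getTimestamp() - $start->getTimestamp(); // always 604800
```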
I was recently out with a friend of mine who mentioned that he was having a tough time scraping some data off a website. After a few drinks we arrived at a barter: if I could scrape the data, he'd buy me some single malt scotch, which seemed like a great deal for me. I assumed I'd make a couple of HTTP requests, parse some HTML, grab the data, and dump it into a CSV. In the worst case I imagined having to write some custom code to log in to a web app and maybe persist some cookies. And then I got started.
As it turned out, this site was running one of the most sophisticated anti-scraping/anti-robot packages I've ever encountered. In a regular browser session everything looked normal, but after a half dozen or so programmatic HTTP requests I started running into their anti-robot software. After poking around a bit, the blocks they were deploying turned out to be a mix of checks that require a real, JavaScript-capable browser and per-IP request limits that trigger captchas.
With a full browser environment, we now need to tackle the IP restrictions that cause captchas to appear. At face value, like most people, I assumed solving captchas with OCR magic would be easier than getting new IPs after a couple of requests, but it turns out that's not true. There weren't any usable "captcha solvers" on npm, so I decided to pursue the IP angle. The idea would be to grab a new IP address after a few requests to avoid having to solve a captcha, which would require human intervention. Following some research, I found out that it's possible to use Tor as a SOCKS proxy from a third party application. So concretely, we can launch a Tor circuit and then push our Electron HTTP requests through Tor to get a different IP address than the one on your normal Internet connection.
Ok, enough talk, show me some code!
I set up a test "target page" at http://code.setfive.com/scraper_demo/ which randomly shows either "content you want" or a "please solve this captcha" prompt. The GitHub repository at https://github.com/adatta02/electron-scraper-skeleton has all the goodies: a runnable Electron application. The money file is injected.js, which looks like:
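The file itself isn't reproduced in this archive; a sketch of what a script doing the detection described below might look like (the selectors, channel names, and structure are illustrative, not the repository's actual code):

```javascript
// Illustrative sketch, not the repository's actual injected.js
const { ipcRenderer } = require('electron');

window.addEventListener('load', () => {
  const bodyText = document.body.innerText;

  if (bodyText.indexOf('content you want') !== -1) {
    // Hand the scraped markup back to the main process
    ipcRenderer.send('scraped-content', document.body.innerHTML);
  } else if (bodyText.indexOf('please solve this captcha') !== -1) {
    // Tell the main process to play the "ding!" so a human can step in
    ipcRenderer.send('captcha-detected');
  }
});
```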
To run that locally, you’ll need to do the usual “npm install” and then also run a Tor instance if you want to get a new IP address on every request. The way it’s implemented, it’ll detect the “content you want” and also alert you when there’s a captcha by playing a “ding!” sound. To launch, first start Tor and let it connect. Then you should be able to run:
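The exact launch command depends on the repository's scripts, but for an Electron skeleton it's typically something like:

```bash
npm install
./node_modules/.bin/electron .
```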
Once it loads, you'll see the test page in what looks like a Chrome window with a devtools instance. As it refreshes, you'll notice that the IP address it displays for you keeps updating. One "gotcha" is that by default Tor will only get a new IP address each time it opens a circuit, so you'll notice that I run "killall" after each request, which closes the Tor circuit and forces it to reopen.
And that’s about it. Using Tor with the skeleton you should be able to build a scraper that presents a new IP frequently, scrapes data, and conveniently notifies you if human input is required.
As always questions and comments are welcomed!