At the beginning of last year we tried something different and deployed an application on Amazon's Elastic Container Service (ECS), in contrast to our normal approach of deploying applications directly on EC2 instances. From a devops perspective, using ECS involved two challenges: working with Docker containers and using ECS as a service. We'll focus on ECS here since enough ink has already been spilled about Docker.

For a bit of background, ECS is a managed service that allows you to run Docker containers on AWS cloud infrastructure (EC2s). Amazon extended the abstraction with "Fargate for ECS", which launches your Docker containers on AWS-managed EC2s so you don't have to manage or maintain any underlying hardware. With Fargate you define a "Task" consisting of a Docker image, a number of vCPUs, and an amount of RAM, which AWS uses to launch a Docker container on an EC2 that you don't have any access to. And if you need to provision additional capacity you can just tick the task count from 1 to 2 and AWS will launch an additional container.
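
For example, a pared-down Fargate task definition looks something like this (the names, account ID, and sizes here are illustrative, not from our actual setup):

{
  "family": "my-web-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "512",
  "memory": "1024",
  "containerDefinitions": [
    {
      "name": "my-web-app",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web-app:latest",
      "portMappings": [{ "containerPort": 8080 }]
    }
  ]
}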

The Setup

The app we deployed on ECS is one we inherited from another team. It's a consumer-facing Vert.x (Java) web app that provides a set of API endpoints for content-focused consumer sites. Before taking over the app we had learned that it had some "unique" (aka buggy) scaling requirements, which was one of the motivators to use ECS. On the AWS side, our setup consisted of a Fargate ECS cluster connected to an application load balancer (ALB), which handled SSL termination and health checks for the ECS Tasks. In addition, we connected CircleCI for continuous integration and continuous deployment, using ecs-deploy to handle CD on ECS. ecs-deploy handles creating a new ECS task definition, bringing up a new container with that definition, and cycling out the old container if everything goes well. So a year in, here are some takeaways from using Fargate ECS.
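
With that wiring in place, the deploy step in CircleCI boils down to a single ecs-deploy invocation along these lines (the cluster, service, and image names are placeholders):

ecs-deploy -c production-cluster -n web-app-service -i 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web-app:latest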

Reinforces cattle, not pets

There's a cloud devops mantra that you should treat your servers like a herd of cattle, not like family pets. The thinking is that, especially for horizontally scalable cloud servers, you want servers to be easy to bring up and not "special" in any way. When using EC2s you can convince yourself that you're adopting this philosophy, but eventually something will leak through. Sure, you use a configuration management tool, but during an emergency someone will surely manually install something. And your developers aren't supposed to use the local disk, but at some point someone will rely on files in "/tmp" always being there.

In contrast, deploying on ECS reinforces thinking of individual servers as disposable because the state of your container is destroyed every time you launch a new task. Each time you deploy new code, a new container is launched without retaining any previous state. At the server level, this dynamic makes ad hoc server changes impossible: you can't SSH into a Fargate container, so any changes have to be present in your Dockerfile. Similarly, at the app level, since your disks don't persist between deployments you quickly stop writing anything important to disk.

Logs are…challenging

When using Fargate on ECS, the only way to access output from your container is through a CloudWatch log group via the CloudWatch UI. At first glance this is great: you can view your logs right in the AWS console without having to SSH into any servers! But as time goes on you'll start to miss being able to see and manipulate logs in a regular terminal.

The first stumbling block is actually the UI itself. It's not uncommon for it to be a couple of minutes delayed, which ends up being a significant pain point when "shit is broken". Related to the delays, logs sometimes seem to be available in the ECS Task view before CloudWatch, which ends up confusing team members as they debug issues.

Additionally, although the UI has search and filtering capabilities, they're fairly limited and difficult to use. Compounding this, there frustratingly isn't an easy way to download the log files locally, which makes it difficult to use the common Linux command line tools you'd normally reach for to parse and analyze logs. It is possible to export your CloudWatch logs to S3 via the console and then download them locally, but the process involves a lot of clicks. You could automate this via the API, but it feels like something you shouldn't have to build since, for example, the load balancer automatically delivers its logs to S3.
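
If you do automate it, the export itself is a single call; a sketch using the AWS CLI (the log group, bucket, and epoch-millisecond timestamps are placeholders):

aws logs create-export-task \
  --log-group-name /ecs/my-web-app \
  --from 1546300800000 \
  --to 1546387200000 \
  --destination my-log-bucket \
  --destination-prefix ecs-exports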

You’ll (probably) still need an EC2

The ECS/container dream is that you’ll be able to run all of your apps on managed, abstract infrastructure where all you have to worry about is a Dockerfile. This might be true for some people but for typical organizations you’re probably going to need to run an EC2. The biggest pain points for us were running scheduled tasks (crontabs) and having the flexibility to run ad hoc commands inside AWS.

It is possible to run scheduled tasks on ECS, but after experimenting with it we didn't think it was a great fit. Instead, the approach we took was setting up Jenkins on an EC2 to run scheduled jobs, each of which consists of running some command in a Docker container. Ultimately our scheduled jobs share Docker images with our ECS tasks since the images are hosted in AWS ECR. Because of this, the same CircleCI build process that updates the ECS task also updates the image that Jenkins runs, so the presence of Jenkins is mostly transparent to developers.
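
Each scheduled job then boils down to a small shell step along these lines (the image URL and the job's entry point are illustrative):

# Pull the same image our ECS tasks use and run the job's command in it
docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web-app:latest
docker run --rm 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web-app:latest \
  java -cp app.jar com.example.jobs.NightlyReport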

Not having an “inside AWS” environment to run commands is one of the most limiting aspects of using ECS exclusively. At some point an engineer on every team is going to find themselves needing to run a database dump or analyze some log files both of which will simply be orders of magnitude faster if run within AWS vs. over the internet. We took a pretty typical approach here with an EC2 configured via Ansible in an autoscale group.

Is it worth it?

After a year with ECS+Fargate I think it’s definitely a good solution for a set of deployment scenarios. Specifically, if you’re dealing with running a dynamic set of web frontends that are easily containerized it’ll probably be a great fit. The task scaling is “one click” (or API call) as advertised and it feels much snappier than bringing up a whole EC2, even with a snapshotted AMI. One final dimension to evaluate is naturally cost. ECS is billed roughly at the same rate as a regular EC2 but if you’re leaving tasks underutilized you’ll be paying for capacity you aren’t using. As noted above, there are some operational pain points with ECS but on the whole I think it’s a good option to evaluate when using AWS.

Posted In: Amazon AWS


Last week my girlfriend Diane was looking for some help scraping pollen data from a couple of sites. The code was simple enough to hammer out, but how was I going to deliver it? Diane is fairly tech savvy, but even so, asking her to install nodejs and run a command line app was going to be a bit much. After considering options like a Java Swing app or Qt+nodejs, I decided to give Electron a shot. Just want to see the code? It's available here: pollen-scraper.

Electron is a cross-platform application runtime which basically "runs" code inside a Chrome browser alongside nodejs. In practice, you can use nodejs libraries with your favorite JavaScript framework to build applications that run anywhere Chrome will. Several popular companies, including Slack and Spotify, have desktop clients powered by Electron. Pretty much perfect for my use case. So what was using Electron for the first time like?

Getting started is easy

One of the frustrating aspects of "enterprise" cross-platform frameworks is that it takes a long time to even get something up on the screen. Between complex build systems and custom layout languages, you can burn a lot of time with something like Qt or Swing before a window ever appears. With Electron, getting started was as simple as cloning https://github.com/electron/electron-quick-start-typescript, firing off an "npm install", and after an "npm start" I had a working cross-platform UI on the screen. Additionally, since Electron leverages web technologies, it was also straightforward to add Bootstrap and AngularJS to the project.
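
Concretely, the entire bootstrap sequence is just:

git clone https://github.com/electron/electron-quick-start-typescript
cd electron-quick-start-typescript
npm install
npm start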

The NodeJS ecosystem

As mentioned above, Electron applications can use any nodejs library, which makes the environment incredibly powerful right out of the box. For example, I was able to leverage turfjs along with Google's geocoder to find the closest city to an arbitrary zip code in Japan, as sketched below. Being able to tap into the npm/nodejs ecosystem also makes it possible to deliver high-value applications quickly since you're able to focus on business differentiators, not plumbing.
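
A sketch of the turfjs piece (the cities and coordinates are made up for illustration, and it assumes the @turf/turf package):

const turf = require("@turf/turf");

// Candidate cities as GeoJSON points ([longitude, latitude])
const cities = turf.featureCollection([
  turf.point([139.69, 35.68], { name: "Tokyo" }),
  turf.point([135.50, 34.69], { name: "Osaka" }),
  turf.point([141.35, 43.06], { name: "Sapporo" }),
]);

// Coordinates for a zip code, e.g. as returned by a geocoder
const target = turf.point([139.77, 35.71]);

// nearestPoint returns the closest feature in the collection
const nearest = turf.nearestPoint(target, cities);
console.log(nearest.properties.name); // => Tokyo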

Debugging

Since it's built on Chrome, you get access to Chrome's DevTools within Electron. You can also enable remote debugging with a launch flag, making it possible to connect to your Electron instance remotely. In addition, for production apps you'd be able to drop in something like Rollbar to track JavaScript errors on the client side, helping you debug and resolve issues for clients in the wild.
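
For example, launching with Chromium's remote debugging switch (the port number is arbitrary):

electron . --remote-debugging-port=9222

Pointing Chrome at http://localhost:9222 should then give you a DevTools session against the running app.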

All in all, my first foray with Electron was a pretty positive experience. With a couple of beers and an afternoon of work I was able to deliver a cross-platform application which saved my girlfriend's team a dramatic amount of time.

Posted In: Javascript

Note: This post originally appeared on Codeburst.

In functional programming parlance, "partial application" of a function involves reducing the number of arguments it accepts (its "arity") by some N, returning a new function. Concretely, consider a function with the following signature:

Logger.log(level, dateFormat, msg)

With partial application we’d be able to do something like:

const info = partial(Logger.log, "info", "ISO8601");

And then subsequently be able to call our new “info” function like:

info("Application started")

To output an “info” message with ISO8601 date formatting.

So how can we accomplish this in JavaScript? Well, you could use Lodash, but that's not really exciting.

Using Function.arguments

The classical functional programming approach would be to use the Function.arguments property to dynamically create a new partially applied function. Running with the example above, you'd end up with an implementation along these lines (run it on JSFiddle; the Logger here is a stand-in so the example is self-contained):
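
function partial(fn) {
  // Save a reference to the arguments we want to "fill in",
  // skipping the first one (the function itself)
  var saved = Array.prototype.slice.call(arguments, 1);

  // The new, partially applied function
  return function () {
    // Combine the saved arguments with whatever we were called
    // with and execute the original function
    var rest = Array.prototype.slice.call(arguments);
    return fn.apply(null, saved.concat(rest));
  };
}

// A stand-in Logger so the example runs
var Logger = {
  log: function (level, dateFormat, msg) {
    console.log("[" + level + "][" + dateFormat + "] " + msg);
  },
};

var info = partial(Logger.log, "info", "ISO8601");
info("Application started"); // => [info][ISO8601] Application started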

Pulling it apart, it's straightforward: save a reference to a list of the arguments that you want to "fill in"; create a new function for the partial; and inside this new function, combine the saved arguments with the arguments the partial is called with and execute the original function.

This works but is there a cleaner way?

Function.bind

Although it's normally used for setting the "this" value a function will be invoked with, it's possible to use bind() for partial application. If you check out the Function.bind docs you'll notice that in addition to setting "this", it's able to set the arguments for the function it's operating on. By leveraging this along with Function.apply we'll be able to cook up partial functions. The implementation ends up being something like the following (on JSFiddle):
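
// Reusing the Logger from the previous example
function partial(fn) {
  var saved = Array.prototype.slice.call(arguments, 1);
  // fn.bind(null, ...saved) via apply: "this" doesn't matter here so we
  // pass null, and the saved arguments are pre-filled into the function
  return fn.bind.apply(fn, [null].concat(saved));
}

var info = partial(Logger.log, "info", "ISO8601");
info("Application started"); // => [info][ISO8601] Application started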

Well, that's about it for partially applied functions. If you're feeling adventurous and want to head further down the functional programming path, check out the related topic of currying.

Posted In: Javascript


Note: This post originally appeared on Codeburst.io

One of the biggest differences between Angular 1.5+ and Angular 2 is the latter's approach to dependency injection. Back in Angular 1.5 there were a couple of ways to configure dependency injection, but they all relied on naming things consistently: if you needed the "$http" service, you had to ask for it with an annotation explicitly mentioning "$http". And unfortunately, it was also possible to do this with implicit annotations, which would cause minification to break your code.
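
To make that concrete, here's a quick AngularJS sketch of both styles (the module and controller names are arbitrary):

var app = angular.module("app", []);

// Explicit annotation: the "$http" string survives minification
app.controller("MainCtrl", ["$http", function ($http) {
  $http.get("/api/items");
}]);

// Implicit annotation: breaks once a minifier renames the $http parameter
app.controller("BrokenCtrl", function ($http) {
  $http.get("/api/items");
});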

Angular2 with TypeScript dramatically improves on this by introducing a type-based DI system where you just correctly specify the type of an injectable you want and Angular will handle the rest. But since TypeScript compiles to JavaScript, and in doing so wipes away any type information, how can this DI system work? It's not magic, so let's take a look!

TypeScript Decorators

The first piece of the puzzle is the @Injectable TypeScript decorator, which marks a class as available for DI. TypeScript decorators are a language feature that enables developers to use annotations to modify the behavior of classes, methods, and properties at run time. Within Angular, @Injectable is a class decorator which Angular uses to register information about the target class with the framework. The relevant framework code lives in the packages/core/src/di namespace.
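
As a quick illustration of a class decorator in general (nothing Angular-specific; the names here are made up), here's one that logs every construction of the class it decorates:

// A class decorator receives the constructor and may return a replacement
function Logged<T extends { new (...args: any[]): {} }>(ctor: T) {
  return class extends ctor {
    constructor(...args: any[]) {
      super(...args);
      console.log("Constructing " + ctor.name);
    }
  };
}

@Logged
class Greeter {}

new Greeter(); // => Constructing Greeter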

Reading through the code is a bit challenging, but the overall idea is that the framework keeps track of classes that have been annotated with @Injectable and then exposes a "get" method to correctly construct instances of those classes. OK, but what about that type information?

reflect-metadata

The second piece of the DI puzzle is the reflect-metadata package and TypeScript’s “emitDecoratorMetadata” option. When used together they will cause the TypeScript compiler to output metadata for classes that have been annotated with a class decorator. And most importantly this metadata is accessible at runtime by userland JavaScript.

Concretely, the Angular DI system uses this metadata to introspect the arguments that the constructor of a class marked @Injectable requires. And then, naturally, using that information the DI framework can automatically construct a class on demand for you.

An example DI system

Finally, what we've all been waiting for: some sample code! In order to compile it, you'll need to enable the experimentalDecorators and emitDecoratorMetadata compiler flags, install the reflect-metadata package, and use a recent version of TypeScript. Here's a minimal sketch of the idea, with class and property names chosen to match the output below:
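
import "reflect-metadata";

// Type alias representing any class constructor
type Newable = { new (...args: any[]): any };

class Injector {
  // Build an instance of "target", recursively resolving whatever
  // its constructor declares as parameter types
  static get(target: Newable): any {
    const paramTypes: Newable[] =
      Reflect.getOwnMetadata("design:paramtypes", target) || [];
    const args = paramTypes.map((p) => Injector.get(p));
    return new target(...args);
  }
}

// Class decorator: reads the emitted constructor metadata and uses
// bind() to pre-fill the constructor's dependencies via the Injector
function Inject<T extends Newable>(originalConstructor: T): T {
  const paramTypes: Newable[] =
    Reflect.getOwnMetadata("design:paramtypes", originalConstructor) || [];
  const args = paramTypes.map((p) => Injector.get(p));
  // With "new", the bound "this" is ignored and the saved arguments
  // are prepended, i.e. partial application of the constructor
  return originalConstructor.bind(null, ...args) as T;
}

class Engine {
  maker = "Tesla";
  displacement = 500;
}

@Inject
class Car {
  ex: Engine[];
  constructor(engine: Engine) {
    this.ex = [engine];
  }
}

// The decorator already filled in the Engine so no arguments are needed
console.log(new (Car as Newable)());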

If you compile and run it you should get the following output:

Car { ex: [ Engine { maker: 'Tesla', displacement: 500 } ] }

So, as expected, we retrieved a Car without manually constructing an Engine, and the Engine constructor was correctly invoked since the class properties were set.

A couple of things of note:

  • I created the "Newable" type alias to represent a constructor
  • The Inject decorator calls the Injector class in order to create some encapsulation
  • The Reflect.getOwnMetadata("design:paramtypes", originalConstructor); call retrieves constructor information for the class that the decorator has been applied to
  • Inside the decorator, bind() modifies the class constructor to use the Injector to retrieve instances of the required classes

And that's about it. After creating this example, it's clear that TypeScript's decorators and the reflect-metadata package are both powerful tools, and I'm excited to explore what else they could enable.

Interested in adopting Angular or TypeScript at your organization? We’d love to chat.

Posted In: General

Note: This post originally appeared at Codeburst

At Setfive Consulting we’ve become big fans of using TypeScript on the frontend and have recently begun adopting it for backend nodejs projects as well. We’ve picked up a couple of tips while setting up these projects that we’re excited to share here!

Directory structure

For most nodejs projects any directory layout will work, and what you pick is a matter of personal preference. TypeScript is similar, but in order to have the TypeScript compiler generate JavaScript into a "dist/" folder you'll need to write your code inside a separate directory like "src/" within your project. So you'll want a layout roughly like the following (the file names are just illustrative):
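
my-project/
├── src/
│   └── index.ts
├── dist/
│   ├── index.js
│   └── index.js.map
├── node_modules/
├── package.json
└── tsconfig.json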

And the compiler will produce JavaScript code in “dist/” from your TypeScript sources in “src/”.

Setup tsconfig.json

As you can see above, you'll want a tsconfig.json file to configure the behavior of the TypeScript compiler. tsconfig.json is a special JSON configuration file that automatically sets various flags for you when you run "tsc" with it present. You can see an exhaustive list of the available options here. We've been using the following as a solid starting point (a sketch; adjust the target to your node version):
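
{
  "compilerOptions": {
    "target": "es2017",
    "module": "commonjs",
    "outDir": "dist",
    "sourceMap": true
  },
  "include": ["src/**/*"]
}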

From a build perspective this will configure a couple of things for you:

  • sourceMaps are enabled so you’ll be able to use node’s DevTools integration to view TypeScript sources alongside your JavaScript
  • The compiler will output into a “dist/” folder
  • And it’ll compile all of your source files under the “src/” directory

ts-node and nodemon

One of the stumbling blocks to using TypeScript with nodejs is the required compilation step. At face value, it seems like the workflow would have to be: edit a TypeScript file, run the compiler, and then run the generated JavaScript on node. Thankfully, ts-node and nodemon make that a workflow you won't have to suffer through.

ts-node is basically a wrapper around your nodejs installation that will allow you to run TypeScript files directly, without invoking the compiler. Their Readme highlights how it works:

TypeScript Node works by registering the TypeScript compiler for the .ts, .tsx and – when allowJs is enabled – .js extensions. When node.js has a file extension registered (the require.extensions object), it will use the extension internally with module resolution. By default, when an extension is unknown to node.js, it will fallback to handling the file as .js (JavaScript).

So with ts-node you’ll be able to run something like “ts-node src/index.ts” to run your code.

nodemon is the second piece of the puzzle. It's a node utility that will monitor your source files for changes and automatically restart a node process for you. Perfect for building express or any other server apps! We've been using the following nodemon.json config file (assuming your entry point is src/index.ts):
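
{
  "watch": ["src"],
  "ext": "ts",
  "exec": "ts-node src/index.ts"
}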

And then you’ll be able to just invoke “nodemon” from the root of your project.

Remember “@types/” packages

Since you’re writing nodejs code chances are you’re going to want to leverage JavaScript libraries. TypeScript can interoperate with JavaScript but in order for the compiler to compile without errors you’ll need to provide “.d.ts” typings for the libraries you’re using. For example, trying to compile the following:

import * as _ from "lodash";
console.log(_.range(0, 10).join(","));

Will result in a TypeScript error:

src/index.ts(1,20): error TS7016: Could not find a declaration file for module 'lodash'. '/home/ashish/Downloads/node_modules/lodash/lodash.js' implicitly has an 'any' type.
Try `npm install @types/lodash` if it exists or add a new declaration (.d.ts) file containing `declare module 'lodash';`

This happens even though the output JavaScript file was successfully generated.

The “.d.ts” files are type definitions for a JavaScript library describing the types used, function signatures, and any other type information the compiler might need.

Several popular JavaScript libraries, like moment, have begun shipping the typings files within their main project but many others, like lodash, haven’t. In order to get libraries that don’t have the “.d.ts” files within their main project to work you’ll have to install their respective “@types/” packages. The “@types/” namespace is maintained by DefinitelyTyped but the definitions themselves have been written by contributors. Installing “@types/” packages is easy:

npm install --save @types/lodash

And now the compiler will run without any errors.

Off to the races!

At this point you should have a solid foundation for a TypeScript-powered nodejs project. You'll be able to take advantage of TypeScript's powerful type system, nodejs' enormous library ecosystem, and an easy-to-use "save and reload" workflow. And as always, I'd love any feedback or other tips!

Thinking about adopting TypeScript at your organization? We’d love to chat.

Posted In: TypeScript
