JavaScript or TypeScript for Front End?

Have you ever tried anything other than JavaScript or TypeScript for front end, and what was the result?

I tried Scala.js a couple of years ago, before TypeScript was popular. The results were mixed, at best. My experience was not recent, but, if I recall correctly, my two biggest stumbling blocks were tooling support and interop with standard JavaScript libraries.

On the tooling side, the major challenge was that Scala.js needs to be compiled via the JVM-based Scala compiler in order to produce JavaScript. That made integrating with JavaScript build tools like Gulp or Grunt fairly difficult. On top of that, I was more or less stuck writing Scala.js in Eclipse since it had the best Scala plugin support, which meant giving up IntelliJ’s awesome web development capabilities.

Integrating with third party libraries is a struggle for all of the “compile to JavaScript” solutions, and a few years back Scala.js had a pretty rough integration story. Scala.js still relies on typed “facades”: you have to write facade types for the JavaScript APIs you want to integrate with. The problem is that Scala has no “any” type that lets you skirt needing actual type information.
These two issues highlight how well TypeScript is implemented with regard to tooling and third party library support. Today (2019), I think TypeScript is really the only effective option for compiling to JavaScript.
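
For a rough comparison, here’s a minimal sketch of TypeScript’s escape hatch for a library that ships without type definitions (someLegacyLib and its render call are purely illustrative):

// Declaring the untyped library as "any" lets the TypeScript compiler accept
// calls against it, no facade or .d.ts file required, at the cost of giving
// up type checking on those calls.
declare const someLegacyLib: any;

someLegacyLib.render("#app", { animate: true });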

AWS: Looking back on a year with AWS Elastic Container Service

At the beginning of last year we tried something different and deployed an application on Amazon’s Elastic Container Service (ECS), in contrast to our normal approach of deploying applications directly on EC2 instances. From a devops perspective, using ECS involved two challenges: working with Docker containers and using ECS itself as a service. We’ll focus on ECS here since enough ink has been spilled about Docker.

For a bit of background, ECS is a managed service that allows you to run Docker containers on AWS cloud infrastructure (EC2s). Amazon extended the abstraction with “Fargate for ECS”, which launches your Docker containers on AWS managed EC2s so you don’t have to manage or maintain any underlying hardware. With Fargate you define a “Task” consisting of a Docker image, a number of vCPUs, and an amount of RAM, which AWS uses to launch a Docker container on an EC2 that you don’t have any access to. If you need to provision additional capacity you can just tick the task count from 1 to 2 and AWS will launch an additional container.
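
As a rough sketch of what that looks like in practice, here’s how a Fargate task definition might be registered with the AWS SDK for JavaScript (the family, image, and sizing values are placeholders):

import * as AWS from "aws-sdk";

const ecs = new AWS.ECS({ region: "us-east-1" });

// Register a Fargate task definition: one container image plus the
// vCPU/memory the task should get. (A real task also needs an
// executionRoleArn so Fargate can pull the image from ECR and ship logs.)
ecs
  .registerTaskDefinition({
    family: "my-web-app",
    requiresCompatibilities: ["FARGATE"],
    networkMode: "awsvpc",
    cpu: "256", // 0.25 vCPU
    memory: "512", // MB
    containerDefinitions: [
      {
        name: "my-web-app",
        image: "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web-app:latest",
        portMappings: [{ containerPort: 8080 }],
        essential: true,
      },
    ],
  })
  .promise()
  .then(() => console.log("Task definition registered"));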

The Setup

The app we deployed on ECS is one we inherited from another team. It’s a consumer facing Vert.x (Java) web app that provides a set of API endpoints for content-focused consumer sites. Before taking over the app we had learned that it had some “unique” (aka bugs) scaling requirements, which was one of the motivators to use ECS. On the AWS side, our setup consisted of a Fargate ECS cluster connected to an application load balancer (ALB) which handled SSL termination and health checks for the ECS Tasks. In addition, we connected CircleCI for continuous integration and continuous deployment, using ecs-deploy to handle CD on ECS. ecs-deploy handles creating a new ECS task definition, bringing up a new container with that definition, and cycling out the old container if everything goes well. So, a year in, here are some takeaways from using Fargate ECS.

Reinforces cattle, not pets

There’s a cloud devops mantra that you should treat your servers like a herd of cattle, not the family pets. The thinking being that, especially for horizontally scalable cloud servers, you want the servers to be easy to bring up and not “special” in any way. When using EC2s you can convince yourself that you’re adopting this philosophy but eventually something will leak through. Sure you use a configuration management tool but during an emergency someone will surely manually install something. And your developers aren’t supposed to use the local disk but at some point someone will rely on files in “/tmp” always being there.

In contrast, deploying on ECS reinforces thinking of individual servers as disposable because the state of your container is destroyed every time you launch a new task. Each time you deploy new code, a new container is launched without retaining any previous state. At the server level, this dynamic makes ad hoc changes impossible: you can’t SSH into your ECS tasks, so any change has to be present in your Dockerfile. Similarly, at the app level, since your disks don’t persist between deployments you quickly stop writing anything important just to disk.

Logs are…challenging

When using Fargate on ECS, the only way to access output from your container is through a CloudWatch log group, viewed in the CloudWatch UI. At first glance this is great: you can view your logs right in the AWS console without having to SSH into any servers! But as time goes on you’ll start to miss being able to see and manipulate logs in a regular terminal.

The first stumbling block is actually the UI itself. It’s not uncommon for the UI to be a couple of minutes delayed, which ends up being a significant pain point when “shit is broken”. Related to the delays, it seems like logs are sometimes available in the ECS Task view before they show up in CloudWatch, which confused members of the team as they debugged issues.

Additionally, although the UI has search and filtering capabilities, they’re fairly limited and difficult to use. Compounding this, there frustratingly isn’t an easy way to download the log files locally, which makes it hard to parse and analyze logs with the common Linux command line tools you’d normally reach for. It is possible to export your CloudWatch logs to S3 via the console and then download them locally, but the process involves a lot of clicks. You could automate this via the API, but it feels like something you shouldn’t have to build since, for example, the load balancer delivers its logs into S3 automatically.
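
If you did automate it, a sketch with the AWS SDK for JavaScript might look something like this (the log group and bucket names are placeholders, and the bucket needs a policy that lets CloudWatch Logs write to it):

import * as AWS from "aws-sdk";

const logs = new AWS.CloudWatchLogs({ region: "us-east-1" });

// Export the last 24 hours of a log group to S3 so the files can be
// pulled down and fed to grep/awk/etc. locally.
const now = Date.now();
logs
  .createExportTask({
    taskName: `export-${now}`,
    logGroupName: "/ecs/my-web-app",
    from: now - 24 * 60 * 60 * 1000, // millisecond timestamps
    to: now,
    destination: "my-log-export-bucket",
    destinationPrefix: "ecs-logs",
  })
  .promise()
  .then((result) => console.log("Started export task:", result.taskId));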

You’ll (probably) still need an EC2

The ECS/container dream is that you’ll be able to run all of your apps on managed, abstract infrastructure where all you have to worry about is a Dockerfile. This might be true for some people but for typical organizations you’re probably going to need to run an EC2. The biggest pain points for us were running scheduled tasks (crontabs) and having the flexibility to run ad hoc commands inside AWS.

It is possible to run scheduled tasks on ECS, but after experimenting with it we didn’t think it was a great fit. Instead, we set up Jenkins on an EC2 to run scheduled jobs, each of which consists of running some command in a Docker container. Ultimately our scheduled jobs share Docker images with our ECS tasks since the images are hosted in AWS ECR. Because of this, the same CircleCI build process that updates the ECS task also updates the image that Jenkins runs, so the presence of Jenkins is mostly transparent to developers.

Not having an “inside AWS” environment to run commands is one of the most limiting aspects of using ECS exclusively. At some point an engineer on every team will need to run a database dump or analyze some log files, both of which are simply orders of magnitude faster when run within AWS vs. over the internet. We took a pretty typical approach here: an EC2, configured via Ansible, in an autoscale group.

Is it worth it?

After a year with ECS+Fargate I think it’s definitely a good solution for a set of deployment scenarios. Specifically, if you’re running a dynamic set of web frontends that are easily containerized, it’ll probably be a great fit. Task scaling is “one click” (or API call) as advertised, and it feels much snappier than bringing up a whole EC2, even with a snapshotted AMI. One final dimension to evaluate is naturally cost: ECS is billed at roughly the same rate as a regular EC2, but if you’re leaving tasks underutilized you’ll be paying for capacity you aren’t using. As noted above, there are some operational pain points with ECS, but on the whole I think it’s a good option to evaluate when using AWS.

An afternoon with Electron

Last week my girlfriend Diane was looking for some help scraping pollen data from a couple of sites. The code was simple enough to hammer out, but how was I going to deliver it? Diane is fairly tech savvy, but even so, asking her to install nodejs and run a command line app was going to be a bit much. After considering options like a Java Swing app or Qt+nodejs, I decided to give Electron a shot. Just want to see the code? It’s available here: pollen-scraper.

Electron is a cross platform application runtime which basically “runs” code inside a Chrome browser alongside nodejs. In practice, you can use nodejs libraries with your favorite JavaScript framework to build applications that run anywhere Chrome does. Several popular companies, including Slack and Spotify, ship desktop clients powered by Electron. Pretty much perfect for my use case. So what was using Electron for the first time like?

Getting started is easy

One of the frustrating aspects of “enterprise” cross platform frameworks is how long it takes to even get something up on the screen: between complex build systems and custom layout languages, it generally takes a while before anything renders with something like Qt or Swing. With Electron, getting started was as simple as cloning https://github.com/electron/electron-quick-start-typescript, firing off an “npm install”, and after an “npm start” I had a working cross platform UI on the screen. Additionally, since Electron leverages web technologies, it was also straightforward to add Bootstrap and AngularJS to the project.
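
For a sense of how little boilerplate there is, a minimal Electron main process (roughly what the quick-start template gives you) looks something like this:

import { app, BrowserWindow } from "electron";

// Once Electron is ready, open a window and load the UI. index.html can
// pull in Bootstrap, AngularJS, etc. just like any other web page.
app.whenReady().then(() => {
  const win = new BrowserWindow({ width: 1024, height: 768 });
  win.loadFile("index.html");
});

// Quit when all windows are closed (the usual non-macOS behavior).
app.on("window-all-closed", () => app.quit());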

The NodeJS ecosystem

As mentioned above, Electron applications can use any nodejs library, which makes the environment incredibly powerful right out of the box. For example, I was able to leverage turfjs along with Google’s geocoder to find the closest city to an arbitrary zip code in Japan. Being able to tap into the npm/nodejs ecosystem also makes it possible to deliver high-value applications quickly, since you’re able to focus on business differentiators, not plumbing.
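
As an illustration, here’s a rough sketch of the nearest-city lookup with turfjs (the coordinates are approximate placeholders, and the geocoding step that turns a zip code into a lat/lng is assumed to have already happened):

import * as turf from "@turf/turf";

// A handful of candidate cities as GeoJSON points ([longitude, latitude]).
const cities = turf.featureCollection([
  turf.point([139.69, 35.69], { name: "Tokyo" }),
  turf.point([135.5, 34.69], { name: "Osaka" }),
  turf.point([141.35, 43.07], { name: "Sapporo" }),
]);

// Pretend this point came back from geocoding the user's zip code.
const geocoded = turf.point([139.77, 35.68]);

// turf handles the spherical distance math.
const closest = turf.nearestPoint(geocoded, cities);
console.log(closest.properties); // includes name: "Tokyo" plus the distance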

Debugging

Since it’s built on Chrome, you get access to Chrome’s DevTools within Electron. You can also enable remote debugging with a launch flag, making it possible to connect to your Electron instance remotely. In addition, for production apps you’d be able to drop in something like Rollbar to track JavaScript errors on the client side, helping you debug and resolve issues for clients in the wild.
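
For example, enabling that remote debugging flag from the main process is a one-liner (the port number is arbitrary):

import { app } from "electron";

// Must be set before the app's "ready" event fires. With this switch a
// local Chrome instance can attach, e.g. by browsing to http://localhost:9222.
app.commandLine.appendSwitch("remote-debugging-port", "9222");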

All in all, my first foray with Electron was a pretty positive experience. With a couple of beers and an afternoon of work I was able to deliver a cross platform application which saved my girlfriend’s team a dramatic amount of time.

Creating partially applied functions in Javascript

Note: This post originally appeared on Codeburst.

In functional programming parlance, “partial application” of a function involves reducing the number of arguments it accepts (its “arity”) by some N, returning a new function. Concretely, consider a function with the following signature:

Logger.log(level, dateFormat, msg)

With partial application we’d be able to do something like:

const info = partial(Logger.log, "info", "ISO8601");

And then subsequently be able to call our new “info” function like:

info("Application started")

To output an “info” message with ISO8601 date formatting.

So how can we accomplish this in JavaScript? Well you could use Lodash but that’s not really exciting.

Using Function.arguments

The classical functional programming approach would be to use the Function.arguments property to dynamically create a new partially applied function. Running with the example above, you’d end up with an implementation along these lines (run it on JSFiddle):
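
Here’s a rough sketch of that approach, with a toy Logger thrown in so the snippet runs standalone:

// A toy logger so the example is self-contained.
const Logger = {
  log: function (level, dateFormat, msg) {
    console.log("[" + level + "][" + dateFormat + "] " + msg);
  },
};

function partial(fn) {
  // Save the arguments we want to "fill in", skipping fn itself.
  const saved = Array.prototype.slice.call(arguments, 1);

  return function () {
    // Combine the saved arguments with whatever the partial is called
    // with, then invoke the original function.
    const args = saved.concat(Array.prototype.slice.call(arguments));
    return fn.apply(null, args);
  };
}

const info = partial(Logger.log, "info", "ISO8601");
info("Application started"); // => [info][ISO8601] Application started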

Pulling it apart, it’s straightforward: save a reference to the arguments you want to “fill in”, create a new function for the partial, and inside this new function combine the saved arguments with the arguments the partial is called with before executing the original function.

This works but is there a cleaner way?

Function.bind

Although it’s normally used for setting the “this” value a function will be invoked with, it’s possible to use bind() for partial application. If you check out the Function.bind docs you’ll notice that in addition to setting “this” it’s able to set the arguments for the function it’s operating on. By leveraging this along with Function.apply we’ll be able to cook up partial functions. The implementation ends up being something like the following (on JSFiddle):
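
Again as a rough sketch, reusing the toy Logger from above:

function partial(fn) {
  const saved = Array.prototype.slice.call(arguments, 1);
  // bind() fixes "this" (null here, since we don't care about it) and
  // pre-fills the saved arguments; apply() lets us pass them as an array.
  return fn.bind.apply(fn, [null].concat(saved));
}

const info = partial(Logger.log, "info", "ISO8601");
info("Application started");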

Well, that’s about it for partially applied functions. If you’re feeling adventurous and want to head further down the functional programming path, check out the related topic of currying.

TypeScript decorators & Angular2 Dependency Injection

Note: This post originally appeared on Codeburst.io

One of the biggest differences between Angular 1.5+ and 2 is the latter’s approach to dependency injection. Back in Angular 1.5 there were a couple of ways to configure dependency injection, but they all relied on naming things consistently. If you needed the “$http” service you had to ask for it with a configuration explicitly mentioning “$http”. And unfortunately, it was also possible to do this with implicit annotations, which would cause minification to break your code.

Angular2 with TypeScript dramatically improves on this by introducing a type based DI system where you just correctly specify the type of the injectable you want and Angular handles the rest. But since TypeScript compiles to JavaScript, and in doing so wipes any type information, how can this DI system work? It’s not magic, so let’s take a look!

TypeScript Decorators

The first piece of the puzzle is the @Injectable TypeScript decorator, which marks a class as available for DI. TypeScript decorators are a language feature that enables developers to use annotations to modify the behavior of classes, methods, and properties at run time. Within Angular, @Injectable is a class decorator which Angular uses to register information about the target class with the framework. The relevant framework code is in the packages/core/src/di namespace.

Reading through the code is a bit challenging but the overall idea is that the framework keeps track of classes that have been annotated with @Injectable and then has a “get” method to correctly construct instances of those classes. OK but what about that type information?

reflect-metadata

The second piece of the DI puzzle is the reflect-metadata package and TypeScript’s “emitDecoratorMetadata” option. When used together they will cause the TypeScript compiler to output metadata for classes that have been annotated with a class decorator. And most importantly this metadata is accessible at runtime by userland JavaScript.

Concretely, the Angular DI system uses this metadata to introspect the arguments that the constructor of a class marked @Injectable requires. Using that information, the DI framework can then automatically construct an instance of the class on demand for you.

An example DI system

Finally, what we’ve all been waiting for: some sample code! In order to compile it, you’ll need to enable the experimentalDecorators and emitDecoratorMetadata compiler flags, install the reflect-metadata package, and use a recent version of TypeScript.
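
A minimal sketch of such a DI system might look like the following (Newable, Injector, and Inject are the pieces called out in the notes below):

import "reflect-metadata";

// "Newable" represents anything that can be constructed with "new".
type Newable = { new (...args: any[]): any };

// Thin wrapper so the decorator asks the Injector for instances
// instead of constructing dependencies directly.
class Injector {
  static get<T>(target: Newable): T {
    return new target();
  }
}

// Class decorator: read the constructor's parameter types (emitted by the
// compiler thanks to emitDecoratorMetadata), resolve each one through the
// Injector, and pre-fill them onto the constructor with bind().
function Inject<T extends Newable>(originalConstructor: T): T {
  const paramTypes: Newable[] =
    Reflect.getOwnMetadata("design:paramtypes", originalConstructor) || [];
  const deps = paramTypes.map((dep) => Injector.get(dep));
  return (originalConstructor as any).bind(null, ...deps);
}

class Engine {
  maker = "Tesla";
  displacement = 500;
}

@Inject
class Car {
  ex: Engine[];
  constructor(engine: Engine) {
    this.ex = [engine];
  }
}

// The decorator already resolved the Engine; the cast is only needed because
// TypeScript still types Car's constructor as requiring an Engine argument.
console.log(new (Car as any)());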

If you compile and run it you should get the following output:

Car { ex: [ Engine { maker: 'Tesla', displacement: 500 } ] }

So as expected we retrieved a Car without manually constructing an Engine and the Engine constructor was correctly invoked since the class properties were set.

Couple of things of note:

  • I created the “Newable” type alias to represent a constructor
  • The Inject decorator calls the Injector class in order to create some encapsulation
  • The Reflect.getOwnMetadata("design:paramtypes", originalConstructor) call retrieves the constructor parameter types for the class the decorator has been applied to
  • bind() is used to modify the class constructor so that the Injector supplies instances of the required classes

And that’s about it. After creating this example it’s clear that TypeScript’s decorators and the reflect-metadata package are both powerful tools, and I’m excited to explore what else they can enable.

Interested in adopting Angular or TypeScript at your organization? We’d love to chat.