Bitcoin: Thoughts, ideas, and opportunities

In 2009, an engineer using the pseudonym Satoshi Nakamoto introduced a new open source digital currency called “Bitcoin” to the world. Since then, the buzz, excitement, and adoption of Bitcoin has grown at a phenomenal rate, with the total value of bitcoins in circulation hitting $1 billion as of April 2013. Talk of a bubble has coincided with the buzz, but both have also served to highlight the tremendous potential in the Bitcoin ecosystem.

Great, so what is it really?

Paraphrasing from Wikipedia:

Bitcoin is a decentralized digital currency based on an open-source, peer-to-peer internet protocol. […] Bitcoins can be exchanged through a computer or smartphone locally or internationally without an intermediate financial institution. […]

Bitcoin is not managed like typical currencies: it has no central bank or central organization. Instead, it relies on an Internet-based peer-to-peer network. The money supply is automated and given to servers or “bitcoin miners” that confirm bitcoin transactions as they add them to a decentralized and archived transaction log approximately every 10 minutes.

I’m going to butcher any explanation I try to provide, but Quora has several good writeups on what Bitcoins are and how they work. A reasonable analogy, though, is to think of “Bitcoins” as a type of precious metal that has no intrinsic value but is able to store value, transfer value between parties, and be mined to find new pieces.

The key takeaways about how Bitcoins work are:

  • Unlike traditional currencies, no central bank controls the amount of currency in circulation. Instead, the total number of possible Bitcoins is algorithmically limited by the protocol itself (see the sketch after this list).
  • Transactions between parties are relatively anonymous and done in real time. Contrast this with other digital payments like credit cards or wire transfers where there is little anonymity and a settling period.
  • As of now, there’s absolutely no regulation as far as reporting, security, or anything else you’d associate with traditional financial institutions.
  • Currently, it’s all digital – there’s no practical way to “print” a Bitcoin and use it like a dollar bill.
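As a back-of-the-envelope illustration of that first point, the ~21 million cap falls directly out of the protocol’s halving schedule. Here’s a minimal sketch in PHP (ignoring the protocol’s integer satoshi arithmetic):

```php
<?php
// The block reward started at 50 BTC and halves every 210,000 blocks
// (roughly every four years). Summing the geometric series gives the cap.
$reward = 50.0;
$total  = 0.0;

while ($reward >= 0.00000001) { // 1 satoshi is the smallest unit
    $total  += $reward * 210000;
    $reward /= 2;
}

echo number_format($total) . " BTC\n"; // ~21,000,000
```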

So what’s going to be disruptive?

There are dozens if not hundreds of startups playing in the Bitcoin space and several “large” companies have also started adopting Bitcoin as an alternative form of payment. Several high profile investors including YCombinator and Union Square Ventures have also made significant Bitcoin related investments, signaling that savvy investors believe a potential market exists. Personally, I’m excited about companies attacking the following:

Hassle free, instant, and international money transfer. Paypal was supposed to allow us to do this, but even in 2013 it’s difficult and expensive to transfer money to people – let alone internationally. If someone can address the current concerns of fraud and money laundering using Bitcoin, they’ll have an instant winner on their hands.

Catapult in legalized online gambling. In the last few years, there’s been an ongoing discussion about whether America should legalize online gambling and, if so, how it should be done. Because of the decentralized, anonymous nature of Bitcoin, it’s conceivable that the emergence of a large enough Bitcoin powered gambling market would force the US government to legalize online gambling in order to get access to the additional tax revenues. This would obviously be a boon for struggling social game companies like Zynga, but also provide an enormous opportunity for new startups.

Provide a viable alternative to credit cards. Recently, there’s been a lot of action in the payments space including innovative companies like Square and Roam which are aiming to alter how consumers pay at the point of sale. Unfortunately, almost all of these new payment solutions still rely on existing credit card infrastructure to complete the payment which leaves them at the whim of interchange fees. Boston based LevelUp is tackling this issue but at the end of the day they’re still getting hit by the fees somewhere. An innovative Bitcoin company which provides a viable alternative to this existing credit card infrastructure would open the door to radical innovation in the payments space by eliminating hard interchange fees.

What might go wrong?

There’s a huge amount of opportunity in the Bitcoin space and the ecosystem is just starting to build out. With the Bitcoin boom of 2012 already inked on the Wikipedia entry, the next few months are going to be a critical time for the nascent currency. The single largest threat to Bitcoin is ultimately going to be regulation from sovereign governments. The US Department of the Treasury has already issued several statements regarding Bitcoin, and it’s clear the government isn’t happy with the situation as it stands now.

Regardless of what regulation is passed, Bitcoin is technologically different from previous virtual currencies, and the ecosystem has already demonstrated its resilience and capacity for innovation.

A modest proposal for how we can fix WordPress

Last month, a post started making the rounds on the internet decrying the dire state of the WordPress codebase. James highlights several legitimate gripes, but unfortunately he muddles the discussion by mixing major problems with otherwise minor concerns. Another problem is that James’ post considers the issues purely from a technical perspective, when ultimately business concerns are going to motivate any drastic change in WordPress. Looking at James’ post and at my own run-ins with WordPress, the biggest problems with WordPress as it exists now are:

Broken data model: As James pointed out, as people have started to use WordPress as a CMS by adding things like custom post types and plugins, it’s become clear that the underlying data model is too rigid to properly support this need. The result is that developing custom database functionality is notably difficult, which limits the types of extensions that can be built purely inside WordPress.

Limited separation of concerns: Throughout WordPress, globals are heavily used, templates are free to interact with the database, and there’s generally no concept of MVC separation. Apart from being confusing, this makes it difficult to effectively reason about the behavior of a WordPress install, making “smart” caching impossible. Additionally, it makes it difficult for dedicated “frontend” developers to work on templating since they’re often left juggling PHP code. Both of these issues ultimately make running a WordPress site more expensive since you’ll need more resources for operations and development.

No OO/Encapsulation, No Namespaces, No API: Owing to its PHP4+ heritage, the core WordPress code is entirely procedural, and because of this every function touches the global namespace. WordPress also doesn’t have native support for serving as an API backend, exposing its data in different formats, or interacting with non-browser clients. The OO and namespace issue is largely technical, but it makes it difficult to develop modular WP components or mix in off-the-shelf PHP packages. The lack of a robust API makes it impossible to use a single WordPress installation to both serve content on the web and act as a service for mobile clients, which ultimately limits its utility.

So how do we fix it?

“Fixing” an application as large as WordPress is obviously a herculean undertaking, especially because of the need to balance the existing ecosystem with the need for a clean, strong foundation. The reality is that modernizing WordPress is ultimately going to require a full rewrite, but I think it could be strategically orchestrated to win community support for the backwards compatibility breaks.

Without further ado, here’s how I would do it:

June ’13 – Release Twig for WordPress

Twig for WordPress is a fully featured implementation of the Twig templating engine for WordPress. It allows developers to write WordPress themes in Twig instead of mixing PHP and HTML. Along with Twig, the plugin includes modern template caching techniques like partial page rebuilds and ESI support. In order to leverage Twig and its related benefits, developers have to write their themes with reasonably strict View/Controller separation since variables must be explicitly passed to Twig templates.
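To make the View/Controller separation concrete, here’s a minimal sketch of what the “controller” half of a Twig for WordPress theme might look like. The plugin itself is hypothetical, but the Twig 1.x calls are the standard library API and get_post() and get_template_directory() are stock WordPress functions:

```php
<?php
// Hypothetical "controller" half of a Twig for WordPress theme: build the
// variables the template is allowed to see, then hand rendering off to Twig.
$loader = new Twig_Loader_Filesystem(get_template_directory() . '/templates');
$twig   = new Twig_Environment($loader, array('cache' => '/tmp/twig-cache'));

// Only what's passed here is visible inside single.html.twig -- no globals
// leak into the template, which is what enforces the separation.
echo $twig->render('single.html.twig', array(
    'post' => get_post(),
));
```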

Theme designers are initially hesitant but once they see how much easier tracing the structure of Twig templates is versus straight PHP they’re converts. Developers are also fans since they enjoy being able to make the necessary page variables available in a template and then hand pages off to be themed. Benchmarks are done. Hackathons are sponsored. Themes are converted to Twig.

September ’13 – Release Doctrine Power Tools (DPT)

Leveraging Doctrine2, DPT enhances WordPress by augmenting it with the Doctrine2 ORM and associated “power tools”. This allows developers to seamlessly create new MySQL tables and then automatically generate administration CRUD for those tables. In addition, custom plugin code can choose to leverage Doctrine2 to interact with the new tables. With DPT, WordPress administrators are also able to design custom forms to insert data into custom tables and then filter and export the data in these tables.
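DPT is hypothetical, but the Doctrine2 half of the story is standard. Here’s a sketch of what custom plugin code might look like, assuming the plugin exposed a dpt_entity_manager() helper that hands back a configured EntityManager:

```php
<?php
use Doctrine\ORM\Mapping as ORM;

/**
 * A custom table defined as a plain Doctrine2 entity instead of being
 * shoehorned into wp_posts.
 *
 * @ORM\Entity
 * @ORM\Table(name="event_registrations")
 */
class EventRegistration
{
    /** @ORM\Id @ORM\Column(type="integer") @ORM\GeneratedValue */
    protected $id;

    /** @ORM\Column(type="string", length=255) */
    protected $attendeeName;

    /** @ORM\Column(type="datetime") */
    protected $registeredAt;

    public function __construct($attendeeName)
    {
        $this->attendeeName = $attendeeName;
        $this->registeredAt = new \DateTime();
    }
}

// dpt_entity_manager() is an assumed helper the plugin would provide.
$em = dpt_entity_manager();
$em->persist(new EventRegistration('Jane Doe'));
$em->flush(); // Doctrine generates the INSERT into event_registrations
```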

Developers familiar with ORMs are immediately excited and after they try it out they’re hooked. They start evangelizing DPT in the community because it takes the drudgery out of creating custom database functionality in WordPress. Enterprise users slowly get wind of it and adopt it as well since it empowers their marketing team to do more without involving developers. WordPress has an ORM. Everything isn’t being stuffed into wp_posts.

January ’14 – Release WP Paladin Alpha

WP Paladin is a PHP 5.3+ object oriented rewrite of WordPress with an additional “compatibility layer” which gives legacy plugins and themes access to the normal WordPress API. From a user perspective, Paladin has the same installation procedure, Admin UI, and basic functionality as stock WordPress. Additionally, a handful of the most popular plugins have been ported to be 100% compatible with Paladin. Technically, WP Paladin shares several key Symfony2 components with Drupal 8, notably the HttpKernel, which allows interoperability with other apps using HttpKernel. It also supports Twig templating and ships with the Doctrine2 ORM and DPT.

Since Twig for WP was released in June ’13, dozens of Twig themes have become available and early adopters are eagerly experimenting with Paladin. Although it currently only supports a handful of traditional WordPress plugins, it’s faster, easier to develop on, and plays well with others. The excitement is palpable. Blog posts are written. SXSW tickets are bought.

March ’14 – Release WP Paladin Beta

WP Paladin Beta is similar to the initial release except the “compatibility layer” has been removed, critical bugs have been fixed, and the platform is significantly more stable. The beta version is launched at an exclusive SXSW party to a frenzied mob. It’s taken a little less than a year, but the WordPress codebase has been modernized and new features have been added. Additionally, the most popular WordPress themes and plugins have been ported to be compatible with the Paladin codebase.

The download counter hits 1 million by the end of SXSW. Congratulations are in order. VC Money is raised.

Great, count me in!

Doing something like this would certainly be a great way to “earn your stripes” but ultimately it’s going to end up burning thousands of man hours for an unknown payoff.

But who knows, maybe a company with deep pockets, talented engineers, and a disposition for risk will give it the ole college try.

AWS: What are the key Amazon Web Services components?

Over the last couple of years, the popularity of “cloud computing” has grown dramatically, and along with it so has the dominance of Amazon Web Services (AWS) in the market. Unfortunately, AWS doesn’t do a great job of explaining exactly what AWS is, how its pieces work together, or what typical use cases for its components may be. This post is an effort to address this by providing a whip-around overview of the key AWS components and how they can be effectively used.

Great, so what is AWS? Generally speaking, Amazon Web Services is a loosely coupled collection of “cloud” infrastructure services that allows customers to “rent” computing resources. What this means is that, using AWS, you as the client are able to flexibly provision various computing resources on a “pay as you go” pricing model. Expecting a huge traffic spike? AWS has you covered. Need to flexibly store between 1 GB and 100 GB of photos? AWS has you covered. Additionally, the components that make up AWS are generally loosely coupled, meaning they can work independently or in concert with other AWS resources.

Since AWS components are loosely coupled, you can mix and match only what you need. Here’s an overview of the key services.

Route53

What is it? Route53 is a highly available, scalable, and feature rich domain name service (DNS) web service. What a DNS service does is translate a domain name like “setfive.com” into an IP address like 64.22.80.79, which allows a client’s computer to “find” the correct server for a given domain name. In addition, Route53 has several advanced features normally only available in pricey enterprise DNS solutions. Route53 would typically replace the DNS service provided by your registrar, like GoDaddy or Register.com.

Should you use it? Definitely. Although it isn’t free, after last year’s prolonged GoDaddy outage it’s clear that DNS is a critical component, and using a company that treats it as such is important.

Simple Email Service

What is it? Simple Email Service (SES) is a hosted transactional email service. It allows you to easily send highly deliverable emails using a RESTful API call or via regular SMTP without running your own email infrastructure.

Should you use it? Maybe. SES is comparable to services like SendGrid in that it offers a highly deliverable email service. Although it is missing some of the features that you’ll find on SendGrid, its pricing is attractive and the integration is straightforward. We normally use SES for application emails (think “Forgot your password”) but then use MailChimp or SendGrid for marketing blasts and that seems to work pretty well.
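For illustration, here’s roughly what sending a transactional email looks like with the AWS SDK for PHP 2. The credentials and addresses below are placeholders, and the “Source” address has to be verified with SES before it can send:

```php
<?php
require 'vendor/autoload.php';

use Aws\Ses\SesClient;

// Placeholder credentials -- in practice pull these from config, not code.
$ses = SesClient::factory(array(
    'key'    => 'YOUR_ACCESS_KEY_ID',
    'secret' => 'YOUR_SECRET_KEY',
    'region' => 'us-east-1',
));

// A typical application email, e.g. "Forgot your password".
$ses->sendEmail(array(
    'Source'      => 'no-reply@example.com', // must be an SES-verified sender
    'Destination' => array('ToAddresses' => array('user@example.com')),
    'Message'     => array(
        'Subject' => array('Data' => 'Reset your password'),
        'Body'    => array(
            'Text' => array('Data' => 'Click the link below to reset your password.'),
        ),
    ),
));
```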

Identity and Access Management

What is it? Identity and Access Management (IAM) provides enhanced security and identity management for your AWS account. In addition, it allows you to enable “multi-factor” authentication to further protect access to your account.

Should you use it? Definitely. If you have more than one person accessing your AWS account, using IAM will allow everyone to get a separate account with fine-grained permissions. Multi-factor authentication is also critically important since a compromise at the infrastructure level would be catastrophic for most businesses. Read more about IAM here.

Simple Storage Service

What is it? Simple storage service (S3) is a flexible, scalable, and highly available storage web service. Think of S3 like having an infinitely large hard drive where you can store files which are then accessible via a unique URL. S3 also supports access control, expiration times, and several other useful features. Additionally, the payment model for S3 is “pay as you go” so you’ll only be billed for the amount of data you store and how much bandwidth you use to transfer it in and out.

Should you use it? Definitely. S3 is probably the most widely used AWS service because of its attractive pricing and ease of use. If you’re running a site with lots of static assets (images, CSS assets, etc.), you’ll probably get a “free” performance boost by hosting those assets on S3. Additionally, S3 is an ideal solution for incremental backups, both data and code. We use S3 extensively, usually for hosting static files, frequently backing up MySQL databases, and backing up git repositories. The new AWS S3 Console also makes administering S3 and using it non-programmatically much easier.
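As an example of the backup use case, uploading a nightly database dump with the AWS SDK for PHP 2 is only a few lines. The bucket and file names here are placeholders:

```php
<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = S3Client::factory(array(
    'key'    => 'YOUR_ACCESS_KEY_ID',
    'secret' => 'YOUR_SECRET_KEY',
));

// Push last night's MySQL dump up to S3 as an incremental backup.
$s3->putObject(array(
    'Bucket'     => 'my-backup-bucket',
    'Key'        => 'backups/db-2013-04-01.sql.gz',
    'SourceFile' => '/tmp/db-2013-04-01.sql.gz',
));
```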

Elastic Compute Cloud

What is it? Elastic Compute Cloud (EC2) is the central piece of the AWS ecosystem. EC2 provides flexible, on-demand computing resources with a “pay as you go” pricing model. Concretely, what this means is that you can “rent” computing resources for as long as you need them and process any workload on the machines you’ve provisioned. Because of its flexibility, EC2 is an attractive alternative to buying traditional servers for unpredictable workloads.

Should you use it? Maybe. Whether or not to use EC2 is always a controversial discussion because the complexity it introduces doesn’t always justify its benefits. As a rule of thumb, if you have unpredictable workloads like sporadic traffic using EC2 to run your infrastructure is probably a worthwhile investment. However, if you’re confident that you can predict the resources you’ll need you might be better served by a “normal” VPS solution like Linode.

Elastic Block Store

What is it? Elastic Block Store (EBS) provides persistent storage volumes that attach to EC2 instances, allowing you to persist data past the lifespan of a single EC2 instance. Due to the architecture of Elastic Compute Cloud, all the storage systems on an instance are ephemeral: when an instance is terminated, all the data stored on that instance is lost. EBS addresses this issue by providing persistent storage that appears on instances as a regular hard drive.

Should you use it? Maybe. If you’re using EC2, you’ll have to weigh the choice between using only ephemeral instance storage or using EBS to persist data. Beyond that, EBS has well-documented performance issues, so you’ll have to be cognizant of that while designing your infrastructure.

CloudWatch

What is it? CloudWatch provides monitoring for AWS resources including EC2 and EBS. CloudWatch enables administrators to view and collect key metrics and also set a series of alarms to be notified in case of trouble. In addition, CloudWatch can aggregate metrics across EC2 instances which provides useful insight into how your entire stack is operating.

Should you use it? Probably. CloudWatch is significantly easier to set up and use than tools like Nagios, but it’s also less feature-rich. We’ve had some success coupling CloudWatch with PagerDuty to provide alerts in case of critical service interruptions. You’ll probably need additional monitoring on top of CloudWatch, but it’s certainly a good baseline to start with.

Anyway, the AWS ecosystem includes several additional services but these are the ones that I felt are key to getting started on AWS. We haven’t had a chance to use it yet but Redshift looks like it’s an exciting addition which will probably make this list soon. As always, comments and feedback welcome.

Tips: A Framework for Communicating Your Ideas to Engineers

One of the hardest things to do when building a software product is effectively communicating your ideas to your development team, and these issues are usually magnified early on in your product process. Generally, communication issues between stakeholders and developers fall into “macro details”, “expected behavior”, or “user interface concerns”.

Despite being an oxymoron, “macro details” captures the types of errors that occur at the architecture level. For example, imagine you were building a calendaring system for intramural sports teams where each team manages their games for the season using the calendar. A “macro detail” would be that each “team” owns a single “calendar”, a potential error then would be the development team coming back with a solution where every “player” on a “team” owns a “calendar”. Following from that, the “expected behaviors” of the system would capture issues like “what happens if two teams try to schedule a game for the same day?” or “can a player be on more than one team?”. A defect then would be that the stakeholders expected the system to not allow multiple games on the same day but the system allowed it. Finally, “user interface” concerns would typically encompass how the various components of the application are organized and what the different screens are.

Great, so how do you avoid these problems? What follows below is a loose framework for mitigating these issues. It certainly isn’t original, but rather a synthesis of techniques that I’ve seen power effective communication.

Oh hey, from 10,000 ft.

For a totally new product, getting started is often the hardest part. Usually, the entrepreneur or executive who is building the product has been frantically planning, talking about, and “living” the product for some time before the project actually kicks off. The result is that the stakeholder intimately knows the ins and outs of the project, but the rest of the team won’t necessarily be aligned with the primary stakeholder. Because of this, starting by writing a 1 – 2 sentence “10,000 ft. view” is usually a great first step. Using the sports calendaring example from above, you might end up with:

I want to build a calendaring application to allow intramural (think old man kickball) sports teams to collaboratively plan their games for the season.

Who, What, Where?

With the “big idea” down on paper, the next step is to start defining the primary objects in the system and how they’re related. At this stage, many people develop “user personas”, which describe typical users of the system, and then elaborate on how each persona will use the system. In my experience, this approach is effective, but inexperienced stakeholders tend to get caught up in the details and ultimately have a difficult time reaching the right level of abstraction. Looking at the calendaring app, some of the objects you’d end up listing would include:

  • Teams
    • Composed of players
    • Own a single calendar
    • Linked to a single league
  • Players
    • Primary actor in the system
    • Each user will be represented by a player and own a single profile
    • Linked to a single team
  • Calendars
    • Owned by a single team
    • Contain games on specific days
  • Leagues
    • Comprised of multiple teams

Anyone familiar with the Unified Modeling Language will notice the format is similar to how UML uses nouns and verbs, but it’s a bit different. I’ve always had better luck using natural language to write out the relationships between entities and then converting that into an entity relationship model and finally RDBMS tables.
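For example, the natural language relationships above map almost one-to-one onto ORM annotations. Here’s a sketch of the Team entity using Doctrine2 (the field names are illustrative):

```php
<?php
use Doctrine\ORM\Mapping as ORM;

/** @ORM\Entity */
class Team
{
    /** @ORM\Id @ORM\Column(type="integer") @ORM\GeneratedValue */
    protected $id;

    /**
     * "Composed of players"
     * @ORM\OneToMany(targetEntity="Player", mappedBy="team")
     */
    protected $players;

    /**
     * "Own a single calendar"
     * @ORM\OneToOne(targetEntity="Calendar", mappedBy="team")
     */
    protected $calendar;

    /**
     * "Linked to a single league"
     * @ORM\ManyToOne(targetEntity="League", inversedBy="teams")
     */
    protected $league;
}
```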

Rules, Rules, Rules

Once the primary entities are defined, the next step is to write out the business logic to minimize confusion around how the system should work. There are formal structures for this as well, including behavior driven development, but in my experience it’s easier to kickstart the conversation with natural language specifications and then introduce BDD later in the development cycle. Anyway, running with the example, you’d end up writing out something like:

  • Users (Players)
    • Can create accounts
    • Can create games on a calendar
    • Can join “Teams”
    • Can’t join two Teams
  • Teams
    • Created by “Players”
    • Can’t be deleted after there’s at least one member
  • Calendars
    • Automatically created when a team is created
    • Won’t allow two games on the same day
    • Can’t be deleted
    • Will only allow users to edit their own events/games

Like the entity relationships, I usually start with natural language and keep the definition of “business logic” pretty loose so that I can capture as much information from the stakeholder as possible and then formalize from there.
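Eventually these rules become guard clauses in code. As a minimal sketch, the “won’t allow two games on the same day” rule might look like:

```php
<?php
class Calendar
{
    /** @var array Game descriptions keyed by date string, e.g. "2013-06-01" */
    protected $games = array();

    public function addGame(\DateTime $date, $description)
    {
        $key = $date->format('Y-m-d');

        // Business rule: a calendar won't allow two games on the same day.
        if (isset($this->games[$key])) {
            throw new \DomainException("A game is already scheduled on {$key}.");
        }

        $this->games[$key] = $description;
    }
}
```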

Draw me a map

Finally, the last piece is usually communicating what the UI of the application should be. Depending on the makeup of the team, the amount of input that business stakeholders have in this respect might be minimal. However, if there isn’t a dedicated UI/UX resource available, UI mockups will definitely be a valuable asset in helping communicate the vision of the application.

There are several tools for developing UI mockups, including Balsamiq, OmniGraffle, and Mockflow. Generally, any tool you’re comfortable with will work fine, but the key concern should be keeping the designs “low fidelity” enough that people concentrate on the ideas, not the specific implementation. I’ve always been partial to Balsamiq since it’s attractively priced, easy to use, and the final mockups are more like sketches than a finished design.

Anyway, sticking with the calendaring project you might end up making a screen that looks like this:

Like everything else, the definition of UI mockups is generally flexible. Some people like to produce fairly high fidelity designs, and other people like to leave them as “pen and paper” type sketches. At the end of the day, it’ll be a project by project decision.

Wrapping up

Well, that’s a crash course, with examples, of how I generally work with key stakeholders to sketch out projects. Ultimately, figuring out the best way to communicate your vision to your engineering team is going to be a personal decision. You’ll probably end up having to mix together several techniques that take advantage of the skills on your team and match the requirements of your project.

Symfony2: Configuring VichUploaderBundle and Gaufrette to use AmazonS3

Last week, I was looking to install the VichUploaderBundle into a Symfony2 project to automatically handle file uploads. As I was looking through the Vich documentation, I ran across a section describing how to use Gaufrette to skip the local filesystem and push files directly to Amazon S3. Since we’d eventually need to load balance the app and push uploaded files to S3 anyway, I decided to set it up out of the gate. Unfortunately, the documentation for setting up Vich with Gaufrette is a bit opaque, so here’s a step by step guide to getting it going.

Install Everything

The first thing you’ll want to do is install all the required packages.
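If you’re using Composer, a require block along these lines will work. The version constraints are illustrative; the package names are what VichUploaderBundle, KnpGaufretteBundle, and the v1 AWS SDK (which Gaufrette’s S3 adapter uses) published under at the time:

```json
{
    "require": {
        "vich/uploader-bundle": "dev-master",
        "knplabs/knp-gaufrette-bundle": "dev-master",
        "amazonwebservices/aws-sdk-for-php": "1.6.*"
    }
}
```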

Once all the packages are installed, you’ll need to configure *both* Gaufrette and Vich. This is where the documentation broke down a bit for me. You’ll need your Amazon AWS “Access Key ID” and “Secret Key” which are both available at https://portal.aws.amazon.com/gp/aws/securityCredentials if you’re logged into AWS.

Configure It
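Here’s a sketch of the relevant config.yml pieces, wiring the AWS keys from the previous step in as parameters. The bucket name, service id, and parameter names are placeholders, so double check the key names against the bundle documentation for your versions:

```yaml
# app/config/config.yml
knp_gaufrette:
    adapters:
        uploads:
            amazon_s3:
                amazon_s3_id: acme.aws_s3     # service defined below
                bucket_name:  my-upload-bucket
    filesystems:
        uploads:
            adapter: uploads

vich_uploader:
    db_driver: orm
    gaufrette: true
    storage:   vich_uploader.storage.gaufrette
    mappings:
        logo:
            upload_destination: uploads       # the Gaufrette filesystem above
            uri_prefix:         https://s3.amazonaws.com/my-upload-bucket

services:
    acme.aws_s3:
        class: AmazonS3                       # from the v1 AWS SDK
        arguments:
            - { key: "%aws_key%", secret: "%aws_secret%" }
```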

Once everything is configured at the YAML level, the final step is adding the Vich annotations to your entities.
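A minimal sketch of an uploadable entity follows; the class and field names are illustrative, but note that the mapping name matches the “logo” mapping from the config above:

```php
<?php

use Doctrine\ORM\Mapping as ORM;
use Symfony\Component\HttpFoundation\File\File;
use Vich\UploaderBundle\Mapping\Annotation as Vich;

/**
 * @ORM\Entity
 * @Vich\Uploadable
 */
class Company
{
    /**
     * @Vich\UploadableField(mapping="logo", fileNameProperty="logo")
     * @var File
     */
    protected $logoFile;

    /** @ORM\Column(type="string", length=255, nullable=true) */
    protected $logo;

    /** @ORM\Column(type="datetime", nullable=true) */
    protected $updatedAt;

    public function setLogoFile(File $file = null)
    {
        $this->logoFile = $file;
    }

    public function setUpdatedAt(\DateTime $updatedAt)
    {
        $this->updatedAt = $updatedAt;
    }
}
```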

Make sure you add the “@Vich\Uploadable” annotation to your Entity or Vich will fail silently.

The “mapping” specified in “@Vich\UploadableField(mapping=”logo”, fileNameProperty=”logo”)” needs to match the value under “vich_uploader.mappings” which you defined in config.yml

Finally, one last “gotcha” to be cognizant of is this bug – https://github.com/dustin10/VichUploaderBundle/issues/123. Since Vich uses Doctrine lifecycle callbacks to manage files, if no Doctrine fields are changed then the Vich code isn’t executed. The easiest way to get around this (and what we used) is to manually update the “updated_at” column every time a form is submitted, ensuring the upload handling code is executed.
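Concretely, the workaround is a one-liner before you flush, assuming an updatedAt column like the one sketched above:

```php
<?php
// Touch updated_at so Doctrine registers a change and fires the lifecycle
// callbacks that Vich hooks into to move the uploaded file.
$company->setUpdatedAt(new \DateTime());
$em->persist($company);
$em->flush();
```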

Anyway, as always, questions and comments are welcome.