Startups: You should be building a minimum viable business

Published in 2011, Eric Ries’s book The Lean Startup became the blueprint that dozens of “web 2.0” companies were built on. At a high level, the book promotes an “agile”-like approach to developing a new company. Loosely speaking, the “happy path” to building a new startup involves synthesizing ideas into a minimum viable product (MVP), gathering feedback, rapidly iterating, and finally reaching product-market fit. Due to its simplicity and wide applicability, the “lean startup methodology” has become wildly popular, especially among first-time entrepreneurs. With this popularity, the concept of an “MVP” has become a buzzword and rallying cry used to justify whatever people happen to be building.

Unfortunately, a common problem we’ve noticed is that people focus solely on the MVP and end up neglecting the other parts of the business, notably sales and marketing. To combat this, you should really be thinking about building a minimum viable business (MVB) instead of just an MVP. OK, great, so what are the components of an MVB?

Product (MVP)

The product that you’re actually building is ultimately going to be one of the most important parts of your new business. Surprisingly, I’d argue that the exact features that make it into the MVP aren’t terribly important. What is key is that your users experience an “aha!” moment while using the product, which helps convey its value. Another key takeaway: less is more. Start small and grow the product to avoid leaving users overwhelmed, confused, and discouraged.

User Acquisition

Awesome. So you’ve built a fantastic product; now how are you going to get it in front of people? First-time tech founders often overlook user acquisition, and the lack of an effective strategy ends up being a major risk factor. Hallmarks of an effective strategy are that it’s replicable, measurable, and generally affordable. Because of this, “getting great PR”, “going viral”, or “buying a Super Bowl commercial” generally don’t qualify as tractable strategies. Instead, things like paid search, affiliate marketing, and native content ads are more reasonable strategies to consider.

Community

Community is another area that is often overlooked. Even though it’s usually associated with B2C companies, it’s still important for B2B companies. On the B2C side, users are less likely to engage with “empty” social sites since no one wants to be the only person at the party. You’ll need a strategy to seed any social features your site has and to keep a heartbeat there once you launch. From a B2B perspective, you’ll still need to think about things like answering support issues, writing newsletters, and generating blog content. Although they seem trivial, having an actionable plan for these “community” issues helps establish user trust and win brand champions.

Metrics

The last piece of the “minimum viable business” is the set of KPIs and metrics that you’re looking to track. Tracking key indicators is important because they provide a yardstick to let you know if you’re moving in the right direction. A key point is to make sure you’re tracking useful numbers. Vanity metrics like “# of followers” or “page views” aren’t really going to help you determine the health of your business. You’ll need numbers like “customer acquisition cost” or “lifetime value”, which help you distill how your company is doing.

That’s a wrap

Wrapping up: an MVP is just one component of a successful startup. You’ll need to consider several other important aspects that will help you build, measure, and iterate along the path to a successful company.

As with all advice, just remember that 90% of all advice is bullshit.

Applying for a job? What we like to see

We recently finished hiring another engineer for our team (welcome Jared!). Along the way we saw quite a few different applications, which also led to my previous post about what we’d like to see recruiters do before contacting us.

While I am certainly no hiring expert, I think some of the following points may help out people applying for a job.

Read the Full Posting

Many job postings, including ours, will include details about what they want to see from any potential candidate. I’ve seen everything from a simple “put XYZ in the subject line” to “answer the following programming questions”. On some of our job postings we include something like “include your favorite beer” or “the best place you’ve vacationed”. These questions aren’t only used to weed out spam; we also like to see what people come up with and whether they’re a good fit. Regardless of what the posting requests, it’s important to read it fully and apply properly. Several potential candidates didn’t read ours in full and didn’t follow the simple application instructions. Not following the instructions can reflect poorly on you, as it may indicate you aren’t able to pay attention to details.

Shorter Can Be Better

I remember back in college, when working on resumes, many people advocated “keep it to a single page”; I couldn’t agree more now. I’ve received some resumes that are over six pages long. A resume, in my view, should bullet-point your skills, experience, and education. Most of the time, you can fit all of this information onto a single page. Being clear and concise pays dividends: I’m much more interested in the resumes I can look at briefly and still get a good feel for the candidate. The long resumes often spent anywhere from half a page to a full page summarizing some activity at a previous job; it’s much easier for me to ask about a specific experience of yours if I don’t understand it.

Keep the List of Languages Short

One thing I saw on an alarming number of resumes was candidates listing 10+ programming languages as their core languages. I’m all for having basic knowledge of multiple languages, but I wouldn’t list every language I’ve ever worked in as a core language.

The long lists (Java, Python, PHP, Ruby, Perl, C++, C#, and Scala), more often from recent graduates, seemed more like lists of languages the candidate knew existed. Understandably, right out of school you may not have a specific language that you know in depth. In that case, I recommend looking up what language the position you’re applying for uses, and listing that plus one or two others that complement it (such as PHP, Javascript, and SQL) rather than everything you’ve ever touched. I gave much more attention to candidates who applied with fewer languages and then listed a few projects/concepts they had done within each language.

Past Projects, Not Classwork

One of the last points I want to touch on is when people ask about past projects you’ve worked on. What I’m looking for is something that shows you can put all the programming concepts you’ve learned together to form some sort of project. Whether it’s a simple side project or something you worked on at your last job, it shows you are able to take some theory and apply it to realistic applications. Often at career fairs I’d get 30 resumes that all list the exact same project, which was an assignment from class. Coming straight from college you may not have many side projects, but have at least one (or make one), and list it first. Having at least one side project that you’ve built (or been a major part of) shows that you’re able to take what you’ve learned in theory and apply it. Classroom assignments are often “fill in the blank” or too rigid (“follow these exact instructions”) to demonstrate your full capabilities.

While these are only some of the points I look for in incoming applications, they are extremely important. Let me know if you think I’ve missed a few!

Good luck applying for your next job!

Tech: The 3 mistakes that doomed the Facebook Platform

Yesterday afternoon, PandoDaily’s Hamish McKenzie published a post titled “Move fast, break things: The sad story of Platform, Facebook’s gigantic missed opportunity”. The post outlined the lofty expectations and ultimate failures of the Facebook Platform. Central to Hamish’s piece was the thesis that a series of missteps by Facebook alienated developers and eventually pushed the platform into obscurity.

With the benefit of hindsight, I’d argue there were actually only three major mistakes that ended up dooming the Facebook Platform.

Lack of payments

Hamish mentions this, but I think the lack of payments across the platform was the source of many of its problems. With no seamless way to charge users for either “installs” themselves or “in-app purchases”, developers were forced to play the eyeball game and as a consequence were left clinging to the “viral loop”. Facebook Credits ended up being a non-starter, and as the Zynga spat demonstrated, the 30% haircut was untenable. In a world where Facebook had launched “card on file” style micropayments with the Platform, maybe we’d be exchanging Facebook Credits at Christmas.

No sponsored feed placements

Without on-platform payments, developers were essentially left chasing Facebook’s “viral loop” to drive new users, eyeballs, and hopefully, eventually, revenues. Developers soon started gaming the system, generating what users perceived as spam, and ultimately forcing Facebook to change notifications. I’d argue that had developers originally had some way to pay for sponsored feed placements, they would have been less likely to chase virality. Along with the functionality to sponsor feed posts, Facebook undoubtedly would have ended up building rate limits and other spam-fighting measures to protect the “sponsored post” product, which would have ultimately helped the platform.

Everything tied to Connect

Even today, one of the most popular components of the Facebook Platform is the Connect single sign-on piece. The problem was, and to some extent still is today, that everything was tied to Connect. Even if you were just logging into a site with Connect, it still had access to your entire Facebook account. Facebook eventually fixed this, but not before it opened the floodgates to every site posting unwanted updates, breaching user trust, and hurting the credibility of the entire platform.

The PandoDaily piece has a deeper exploration of what drove the decline of the Facebook Platform, but I think the lack of payments, the absence of sponsored feed placements, and tying everything to Connect put the platform in a difficult position from day one.

PHP: Does “big-o” complexity really matter?

Last week, a client of ours asked us to look at some code that was running particularly slowly. The code was powering an autocompleter that searched a list of US high schools and returned the matching schools along with an identifying code. We took a look, and it turns out the original developers had implemented a naive solution that was choking now that the list had grown to ~45k elements; I imagine they had only tested with a dozen or so. While implementing a slicker solution, we decided to benchmark a couple of different approaches to see how much the differences in “big-o” complexity really mattered.

The Problem

What we were looking at was the following:

  • There is a CSV file that looks something like:

ID,SCHOOL NAME,STATE
2,NMSC DEPT OF ED & SVCS,IL
3,MY SCHOOL IS NOT LISTED DOMEST,NY
4,MY SCHOOL IS NOT LISTED-INTRNT,NY
8,DISTRICT COUNCIL 37 AFSCME,NY
20,AMERICAN SAMOA CMTY COLLEGE,AS
81,LANDMARK COLLEGE,VT

With data for about 45k schools.

  • On the frontend, there was a vanilla jQuery UI autocompleter that passed a state as well as a “school name part” to the backend to retrieve autocomplete results.
  • The endpoint basically takes the state and school part, parses the available data, and returns the results as a JSON array.
  • So as an example, the function accepts something like {state: "MA", query: "New"} and returns:
[
  {name: "New School", code: 1234},
  {name: "Newton South", code: 1234},
  {name: "Newtown High", code: 1234}
]
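
As a rough sketch, the endpoint itself is just a thin wrapper around whichever search implementation is in play (the parameter handling and the searchSchools() wrapper name here are our assumptions, not the original code):

// autocomplete.php (sketch): pull the state and query from the request,
// run the search, and return the matches as JSON.
$state = isset($_GET['state']) ? $_GET['state'] : '';
$query = isset($_GET['query']) ? $_GET['query'] : '';

// searchSchools() stands in for any of the implementations described below.
header('Content-Type: application/json');
echo json_encode(searchSchools($state, $query));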

The Solutions

In the name of science, we put together a couple of solutions and benchmarked them by running each one 1000 times and calculating the min/max/average times; those values are graphed below. Each of the solutions is briefly described below along with how it’s referenced in the graph.
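
The original harness isn’t shown here, but a minimal sketch along these lines would produce the min/max/average numbers (microtime-based timing is our assumption):

// Benchmark harness (sketch): runs a search function $runs times and
// reports the min/max/average elapsed times in seconds.
function benchmark($fn, array $args, $runs = 1000)
{
    $times = [];
    for ($i = 0; $i < $runs; $i++) {
        $start = microtime(true);
        call_user_func_array($fn, $args);
        $times[] = microtime(true) - $start;
    }
    return [
        'min' => min($times),
        'max' => max($times),
        'average' => array_sum($times) / count($times),
    ];
}

// Example: print_r(benchmark('sortedTableScan', ['MA', 'New']));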

The initial solution that our client had been running read the entire CSV into a PHP array, then searched the PHP array for schools that matched the query. (readMemoryScan)
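
A sketch of that approach might look like the following (the file name and fgetcsv-based parsing are our assumptions; rows are [id, school name, state] per the CSV above):

// readMemoryScan (sketch): read the entire CSV into a PHP array up front,
// then scan the whole array for matches.
function readMemoryScan($state, $query, $file = 'schools.csv')
{
    $fp = fopen($file, 'r');
    fgetcsv($fp); // skip the header row
    $rows = [];
    while (($row = fgetcsv($fp)) !== false) {
        $rows[] = $row; // [id, school name, state]
    }
    fclose($fp);

    $results = [];
    foreach ($rows as $row) {
        // Match schools in the given state whose names start with the query.
        if ($row[2] === $state && stripos($row[1], $query) === 0) {
            $results[] = ['name' => $row[1], 'code' => (int) $row[0]];
        }
    }
    return $results;
}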

A slightly better approach is doing the search “in-place” without actually reading the entire file into memory. (unsortedTableScan)
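
Something along these lines (again a sketch, under the same assumptions):

// unsortedTableScan (sketch): stream the file one row at a time instead of
// materializing the entire CSV in memory first.
function unsortedTableScan($state, $query, $file = 'schools.csv')
{
    $results = [];
    $fp = fopen($file, 'r');
    fgetcsv($fp); // skip the header row
    while (($row = fgetcsv($fp)) !== false) {
        if ($row[2] === $state && stripos($row[1], $query) === 0) {
            $results[] = ['name' => $row[1], 'code' => (int) $row[0]];
        }
    }
    fclose($fp);
    return $results;
}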

But can we take advantage of how the data is structured? Turns out we can. Since we’re looking for schools in a specific state whose names start with a search string, we can sort the file by STATE then SCHOOL NAME, which lets us abort the search early. (sortedTableScan)
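
A sketch of the early abort (assuming the file has been pre-sorted by STATE, then SCHOOL NAME):

// sortedTableScan (sketch): because rows are grouped by state, once we have
// passed the target state's block we can stop reading entirely.
function sortedTableScan($state, $query, $file = 'schools_sorted.csv')
{
    $results = [];
    $inState = false;
    $fp = fopen($file, 'r');
    fgetcsv($fp); // skip the header row
    while (($row = fgetcsv($fp)) !== false) {
        if ($row[2] === $state) {
            $inState = true;
            if (stripos($row[1], $query) === 0) {
                $results[] = ['name' => $row[1], 'code' => (int) $row[0]];
            }
        } elseif ($inState) {
            break; // past the target state's block; no more matches possible
        }
    }
    fclose($fp);
    return $results;
}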

Since we’re always searching by STATE and SCHOOL NAME, can we exploit this to cut down the number of elements that need to be searched even further?

Turns out we can, by transforming the CSV file into a PHP array indexed by state and then writing that out as a serialized PHP object. Another detail we can exploit is that the autocompleter has a minimum search length of 3 characters, so we can build sub-arrays inside each state’s list of schools keyed on the first 3 letters of their name (serializednFileScan).

So the data structure we’d end up creating looks something like:

{
...
  "MA": {
  ...
   "AME": [...list of schools in MA starting with AME...],
   "NEW": [...list of schools in MA starting with NEW...],
  ...
  },
  "NJ": {
  ...
   "AME": [...list of schools in NJ starting with AME...],
   "NEW": [...list of schools in NJ starting with NEW...],
  ...
  },
  "CA": {
  ...
   "AME": [...list of schools in CA starting with AME...],
   "NEW": [...list of schools in CAA starting with NEW...],
  ...
  },
...
}
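
A sketch of building and querying that structure (the build step runs once, offline; function and file names here are ours):

// Build step (sketch): index schools by state, then by the first three
// letters of the school name, and serialize the structure to disk.
function buildSerializedIndex($csvFile, $indexFile)
{
    $index = [];
    $fp = fopen($csvFile, 'r');
    fgetcsv($fp); // skip the header row
    while (($row = fgetcsv($fp)) !== false) {
        list($id, $name, $state) = $row;
        $prefix = strtoupper(substr($name, 0, 3));
        $index[$state][$prefix][] = ['name' => $name, 'code' => (int) $id];
    }
    fclose($fp);
    file_put_contents($indexFile, serialize($index));
}

// serializednFileScan (sketch): jump straight to the state + prefix bucket,
// so only a handful of schools ever get scanned per query.
function serializednFileScan($state, $query, $indexFile = 'schools.ser')
{
    $index = unserialize(file_get_contents($indexFile));
    $prefix = strtoupper(substr($query, 0, 3));
    $bucket = isset($index[$state][$prefix]) ? $index[$state][$prefix] : [];

    $results = [];
    foreach ($bucket as $school) {
        if (stripos($school['name'], $query) === 0) {
            $results[] = $school;
        }
    }
    return $results;
}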

The results

Running each function 1000 times, recording the elapsed time of each run, and calculating the min/max/average times, we ended up with these numbers:

test_name            min (sec.)   max (sec.)   average (sec.)
readMemoryScan       0.662        0.690        0.673
unsortedTableScan    0.532        0.547        0.536
sortedTableScan      0.260        0.276        0.264
serializednFileScan  0.149        0.171        0.154

And then graphing the averages gets you a graphic that looks like:

[Figure: bar graph of the average execution time for each approach]

The most interesting metric is how the different autocompleters actually “feel” when you use them. We set up a demo at http://symf.setfive.com/autocomplete_test/. Turns out, a few hundred milliseconds makes a huge difference.

The conclusion

Looking at our numbers, even with relatively small data sets (<100k elements), the complexity of your algorithms matters. Even though the raw time differences are small, the responsiveness of the autocompleter varies dramatically between the implementations. Anyway, long story short? Pay attention in algorithms class.