Tips: Five guidelines for exchanging services for equity

As a services business, we’ve always been lured by the forbidden fruit of developing a SaaS product, selling it to a bunch of users, and then sitting back and watching the revenue pour in every month. Over the last few years, we’ve been approached a couple of times by people looking to have us serve as the engineering team of a company in exchange for equity that would eventually begin generating revenue. Despite this situation being relatively common, there are few guidelines on how to approach receiving equity in exchange for work as a services company. With that in mind, here are some guidelines we use in these situations. These suggestions are probably equally applicable if you’re a services company looking to develop a product in house.

Treat the project or company like a billable client

An issue that we’ve repeatedly noticed is that equity or internal projects are often treated as “second class” citizens within service companies. Since they aren’t billable, companies end up doing things like leaving them off the planning schedule, pushing the work off until Fridays, or only assigning interns to the project.

One strategy to help combat these issues is to on-board the project like you would any billable project. Depending on your process, this might mean doing things like creating a Basecamp project, adding the project to your time tracking software, and holding a “kickoff meeting”. Whatever your process may be, the takeaway is to treat the non-billable project like a normal one throughout your organization.

Set a budget and stick to it

Budgets are important for any project, but they’re especially important when equity is involved or it’s an internal project. With a non-billable project, you’ll want to avoid “equity guilt”, where you’re guilted into “doing just a little more” since you own a measurable percentage of the company, and you’ll also eventually want to be able to calculate an ROI for your investment. A proper budget helps with both: you’ll know when you’ve contributed your fair share, and hopefully one day you’ll be able to calculate your ROI against the budget.

Developing a budget in these situations is relatively easy: just take how you’d normally bill and decide on a number for version one of your buildout. If you bill hourly, pick a number of hours; if you bill by the week, pick a number of weeks. Since the project is already set up as a “first class” project, you should be able to just add a budget against it.

Someone needs to “own it”

One of the problems that arises in equity deals and internal projects is that there isn’t really a client, and consequently there often isn’t a single person responsible for making key decisions. Having a single person who ultimately “owns” the project will mitigate “design by committee” issues and also help keep the momentum of the project going.

Fair warning though: “owning” a failed project carries a heavy emotional and psychological toll, so as an executive you’ll have to be ready to support an employee who accepts this responsibility.

Clearly define responsibilities

Clearly spelling out who is responsible for what is important when a services company is being brought in for equity. Detailing responsibilities will help you avoid a situation where your team starts off as the marketing team but ends up fielding support issues and ultimately resenting the project.

Agree upon KPIs and know when to quit

Knowing what you want to track and how you’ll measure success is important when money isn’t being exchanged because it helps keep everyone accountable and prevents the project from slowly stagnating. Having a good handle on your KPIs also helps motivate the team if things are going well and they’ll ultimately be a primary factor in deciding if you should continue the project.

Unfortunately, one of the hardest aspects of any project is knowing when to quit. This decision is usually harder in equity only or internal projects since there’s no pressure of burning capital and there won’t be the finality of the money running out. Despite this, knowing when to pull the plug will help you facilitate an orderly shutdown of the project and also give everyone involved the chance to debrief.

So should you do an equity only project? Well, it depends. Given the reality of startups, chances are you’re not going to enjoy an eight figure exit and retire a millionaire. But chances are, if your team gets involved with an interesting project, they’ll get a chance to learn a lot, experiment outside their comfort zone, and maybe even leverage the project into new business. So my answer would be a “maybe”, depending on where your business is and what sorts of opportunities you’re being presented with.

PHP: Some thoughts on using array_* with closures

The other day, I was hacking away on the PHP backend for the “Startup Institute” visualization and I realized it was going to need a good deal of array manipulation. Figuring it was as good a time as any, I decided to try and leverage PHP 5.3’s new closures along with the array_* functions to manipulate the arrays. I’m not well versed in functional programming, but I’ve used Underscore.js’s array/collection functions, so this is mostly a comparison to that.

The Array

The entire shebang is on GitHub but here is the gist of what we’re interested in:

There is a CSV file that looks like ssdata.csv.sample, except with more entries, that is read into a list ($data) where every object has keys corresponding to the values in the header. Thinking in JSON, the array ends up looking like:
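In rough shape, each row becomes an associative array keyed by the header values. A minimal sketch of the loading step (the column names here are hypothetical, not the actual columns from ssdata.csv):

```php
<?php
// Hypothetical sample in the shape of ssdata.csv: a header row followed
// by data rows (the real file has more columns and many more entries).
$csv = "name,email,selected\n" .
       "Alice,alice@example.com,1\n" .
       "Bob,bob@example.com,0\n";

// Parse into a list of associative arrays, one per row, keyed by the
// values in the header row.
$lines = explode("\n", trim($csv));
$header = str_getcsv(array_shift($lines));
$data = array();
foreach ($lines as $line) {
    $data[] = array_combine($header, str_getcsv($line));
}
// Each entry in $data is now something like
// array('name' => 'Alice', 'email' => 'alice@example.com', 'selected' => '1')
```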

Ok great, but now what can we do with it?

Sorting:

Using the usort function is particularly natural with closures. Compare the following:
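A minimal sketch of the two approaches (assuming `$data` is an array of rows like the one above and `$sortKey` holds the name of the column to sort on):

```php
<?php
$data = array(
    array('name' => 'Carol'),
    array('name' => 'Alice'),
    array('name' => 'Bob'),
);
$sortKey = 'name';

// Static version: a plain named function. The sort key has to be hard
// coded, since the function can't see any local variables.
function compareByName($a, $b) {
    return strcmp($a['name'], $b['name']);
}
usort($data, 'compareByName');

// Closure version: the local $sortKey is "captured" with use(), so the
// same comparator works for whatever column you want to sort on.
usort($data, function($a, $b) use ($sortKey) {
    return strcmp($a[$sortKey], $b[$sortKey]);
});
```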

It’s pretty clear the version with closures is much shorter, more concise, and ultimately easier to follow. Being able to “capture” the local $sortKey variable is also a key feature of the closure version, since with the static version there’s no easy way to introduce variables into the sorting function.

Mapping:

In the linked example, I used array_map to basically convert an array of characters into an array of ASCII values for those characters.
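Roughly, the transformation looks like this (a sketch, not the exact code from the repo):

```php
<?php
$chars = array('a', 'b', 'c');

// Closure version: map each character to its ASCII value. The closure's
// scope is isolated -- it can only see its own argument $c.
$ascii = array_map(function($c) {
    return ord($c);
}, $chars);

// Equivalent for(...) loop, for comparison.
$asciiLoop = array();
for ($i = 0; $i < count($chars); $i++) {
    $asciiLoop[] = ord($chars[$i]);
}
```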

With such a small map function, it’s hard to see or appreciate the benefits of using a closure along with array_map. With the closure, though, you get a couple of benefits, including isolated scope, so you won’t inadvertently rely on the value of a variable that isn’t directly related to transforming the array values.

Using the closure would also “look” much cleaner if the array had non-numeric keys, since without being able to use integer indexes the for(…) loop would be more confusing.

Filter it:

This isn’t used but it could have been to return only the elements that were selected.
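It might have looked something like this (a sketch; the ‘selected’ key is an assumption, standing in for whatever “truth test” the real data called for):

```php
<?php
$data = array(
    array('name' => 'Alice', 'selected' => '1'),
    array('name' => 'Bob',   'selected' => '0'),
    array('name' => 'Carol', 'selected' => '1'),
);

// Closure version: keep only the rows that pass the "truth test".
$selected = array_filter($data, function($row) {
    return $row['selected'] === '1';
});

// foreach version, for comparison: the loop has to "skip" elements
// explicitly and build the result array by hand.
$selectedLoop = array();
foreach ($data as $row) {
    if ($row['selected'] !== '1') {
        continue;
    }
    $selectedLoop[] = $row;
}
```

One thing to watch: array_filter() preserves the original keys, so if you need a re-indexed array you’ll want to wrap the result in array_values().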

Looking at the version with the closure, it’s a bit easier to follow, and since it enforces scope isolation, if the “truth test” were a bit more complicated you’d only have to debug what’s actually inside the closure. Also, not having to “skip” some elements leaves the code with a nicer feel, and overall I’d argue it’s just better looking.

Overall Thoughts:

Using closures with the array_* functions will definitely lead to cleaner, more concise, and easier to follow code. Unfortunately, there are a few rough spots. As with most of the standard library, the argument order is inconsistent, which is a constant irritation. For example, for no apparent reason array_map is “callback, array” but array_filter is “array, callback”. Another irritation is that the “index” isn’t available inside several of the callbacks, like those of array_reduce or array_map.

Personally though, the biggest limitation is that none of the array_* functions will work with classes that implement the Traversable or Iterator interfaces. That means if you have a Doctrine_Collection and you want to reduce it down to a single result, you’re still stuck with a foreach(…).
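One workaround is to copy the collection into a plain array first with iterator_to_array() (shown here with an ArrayIterator standing in for a Doctrine_Collection):

```php
<?php
// Stand-in for a Doctrine_Collection: any class implementing Traversable.
$collection = new ArrayIterator(array(1, 2, 3, 4));

// array_reduce() won't accept the iterator directly, but it will happily
// take a plain array copy made with iterator_to_array().
$sum = array_reduce(iterator_to_array($collection), function($carry, $item) {
    return $carry + $item;
}, 0);

echo $sum; // 10
```

The copy costs memory for large collections, but it lets you keep the rest of the pipeline in the array_* style.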

Anyway, as always I’d love to hear other opinions in the comments.

Symfony2 and Ordering of Relations and Collection Form Fields

Recently I was working on a project where I kept finding myself ordering a relation over and over by something other than ID order (i.e. id = 1, 2, 3, 4, 5). For example, I always wanted my relation ordered by the ‘name’ field, rather than the ID or the order it was inserted into the DB. As an example, take a schema where a ‘Post’ entity has a one-to-many relation to ‘PostAttachment’ entities, each with a ‘name’ field.

The issue was that each time I called the relation, I wanted the output in alphabetical order. To make this the default for that relation, you can add an ‘@ORM\OrderBy’ annotation to your ‘Post’ entity:
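In annotation form, the mapping might look something like this (a sketch: the property and entity names are assumptions based on the schema described above):

```php
/**
 * @ORM\OneToMany(targetEntity="PostAttachment", mappedBy="post")
 * @ORM\OrderBy({"name" = "ASC"})
 */
protected $postAttachments;
```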

Now if you do “$post->getPostAttachments()” the results will automatically be in order. The ‘@ORM\OrderBy’ annotation takes care of the ordering automatically, and you can specify as many columns of the relation as you’d like there. In addition, this will make it so that all form collections on Post that use post_attachments are also ordered by name rather than ID. This affects the relation call every time; if you only want the ordering some of the time, look into doing the ordering in the repository for those calls.

Behat and Symfony2 – A Simple Gotcha

Recently we were using Behat on a new project with the Symfony2 extension. It took a bit to get it up and running correctly, as the docs (for the extension setup) seem to be incorrect. First, place the behat.yml directly in the project root. Second, when using the “@” notation to reference your bundle, be sure to enclose it in quotes, for example: bin/behat --init "@MyBundle". Without the quotes it will not be parsed correctly and will not set up the structure as you want.

If you are running into the following error:

Most likely the initial setup didn’t go correctly. We kept having that issue whenever we added the behat.yml to our root directory, but then didn’t use the quotes to enclose the @MyBundle. Hopefully this saves you the headache!

I’ve shot over a pull request to the main Behat repo for the extension, so it hopefully will be fixed soon.

Happy testing!

S3: Using Amazon S3 for large file transfers

A few days ago, a friend of mine reached out asking for a good solution for securely transferring a relatively large (~1GB) file to several of her prospective clients. Strangely, even in 2013 the options for transferring such a large file in a reliable manner are pretty limited. I looked into services like YouSendIt, WeTransfer, and SendThisFile, but they all suffer from similar limitations: most of them have a <1GB file size limit, their payment plans are monthly subscriptions instead of pay as you go, and they don’t offer custom domains or access control. Apart from these services, there is also the trusty old school option of an FTP server, but that raises the issues of having to maintain your own FTP server, using a non-intuitive FTP client, and still being locked into a monthly fee instead of “pay as you go”.

Stepping back and looking at the issue from a different angle, it became clear that the S3 component of Amazon’s Web Services offering is actually an ideal solution for this problem. S3 is basically a flexible “cloud based” storage solution that lets you programmatically upload files, store them indefinitely, and then serve them as you please. Looking at the issues we’re trying to overcome, S3 satisfies all of them out of the box: it has a single file size limit of 5 terabytes, files can be served off a custom domain like archives.setfive.com, billing is pay as you go depending on the resources you use, and S3 supports access control so you have fine grained control over who can download files and for how long.

So how do you actually use S3?

Setting up and using S3

  • The first thing you’ll need is an Amazon account that has S3 enabled. If you already have an Amazon account, just head over to http://aws.amazon.com/s3/ to activate S3 for your account.
  • Next, there are several ways to actually use S3 but the easy way is probably using Amazon’s own Web Console. Just head over to https://console.aws.amazon.com/s3/home?region=us-east-1 to load the console.
  • In AWS parlance, you’ll need to create a “bucket” which is the root organizational structure on S3. You can map a “bucket” to a custom domain name so think of it like the “drive” that you’re upload files to. Go ahead and create a bucket!
  • Next, click the name of your bucket and you’ll get “into” the bucket, where you should see a notice telling you the bucket is empty. This is where you can upload and delete files or create additional organizational folders. To upload a file, click the “Actions” menu in the header and select “Upload”. Then, in the popup, select “Add Files” to add some files and “Start Upload” to kick off the upload.
  • When the upload finishes, you’ll see the file you just uploaded in the left panel. Congratulations, you’re using the cloud! If you want to make the file public, just right click on it and click “Make Public”. This will let you access the file without any special URL arguments, like https://s3.amazonaws.com/big-bertha/logo_horizontal.png
  • To get the link for your file, click it to see the properties and then on the right panel you’ll see the link.
  • To delete a file, just right click on it and select “Delete”
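If you’d rather script the upload than click through the console, the AWS SDK for PHP can do the same thing programmatically. Here’s a sketch, assuming the v2 SDK is installed via Composer; the credentials, bucket, and file names are placeholders:

```php
<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

// Placeholder credentials -- swap in your own AWS access key and secret.
$client = S3Client::factory(array(
    'key'    => 'YOUR_AWS_KEY',
    'secret' => 'YOUR_AWS_SECRET',
));

// Upload a local file into the bucket.
$client->putObject(array(
    'Bucket'     => 'big-bertha',
    'Key'        => 'archive.zip',
    'SourceFile' => '/path/to/archive.zip',
));

// Generate a link that expires, for fine grained control over who can
// download the file and for how long.
$url = $client->getObjectUrl('big-bertha', 'archive.zip', '+7 days');
```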

Anyway, that’s a quick rundown of how to use Amazon’s S3 service for file transfers. The pricing is also *very* cheap compared to traditional “large file transfer” services.

Check out some other useful links about S3: