Friday Links: How the magic happens

Happy holidays to everyone! Hopefully the eggnog has been flowing freely and, if you’re in the Northeast, you’ve been enjoying the unusually warm weather. I’d been planning to post the usual link roundup but changed my mind. I ended up seeing The Imitation Game last night and it inspired me to do something different. The movie is definitely worth seeing, and if you work in technology it illuminates a connection to a shared past that most of us don’t often think about. So, continuing in this theme, here are four links that’ll help explain how the actual sausage gets made.

Friday Links: Obama codes, MSFT <3 BTC, and hacked k-cups

It rained, it snowed, but hey, it’s Friday! Grab some beers and snack on some links.

Video: Video effects with avconv and ImageMagick

A couple of months ago I was at Firebrand Saints and started wondering how the effects they run on their videos work. The main bar at Firebrand has a few flat-screen TVs showing the cable channels you’d expect, but every now and then one of the TVs starts displaying its feed through a filter. So as you’re sitting at the bar, you’ll notice SportsCenter go from regular video to what looks like SportsCenter run through a pixelation filter. Unfortunately, the fall got a bit busy and I forgot to jot this down, but it recently came back to me while watching the Family Guy – Take On Me skit.

So how do you programmatically apply effects to video? After some searching around, it seems like the preferred way to do this is to convert the video into a series of images, apply the effects, and then encode the images back into a video. Since we’re on Linux, the weapons of choice for this are libav (an FFmpeg fork) and ImageMagick. Additionally, I used youtube-dl to grab some source video from YouTube.

Playing around with manipulating videos and images is pretty CPU intensive so I decided to do this on a c3.xlarge. Once you have a machine up, getting everything set up is just a matter of installing the tools above.
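On an Ubuntu AMI (my assumption here; package names will differ on other distros), that’s something like:

    sudo apt-get update
    # libav-tools provides avconv; imagemagick provides the convert tool
    sudo apt-get install -y libav-tools imagemagick python-pip
    # youtube-dl moves fast, so pip is usually fresher than the apt package
    sudo pip install youtube-dl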

Now, snag yourself a video from YouTube.
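youtube-dl makes this a one-liner. VIDEO_ID below is just a placeholder; drop in whatever clip you want to mangle:

    # -f mp4 grabs an mp4 copy so avconv has an easy time with it
    youtube-dl -f mp4 -o source.mp4 "https://www.youtube.com/watch?v=VIDEO_ID"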

Next, you’ll want to extract the audio and then the individual frames from the video.
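With avconv that looks something like the following (I’m assuming the mp4 carries an AAC audio track, which is typical for YouTube):

    mkdir frames
    # pull the audio track out untouched so we can stitch it back in later
    avconv -i source.mp4 -vn -c:a copy audio.m4a
    # dump every video frame to a numbered PNG
    avconv -i source.mp4 -f image2 frames/img-%06d.png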

And now for the fun part – time to apply some effects to the images! As a fun first pass I ran the “paint” transform across the images. Remember how we launched on that c3.xlarge? Well, now we can run the transforms in parallel with xargs.
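Something along these lines does it; -paint 4 is just a starting radius, so play with the number:

    # -P 4 matches the c3.xlarge's four vCPUs; raise it on a bigger box
    find frames -name '*.png' | xargs -P 4 -I {} convert {} -paint 4 {}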

Finally, you’ll need to encode the images back together into a video format of your choice.
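Back to avconv for the encode. The 30fps below is my assumption, so check the source’s real frame rate with avprobe if the timing looks off:

    # -pix_fmt yuv420p keeps the output playable in most players;
    # if avconv complains about libx264, install libavcodec-extra
    avconv -r 30 -f image2 -i frames/img-%06d.png -i audio.m4a \
        -c:v libx264 -pix_fmt yuv420p -c:a copy output.mp4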

Here’s a clip of the video I ended up with:

So what else can you do with this? Well, that’s the awesome part! Any image manipulation you can do programmatically (ImageMagick, NodeJS, etc.) you can apply to your video.
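For instance, here’s a rough take on that pixelated look from the bar, using ImageMagick’s -scale (which doesn’t interpolate, so the blockiness survives the round trip):

    # shrink each frame to 10% and blow it back up to full size
    find frames -name '*.png' | xargs -P 4 -I {} convert {} -scale 10% -scale 1000% {}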

Perhaps some Mystery Science Taylor Swift?

Unfortunately this adventure has left me with more questions than answers:

  • How do you apply filters to a video in real time?
  • Would it be possible to integrate this into a rooted (or stock) Chromecast?
  • Could you build something to support user interactivity? Maybe with Canvas/NodeJS?

I’d love any thoughts, input, or ideas!