Adding ORDER BY FIELD to Propel Criterias

Every now and then, we use Sphinx to provide full-text searching on MySQL InnoDB tables. Sphinx is pretty solid: it's easy to set up, fast, and easy to deploy.

My one big issue with Sphinx has always been making it play nice with Symfony, specifically Propel. Sphinx returns its result set as an ordered list of [id, weight] pairs for the documents it matched. As outlined here, the idea is to then hit your MySQL server for the actual documents and use "ORDER BY FIELD(id, [id list])" to keep them in the order Sphinx returned them.
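For example, if Sphinx returned the ids 12, 7, 3 in relevance order, the follow-up query (against a hypothetical article table) would look something like:

SELECT *
FROM article
WHERE id IN (12, 7, 3)
ORDER BY FIELD(id, 12, 7, 3);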

The problem is, Propel Criteria objects provide no mechanism to set an ORDER BY FIELD. This is an issue because if you drop Criterias you lose Propel Pagers, which generally leads to a lot of duplicated code and honestly just isn't very elegant.

Anyway, after some thought I came up with this solution.

If you read through the definition of “Criteria::addDescendingOrderByColumn()”:
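// roughly what it looks like in Propel 1.x
public function addDescendingOrderByColumn($name)
{
    $this->orderByColumns[] = $name . ' ' . self::DESC;
    return $this;
}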

All it really does is add the second part of the ORDER BY clause to an array, which then gets joined up to build the final SQL. Because of this, you can actually just push an extra element onto the orderByColumns array and Propel will execute an ORDER BY FIELD SQL statement.

To make the magic happen, I sub-classed Criteria and added an addOrderByField() function that takes the column to order by along with the list of values to order against.
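A minimal sketch of the subclass, assuming Propel 1.x (where BasePeer builds the ORDER BY clause from getOrderByColumns() and passes any entry containing parentheses through to the SQL as-is):

class sfCriteria extends Criteria
{
    // Raw "FIELD(...)" expressions to tack onto the ORDER BY clause.
    protected $orderByFields = array();

    // $column is a column name (e.g. a Peer's ID constant) and $values is
    // the list of ids in the order Sphinx returned them.
    public function addOrderByField($column, array $values)
    {
        $this->orderByFields[] = 'FIELD(' . $column . ', ' . implode(', ', $values) . ')';
        return $this;
    }

    // BasePeer calls this when generating SQL; merging here injects the
    // FIELD() expressions into the final ORDER BY. (You could also push
    // straight onto the protected $orderByColumns array; a separate list
    // just makes the clearing below explicit.)
    public function getOrderByColumns()
    {
        return array_merge(parent::getOrderByColumns(), $this->orderByFields);
    }
}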

8/8/12: Update per Simon’s comment below

Also add this function to make sure your ORDER BY FIELD columns get cleared:
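// again a sketch, matching the subclass above
public function clearOrderByColumns()
{
    $this->orderByFields = array(); // reset our FIELD() expressions as well
    parent::clearOrderByColumns();
    return $this;
}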

To use it, do something like this:
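// ArticlePeer is a stand-in model class and the ids are made up;
// sfPropelPager is symfony 1.x's stock Propel pager
$ids = array(12, 7, 3); // as returned by Sphinx, already in relevance order

$c = new sfCriteria();
$c->add(ArticlePeer::ID, $ids, Criteria::IN);
$c->addOrderByField(ArticlePeer::ID, $ids);

$pager = new sfPropelPager('Article', 20);
$pager->setCriteria($c);
$pager->init();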

And that's about it. Since sfCriteria is a sub-class of Criteria, the code works seamlessly with existing PropelPagers and anything else that expects a Propel Criteria.

Regex To Extract URLs From Plain Text

Recently, a project of ours pulled data from numerous APIs, and sometimes the data contained URLs that were not HTML links (i.e. they were just http://www.mysite.com instead of <a href="http://www.mysite.com">http://mysite.com</a>). I searched around the web for a while and had no luck finding a regex that would extract only URLs that are not already wrapped inside an HTML tag. I came up with the following regex:

/(?<![\>https?:\/\/|href=\"'])(?<http>(https?:[\/][\/]|www\.)([a-z]|[A-Z]|[0-9]|[\/.]|[~])*)/

Parts of it are taken from other examples of URL extractors. However, none of the examples I found had lookarounds to make sure the URL isn't already linked. I am not a master of regex, so there may be a better expression than this one. The above expression is written to be compatible with PHP's preg_replace() function. A more generic version is as follows:

(?<![\>https?://|href="'])(?<http>(https?:[/][/]|www\.)([a-z]|[A-Z]|[0-9]|[/.]|[~])*)

This expression will match http://www.mysite.com, www.mysite.com, and any subdomains of a site. The first matched group is the URL. One thing to note: check whether the matched URL starts with http://, and if it does not, prepend it; otherwise the link will be treated as relative and produce something like http://www.mysite.com/www.mysite.com.
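As a rough sketch, here is how that check might be glued together with preg_replace_callback() (requires PHP 5.3+ for the closure; the input string is made up):

$pattern = '/(?<![\>https?:\/\/|href=\"\'])(?<http>(https?:[\/][\/]|www\.)([a-z]|[A-Z]|[0-9]|[\/.]|[~])*)/';
$text = 'Check out www.mysite.com and http://www.othersite.com/page for details.';

$linked = preg_replace_callback($pattern, function ($matches) {
    $url = $matches['http'];
    // prepend http:// to bare www. matches so the link is not relative
    $href = (strpos($url, 'http') === 0) ? $url : 'http://' . $url;
    return '<a href="' . $href . '">' . $url . '</a>';
}, $text);

echo $linked;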

One tool that was very helpful in building this was http://gskinner.com/RegExr. It gives you a real-time visual preview of what your expression will match as you write it.

Note: You will eventually lose the battle of extracting URLs with regex. For example, the above expression will fail on style="background:url(http://mysite.com/image.jpg)". For a more robust solution, it may be worthwhile to parse the DOM and run the regex per element.

Random acts of madness: JS+Flex+Rhino – WebWorkers for IE

Preface: This is a bad idea with untested code. If you deploy it on a production server, bad things will happen.

A few weeks ago I was trolling the Internet and ran across an interesting piece over at John Resig’s blog about Javascript WebWorkers. Basically, WebWorkers are a draft recommendation that allows you to run Javascript functions on a background (non-UI) thread. Essentially, they let you do long-running computations without hanging the browser’s UI. Pretty neat. The problem is that they are currently only available in Firefox 3.5+, Safari 4, and Chrome 3.0.

In my never-ending quest to use every buzzword at least once, I decided to try to implement a compatibility layer to bring support for WebWorkers to other browsers. The plan was to use Java 6’s new embedded Javascript interpreter (it’s just Rhino) to execute the WebWorker code server side and then pipe the output back to the client. Again, this is really just a proof of concept.

There are three parts to the rig: the client Javascript library, a Flex/AS3 application for streaming client to server communication, and a Java application that uses Rhino to execute the Javascript.

Client Javascript

The client Javascript detects the user’s browser and defines a “Worker” object if the browser doesn’t natively support WebWorkers. The new “Worker” object uses the Flex application to pass messages back and forth to the server and calls the user’s onmessage function when data arrives from the server.

I swiped the browser detection code from Quirksmode and it seems to work fairly well. The rest of the code is below:
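The gist of the fallback is sketched below; the element id and the conduit method names are illustrative stand-ins, not necessarily what sfwwcompat.js actually uses:

function getConduit() {
    // The embedded SWF; methods exposed via ExternalInterface.addCallback()
    // show up as methods on the element itself.
    return document.getElementById("sfWebWorkerConduit");
}

if (typeof Worker == "undefined") {
    var sfWorkers = {}, sfNextWorkerId = 0;

    // Stand-in Worker that proxies everything through the Flex conduit.
    window.Worker = function (scriptUrl) {
        this.id = sfNextWorkerId++;
        this.onmessage = null;
        sfWorkers[this.id] = this;
        getConduit().createWorker(this.id, scriptUrl);
    };

    window.Worker.prototype.postMessage = function (data) {
        getConduit().postMessage(this.id, data);
    };

    // The SWF calls this via ExternalInterface when the server sends data back.
    window.sfWebWorkerMessage = function (id, data) {
        var worker = sfWorkers[id];
        if (worker && worker.onmessage) {
            worker.onmessage({data: data});
        }
    };
}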

Flex/AS3 Application

The Flex application is basically a dumb conduit between the server and the client. All it really does is pass messages between the Java on the server and the Javascript on the client.

The trickiest part of getting this to work was Adobe’s insane rules for allowing their Socket classes to connect to servers. In order for the client to successfully connect, you need to serve an XML policy file from port 843. Additionally, this file can’t be served by an HTTP server; it has to come from a custom server that just spits back the file terminated with a null byte. A detailed description of this abortion is here: http://www.adobe.com/devnet/flashplayer/articles/socket_policy_files.html
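For reference, the policy file itself is tiny; something like this, with to-ports set to whatever your Java server listens on (9999 in the demo):

<?xml version="1.0"?>
<cross-domain-policy>
    <allow-access-from domain="*" to-ports="9999" />
</cross-domain-policy>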

This really poses two problems. One, you need to be running some random “policy file server” for Flex sockets to be of any use. And two, since 843 is a privileged port, this server can’t be started by a regular user.

The most interesting parts of the ActionScript are probably the snippets that call out to Javascript functions:

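// Tell the page the conduit is loaded; sfWebWorkersSWFReady() is defined by the client Javascript.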
ExternalInterface.call("sfWebWorkersSWFReady", true);

Java Server

The most complicated part of this whole thing is probably the Java application that actually executes the WebWorker Javascript. All the communication between the Flex client and the Java server is done with JSON. The server basically does the following (example message shapes are sketched after the list):

  1. Listens for connections from the Flex client and accepts them as they come in.
  2. When a message comes in, it is either a request to create a new web worker or a postMessage() event containing data for an existing worker.
  3. If it’s a request for a new worker, the server downloads the Javascript file and executes it inside a Rhino container.
  4. If it’s a postMessage() event, the server forwards the data to the running web worker.
  5. Going the other way, a running web worker can send messages back to the Flex client at any time.
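
For illustration, the messages might look something like this (hypothetical shapes shown as Javascript object literals; the exact field names live in the server source):

var create     = {type: "create", id: 1, url: "http://example.com/worker.js"};
var toWorker   = {type: "postMessage", id: 1, data: "input for the worker"};
// sent back by the server when the worker itself calls postMessage()
var fromWorker = {type: "message", id: 1, data: "result from the worker"};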

Anyway, I tested this on IE7+ and it seemed to work decently well. Per the warning up top, I don’t want to leave this running on a live server anywhere.

If you want to get it to actually run, do the following:

  1. Download the zip of all the sources here.
  2. Start the JAR in WebWorkerServer/WebWorkerServer.jar with java -jar WebWorkerServer.jar 9999
  3. At the top of web/sfwwcompat.js, change the IP address of the server to wherever your Java server is located (localhost if you want).
  4. Open web/wwsha1.html in IE or Chrome 2.0 and you should see stuff happen.

What’s in the box:

  • web/ contains the Javascript and a demo.
  • WebWorkerConduit/ contains the Flex application.
  • WebWorkerServer/ contains the Java server.

Credits: I borrowed the WebWorker SHA1 implementation from John Resig who adapted it from Ray C Morgan.

So here is another crazy idea. Instead of executing the WebWorker code on the server, would it be possible to dynamically make the WebWorker code re-entrant using setTimeout() on the client where loop structures exist?
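
Sketching by hand what that transformation might produce for a simple summation loop:

// original: for (var i = 0; i < n; i++) { total += i; }
function sumChunked(n, done) {
    var total = 0, i = 0;
    (function chunk() {
        var end = Math.min(i + 10000, n);
        for (; i < end; i++) {
            total += i;
        }
        if (i < n) {
            setTimeout(chunk, 0); // yield to the UI thread, then resume
        } else {
            done(total);
        }
    })();
}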

FanFeedr Widgets Are Live!

Over the past few weeks we had the opportunity to work with FanFeedr to put together some widgets for their sports news platform. Previously, FanFeedr had been using Sprout to build their widgets, but this required someone to hand-build a Flash widget for every “resource” on FanFeedr (and there are a lot of them). In addition, since the Sprout widgets are Flash, they aren’t easily crawled by search engines.

Our widgets are different. They allow FanFeedr to generate widgets on the fly for any of their pages and allow users to customize the color schemes. Check out a widget builder for the NY Yankees here.

Basically, our widget builder works by letting users customize the size and colors used in the widget. That data is serialized as a JSON object and then base64 encoded so that it can be sent to the “generator” on the server. The server then unpacks the payload and builds a widget according to the data specified in the JSON object. In addition, our embed code includes a noscript tag so that search engines pick up the links in the widget as well.
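In sketch form the handoff looks something like this (the field names and generator URL are made up for illustration):

// builder side: serialize the user's choices and base64 them for the URL
$config = array('resource' => 'ny-yankees', 'width' => 300, 'headerColor' => '#1c2841');
$payload = base64_encode(json_encode($config));
// the embed code then points at something like /widget/render?c=<payload>

// generator side: unpack the payload and build the widget to spec
$config = json_decode(base64_decode($_GET['c']), true);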

Anyway, working with FanFeedr was a great experience and we hope to continue our relationship moving forward. Go build yourself a widget!

The Redline Challenge

For one reason or another we decided to sponsor a pub crawl this weekend. The plan was hatched over some beers at Underbones on Thursday night for a Saturday morning go time. We knew we basically needed three things: a list of bars, some swag (t-shirts?), and obviously a website. We decided that the route of the crawl should follow the MBTA Red Line so that we could start downtown and then finish in Somerville. This made picking bars pretty simple, gave us some branding, and of course we registered REDLINECHALLENGE.COM.

We wanted the website to have some basic information, live location updates, and of course pictures of the debauchery. The biggest problem was that neither Daum nor I have location-aware phones. To get around this, we decided to update Twitter with our current location along with a “#loc” hashtag and have the site update based on that. Since we were already using Twitter, we decided to use Twitpic to let us post pictures to Twitter on the fly. Additionally, we took advantage of Verizon Wireless’s email-to-SMS service to let people contact us via the website. All told, we built the site in about 3 hours and it proved to be pretty useful. People used it to find us on the crawl and to contact us while we were out. Everyone also got a kick out of seeing a live photo stream.

What’s next? Clearly, The Greenline Challenge.