
Building the 9rules Network: Episode 3

As I mentioned in Episode 2, I wanted to tell you how I used our page views to build our cache of member data. This episode documents my thought process more than how I programmed it all to work. It’s more of a why than a how.

For quite a few months, we’ve featured the latest five post titles from a random set of our members. There are many ways I could have pulled this off, but I took the Occam’s Razor approach: given a choice between several solutions, I picked the simplest one to implement, which was to use our members’ feeds as the primary means of collecting their headlines.
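To make that concrete, the selection step might look something like the sketch below. The member list and feed URLs here are made up for illustration; the real list lived in our database.

```php
<?php
// Hypothetical list of member feed URLs; ours came from a database.
$member_feeds = array(
    'http://example.com/alice/feed.xml',
    'http://example.com/bob/feed.xml',
    'http://example.com/carol/feed.xml',
    // ...and so on for the rest of the network
);

// Pick a random handful of members to feature on this page view.
shuffle($member_feeds);
$selected = array_slice($member_feeds, 0, 6);
```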

Once I had chosen how I was going to collect the data, I had to decide how I’d automate that process. At the time, our server was little more than a 3-dollar Linux box sitting in someone’s closet behind the extra toilet paper and the boxes of winter clothing. To be fair, that machine served us well for quite some time, but it left us with few options for building robust automated solutions.

So how was I to automate the caching of our member data when I had so little access to, or control over, our server? I used our page views. Each time a page was hit, I’d randomly load about a half-dozen of our member feeds and create a local cache of those entry headlines. Once the local cache was more than about an hour old, the next page view would end up fetching a fresh copy of the member feed. This solution was extremely sloppy, horrible for page load time and weight, and did not add much value to our site. Yet we had very little choice at the time.
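Here’s roughly what that page-view-driven cache looked like in spirit. The cache path, TTL, and function name are my own illustration rather than the actual 9rules code:

```php
<?php
// Sketch of a page-view-driven feed cache. Paths and names are illustrative.
define('CACHE_DIR', '/tmp/feed-cache');
define('CACHE_TTL', 3600); // roughly an hour, in seconds

function cached_feed($url)
{
    $cache_file = CACHE_DIR . '/' . md5($url) . '.xml';

    // Serve the local copy if it is younger than the TTL.
    if (file_exists($cache_file) && (time() - filemtime($cache_file)) < CACHE_TTL) {
        return file_get_contents($cache_file);
    }

    // Otherwise this unlucky page view pays the cost of re-fetching the feed.
    $xml = file_get_contents($url);
    if ($xml !== false) {
        file_put_contents($cache_file, $xml);
    }
    return $xml;
}
```

The ugly part is built right into the design: whichever visitor happens to land on the page after a cache expires is the one who waits for half a dozen remote feeds to download.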

The very first version of our 9rules member data cacher ran on lastRSS. After seeing its extreme limitations, we quickly moved to Magpie, which allows us to parse every RSS version as well as Atom. We’re still using a customized version of Magpie to this day.
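For the curious, fetching and parsing a feed with Magpie looks roughly like this. The feed URL is a placeholder, but fetch_rss() and the cache constants are Magpie’s own:

```php
<?php
require_once 'magpierss/rss_fetch.inc';

// Magpie ships with its own file-based cache, which maps neatly onto the
// page-view-driven approach described above.
define('MAGPIE_CACHE_ON', true);
define('MAGPIE_CACHE_AGE', 3600); // refresh after about an hour

// Placeholder feed URL; fetch_rss() handles the various RSS versions and Atom.
$rss = fetch_rss('http://example.com/member/feed.xml');

if ($rss) {
    // Pull the latest five entry headlines.
    foreach (array_slice($rss->items, 0, 5) as $item) {
        echo $item['title'] . "\n";
    }
}
```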

In our next episode, I’ll post some code to cache XML feeds locally, which could be of much use to many of you out there.