Ferretarmy.com has a New Hosting Provider!

Ferretarmy has been hosted by GoDaddy since 2007. Today, this is no more.

GoDaddy has a penchant for being in the news, almost always for negative reasons. I believe people were rightfully outraged when they heard the founder likes to shoot elephants. Their advertising is juvenile. Their pricing strategies are infuriatingly complex, and they constantly try to upsell services that often should be free (private domain registration, anyone?). Their tools (though they have come a LONG way since 2007) are complex, poorly designed, and unintuitive. Worst of all, they initially supported SOPA – this move alone caused customers to leave in droves late last year.

Well, enough is enough at some point, no? Instead of renewing my hosting contract for Ferretarmy.com in July, I decided to take the plunge and move off GoDaddy completely (hosting AND domains). Moving a website, even one built on top of WordPress, is not exactly trivial (though let’s not get too far out there – it’s definitely not rocket science either!). All Ferretarmy.com properties are now hosted with Dreamhost – so far, I’ve been very impressed with the toolset that Dreamhost provides, as well as their customer-centered approach to web hosting.

jQuery Templating with .tmpl

Templating of web controls is a valuable technique that allows for code reuse. For example, if you have a repeated list of items with common attributes that need to be displayed on a web page, templating is a great solution. Templating can often be accomplished on the server side through controls such as ASP.NET repeaters. However, with the web of today, asynchronous programming is where it’s at. With AJAX data calls, you often get JSON objects back that must be turned into HTML and displayed on a web page.

There are a few ways of doing so – you could have an asynchronous call return an HTML fragment, for instance (done that). You could also embed placeholders in hidden HTML fragments on a page, and use a combination of jQuery’s clone/replace/append methods to push object data into them (done that too). However, there’s an even better technique available now, as a first-class citizen in the jQuery library – the .tmpl templating API.

jQuery templating is a feature that’s currently in beta (though it is distributed with jQuery as of version 1.4.3), and is based upon an existing templating plugin. I’ve had the pleasure of using the jQuery templating API for a bit of time now, and I admit that it is by far the easiest, most robust templating solution I’ve found. There are template controls for repeaters ({{each}}), conditional statements ({{if}} and {{else}}), templated HTML fragments ({{html}}), and wrapping ({{wrap}}). That’s a lot of functionality out of the box!

By far my favorite part of HTML templating, however, is the <script type=’text/x-jquery-tmpl’></script> tagging that you can wrap around your template fragments. One thing I’ve disliked with templating is that you have to embed potentially malformed HTML into your page – for instance, you might end up having to template id attributes. Malformed HTML is not the worst thing in the world, especially in hidden fields, but it certainly doesn’t help with search engine rankings and such. Having a way of clearly saying “I’m a template fragment” is a great thing, both for search engines and general maintainability.
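Here’s a minimal sketch of the technique (the ids and data fields – name, price, onSale – are made up for illustration). The template lives inside a script block with the text/x-jquery-tmpl type, and a single .tmpl() call renders an array of objects against it:

<script id="itemTemplate" type="text/x-jquery-tmpl">
    <li>
        <strong>${name}</strong> – ${price}
        {{if onSale}}<span class="sale">On sale!</span>{{/if}}
    </li>
</script>

<ul id="itemList"></ul>

<script type="text/javascript">
    // Data as it might come back from an AJAX call
    var items = [
        { name: "Widget", price: "9.99", onSale: true },
        { name: "Gadget", price: "24.99", onSale: false }
    ];

    // Render each object against the template and append the results to the list
    $("#itemTemplate").tmpl(items).appendTo("#itemList");
</script>

Because the template markup sits inside a script tag of an unrecognized type, the browser never tries to render it, and the page validates cleanly.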

If you haven’t gotten a chance to take a look at jQuery templating, now is certainly a great time to do so. It comes for free with jQuery, and it’s a fast, easy, and robust solution for templating needs.

Musings on SPDY Protocol

So, yet another week, and another fairly large announcement from Google. They’ve been coming pretty fast and furious lately – the open-sourcing of both Closure and Chrome OS are two fairly recent developments. However, the one this post will focus on is the announcement that Google is working on a new application-level protocol, dubbed SPDY.

SPDY (pronounced ‘speedy’) is designed as an adjunct to the HTTP protocol, which has been around for nearly twenty years now. SPDY is designed to run atop TCP/IP, at the layer of the networking stack where the browser talks to the web server. Hence, to be able to utilize SPDY, both the browser and the web server need to understand the protocol. However, routers need not change at all, which will greatly aid implementation if it’s standardized as a protocol.

So what are the features of the SPDY protocol? First, it’s faster than standard HTTP through a number of mechanisms. By default, it gzips its headers (and all content), making the packets more lightweight. It only needs one channel for data transfer (it multiplexes requests through the channel, which functions as a stream), so there’s a tremendous reduction in connection overhead versus standard-fare HTTP. The connection is designed to remain open as long as the client wants, which means that new content can be pushed from the server to the client. Packet prioritization is built into the protocol as well.

The nicest part of SPDY, in my opinion, is that it requires the use of SSL by default. This means that every packet sent over the wire is encrypted. Deep packet inspection and packet sniffing are realities in the world, so a switch to secure communication is long overdue.
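To make the idea concrete, here’s a rough sketch of what serving content over SPDY could look like using the third-party node-spdy module for Node.js – an illustration of my own, not anything from the SPDY announcement, and it assumes a certificate and key (server.key and server.crt) already exist on disk, since SSL is mandatory:

var spdy = require('spdy');
var fs = require('fs');

// SSL is required by the protocol, so a key and certificate are mandatory
// (server.key and server.crt are assumed to exist for this example)
var options = {
    key: fs.readFileSync('server.key'),
    cert: fs.readFileSync('server.crt')
};

// Requests arrive over a single encrypted, multiplexed connection
spdy.createServer(options, function (req, res) {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('Hello over SPDY');
}).listen(443);

Browsers that don’t speak SPDY simply negotiate plain HTTPS over the same port, which is exactly the graceful fallback described below.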

The roadmap for SPDY seems pretty straightforward. It’s not finished yet, but being an open source effort, a standard could be developed in a few years. Browsers and web servers can build in protocol support while the standard is still being developed. Browsers can try to handshake in SPDY with a web server, then fall back to HTTP if necessary. There aren’t many competing protocols out there, especially ones that don’t require any router firmware updates (which would take a decade to roll out, at least). Even if the web doesn’t end up running on SPDY, the only reason would be that someone came up with an even better idea in the meantime. A new protocol is definitely in store for the future of the web.

Google Closure

Google just open sourced Closure, their robust JavaScript library. Closure is used in a variety of Google products, notably GMail and Google Docs. I had a chance to play with it for a few minutes, and I’m posting my first impressions.

Google Closure is a fairly complete JavaScript framework. In that sense, it duplicates a lot of the functionality of existing JavaScript libraries, such as jQuery and YUI. It contains behaviors, AJAX, event handling, selectors, UI components, and more. The syntax is fairly straightforward, but it’s not the same as other libraries, so there’s a learning curve in getting acquainted with it.
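As a quick taste of that syntax, here’s a small sketch of my own (it assumes base.js is already loaded on the page, and only touches a couple of the pieces I tried – goog.dom and goog.events):

// Pull in just the pieces of the library that this code needs
goog.require('goog.dom');
goog.require('goog.events');

// Build a DOM node: tag name, attributes, then child content
var note = goog.dom.createDom('div', {'class': 'note'}, 'Hello from Closure');
goog.dom.appendChild(document.body, note);

// Attach a click handler through Closure's event system
goog.events.listen(note, 'click', function () {
    alert('Clicked!');
});

Those goog.require calls at the top double as dependency declarations, which is what makes the loading story described next possible.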

Closure has excellent dependency management through a robust loading mechanism. Basically, if you don’t reference a certain piece of the library, rest assured that it’s not going to load (and slow down your page as a result). In addition, it comes with the Closure Compiler, which will walk your JavaScript, determine what libraries you need, then aggregate and compress all the required files. Being able to deploy one file instead of many is a great way of speeding up your site.

One thing that irks me is that the only way to get Closure is through Subversion access to the trunk. If Google wants Closure to be adopted widely, they’re going to need to start offering discrete, packaged versions of it. Many developers (and novices) will be put off by the current distribution method otherwise.

The API is well documented, and has links to the actual code for each method (something that I’ve not really seen before in API documentation). The API itself is very robust, with a ton of methods and accessors on each object. I’m not sure I’m a big fan of the way things are laid out in the API, but I don’t have enough experience with it to say anything definitively here.

One doesn’t have to stretch the imagination to believe that Closure has amazing potential. This is, after all, what GMail is built from. I’m excited that it’s been open-sourced and I can’t wait to use it a bit more. Google has given a grand gift to the developer community with this release.

Cross Domain AJAX with CSSHttpRequest

Normally, AJAX is limited to retrieving data from the same domain that served the page. This is a limitation of the XMLHttpRequest object, done mostly for security purposes. There are several ways to get around this limitation, of course, using a variety of methods and techniques. This is one of the stranger ones that I’ve come across – CSSHttpRequest.

Essentially, this small JavaScript library exploits the fact that CSS stylesheets are not subject to the same-origin policy, which enables cross-domain GET requests for .css files. This allows you to pull CSS rules from remote domains – rules that contain name-value pairs embedded in valid CSS styles. The particular technique used is to return name-value pairs in background url fields for fake style rules, as such:

#c0 { background: url(data:,Hello%20World!); }

The client side applies these style rules when they return from the remote domain, and JavaScript then reads the property values and turns them into data (essentially JSON) ready for client consumption.
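On the consuming side, usage looks roughly like this (sketched from memory of the library’s README – the URL is just a placeholder):

// Fetch data cross-domain; the library adds a stylesheet link behind the scenes,
// then decodes the url(data:,...) values back into a response string
CSSHttpRequest.get('http://example.com/data.css', function (response) {
    // response is the decoded payload, ready to be parsed as JSON if desired
    alert(response);
});

No XMLHttpRequest is involved at all – just a dynamically added stylesheet and some polling to detect when its rules have loaded.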

Of course, there’s a reason that the same origin policy is enforced. Normally, resources on the same domain are considered trusted, while external resources are not. With a library like this, essentially, if you’re vulnerable to JavaScript injection, there’s not much you can do to keep someone from embedding CSSHttpRequest on your page then using it to pull content from a remote domain. Nasty trick, but it’s totally possible and even trivial to some extent.

There are certainly legitimate reasons to need cross-domain AJAX. However, I wouldn’t think to use such a technique on a public site – it has the feel of a hack (AJAX isn’t supposed to cross domains, after all), and the potential erosion of public trust is not worth the benefits. As an exercise in an interesting technique, it’s very cool, though.

jQuery UI Image Carousel

I’ve just released my latest plugin, the jQuery UI Image Carousel! The plugin is a standard-fare image carousel, and it’s also fully compatible with jQuery UI. It’s got a few options that make it fairly versatile – from a minimal look all the way to a collapsible version with a custom header.

The JavaScript itself wasn’t that daunting – version 1.0 clocks in at around 180 lines. Most of it is presentation management as well – making sure that the correct jQuery UI classes are applied to the markup, and so forth. In total, it was a bit of a longer exercise – there was a lot of work to ensure compatibility with older versions of IE, for instance. I hope you enjoy it, and if you use it, please take a sec to leave a comment with your page URL.
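If you’re curious what wiring it up might look like, here’s a hypothetical initialization (the selector and option names below are purely illustrative – see the plugin page for the real API):

// Hypothetical options shown for illustration only; check the plugin
// documentation for the actual option names and defaults
$('#vacationPhotos').imageCarousel({
    collapsible: true,          // allow the carousel to be collapsed
    header: 'Vacation Photos'   // custom header text
});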

Disabling Button Clicks The Better Way

Most of the time when you want to cancel an action on a web page, such as a button click, you’d return false in the client-side button event handler, as such:

<a href="blah..." onclick="return DoAction();">Link</a>

...

function DoAction() { /* Do something */ return false; }

However, there’s a better way, using the event.preventDefault() method. I had the need to use this recently, when I was fixing a trivial issue with the Filament Group jQuery UI buttons – my ‘a href’ styled buttons were still clickable when they were disabled. An easy solution to this problem is to capture the click and prevent the default action. I hooked this action to all disabled buttons, as such.

$(".ui-state-disabled").click(function(event) { event.preventDefault(); });

This code isn’t that robust – if you were to add or remove the ui-state-disabled class after page load (which I didn’t have to), you’d want to make sure to handle it appropriately. Either way, preventDefault() is definitely an elegant solution to disabling a click action.
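For the dynamic case, one approach (assuming jQuery 1.7 or later for .on()) is to delegate the handler instead of binding it directly, so buttons that gain or lose the class later are still covered:

// Delegated handler: the selector is evaluated at click time, so it respects
// the element's current classes rather than the ones it had at page load
$(document).on('click', '.ui-state-disabled', function (event) {
    event.preventDefault();
});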

jQuery Image Overlay 1.2

I’ve updated my image overlay plugin yet again, to version 1.2. Again, it was a fairly minor enhancement – I added the ability to turn off the animation via an option. The translucent image overlay effect is popping up all over the web nowadays, and I want to make sure that my plugin offers as much as any other technique for achieving it.

In other development, I’ve been trying to learn how to implement iPhone style gesture controls via JavaScript. I’ve been seriously spinning my wheels on this – my demo is turning out to be a dud, of sorts. It’s frustrating (especially the testing, which I can only do on my iPhone currently), but hopefully it comes together sooner or later.

Determining Geolocation on the Web

Geolocator Tool

Here’s a GeoLocator example I put together that showcases two popular geolocation techniques – IP lookup and the Geolocation API – both to demonstrate the techniques and to show the issues with each.

On the web, there are a few ways of determining the location of a user. Traditionally, this has been accomplished by IP lookup – a query is performed with your IP address against a database of known IP locations. This technique leaves a lot to be desired, but it was all that was available for a very long time.

In HTML5, however, there’s a new Geolocation API. The gist of this feature is that instead of relying on IP address lookup, the browser will instead interrogate the device for this information. This allows devices that have native geolocation hardware (GPS receivers and Wifi antennas, for example) to be able to pass their known location to the browser. Many web-enabled devices of today already provide support for this feature, including the iPhone, the Palm Pre, Android phones, and some netbooks.
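Using the API is straightforward – here’s a minimal sketch with a feature check, so browsers without support can fall back to IP lookup or simply ask the user:

if (navigator.geolocation) {
    navigator.geolocation.getCurrentPosition(
        function (position) {
            // Success: the device reported its coordinates
            var lat = position.coords.latitude;
            var lng = position.coords.longitude;
            alert('You appear to be at ' + lat + ', ' + lng);
        },
        function (error) {
            // The user declined, or the device couldn't determine a position -
            // fall back to IP lookup or a manual prompt here
            alert('Geolocation failed: ' + error.message);
        }
    );
} else {
    // No Geolocation API at all (older browsers): IP lookup is the only option
}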

The problem with all these methods, however, is that none of them just work on all devices. IP lookup works best in the workplace, but it doesn’t work as well for home users and doesn’t work at all for mobile users, as it pretty much requires a stationary device (amongst other things). The Geolocation API is not supported in all browsers yet (with the major notable holdout being all versions of Internet Explorer), and it tends not to work well on devices that don’t have built-in GPS capabilities (which the bulk of desktop and laptop devices still aren’t equipped with).

Worse yet, when location is returned via any technique, it is often inaccurate. In my case, IP lookup and Geolocation lookup provide locations that are over 10 miles away from each other! That’s not a discrepancy that can be resolved in many cases.

In the longer term, the Geolocation API is almost certainly the way to go. When it is fully supported, it is the most accurate method available. Today, though, the landscape is still fractured and implementing accurate geolocation is not trivial on the web.