Archive for the ‘Performance’ Topic

Web Font Performance: Weighing @font-face Options and Alternatives

Feb ‘12 27

Web fonts are a key ingredient in today's website designs; at my employer (AOL), it is a given that redesigns will feature downloadable fonts. The days of maintaining a sprite full of graphic-text headlines are behind us. We've moved on, but which approach yields the best performance?

The goal of this article is to look at the various web font implementation options available, benchmark their performance, and arm you with some useful tips for squeezing the most bang for your font byte. I will even throw in a new font loader as a special bonus!

Font Hosting Services vs. Rolling Your Own

There are two approaches you can take to get licensed, downloadable fonts onto your web pages: font hosting services and do-it-yourself (DIY).

Font hosting services like Typekit, Fonts.com, Fontdeck, etc., provide an easy interface for designers to manage purchased fonts, and they generate a link to a dynamic CSS or JavaScript file that serves up the font. Google even provides this service for free. Typekit is the only service that provides additional font hinting to ensure fonts occupy the same pixels across browsers.

The DIY approach involves purchasing a font licensed for web use, and (optionally) using a tool like FontSquirrel's generator to optimize its file size. Then, a cross-browser implementation of the standard @font-face CSS is used to enable the font(s). This approach ultimately provides the best performance.
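
For reference, a cross-browser @font-face declaration in the style FontSpring popularized looks something like this (a sketch only; the file names are hypothetical and the format list depends on how your kit was generated):

@font-face {
    font-family: 'OpenSans';
    src: url('opensans-webfont.eot');                        /* IE9 compat modes */
    src: url('opensans-webfont.eot?#iefix') format('embedded-opentype'), /* IE6-8 */
         url('opensans-webfont.woff') format('woff'),        /* modern browsers */
         url('opensans-webfont.ttf') format('truetype'),     /* Safari, Android, iOS */
         url('opensans-webfont.svg#OpenSans') format('svg'); /* legacy iOS */
    font-weight: normal;
    font-style: normal;
}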

Both approaches make use of the standard @font-face CSS3 declaration, even when injected via JavaScript. JS font loaders like the one used by Google and Typekit (i.e. WebFont loader) provide CSS classes and callbacks to help manage the "FOUT" that may occur, or response timeouts when downloading the font.

What the FOUT?

FOUT, or “Flash of Unstyled Text,” was coined by Paul Irish and is the brief display of the fallback font before the web font is downloaded and rendered. This can be a jarring user experience, especially if the font style is significantly different.

FOUT of some form exists in all versions of Internet Explorer, and in Firefox 3.6 and lower. Check out the video of my demo below (preferably in full-screen mode) at the 1.6-second mark to see it in action:

You'll notice in Internet Explorer 9, the content is blocked until the image has downloaded. Your guess is as good as mine.

Here are my recommendations for avoiding the FOUT:

  • Host the fonts on a CDN
  • GZIP all font files except .woff (it is already compressed)
  • Cache all font files for 30+ days by adding a far-future Expires header (a sample Apache config for these two tips follows this list)
  • Remove excess glyphs (characters) from the font files
  • Ensure @font-face is the first rule of the first stylesheet on the page (IE)
  • Still have a FOUT? Read on; a JavaScript font loader may be in order.
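
To illustrate the GZIP and caching tips above, here is a minimal Apache sketch (assuming mod_mime, mod_deflate, and mod_expires are enabled; font MIME types vary, so treat these as placeholders):

# Map font extensions to MIME types so the filters below can match them.
AddType application/vnd.ms-fontobject .eot
AddType font/ttf .ttf
AddType font/otf .otf
AddType application/x-font-woff .woff

# GZIP the formats that benefit; .woff is skipped because it is already compressed.
AddOutputFilterByType DEFLATE application/vnd.ms-fontobject font/ttf font/otf

# Cache all font files for 30+ days with a future Expires header.
ExpiresActive On
ExpiresByType application/vnd.ms-fontobject "access plus 1 month"
ExpiresByType font/ttf "access plus 1 month"
ExpiresByType font/otf "access plus 1 month"
ExpiresByType application/x-font-woff "access plus 1 month"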

Removing Excess Font Glyphs

Font Squirrel has an awesome tool that lets you take a desktop font file and generate its web counterparts. It also allows you to take a subset of the font, significantly reducing file size.

To show just how significant, I added Open Sans and tried all three settings:

Setting    Glyphs   Size
Basic      940      66.9 KB
Optimal    239      20.9 KB
Expert     119      13 KB

From the table above, it should be obvious that byte size is directly correlated with the number of glyphs (characters) in the font file.

I suggest you follow along with me at www.fontsquirrel.com/generator!

The Basic setting leaves the characters untouched. Optimal reduces the characters to around 256, the Mac Roman character set. We see the greatest savings by selecting Expert mode and including only the Basic Latin set, then manually adding in the characters we need.

  • Under Rendering, uncheck Fix Vertical Metrics
  • Under Subsetting, check Custom Subsetting...
  • Under Unicode Tables, check only Basic Latin
    Note: This assumes the fonts will only use English characters; for other languages add the characters you need.
  • If you are a typography nerd, copy and paste ‘ ’ “ ” into the Single Characters field
  • Verify your Subset Preview; adjust if needed
  • Under Advanced Options, give your font a suffix based on the subset (i.e. latin)

JavaScript Font Loaders

Typekit and Google joined forces to create an open source WebFont Loader that provides CSS and JavaScript hooks indicating a font's status as it downloads. This can be useful for normalizing the FOUT across browsers by hiding the text and adjusting CSS properties so that both fonts occupy the same width.

The three states it tracks are loading, active, and inactive (timeout). Corresponding CSS classes (wf-loading, wf-active, and wf-inactive) can be used to control the FOUT by first hiding headings and then showing them once the font has downloaded:

h1 {
    visibility: hidden;
}
.wf-active h1 {
    visibility: visible;
}

JavaScript hooks for these same events are also available via callbacks in the configuration object:

WebFontConfig = {
    google: {
        families: [ 'Tangerine', 'Cantarell' ] // Google example
    },
    typekit: {
        id: 'myKitId' // Typekit example
    },
    loading: function() {
        // JavaScript to execute when fonts start loading
    },
    active: function() {
        // JavaScript to execute when fonts become active
    },
    inactive: function() {
        // JavaScript to execute when fonts become inactive (time out)
    }
};

The WebFont Loader also includes fontloading, fontactive, and fontinactive callbacks that are fired each time an individual font updates, giving you control at the font level. For more information, check out the WebFont Loader documentation.

Introducing Boot.getFont, a fast and tiny Web Font Loader

I haven't seen one out there (leave a comment if I missed it), so I wrote a little font loader, getFont, that provides the same hooks as part of my Boot library.

It weighs in at 1.4 KB after GZIP (vs. 6.4 KB for Google's loader and 8.3 KB for Typekit's) and easily fits into your existing library. Simply change the "Boot" string at the end of the file to update the namespace (e.g., to jQuery).

Fonts are loaded via a JavaScript function, and a callback can be supplied that executes once the font has finished rendering.

Boot.getFont("opensans", function(){
    // JavaScript to execute when font is active.
});

Boot.getFont provides similar CSS classes to the WebFont Loader but at a font level, affording precise control:

.wf-opensans-loading {
    /* Styles to apply while font is loading. */
}
.wf-opensans-active {
    /* Styles to apply when font is active. */
}
.wf-opensans-inactive {
    /* Styles to apply if font times out. */
}

You can easily configure it to grab fonts based on your directory structure by loading a configuration object:

// Global
Boot.getFont.option({
    path: "/fonts/{f}/{f}-webfont" // {f} is replaced with the font name
});

// Font-specific
Boot.getFont({ path: "http://mycdn.com/fonts/{f}/{f}-wf" }, "futura" );

I haven’t had time to document all the goods, but the library is available here if you are interested.

Gentlefonts, start your engines!

Now that we are armed with the knowledge needed to ensure fast-loading fonts, let us take a look at the performance of the implementation options.

I set up the following test pages, loading the same web font (Open Sans), spanning DIY and various hosting options at Typekit and Google:

  • System: Our control test; this page does not load any fonts and uses Arial.
  • FontSquirrel Optimal: The FontSquirrel generator’s recommended ‘Optimal’ setting with FontSpring’s cross-browser @font-face declaration. Fonts are hosted on the same server as the web page, as on most small websites.
  • FontSquirrel Expert: Using the tips above to trim font file size with the FontSquirrel generator, I replaced the ‘Optimal’ font kit from the previous test with a minimal ‘Basic Latin’ character set.
  • FontSquirrel Expert (CDN): Same as the above test, however fonts are hosted from a CDN on a different domain.
  • Boot.getFont: This test updated the ‘FontSquirrel Expert’ test to use my Boot.getFont JavaScript library.
  • Boot.getFont (CDN): Same as Boot.getFont test, except font files are hosted from a CDN on a different domain.
  • Google Web Fonts Standard: I chose Google to represent a free font hosting service; since this is a speed test and Google is all about speed, I figured they should be in the race. Google provides three implementation options, this being the default: a <link> element pointing to a dynamic stylesheet that loads the font(s). Note: I left out the ‘Import’ option as its results were nearly identical to the ‘Standard’ option.
  • Google Web Fonts JavaScript: This option includes the WebFont loader discussed above to load the fonts, hosted from Google’s servers.
  • Typekit: Here, I created a kit at Typekit and used the options that provided the smallest font file.

I used webpagetest.org and loaded each test page 10 times in Chrome, Firefox 7, IE7, IE8, and IE9 over a 1.5 Mbps DSL connection. Since we are comparing implementations, I took the fastest run of each test to weed out network latency issues and other sources of variance in the data.

Here is how they stack up, ranked by the fastest time (ms) across browsers:

Fastest Load Times (ms) by Implementation and Browser
Implementation                IE9    IE8    IE7    Firefox   Chrome   Fastest
System                        373    358    370    506       398      358
Boot.getFont (CDN)            692    697    696    652       680      652
FontSquirrel Expert (CDN)     710    697    681    667       681      667
Boot.getFont                  812    698    798    693       704      693
FontSquirrel Expert           822    704    784    802       792      704
Typekit                       798    999    959    795       815      795
FontSquirrel Optimal          997    800    803    933       925      800
Google Web Fonts JavaScript   1096   1097   1126   1254      801      801
Google Web Fonts Standard     896    850    870    1003      899      850

Take some time to digest the data. To better compare implementations across browsers, check out these charts:

IE 9

Font Implementation Benchmarks: Internet Explorer 9

IE 8

Font Implementation Benchmarks: Internet Explorer 8

IE 7

Font Implementation Benchmarks: Internet Explorer 7

Firefox

Font Implementation Benchmarks: Firefox

Chrome

Font Implementation Benchmarks: Chrome

My Observations

The do-it-yourself implementations were consistently the fastest, especially when combined with a CDN. This is simple physics: fewer bytes, fewer requests, and less CPU overhead are required to serve the font.

It is interesting to compare Google Web Fonts (GWF) to Typekit since they use the same core loader, but that is where the similarities end:

Google Web Fonts in Firefox (1254ms): JS » CSS » Font

Typekit in Firefox (795ms): JS » CSS Data URIs

In browsers that support them, Typekit uses Data URIs in the CSS to load the font, whereas GWF first loads the JS, then the CSS, and finally the font. In IE 8 and lower, where Data URIs are not supported, Typekit falls back to the sequential approach, ending up with slower load times in those browsers.
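
Inlining the font as a Data URI trades the extra HTTP request for a larger stylesheet, so the font bytes arrive with the CSS. A minimal sketch (the MIME type is illustrative and the base64 payload is truncated and hypothetical):

@font-face {
    font-family: 'OpenSans';
    src: url('data:application/x-font-woff;base64,d09GRgABAAAAAEw...') format('woff');
}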

Google is also slower because of their multiple DNS lookups; Typekit rightly uses one domain for all assets.

I was impressed by the performance of Boot.getFont, which ended up being faster (sometimes by a hair, sometimes more) than the standard @font-face CSS in all cases. My hypothesis is that somehow the JS triggers a reflow/repaint that forces the fonts to download sooner in all browsers.

Final Thoughts

While this article could probably be split into several, I wanted a single place to document implementation choices, tips for optimizing them, and have some reference benchmarks. If other font providers want to hook me up with a free account (and host Open Sans, for consistency), I’d be happy to include them in another study at another time.

I was again disappointed to see Google turn out another slow service. Google friends, take some notes from Typekit!

I am looking forward to hearing your thoughts and observations on this experiment, and to your recommendations for speeding up web fonts. Thanks for reading!

§

Google’s Button is Slow…And so is Facebook’s.

Jun ‘11 2

There, I said it!

Yes, Google, the company that built speed into its core values, has an entire website dedicated to making the web faster, developed PageSpeed, invents protocols named “SPDY”, and makes badass videos showing how quickly its browser loads web pages from the local disk, has its own take on the Like Button.

And it’s slow. Not only is it slow, it is slower than Facebook’s Like Button, which I didn’t think was possible.

Check out the painful WebPageTest results here:

Let’s assume it’s the first time we get to experience the joy of these buttons (clear cache):

           Google +1   Facebook Like
Load Time  2.2 sec.    1.8 sec.
Bytes      66 KB       92 KB
Requests   8           9

2 seconds to render a button? A button. Really.

It’s okay: +1 will soon be everywhere, so it’s sure to be cached super-well, right? No. Repeat view (cached) results:

           Google +1   Facebook Like
Load Time  1.8 sec.    0.8 sec.
Bytes      25 KB       4 KB
Requests   4           1

The worst part is that this button will almost certainly impact SEO ranking in Google, making it essential for most websites. And speed is also a ranking factor. I’m confused.

Facebook’s Like button is also required for maximizing traffic. Let’s see what they both look like — together:

Our bare minimum hope for first impression load time is 2.5 seconds. According to Google, entire pages should load this fast!

Google, Facebook: I don’t think you need me to make recommendations on how to fix it. I know you can do it – please make it a priority.

Please also provide an asynchronous JS snippet as a recommended option in your instructions like you did with Google Analytics.
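
For reference, an asynchronous snippet in the spirit of the Google Analytics one might look like this (a sketch; the button script URL is hypothetical):

<script type="text/javascript">
(function() {
	// Create the script element and mark it async so it never blocks rendering.
	var s = document.createElement('script');
	s.src = 'http://apis.example.com/js/plusone.js';
	s.async = true;
	// Insert it before the first script on the page.
	var first = document.getElementsByTagName('script')[0];
	first.parentNode.insertBefore(s, first);
})();
</script>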

You hurt me today, Google, and you’re hurting the web. I thought you and I were like THIS:


§

Foreground <img> Sprites – High Contrast Mode Optimization

Apr ‘10 20

An issue that has strained the relationship between web performance and accessibility is the little-known fact that CSS image sprites, a technique used to reduce image HTTP requests, disappear in Microsoft Windows’ High Contrast mode. This is because they are typically created using the background-image CSS property.

To demonstrate this issue, let’s take a look at some popular websites in High Contrast mode.

In Google Video, the next and previous buttons disappear:

screen shot in High Contrast mode showing the arrows disappearing in Google Video

In Yahoo Finance, the navigational tabs and buttons disappear:

screen shot of Yahoo Finance tabs and buttons going away in High Contrast mode

On sites like Facebook, Amazon, and AOL Music, logos vanish into thin air…er, screen:

Facebook logo disappearing in High Contrast mode

Amazon logo disappearing in High Contrast mode; AOL Music logo disappearing in High Contrast mode

Popular content sharing service AddThis also incorporates CSS sprites for its toolbox sharing buttons:

addthis screen shot

It is great that more sites are using sprites to deliver a faster user experience; however, we need to recognize (myself included) that we are damaging the experience for High Contrast users.

Introducing <img> Sprites

While noodling over a new design for AOL.com that featured graphical headers using our new corporate identity font, I decided to prototype something I had thought about a couple years ago but never got around to doing.

Since <img> elements show up in High Contrast mode, why not try to crop the image to show what we want?

Our example HTML for graphic headers in this case looks like this:

<h2 class="popular"><img src="img-sprite.png" alt="" />Featured</h2>
<h2 class="featured"><img src="img-sprite.png" alt="" />Popular</h2>

We set the alt attribute to "" so screen readers skip over it. We include the "Featured" text so search engines have an understanding of what this section is about (more powerful than alt text).

The following CSS is then applied to crop the parts of the image we want:

h2 {
	overflow: hidden;
	position: relative;
	height: 50px;
	width: 200px;
}
h2 img {
	position: relative;
}
h2.popular img {
	top: -100px;
}
h2.featured img {
	top: -200px;
}

Simply set the height (and width if needed) on the outer container (in this case, the <h2>) to the size of the image you want to crop, and play around with top (and left if needed) to move the image into place.
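
For instance, if the sprite also tiled images horizontally, a hypothetical third heading could crop on both axes:

h2.sponsored img {
	top: -300px;  /* third image down the sprite */
	left: -100px; /* second image across the sprite */
}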

Verifying Your Implementation

screen shot of the High Contrast accessibility panel

To enable High Contrast mode in Windows:

  1. Start Menu… Control Panel
  2. Open Accessibility Options
  3. Click on the Display tab
  4. Check the High Contrast checkbox
  5. Click Apply to see the effect.

Or…

  1. Press Alt + Shift + Print Screen

<img> Sprite Working Demos

Check out our CSS Sprites Demo, and then turn on High Contrast mode. Then, visit the <img> Sprite page to see the difference.

Known Limitations

For image cropping to work, it must be inside a block element or an inline element with the CSS property display: block.

Chris Blouch, AOL’s resident accessibility expert, tested this technique on various HTML elements and found that we cannot crop <img> elements inside the following elements:

  • Fail to crop: <fieldset>, <legend>, <input>, <button>, <table>, <tr>, <td>, <th>

All other tags should work; please leave a comment if you find otherwise.

This solution has been tested to work in IE6+, Firefox 3.5+, Chrome, and Safari 4+, and it is expected to work in all future browsers.

Detecting High Contrast Mode

Chris Blouch also created a High Contrast detector as part of the AXS Accessibility JavaScript Library. It should come in handy if you are really having trouble getting your site to look good in High Contrast mode (a rough sketch of the general idea follows the link below).

  • http://dev.aol.com/downloads/axs1.2/readme.html#hd
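
As a rough sketch of the general idea (an assumption on my part, not the AXS library’s actual code): apply a background image to a test element and check whether the browser discarded it, as High Contrast mode does.

function isHighContrast() {
	var el = document.createElement("div");
	// Any small real image will do; this URL is hypothetical.
	el.style.backgroundImage = "url(/img/1x1.gif)";
	document.body.appendChild(el);
	var bg = el.currentStyle ? el.currentStyle.backgroundImage :
		document.defaultView.getComputedStyle(el, null).backgroundImage;
	document.body.removeChild(el);
	// High Contrast mode strips background images, leaving "none".
	return bg === "none";
}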

More Information on High Contrast Mode

This video gives a nice overview of the challenges facing people with a low vision disability. At 19:38 the host goes through some of the accessibility tools available in Windows like High Contrast mode:

Bonus! Printable Image Sprites!

A couple of commenters pointed out that background images don’t print by default, and this technique solves that. Here’s a print preview of my demo pages in Firefox as evidence.

CSS Sprites Printed

screen shot of css sprite printed, images not showing up

<img> Sprites Printed

screen shot of img sprite printed, images showing up

I guess I shouldn’t assume it looks good in IE too; let me know if it doesn’t.

Further Reading

I wanted to call out Thierry Koblentz, who kindly informed me (see comments) that he wrote about this exact technique, save for me going the relative positioning route. Turns out I’m not as original as I thought. Nice job, Thierry.

§

jQuery Performance Rules

Apr ‘09 8

Once upon a time, all we needed to worry about was reducing bytes and requests and playing around with load order to make things faster. Nowadays, we are increasingly impacting one more major component of performance: CPU utilization. Using jQuery and other frameworks that make selecting nodes and DOM manipulation easy can have adverse effects if you’re not careful and don’t follow some simple practices for reducing the work the browser has to do.

  1. Always Descend From an #id (quick example after this list)
  2. Use Tags Before Classes
  3. Cache jQuery Objects
  4. Harness the Power of Chaining
  5. Use Sub-queries
  6. Limit Direct DOM Manipulation
  7. Leverage Event Delegation (a.k.a. Bubbling)
  8. Eliminate Query Waste
  9. Defer to $(window).load
  10. Compress Your JS
  11. Learn the Library
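
As a quick taste of the first rule (markup names are hypothetical), descending from an #id lets jQuery jump straight to the container via getElementById instead of scanning the whole document:

// Slow: jQuery must examine the entire document for the class.
$(".list-item").hide();

// Fast: find #my-list via getElementById, then search only inside it.
$("#my-list .list-item").hide();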

(more…)

§

Bulk Image Compression with Photoshop Droplets

Aug ‘08 10

I recently exported a bunch of photos from iPhoto for an article I am working on, and discovered there was very little compression applied. Even at a lower 640 by 480 dimension size, the 30 images totaled 5.4 MB in size!

I needed a way to quickly compress these, and then I remembered Photoshop’s ability to create Droplets. A Droplet is an icon created by Photoshop that launches Actions on files that you drag on top of it. The resulting file is then saved in a folder of your choice.

This allows me to drag all 30 images on to the Droplet, and have Photoshop compress the entire batch automatically.

For those of you that learn by watching, I created a 5 minute screencast showing how it’s done.

For those of you that learn by reading, read on!

Step 1: Open a Test Image

In Photoshop, open up any image. The image itself is not important; you simply want something you can record your actions on.

Step 2: Record a New Action

In Photoshop, open the Window…Actions panel.

actions panel

Click the Create new action button.

create new action in actions panel

Give the Action a name descriptive of what it does. We will name ours JPEG50, because this Action will save out a JPEG at 50 Quality.

Click Record.

Step 3: Save For Web

Your Action is now recording, so be careful from here on out!

Tip: If you wanted to resize the image, or apply Filters before saving, you could do that and Photoshop will record these steps!

file save for web

Click File…Save for Web & Devices.

save for web jpeg settings

Set the compression type to JPEG, and choose a Quality setting you like. We recommend 50 Quality for the optimum balance of visual quality and file size.

Unless these files will be used in Flash, always use the Progressive option. It enables your JPEGs to render progressively in your user’s browser.

Click Save.

Step 4: Choose a Location for Compressed Images

You will need to create a folder for the compressed images, so that when the Action or Droplet runs, you know where the resulting files go.

compressed images save folder

We will create a new folder on our Desktop called Compressed JPEG 50 Progressive, a name descriptive enough that we know what it’s for.

Create the folder and click Save.

Step 5: Stop Recording Actions

stop recording the action

Click the Stop button on the Actions panel to stop recording.

Step 6: Create a Droplet

create droplet menu

To create your Droplet, click File…Automate…Create Droplet…

save droplet in a new folder

Choose a location to save your droplet that is easy to get to, like the Desktop.

choose action for droplet

Choose the Set and Action you just created for the Droplet.

Ensure Suppress File Open Options Dialogs and Suppress Color Profile Warnings are both checked.

When finished, click OK.

Step 7: Try it!

drag files on to the droplet

Drag the images you want to compress on to the Droplet.

If all goes well, the resulting optimized JPEGs will be in the Compressed folder you created in Step 4.

Step 8: Review Results

Let’s check out the before and after in terms of quality and file size.

Before at 234 KB:

photo before optimization

After at 84 KB:

optimized photo at 50 quality

We had a savings of 150 KB, 64% of the original size! The quality is also quite good.

If you happen to be viewing this page in Safari, you will notice that the colors are different than the original. This is because Safari supports color management, and we should address this.

For you non-Safari users, here is an image showing the original (top) against the optimized (bottom) version:

grass needing color correction

Notice how the grass in the original is much richer than the optimized version. See what you are missing out on? This is because I don’t have Photoshop configured to automatically convert Color Profile mismatches to the Working Space.

Color Correction in Photoshop

To fix this, go to Edit…Color Settings.

color management in photoshop

When working on the web (RGB), you always want to use your Monitor’s profile to ensure your images look the same across browsers. In my case, it is the Color LCD profile.

Under Color Management Policies, ensure RGB is set to Convert to Working RGB and all checkboxes are off. This way you won’t be bothered again.

Finally, run your images through the Droplet again. The colors should more closely match the originals now.

After Color Correction (84 KB):

after color correction

Much better, as it was meant to be seen. Good thing we checked for quality!

Image Compression Impact on Page Load Times

Altogether, we were able to quickly optimize 30 images from 5.4 MB to 1.9 MB, a savings of 3.5 MB or 65%. Let’s see how this plays out in page load times.

I created two test pages, one with our original photos and one with our optimized photos, and ran them through Pagetest to see the difference.

Original Photos – Speed Test Results

  • Average Load Time: 33 seconds
  • Bytes In: 5439 KB

Optimized Photos – Speed Test Results

  • Average Load Time: 13 seconds
  • Bytes In: 1865 KB

Savings

  • Load Time: 20 seconds (60%)
  • Bytes In: 3574 KB (66%)

The results are in amigo – the optimized images loaded 20 seconds faster!

Final Thoughts

Droplets can be a nice way of getting your optimization work done quickly, but at the cost of missing opportunities where you might save even more bytes by saving at a lower Quality, or the reverse, compromising quality for the sake of fewer bytes. Always experiment and push to find a balance between low KB and image quality.

Did you know that Pagetest has an image compression check? It tests all JPEGs to see if they are saved at the equivalent of 50% Quality in Photoshop. Use the Pagetest Optimization Report (sample of our test here) to help you spot areas of your site where you might need to share our JPEG 50 Droplet with those responsible for the heavy images.

By loading images faster, you are helping your users consume them faster and thus giving them more reason to stay around.

§

Using mod_concat to Speed Up Start Render Times

Aug ‘08 1

The most critical part of a page’s load time is the time before rendering starts. During this time, users may be tempted to bail or try a different search result. For this reason, it is critical to optimize the <head> of your HTML for maximum performance, as nothing will be visible until the objects inside it finish loading.

One easy way to speed up rendering during this crucial time is to combine your CSS and JavaScript, saving the performance tax associated with every outbound request. While easy in theory, in practice this can be difficult, especially for large organizations.

For example, say your ad provider wants you to include their script in a separate file so they can make updates whenever they choose. So much for combining it into your site’s global JS to reduce the request, eh?

mod_concat makes combining shared libraries easy by providing a way to dynamically concatenate many files into one.

See mod_concat in Action

We created a couple test pages to show the benefits here. In our first example without mod_concat, we see a typical large scale website with many shared CSS and JavaScript files loaded in the <head> of the HTML. There are scripts for shared widgets (two of them video players), ad code, and more that typically plague many major web sites.

You can check out the Pagetest results here, and check out the time to start render (green bar):

pagetest waterfall with mod concat disabled

In the test page, we have 12 JavaScript files and 2 CSS files, a total of 14 HTTP requests in the <head>. I have seen worse. The green vertical bar is our Start Render time, or the time it took for the user to see something, at 4 seconds!

We can see that each object’s time is dominated by the green portion, the time to first byte. This tax is paid by every request, simply for existing! The way to avoid it is to combine those files into one larger file. Page weight (bytes) stays the same, but requests are reduced significantly.

Let’s take a look at our Pagetest results of a second example with mod_concat enabled.

pagetest waterfall of music page with modconcat enables

Notice the number of requests went from 14 to 5, and we saved 1.5 seconds! We probably could have made an even faster example by moving to just 2 requests (one for CSS and one for JS), but the speed win here is clear.

How mod_concat Works

mod_concat is a module for Apache built by Ian Holsman, my manager at AOL and a contributor to Apache. In the mod_concat documentation, Ian credits David Davis, who did this while working at Vox, and perlbal.

The idea is straightforward, and you can pretty much figure out how it works by viewing the source code of our second example:

<link rel="stylesheet" type="text/css" media="screen" ←
	href="http://lemon.holsman.net:8001/cdn/??music2.css,common.css" />
<script type="text/javascript"  ←
	src="http://lemon.holsman.net:8001/cdn/??music2.js,mp.js,dalai_llama.js,ratings_widget.js,widget_config.js,common.js"></script>
<script language="javascript" type="text/javascript" ←
	src="http://tangerine.holsman.net:8001/o/??journals_blog_this.js,adsWrapper.js,flashtag.js,feeds_subscribe.js"></script>
<script type="text/javascript"  ←
	src="http://orange.holsman.net:8001/digital/??dm_client_aol.js,cannae.js"></script>

You can see in the code above that a single request references multiple files, and the server returns the concatenated version. The URL takes the following format:

http://www.yourdomain.com/optional/path/??filename1.js,directory/filename2.js,filename3.js

Let’s break it down.

http://www.yourdomain.com/

The first bit should be straightforward: it’s the host name.

http://www.yourdomain.com/optional/path/

Next comes the optional path to the files. This is important because, once included, you can’t concatenate files above this directory. In exchange, it saves you from repeating the same path for every file below this directory.

http://www.yourdomain.com/optional/path/??

The ?? then triggers the magic for the files that come next. It’s a special signal to Apache that it’s time to combine files!

http://www.yourdomain.com/optional/path/??filename1.js,

If the file is in the current directory, you can simply include it next, followed by a comma “,”.

http://www.yourdomain.com/optional/path/??filename1.js,directory/filename2.js,

If you need to go a bit further in the directory hierarchy, you can do that too.

http://www.yourdomain.com/optional/path/??filename1.js,directory/filename2.js,filename3.js

You can include as many files as you wish as long as they fall within the same server directory path defined early on in your optional/path/.

Performance and Caching Considerations

mod_concat uses the Last-Modified date of the most recently modified file when it generates the concatenated version. It should honor any max-age or Expires cache control headers you set for the path in your server or .htaccess configuration.

If you serve a far-future Expires or max-age header, you bust the cache by renaming one of the file or directory names in the string; the user then downloads the entire concatenated version again.
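
For example (hypothetical paths), bumping a version directory in the URL is enough to force the fresh download:

<!-- Before: cached with a far-future Expires header -->
<script src="http://cdn.example.com/js/v1/??common.js,widgets.js"></script>
<!-- After: renaming the directory busts the cache -->
<script src="http://cdn.example.com/js/v2/??common.js,widgets.js"></script>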

Because mod_concat is an Apache module, the concatenation overhead is near zero. Performance improves further still if the server happens to be an origin for a CDN, as the combined file gets cached on the edge like an ordinary text file for as long as you tell it to, rarely hitting your servers.

Same Idea, Different Platforms

For regular folks like myself who can’t install Apache modules with their hosting provider (cough, Lunarpages, cough), mod_concat is not an option. The idea of concatenating JavaScript and CSS has been implemented on other platforms, and I will briefly call out those I found in my brief Googling; feel free to list more that you know of.

Rakaz’s PHP Combine Solution

Niels Leenheer of rakaz.nl has a nice solution for PHP. Niels writes:

Take for example the following URLs:

  • http://www.creatype.nl/javascript/prototype.js
  • http://www.creatype.nl/javascript/builder.js
  • http://www.creatype.nl/javascript/effects.js
  • http://www.creatype.nl/javascript/dragdrop.js
  • http://www.creatype.nl/javascript/slider.js

You can combine all these files to a single file by simply changing the URL to:

  • http://www.creatype.nl/javascript/prototype.js,builder.js,effects.js,dragdrop.js,slider.js

Niels takes advantage of Apache’s rewrite rules to make the combine PHP script transparent to the template designer:

RewriteEngine On
RewriteBase /
RewriteRule ^css/(.*\.css) /combine.php?type=css&files=$1
RewriteRule ^javascript/(.*\.js) /combine.php?type=javascript&files=$1

This is nice because it keeps the PHP script and HTML template separate from each other, just like mod_concat.

Ed Elliot’s PHP Combine Solution

Ed’s solution for combining CSS and JavaScript is less flexible from a front-end template designer’s perspective, as you’ll need to touch PHP code to update the files being merged together. However, the advantages I see to his take on the problem are:

  • He masks the actual file names being combined, and
  • A new version number is generated automatically to bust the cache

For folks who don’t mind digging into PHP, the above benefits may be worth the effort. I especially like the cache busting, as it allows me to set a far-future Expires header without worrying about whether my users will get the update.

PHPSpeedy

Finally among the PHP scripts I found is PHPSpeedy. Also available as a plug-in for WordPress, PHPSpeedy appears to get the job done like the others, with the added benefit of automatic minification.

This might be useful for some folks, but I’m the obfuscator type and promote that for production build processes. I’d love to see a safe obfuscator like YUI Compressor written in C so we could turn it into an Apache module.

Lighttpd and mod_magnet

For users of Lighttpd, mod_magnet can be used to do the concatenation. It appears similar in nature to Rakaz’s solution, though I will leave it to you to dig in further as it seems fairly involved. This blog post by Christian Winther should help get you started.

ASP.Net Combiner Control

Cozi has developed an ASP.NET control to combine multiple JS and CSS files into a single file, and it includes a cool versioning feature much like Ed Elliot’s script. It’s very easy to use; you simply wrap the scripts with the control tag in the template:

<WebClientCode:CombinerControl ID="CombineScript" runat="server">
	<script src="script/third-party/jquery.js" type="text/javascript"></script>
	<script src="script/third-party/sifr.js" type="text/javascript"></script>
	<script src="script/third-party/soundmanager.js" type="text/javascript"></script>
	<script src="script/cozi_date.js" type="text/javascript"></script>
</WebClientCode:CombinerControl>

It then outputs the following code at runtime:

<script src="../Combiner/Combiner.ashx?ext=js ←
	&ver=59169b00 ←
	&type=text%2fjavascript ←
	&files=!script'third-party*jquery*sifr*soundmanager*!script*cozi_date*" ←
	type="text/javascript"></script>

The only problem I see with their approach is that since the output file has query parameters, Safari and Opera won’t honor cache control headers, as they assume it is a dynamic file. This is why simply adding ?ver=123 to bust the cache is not a good idea for those browsers.

Java JSP Taglib – pack:tag

Daniel Galán y Martins developed a combine solution for Java called pack:tag. It follows in the spirit of PHPSpeedy and provides additional optimizations such as minification, GZIP, and caching.

It’s not obvious from the documentation what the output of the combined script looks like, but a flow graphic seems to show a version number included, which would be cool.

The code to do the combination goes right in the JSP template, and looks like this:

<pack:script>
<src>/js/validation.js</src>
<src>/js/tracking.js</src>
<src>/js/edges.js</src>
</pack:script>

CSS can be combined too. The syntax appears to be quite flexible:

<pack:style>
<src>/main.css</src>
<src>../logout/logout.css</src>
<src>/css/**</src>
<src>http://www.example.com/css/browserfixes.css</src>
<src>/WEB-INF/css/hidden.css</src>
</pack:style>

As you can see, this idea has been implemented in many languages, some with additional innovations worth considering. So if you can’t leverage mod_concat, at least use something similar; the benefits are well worth it.

Final Thoughts

mod_concat is a performant, high-scale way to serve concatenated files while maintaining them separately. While it lacks automatic versioning (Ian, can we do this?), it provides a clean way to dynamically merge JS and CSS without touching a bit of server-side code, and it works no matter what language your application is written in.

One feature I’d like to see added is a debug mode. For example, if the combined code throws an error, the reported line number may not make it apparent which file is having issues. Perhaps each filename could be included in a comment at the start of its section.

Remember, improving the time to start rendering the page is critical, and you should focus on this first. With tools like mod_concat and the others mentioned here, there is little excuse not to build this into your routine. Little pain, a lot to gain.

§

PNG Alpha Transparency – No Clear Winner

Jul ‘08 25

As a long time user of Adobe Photoshop, I missed the boat on a very important discovery in image optimization – PNG-8 supports full alpha transparency!

Alex Walker wrote a great article on PNG and included a nice example on creating PNG-8 images with a full alpha transparency layer with Fireworks – yes, Fireworks. Stoyan Stefanov points out this ability in his image optimization mistakes presentation as well. Thanks to you both for enlightening me!

Before Stoyan and Alex, I, like probably thousands of other Photoshop users, believed (or still believe) that PNG-8 is identical to GIF, i.e., an all-or-nothing scenario when it comes to transparent pixels. In Photoshop, we are left with the usually bloated, heavy PNG-24 format that I typically steer folks away from.

However, in applying PNG-8 to my favorite PNG transparency techniques, I came to a different conclusion than Alex and Stoyan. This article shows there is no silver bullet when it comes to saving out PNGs (are you listening, Adobe?).

PNG Transparency Text Effects

One cool technique we can pull off with alpha PNGs is text effects, as detailed here by Nick La. The technique involves layering an empty element containing the horizontally tiled background gradient over system text.

Using this CSS and HTML, we can pull off the desired effect:

<style type="text/css">
.glossy-text
{
	font: 45px 'arial rounded mt bold';
	margin: 0;
	position: relative;
	color: #f30;
}

.glossy-text b
{
	background: url(glossy-text-photoshop.png) repeat-x;
	position: absolute;
	width: 100%;
	height: 27px;
	top: 4px;
	display: block;
	_background: none;
	_filter: progid:DXImageTransform.Microsoft.AlphaImageLoader(src='glossy-text-photoshop.png', sizingMethod='scale');
}
</style>

<div class="example">
<h2 class="glossy-text"><b></b>PNG Can Overlay Text</h2>
</div>

Here are the results…

PNG Can Overlay Text

And another using the same image, with blue text…

One Image for Every Heading!

The optimization win here is clear – use only 1 image across multiple headers to pull off a polished design for headings!

The PNG Image

I created the PNG image in Photoshop using a Gradient Fill layer. This gives us a great deal of control over the gradient and the level of transparency to apply at each point. We can see how much transparency is applied by looking at the shade of the Opacity Stops: white is 0% Opacity (invisible), while black is 100% Opacity, or fully visible.

Photoshop Gradient Fill

Now, we will head on over to our trustworthy Save For Web tool, and notice how our PNG-8 doesn’t support full alpha, as usual.

Save For Web previews: PNG-8 vs. PNG-24

I will save out the PNG-24 version, which comes out to 156 bytes, not too bad at all. Let’s see if Fireworks and its PNG-8 format can do better.

Now, if you are new to Fireworks (like me), the workflow is a bit different than Photoshop. Let’s start by opening up our PNG-24 image saved out of Photoshop, and switching to the export Preview view.

Fireworks Export Preview View

The Export Preview is essentially the same thing as Photoshop’s Save for Web tool. Look to the right in the graphic above at the Optimize and Align panel; those are the settings in use. Let’s update that to PNG-8 with the Alpha Transparency option.

Fireworks Optimize Align Png

To export the image, we could go to File…Export, but we can also see the expected size in the lower-left corner of the panel: 248 bytes. After exporting, we see it was actually 238 bytes. (Adobe, why can’t this be completely accurate?)

Photoshop PNG-24             156 bytes
Fireworks PNG-8              238 bytes
Fireworks PNG-8 (dithered)   278 bytes

Now, this gives me pause, because the PNG-24 I saved out of Photoshop was a mere 156 bytes, 37% smaller! You can also clearly see that the Fireworks image is banding, which I would not expect on such a low-color image. I also tried dithering it, but the file got larger and the pattern was still noticeable.

It would seem that for this design purpose, the glossy text overlay, Photoshop’s PNG-24 is the better choice. My luck indeed!

Gradient Header Backgrounds

Similar in design to the text overlay image, the same finding is bound to hold true for gradient background techniques, right? Read on…

I’m going to create another Gradient Fill layer, and overlay a fade to white, and a fade to black over a layer filled with red. Notice I added a Color Stop in the center to ensure one side is white, and the other is black.

Glossy Background Gradient Fill

Just to be fancy, I will go ahead and add some fully opaque rounded corners using our selection tool. First, I’ll make a circle selection with a 10px diameter, giving us a 5px corner radius.

circle selection at the top left corner of the header

Next, we’ll add to the selection along the top and left hand sides of the circle selection.

same thing on the left hand side

Next, we’ll need to use Select…Inverse to flip the selection so we can fill it in.

menu select inverse

Using our pencil tool with at least a 5px radius, we fill in our rounded corner with white.

filled in with white

Notice that it also nicely anti-aliases against the layer below. For the other side, we’ll simply make a selection around it, and copy it over to the other side of the header.

copying the rounded corner

Move it over to the other side and do Edit…Transform…Flip Horizontal.

menu flip horizontal

And finally, position it in the right spot.

position the right corner

It is time to save out our PNG. Let’s go ahead and disable our layers and crop the image.

before diabling layers

after disabling the layers

cropped

And now for the PNG-24 vs. PNG-8 test.

Photoshop PNG-24             399 bytes
Fireworks PNG-8              397 bytes
Fireworks PNG-8 (dithered)   411 bytes

The tables have turned: PNG-8 wins by 2 bytes! However, notice again there is banding going on, which is bothersome for such a low-color image. I played with every setting I could find in Fireworks to no avail, and the dithered version is larger and looks worse, so again I will have to hand this one to Photoshop’s PNG-24.

ImageOptim, a GUI PNG Tool

And then I thought about messing around with some programs Alex mentioned in his article: PNGQuant and PNGNQ. These take 32-bit or 24-bit PNG images and "quantize" them down to 8-bit, or PNG-8. What sucks is that the tools are command line; PNGQuant has a GUI version for Windows, but it is difficult to install and doesn’t help OS X fans like myself and many other designers.

I couldn’t get either of these to work on OS X, because I am chobo and didn’t want to spend more than the 30 minutes I did trying to compile the source code.
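
For those who do brave the command line, typical PNGQuant usage looks something like this (based on its documentation; flags and output names may vary by version):

# Quantize a 24-bit PNG down to a 256-color PNG-8, preserving alpha.
# The result is written alongside the original as image-fs8.png.
pngquant 256 image.png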

In my Googling for GUIs, I discovered ImageOptim. Now, I have no clue what language the developer speaks, or what the tool does exactly, but if you want to help me translate for my readers, be my guest:

screen shot of imageoptim homepage with non english text

If I had to guess, it appears to try various PNG algorithms until it gets one that compresses the best. The tool is very user-friendly, and would fit nicely into any process as you simply drag and drop your files into its window, and it takes care of the rest.

To see how we fare, let’s take our full quality Photoshop PNG-24, and drop it in.

dragging png24 into imageoptim window

Voila! ImageOptim crunched our PNG-24 down to 355 bytes, a savings of 11%. Recall this is also smaller than our Fireworks PNG-8 (397 bytes).

imageoptim results window savings of 11%

The resulting file was smaller and visually identical to the original:

Photoshop PNG-24   399 bytes
Fireworks PNG-8    397 bytes
ImageOptim PNG     355 bytes

Let’s go back and see if we can save anything from our Glossy Text image.

glossy text results showing no gain

Looks like we didn’t gain anything there. Oh well.

My one beef with ImageOptim is that I have no clue what it did. Did it throw away information? Which program did it use: OptiPNG, PNGCrush, AdvPNG? And why is their logo a man getting impaled by credit cards?

imageoptim logo

Okay, with that we’ll use the ImageOptim version of the PNG to complete our design, along with the following CSS and HTML, for those interested.

<style type="text/css">
div.glossybg
{
	width: 250px;
	font-family: verdana;
	margin-bottom: 1em;
}
div.wide
{
	width:500px;
}
div.glossybg h2
{
	color: #fff;
	height: 32px;
	font: 18px/30px verdana;
	margin: 0;
	padding-left: 12px;
	background: #f30 url(glossy-background-imageoptim.png) repeat-x;
	text-align: center;
}
div.glossybg h2 b
{
	display: block;
	background: url(glossy-background-imageoptim.png) top right; /* Tricky bit! */
	background-color: #f30;
	padding-right: 12px;
	font-weight: normal;
}
div.glossybg h2.cold, div.glossybg h2.cold b
{
	background-color: #0066b3;
}
div.glossybg p
{
	border: 2px solid #ccc;
	border-width: 0 2px 2px;
	margin: 0;
	padding: 10px;
}
</style>

<div class="glossybg">
<h2><b>A Red Header</b></h2>
<p>Some text inside the skinny box.</p>
</div>
<div class="glossybg wide">
<h2 class="cold"><b>A Blue Header</b></h2>
<p>Some text inside the fat box.</p>
</div>

A Red Header

Some text inside the skinny box.

A Blue Header

Some text inside the fat box.

Notice the tricky bit of CSS indicated above. We are layering the image in the <b> element of the header to pull off a rounded corner on the right side, allowing us to stretch the image to various widths, a favorite technique of mine.

Transparent Image Overlays

For my final experiment, I will create a banner header with a logo overlay, to demonstrate a more complex application of PNG.

We’re going to create a website banner for a fan site of a well known American politician. I downloaded some free artwork from his campaign website.

For our transparent overlay, I will need to cut him out of his poster and copy and paste him on a black background.

obama cut out

Then, we switch our document to Lab color mode.

lab color mode switch

This gives us a nice Lightness channel, which we can use to create our overlay. In the Channels panel, select the Lightness channel; we’ll then make a selection from it using Command + Click (Ctrl + Click on Windows).

select lightness channel and make a selection from it

This selects the light areas of the image, including their transparency information. Visually, you will see anything greater than 50% white with a marquee around it. If we wanted the dark pixels instead, we could simply invert the selection.

This is also why we placed him on a black background: to maintain the outline (transparent pixels are counted as white).

Now that I have my selection, I am going to switch back to RGB mode, create a new layer, and fill the selection in with white. I disabled the color layer to show the end result.

filled in the selection with white

We now have a layer with white transparency information in the shape of our political figure. Disable the black background and save it out of Photoshop as PNG-24, then export it through Fireworks and ImageOptim as outlined above.

Photoshop PNG-24             16764 bytes
Fireworks PNG-8              4542 bytes (73% smaller)
Fireworks PNG-8 (dithered)   5265 bytes (69% smaller)

ImageOptim was unable to gain any savings, so I didn’t include it. It seems that ImageOptim doesn’t include a quantizer that reduces the color palette like PNGQuant and PNGNQ do, which is what we really want here.

But I think we are finally getting somewhere on the Fireworks front. Our Fireworks PNG-8 was 73% smaller than our original PNG-24, though that banding is back (see his shoulder).

I exported another image out of Fireworks with a 100% dither, and think it looks much better. While a tad larger, I would recommend going with the dithered Fireworks PNG-8 image.

Let’s see how our finished product looks.

And if the boss said to make the background more patriotic, we can do so without affecting our transparent image.

What a catchy campaign slogan!

The Drop Shadow

I almost forgot the drop shadow! Well, I did forget it; I am editing this post just after publishing it. Here is how a two-color logo fared with a drop shadow.

Photoshop PNG-24             25232 bytes
Fireworks PNG-8              7150 bytes (72% smaller)
Fireworks PNG-8 (dithered)   8615 bytes (66% smaller)
ImageOptim                   23255 bytes (8% smaller)

In this one, my vote goes to the dithered Fireworks PNG-8, with a whopping savings of 66% and a decent-looking shadow.

Let’s add a gradient to the logo, and see how that looks.

Photoshop PNG-24             42264 bytes
Fireworks PNG-8              11049 bytes (74% smaller)
Fireworks PNG-8 (dithered)   13909 bytes (67% smaller)

At first glance, the dithered version would be my choice. However, if you look closely enough, you see some odd dark specks that just don’t belong. I tried to get rid of them in Fireworks, but my skills there are lacking; in this situation I would probably modify the source image to get the result I wanted. Half a point for effort, Fireworks.

PNG8 Graceful Degradation in IE 6

The final thing I’d like to echo from Stoyan and Alex about PNG-8 is how gracefully it degrades in IE6.

IE 6 vs. IE 7: PNG-8 transparency

Notice that all pixels that had partial transparency applied (the drop shadow) disappear, allowing for a graceful degradation in IE6. For most cases, this will be entirely acceptable and lets us avoid the performance penalty and CSS hack associated with AlphaImageLoader, the traditional way to enable alpha transparency support in IE6.

Take a look at how big your IE6 audience is, and make a call on whether it’s worth the design/performance tradeoff to fully support it.

Findings and Conclusions

At the end of the day, the score was +1 for Photoshop PNG-24, +1 for ImageOptim, and +2.5 for Fireworks PNG-8 (dithered). Because of Fireworks’ poor performance on the first two scenarios, there is no clear winner.

With my late discovery of Fireworks PNG-8, I went into this article thinking I would have the end-all answer for saving out PNGs. If you’ve been reading, you know it’s not quite so simple. We simply need better tools; preferably, one tool.

My final thought on a designer-friendly transparent PNG workflow:

  1. Save your transparent PNG out of Photoshop as PNG-24, and take note of the size.
  2. Open the PNG-24 in Fireworks, and export it as PNG-8 with Alpha Transparency (play with the dither option), and take note of the size(s).
  3. Run your Photoshop PNG-24 through ImageOptim, and see if you saved anything.
  4. Make a final decision based on quality, size and longevity (e.g. how long will the image be around, how important is it?).

There seems to be a big gap on the GUI PNG tool side for saving out high quality, low file size PNGs. While command line tools exist, they are not a realistic answer for designers who haven’t ever launched a terminal window, and for developers who don’t have the time or patience to compile source code.

I want to encourage Adobe to look at the available open source PNG tools and get them into Photoshop CS4’s Save For Web, where they belong.

Until that happens, I am going to have to respectfully disagree with Stoyan and Alex that PNG-8 is the clear winner; in two of the important use cases above, it wasn’t.

§

Beating Blocking JavaScript: Asynchronous JS

Jul ‘08 23

MSN is now implementing a technique for loading JavaScript in a way that doesn’t stall the rendering of the document. They use Dynodes, a technique I also recommend for loading functionality on demand so it only consumes bandwidth when needed.

JavaScript Blocks Everything

To see the problem, view a Pagetest waterfall report of pretty much any website today, or see this recent run of AOL.com. Notice that 1 second is spent downloading and executing JavaScript, one file at a time.

waterfall graphic of aol javascript blocking rendering

One by one, folks! This is how all browsers load JavaScript (unless the defer attribute is used in IE) when called from your standard HTML <script> element.

However, viewing a waterfall report of MSN, we can see that they call 3 JavaScript files (dap.js, hptr.js, and hp.js) asynchronously, and allow the subsequent CSS files to load right away.

waterfall graphic of msn javascript not blocking

Had their scripts loaded in the standard way, dap.js, hptr.js, and hp.js would delay the page for 1.4 seconds!

Loading JavaScript Asynchronously

MSN is using standard DOM functions to create and append a script element to the HTML document’s <head> element. This technique, originally coined as Dynodes, is encapsulated in a JavaScript loader, much like the one used by JS frameworks such as Dojo.

We downloaded and formatted the MSN.com HTML source code so you can have a closer look at it. Start on line 297 which kicks off a process to a function aptly named JS:

(function(){}).JS(Msn.Page.Track).JS(Msn.Page.Js)

Note that it is passed two URLs defined back on lines 13-19:

Msn={
	Page:{
		SignedIn:'False',
		Js:'http://stj.msn.com/br/hp/en-us/js/46/hp.js',
		Track:'http://stj.msn.com/br/hp/en-us/js/46/hptr.js',

The JS function kicks off a method that pulls down these two scripts. Take a look at line 130 to get to the heart of this technique:

var c=g.createElement("script"); // g is the document object
c.type="text/javascript";
c.onreadystatechange=n;          // n: handler that watches IE's readyState
c.onerror=c.onload=k;            // k: the load/error callback
c.src=e;                         // e: the script URL
p.appendChild(c)                 // p: the <head> element

The <script> element (c) is appended to the <head> element (p), as defined back on line 113.

MSN also appears to closely monitor the load of all the scripts called by JS, in case something happens during the process. Event handlers are set on readystatechange, error, and load to stop the polling process (which fires every 100ms) once the script is finished.

Their code is quite obfuscated and difficult to follow, but you can look for the timeout function on line 125. There also appears to be an optional parameter to kill the process after a specified period of time.

JS Loader Prototype

We designed a JS Loader Prototype (not nearly as fancy as MSN’s) that illustrates the benefits of this technique, and tested behaviors in IE and Firefox.

In our prototype (we strongly suggest you view the source now), the code is organized into 4 sections:

  • JavaScript #1 and #2 are called from a <script> block in the <head> using our JS Loader function.
  • JavaScript #3 and #4 follow next, called by the JS Loader in a <script> block in the <body>.
  • JavaScript #5 and #6 are called after some text again by the JS Loader, from a second <script> block in the <body>.
  • Finally, #7 and #8 are loaded in the traditional, HTML <script> element fashion.

In Internet Explorer, the script files queued up normally and the screen was not blocked for any period of time (until #7 and #8). Notice the very short Start Render time in our test run at Pagetest, with JS loading as fast as HTTP/1.1’s two-connections-per-domain limit will allow:

waterfall chart of js loader prototype in ie showing full asynchronous load

In Firefox, however, any content below the next inline HTML <script> section is blocked until the scripts from the previous inline HTML <script> section called by the loader are complete. You have to see it to believe it!

waterfall chart of firefox blocking until all js has download in each script block

Notice in the above chart that, within each script block, both JavaScript files must fully load before the next script block is allowed to start processing. The takeaway here is to include as many JS Loader calls as possible in one script block.

We developed a workaround for this issue by including a timeout delay before calling the script. This allows Firefox to continue rendering like IE and Safari, and affords the fastest possible download. See the updated JS Loader Prototype for Firefox here.

waterfall of updated prototype showing that firefox no longer blocks other scripts

We put a longer delay on the first script (10 seconds) so you can see that Firefox no longer waits until the script block has completed before loading others. The important bit of code looks like this:

js:function(url)
{
	// The setTimeout defers the append, which lets Firefox keep rendering.
	// To call IE and Safari straight up without the delay, use this instead:
	// (navigator.userAgent.search('Firefox') > -1) ? setTimeout("artz.create('" + url + "')", 0) : artz.create(url);
	setTimeout("artz.create('" + url + "')", 0);
},
create:function(url)
{
	var s = artz.ce('script');          // artz.ce: createElement shorthand
	s.type = 'text/javascript';
	s.src = url;
	artz.tag('head')[0].appendChild(s); // artz.tag: getElementsByTagName shorthand
},

You will be pleased to know that Safari renders much like IE, with the added benefits of 4 open socket connections!

waterfall shot of safari with 4 open socket connections

Hopefully the benefits to this approach are clear. With a large site like MSN faithfully using Dynodes for their scripts, it might just be the time to standardize on this approach.

Race Condition Challenges

Not so fast! (pardon the pun) There are some additional considerations we will need to think through when moving in this direction.

  • JavaScript functions in the external scripts may not be available when inline HTML JavaScript functions call for them.
  • Along the same lines, even if Script A is called before Script B, Script B may finish and execute before Script A.
  • DOM elements may not yet be available in the HTML should an external script need them to hook events, access data from, etc.

In tackling the above, we would first recommend following the Progressive Enhancement approach and working to completely eliminate inline HTML JavaScript function calls. The external scripts can attach any needed events to links, buttons, etc., once they are ready.

While race conditions may still exist, one way to solve them is to use the setInterval function to poll until the dependency exists before initializing your functions:

function id(id){return document.getElementById(id)}

var oranges =
{
	init: function()
	{
		if(id('oranges'))
		{
			clearInterval(oranges_init);
			alert('We have oranges!');
			// We may proceed with orange code!
		}
	}
}

var oranges_init = setInterval("oranges.init()", 100); // poll every 100ms

We recommend using a period of 100ms to go easy on the CPU, and still feel instantaneous to users. To see this code in action, have a look at our ID Polling Prototype.

Document.Wrong

External scripts that call the infamous document.write method will cause problems with this technique, as a document.write that runs after the page has finished parsing can wipe out the entire document. Be sure to wrap the call in a function and invoke it from the HTML if you decide to head down this path. We hope that by now you have thrown this ancient tool away.
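
As a sketch of that workaround (the function name and ad URL are hypothetical), the vendor’s script defines a wrapper, and the HTML invokes it at the spot where the content should appear, while the document is still open:

// In the vendor's external script: define the wrapper instead of writing at the top level.
function writeAd()
{
	document.write('<img src="http://ads.example.com/banner.gif" alt="" />');
}

<!-- In the HTML, at the spot where the ad should appear: -->
<script type="text/javascript">writeAd();</script>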

Advertising vendors…this means you!

Final Thoughts

This technique has been on the back shelf for some time due to the tricky Firefox and race condition issues.

That said, if we are careful, and with MSN proving its value, now just may be the time to adopt Dynodes as a standard JavaScript loading practice.

Leave us a comment with your thoughts, questions, and concerns, and post links to implementations that leverage this!

§

Optimizing Web Performance with AOL Pagetest

Jul ‘08 10

In this screencast, I walk through how to analyze your site using the reports generated by AOL Pagetest, and explain why and how to go about addressing your pressing performance issues. Come for the tool demo, and stay to learn about the anatomy of an HTTP request, the importance of CDNs and keep-alives, and a handy Apache module for concatenating CSS and JavaScript. Leave me feedback so I can improve it, and enjoy!

