I like pitting my blog against various performance analyzers in hopes of getting a good score.
Usually I do pretty well, mostly on account of this blog being a completely static website.
Let’s compare some internet averages for various performance metrics and see how they stack up against my blog.
This is the blog post I will be using for the benchmarks.
Averages are from the HTTP Archive. Performance data for my blog was gathered using Google Lighthouse.
Desktop:

Category | Average | My blog |
---|---|---|
Page size | 2.1 MB (2124 KB) | 0.05 MB (50.6 KB) |
Total requests | 73 | 21 |
FCP | 2.1s | 0.7s |
Mobile:

Category | Average | My blog |
---|---|---|
Page size | 1.9 MB (1914 KB) | 0.05 MB (52 KB) |
Total requests | 69 | 21 |
FCP | 5.2s | 2.3s |
Disclaimer: I understand that the static nature of this blog makes the above comparison somewhat unfair, but this was the best data source I could find, so I’m using it regardless.
Let’s start by running my blog through Google PageSpeed and identifying the most glaring problems.
The Google Fonts request is well optimized for size, as it only fetches certain weights and character sets, but the time it takes to complete could be improved.
As for the CSS, I could move it into the main HTML file, saving time and eliminating a few extra requests.
At the time of writing, this blog only has about 40 lines of JavaScript. Rewriting those in vanilla JS instead of jQuery should be trivial. Frankly, I should have done it in plain JS to begin with.
Additionally, the script itself can be inlined, once again saving time by eliminating an extra request.
I honestly thought I already had file compression enabled in Nginx, but apparently that is not the case. Easy fix.
Potential savings from converting images to WebP are pretty slim, but let’s leave it in as a bonus step. Maybe there exists some CLI tool to mass-convert images to WebP?
Now that the problems have been identified, the optimization can start.
One way to prevent CSS from blocking rendering would be asynchronous loading with `rel="preload"`. I considered this approach, but it turns out it does not work in Firefox, which makes this strategy a no-go.
There is, however, a really cool hack to make this work, described here. It involves using the `media` attribute available to link tags:
```html
<link rel="stylesheet" type="text/css" href="[long url]" media="print" onload="this.media='all'">
```
Essentially this works by setting the link tag’s `media` attribute to `print`, which means the stylesheet should only apply to print-based media, i.e. when printing out the page. The browser therefore loads it asynchronously, and once the request completes, the `onload` handler sets `media` to `all`, applying the CSS to the page.
While this is a cool trick, I eventually just ended up inlining all the CSS into the main HTML file.
The Google Fonts request also got completely eliminated. I decided to stop using an external font provided by Google and instead swapped to a web-safe classic, Arial.
I threw out jQuery and rewrote the scripts in vanilla JS:
```js
// example of an old jQuery function
$('#menu-button').click(function () {
  toggleMenuIcon();
  $('#menu').toggleClass('showMenu');
});
```

```js
// new vanilla JS version
const menuButtonClicked = function () {
  toggleMenuIcon();
  const menu = document.getElementById('menu');
  menu.classList.toggle('showMenu');
};
// bind the handler to the button, as the jQuery version did
document.getElementById('menu-button').addEventListener('click', menuButtonClicked);
```
I also inlined the script and placed it in the footer.
This was probably the easiest one to fix. I added the following gzip directives to Nginx, so files are now compressed in transit, reducing transfer sizes.
```nginx
gzip on;              # compress responses on the fly
gzip_static on;       # serve a pre-compressed .gz file if one exists alongside the original
gzip_types text/plain text/css text/javascript;  # MIME types to compress (text/html is always included)
```
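To confirm that compression is actually being applied, you can request the page with gzip accepted and check the response headers. A quick sketch using curl (the URL is a placeholder):

```bash
# request the page with gzip accepted, discard the body, print only the response headers
curl -s -H "Accept-Encoding: gzip" -o /dev/null -D - https://example.com/ \
  | grep -i "content-encoding"
# prints "content-encoding: gzip" when compression is working
```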
There is an apt package for a WebP CLI tool, cwebp. On a Debian-based system, installing it should be a one-liner:
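```bash
# the webp package provides the cwebp encoder (and the dwebp decoder)
sudo apt install webp
```

I used it in conjunction with this script I found to mass-convert all of my PNG and JPEG images to WebP: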
```bash
#!/bin/bash
# convert JPEG images (lossy, quality 90)
find "$1" -type f \( -iname "*.jpg" -o -iname "*.jpeg" \) \
    -exec bash -c '
        webp_path="${0%.*}.webp";
        if [ ! -f "$webp_path" ]; then
            cwebp -quiet -q 90 "$0" -o "$webp_path";
        fi' {} \;

# convert PNG images (lossless)
find "$1" -type f -iname "*.png" \
    -exec bash -c '
        webp_path="${0%.*}.webp";
        if [ ! -f "$webp_path" ]; then
            cwebp -quiet -lossless "$0" -o "$webp_path";
        fi' {} \;
```
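Assuming the script is saved as convert-images.sh (a name used here just for illustration), converting everything under the image directory looks like this:

```bash
chmod +x convert-images.sh
./convert-images.sh ./static/images   # the path is hypothetical; point it at your own image directory
```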
Optimizations are done; time to compare the performance:
Desktop:

Category | Old | New | Reduction |
---|---|---|---|
Page size | 0.05 MB (50.6 KB) | 0.02 MB (28.1 KB) | 44% (22.5 KB) |
Total requests | 21 | 12 | 42% (9) |
FCP | 0.7s | 0.3s | 57% (0.4s) |
Mobile:

Category | Old | New | Reduction |
---|---|---|---|
Page size | 0.05 MB (50.6 KB) | 0.02 MB (28.3 KB) | 44% (22.3 KB) |
Total requests | 21 | 12 | 42% (9) |
FCP | 2.3s | 1.0s | 56% (1.3s) |
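As a side note, numbers like these can be reproduced from the command line with the Lighthouse CLI. A sketch, assuming Node.js is available and using a placeholder URL:

```bash
# run a performance-only Lighthouse audit and save a JSON report
npx lighthouse https://example.com/my-blog-post/ \
  --only-categories=performance --output=json --output-path=./report.json
```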
After these optimizations, Google PageSpeed now gives me a perfect score for speed: