“Successful problem solving requires finding the right solution to the right problem” - Russell L. Ackoff
It might seem trivial, but this quote applies perfectly to today’s speed optimization world.
Recent years have shown us that slow websites are a big problem.
That’s why new speed optimization case studies appear all the time. Google’s pushing for a faster Web with initiatives like the page experience update. Users have higher expectations and endless distractions, leaving them with little tolerance for slow load times.
In theory, site speed should be improving as devices become more powerful and connections get faster. As we’ll see later on, that’s not the case.
It’s easy to blame heavy frameworks, Content Management Systems, bad dev practices, or several other factors. But the fact remains - the industry has yet to find the right solution(s) to the speed problem at scale.
That’s one of the reasons we’re building a different speed optimization product.
But before we get to that, we should first discuss the road that led us here. A good place to start is with the two most common approaches for speeding up a website over the last few years.
Today, the CMS rules the Web. WordPress alone powers 40% of all websites.
Historically, the most common way to optimize CMS websites has been to use different plugins. For example, one for caching, another for image optimization, a third one for code minification, and so on.
While popular, this approach has some drawbacks, especially when it comes to ease of use and effectiveness.
Each plugin has different settings and requires updates and monitoring. As you install four or five of them, the complexity increases exponentially.
On that note, using plugins that aren’t built to work together can lead to long and expensive integrations with inferior results. Add to that the tedious setups and constant monitoring, and you have a complete UX nightmare.
Another issue is that each additional piece of software you add to a site’s infrastructure comes with a performance cost. For example, an image optimization plugin that runs on the same server as the CMS will probably require additional CPU time to resize and compress images.
As a result, the more plugins you install, the slower your site becomes since each one adds code that must be executed. That’s why we decided to go with a cloud-based approach for NitroPack. More on that in a bit.
There is one big upside to the plugin approach, though - it can be entirely or almost free.
For example, the WordPress ecosystem is full of optimization plugins, so there’s a chance you can find a combination that works for your setup. Again, that takes time and effort, even without the required maintenance afterward.
We’ve been part of the speed optimization industry for almost 10 years, so we’ve seen this approach evolve to where it is today.
When we first started in 2013, NitroPack was only a caching tool. Like others, we gradually added more settings for users to tweak. While we only offered one tool, its complexity still grew with each additional feature.
At one point, we hit a wall as the number of settings became unmanageable. Since we believe that everyone should have an easy way to achieve great performance, we had to change strategies.
Today, we reduce the complexity for our users by setting up the most important things on our end. That’s hardly possible when working with plugins developed and supported by different organizations.
That’s not to say everyone should stop using multiple plugins. If this approach works for you and provides a better experience for visitors - great!
It’s just that at some point, configuring and maintaining all the different plugins takes too much effort for most website owners (who also have a business to run).
Having a clean codebase is the best foundation for a fast website. That’s why optimizing your website’s code (by yourself or by hiring a developer) is an excellent option.
Alas, there are still a few challenges here. And they all apply whether your site is custom-built, runs on a platform, or uses a CMS.
First, finding performance experts can be extremely difficult. Speed is a niche topic that very few developers specialize in.
There’s also a larger upfront investment. Engineering hours are always expensive, especially in such a tightly specialized area.
Large companies usually employ entire performance teams. On the other hand, most businesses either can’t afford that luxury or want their developers to work on higher priority tasks.
Another problem is that performance isn’t a “one and done” type of deal. The more a site grows, the harder it gets to keep it fast: pages become larger, there’s more code to maintain, and the infrastructure gets more complex.
That’s why speed initiatives take months and require lots of resources even for billion-dollar businesses like Notion, Yahoo, or eBay.
Of course, if you have the budget and can hire the right people, this option is immensely effective, especially when it comes to JavaScript (JS) optimization.
That’s why we endorse techniques like code splitting and Idle Until Urgent alongside our resource loading and JS execution features.
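To give a rough sense of what those techniques look like in practice, here is a minimal TypeScript sketch (not NitroPack code): the "./charting" module and renderChart function are hypothetical placeholders, and the IdleValue helper follows the Idle Until Urgent pattern Philip Walton describes.

```typescript
// Code splitting: load a heavy module only when it's actually needed,
// instead of shipping it in the main bundle up front.
// "./charting" and renderChart are hypothetical, for illustration only.
async function showReport(): Promise<void> {
  const { renderChart } = await import("./charting"); // fetched on demand
  renderChart(document.querySelector("#report"));
}

// Idle Until Urgent: defer expensive work to an idle period, but compute
// it immediately if something needs the result sooner.
class IdleValue<T> {
  private value: T | undefined;
  private handle: number | undefined;

  constructor(private readonly init: () => T) {
    // Schedule the computation for when the main thread is idle.
    this.handle = requestIdleCallback(() => {
      this.value = this.init();
      this.handle = undefined;
    });
  }

  getValue(): T {
    // If the value is requested before the idle callback ran, compute it now.
    if (this.handle !== undefined) {
      cancelIdleCallback(this.handle);
      this.handle = undefined;
      this.value = this.init();
    }
    return this.value as T;
  }
}

// Usage: the formatter is created during idle time, or on first use,
// whichever comes first - so it never blocks the initial page load.
const dateFormatter = new IdleValue(() => new Intl.DateTimeFormat("en-US"));
document.addEventListener("click", () => {
  console.log(dateFormatter.getValue().format(new Date()));
});
```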
Regardless of the downsides, hiring a skilled developer or a specialized company to optimize your site’s code can be a better investment than buying multiple speed optimization tools. For some JS-heavy websites, it might be the only adequate solution.
So far, we’ve provided website owners with two options.
One costs less but takes a lot of effort and produces inconsistent results. The other is much more effective but can be too time-consuming and expensive for most.
At the same time, websites continue to grow larger and run a lot of JS.
Since 2011, the average desktop and mobile page have grown 5x and 10x, respectively. Source: The HTTP Archive.
Most users also aren’t browsing the web on the latest devices. For many of them, websites are getting slower, as their devices can’t handle the resource-intensive nature of the modern web.
Alex Russell refers to this as the mobile performance inequality gap.
The average OnLoad time for mobile websites has gone from 4s in 2011 to almost 19s in 2021. Source: The HTTP Archive.
Any way you slice it, the fact remains:
The existing options for speeding up a website have so far produced undesirable results.
The majority of websites are still slow. Small and mid-size business owners can’t afford the luxury of hiring experts to optimize their site or rebuild it from scratch.
As a result, many website owners simply ignore performance altogether. Others settle for good lab results (like the PageSpeed optimization score) without caring for the actual user experience.
Google’s trying to change that with initiatives like the page experience update, which turned the Core Web Vitals into ranking factors. But until there’s an easy and reliable way for most website owners to speed up their sites and keep them fast, we can’t expect huge improvements.
We can (and should) be doing more to ensure a faster Web. That’s why we built NitroPack.
To avoid the pitfalls we discussed, we made a lot of unorthodox choices when designing NitroPack.
First, we built an all-in-one service, with most of the important features enabled by default. Features typically spread out between different tools (like caching, image optimization, and a CDN) come out of the box with NitroPack.
This, along with our preset optimization modes, ensures ease of use and stability of the optimization process.
Everything’s under the same hood and is managed through a single dashboard. This approach eliminates the need to use and configure multiple plugins, which oftentimes can’t work efficiently together.
We also made NitroPack a cloud-based service. Performing optimizations in the cloud is essential for three main reasons:
First, it reduces server overhead to a minimum. For example, the NitroPack plugin for WordPress acts as a connector to the NitroPack service: it sends data to our API, and our service optimizes the website’s pages.
As a result, our infrastructure handles heavy operations like minification and image optimization. Put simply, we avoid a big problem that the first approach usually has - making sites slower while optimizing them.
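As a rough illustration of that connector pattern, here is a hypothetical TypeScript sketch. The endpoint, request payload, and response shape are invented for the example and don’t describe NitroPack’s actual API; the point is that the origin server only makes a lightweight HTTP call, while the CPU-heavy optimization work runs elsewhere.

```typescript
// A hypothetical connector: instead of optimizing pages locally, the site
// forwards them to a cloud optimization service and serves the result.
// The URL, payload, and response shape below are illustrative only.
interface OptimizeResponse {
  html: string;            // optimized markup (minified, images rewritten, etc.)
  cacheTtlSeconds: number; // how long the origin may cache this result
}

async function fetchOptimizedPage(pageUrl: string, apiKey: string): Promise<OptimizeResponse> {
  const res = await fetch("https://api.example-optimizer.com/v1/optimize", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    // Only lightweight metadata is sent; the heavy work (minification,
    // image compression, and so on) happens on the service's
    // infrastructure, not on the origin server's CPU.
    body: JSON.stringify({ url: pageUrl }),
  });

  if (!res.ok) {
    throw new Error(`Optimization service responded with ${res.status}`);
  }
  return (await res.json()) as OptimizeResponse;
}
```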
Second, being cloud-based helps us quickly deliver new technologies to our users. Before we ascended (pun intended) to the cloud, every upgrade we made had to be transferred onto our clients’ infrastructure via updates on their end, which took days, sometimes even weeks. Again, this is a common problem with the first approach.
Third, issues like server configurations, file permissions, and disk space limitations often prevented our software from running as intended, as every self-hosted environment has its own setup and limitations. It became increasingly difficult for us to ship software that runs the same in every environment.
The cloud model avoids that problem. Everything we develop is shipped onto our infrastructure, and NitroPack users benefit from it automatically.
Combining that with the rest of our features results in a service that guarantees a fast page experience while being reliable, accessible, and easy to use.
Naturally, our new approach to performance optimization can be confusing to people. And to be fair, we should’ve been clearer about how and why we do things differently.
This blog post is an attempt to cover the why. Check out “How NitroPack Works” for a deeper dive into the how.
Failing to explain both our how and why has led to some wrong conclusions about NitroPack. We address those in community discussions, videos, blog posts, and case studies that show how websites benefit from our service.
We also understand that the automated, hands-off approach is not perfect for every website out there. The Web is too diverse for that. That’s why we’re investing heavily in our R&D and support teams, as well as in future initiatives to connect customers with the right experts for their performance needs.
On the one hand, we want our service to work reliably in as many environments as possible.
After all, NitroPack was founded on the fundamental belief that everyone deserves a fast website. The goal is for the majority of users to set up NitroPack and not worry about anything else.
On the other hand, we also want to quickly help people who have issues with our service or need a tailored optimization approach.
Many of these cases only require minor tweaks, like switching between optimization modes. But there are also situations that NitroPack can’t resolve right now. For example, our service isn’t compatible with web apps that run on JS frameworks like Angular or React.
Again, that’s something we expect. One tool can’t possibly fit 100% of the websites out there. That’s why we have no problem telling customers when NitroPack isn’t the solution for them and pointing them towards a better option.
60-80% of the sites built on the most popular CMSs and platforms don’t pass their Core Web Vitals assessment. There’s still a huge gap between effective speed optimization solutions and the end customers who need them. We’re just scratching the surface of performance optimization.
To do better, site owners need more options for solving the speed problem. And that’s where we're going to focus our efforts with NitroPack - bringing a more performant, reliable, and user-friendly speed optimization solution to everyone, no matter the platform their site is built on.
Of course, making great performance available to everyone won’t be easy, as we’re tackling a very complex problem. We’ve hit some bumps along the way, and we’re grateful to our customers for sticking with us.
We are confident that we’re moving in the right direction. Websites using NitroPack have the highest Core Web Vitals pass rate compared to other similar technologies.
The best - for us, the entire industry, and all website owners - is yet to come.
Passionate about all things performance optimization, Ivailo is the driving force behind our product and core NitroPack features.