People say that server-rendering “makes sites load faster”, but wasn’t that the whole point of client-side rendering in the first place?
I want to press on these kinds of assertions for a moment, because the answer is more nuanced than simply “do it this way and it’ll be faster”.
Speaking in a very general sense, a client-rendered application can be faster, amortized over the lifetime of use of the app, than a similar classic server-side rendered app. This is true when subsequent requests by the client-rendered app fetch only the content for the next page rather than a whole new document, and thus each page view after the first costs far less than a full server render.
This trade-off makes client-side rendering an obvious choice for a site where a user’s average number of page views per visit is high, such as a line of business application. The trade-off does not so obviously break toward client-side rendering for a site where a user might only see a few pages per day. Perhaps a news site might fall into that category for many users.
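To make the amortization concrete, here is a minimal sketch of the trade-off in TypeScript. Every timing in it is an illustrative assumption I made up for the example, not a measurement of any real site:

```ts
// All timings below are illustrative assumptions, not measurements.
const csrFirstLoadSec = 4.0; // assumed: download bundle, bootstrap framework, fetch data
const csrNextLoadSec = 0.5;  // assumed: fetch just the next page's content
const ssrEveryLoadSec = 1.5; // assumed: classic server render, paid on every page view

// Average time til readable content per page view across a visit of `views` pages.
function avgCsr(views: number): number {
  return (csrFirstLoadSec + csrNextLoadSec * (views - 1)) / views;
}

// With these numbers, client rendering pulls ahead once a visit exceeds
// 3.5 page views: 4 + 0.5(v - 1) < 1.5v  =>  v > 3.5.
for (const views of [1, 2, 4, 10, 50]) {
  console.log(views, avgCsr(views).toFixed(2), ssrEveryLoadSec.toFixed(2));
}
```

The exact break-even point depends entirely on the numbers you plug in; the point is only that the heavy first load is divided across every page view that follows it.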
So, to describe an arbitrary situation: you run a news site where the average user reads 5 articles per week, and you ship new features or bug fixes, deploying, on average, once per week. Each deploy invalidates the user’s cached app bundle, so roughly 1 in 5 page views, or 20% of the time, your users will see a much slower than median time til readable content. Now, what those page load times are, and the frequencies of page views and updates, are going to be unique to you. But I hope this at least gives you a good framework for thinking about client-side vs classic server-rendered systems.
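The 20% figure falls out of a simple back-of-the-envelope calculation. Here is a sketch, under the assumption that every deploy busts the cached bundle exactly once per user:

```ts
// Fraction of page views that pay the full download-and-bootstrap cost again,
// assuming each deploy invalidates the user's cached bundle once.
function slowLoadFraction(viewsPerWeek: number, deploysPerWeek: number): number {
  // A user can't hit a cold cache more often than they actually visit.
  return Math.min(deploysPerWeek, viewsPerWeek) / viewsPerWeek;
}

console.log(slowLoadFraction(5, 1)); // 0.2, i.e. the 20% in the example above
```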
The way I tend to think of server-rendered, client-rendered hybrid systems is as a method for evening out the amortization of your time til readable content per page view over the lifetime of the use of the app, while also giving you a better than average time til readable content when compared to a classic server-side rendered app.
The big benefit server-side rendering of this type of app brings to time til readable content is moving asset download and framework bootstrap to after the page is usable.
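As a rough illustration of that last point, here is a minimal sketch using a plain Node HTTP server. The `renderToHtml` function and the `/app.bundle.js` path are hypothetical stand-ins for your framework’s server renderer and client bundle:

```ts
import { createServer } from "node:http";

// Hypothetical: stands in for whatever your framework's server renderer produces.
function renderToHtml(url: string): string {
  return `<h1>Article for ${url}</h1><p>Readable before any JS runs.</p>`;
}

createServer((req, res) => {
  const html = `<!doctype html>
<html>
  <head>
    <!-- "defer" lets the browser paint the server-rendered content first;
         the framework bundle downloads and bootstraps afterward. -->
    <script defer src="/app.bundle.js"></script>
  </head>
  <body>
    <div id="root">${renderToHtml(req.url ?? "/")}</div>
  </body>
</html>`;
  res.writeHead(200, { "content-type": "text/html" });
  res.end(html);
}).listen(3000);
```

Because the script is deferred, the browser can show the server-rendered article before the bundle has finished downloading; the framework then bootstraps against the existing markup and takes over subsequent navigation client-side.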