I run a content website for a living. This year I switched from Craft CMS to my own static site generator.
I no longer think about the server or the CMS. I no longer need to keep it updated. I got rid of the heavy database and the elaborate caching setup. Now it's just a static file server. If it was just the blog it would be even easier to host. The website is more reliable and requires virtually no maintenance.
The best part for me is that I can work offline. It's just me and my text editor, so even my tiny 12" MacBook feels blazing fast.
Version control is also incredibly valuable. I can review my changes or roll them back. I can also find/replace across the entire content with regex. Text files are easy to work with.
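As an illustration of the find/replace point, the whole operation fits in a short script. A sketch (the directory argument and the `*.md` glob are assumptions about the site layout):

```python
import re
from pathlib import Path

def replace_everywhere(content_dir: Path, pattern: str, replacement: str) -> int:
    """Apply a regex substitution to every markdown file under
    content_dir; return the total number of replacements made."""
    total = 0
    for path in sorted(content_dir.rglob("*.md")):
        text = path.read_text(encoding="utf-8")
        new_text, count = re.subn(pattern, replacement, text)
        if count:
            path.write_text(new_text, encoding="utf-8")
            total += count
    return total
```

Because it's all just text files, the same trick works with grep, sed, or whatever your editor offers.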
Static makes so much sense for sole-author sites with technical owners.
Would be great to make the tech more accessible to those without the skills to recompile and deploy. It's such a fast and affordable way to build websites, but existing tools like Hugo assume a lot from users, which can put the tooling out of reach.
(Enjoyed your write-up too. I wrote the original version of html-to-markdown that you used in your migration; it's great to see it's still useful.)
> Static makes so much sense for sole-author sites with technical owners.
Totally agree for this narrow use case. But you're also correct that if you're a startup with a marketing site, going the Gatsby/Hugo/etc. route is a total disaster. Have seen so many technical founders make this mistake.
What happens is you realize, crap, my team needs to publish content for SEO and build new landing pages...and they aren't able to follow my esoteric Git + hand coding + Markdown/front matter + build process steps. So then you get sold some convoluted "HEADLESS CMS," which claims to solve your problems, but is actually a total freaking nightmare to set up and maintain (god forbid you ever want to change anything).
Have witnessed this charade more times than I can count. Seriously, just use Webflow. I know HN is highly disgusted by the idea of something being marketed as a "no-code" tool. But Webflow basically is a static site generator + headless CMS, just with a UI layer for non-technical folks. When you click publish on updates to a Webflow site you're basically automating the recompile and deploy steps.
To offer the other side of this, at $dayjob we just got done getting out of Webflow and into hand-coded HTML. That stack comes with a number of severe headaches.
* The UI layer is complete and total garbage. It is unbelievably slow and clunky and it makes doing anything with it a chore.
* The UI is complicated enough that it would be easier to teach the non-technical folks to use markdown.
* Uncomfortable amounts of lock-in. You can export your entire site of course but retrieving your "CMS" records is not so straightforward.
* If you need to do anything interesting, the UI benefit is gone, because now you get to write code AND use their awful platform.
* Expensive, with a recent price hike and no increase in functionality or quality.
The recompile and deploy steps can be automated in ways that don't require hitching yourself to this behemoth.
I’ve been running a news site with Hugo for the last four years. You set up a CMS that lets the non-technical people make content changes without you. For design changes, you’re always going to need someone who knows how to code anyway, so it’s not a big deal to use Hugo.
Of all the problems we’ve had running a website, Hugo has not been one of them.
Webflow has some huge constraints when you’re trying to actually build a good marketing page.
Try Framer instead. The fact that you can fall back on React for anything that’s not possible in the visual editor is a lifesaver, and it actually enables you to build sites that are on par with traditional headless CMS plus meta-framework stacks while still being just as easy (if not easier) for non-technical users to maintain.
The only problem is the price tag, which means you’ll be priced out for small personal projects, but it’s still a good deal for startups and bigger companies.
I'm surprised you'd say Webflow has the constraints vs. Framer. Framer definitely seems promising but is way too limited and basic at the moment. They don't even have an API, and the CMS just feels super sandboxed. Good luck doing any of the programmatic SEO stuff you can do with Webflow. I'm confident they'll get there in a few years though.
Or you can have a look at React Bricks, too.
Devs create content blocks as React components with inline visual editing and sidebar props, with constraints so that the design system cannot be broken.
Content editors can use these Lego bricks of content with freedom, without breaking the design.
I’ve been down this path enough times that I built https://sitepress.cc/, which lets you embed content in a rails app with features that are present in Jekyll, Middleman, etc. like Frontmatter, site hierarchy traversal, etc. It keeps content as files in the app/content directory, but when it’s time to pull data in from the Rails app for SEO, it’s all right there in the Rails app. There’s no “Headless CMS” crap to jump through.
For me, this is another way of keeping everything in a monolith, which requires a lot less context switching. If I’m building a feature and I want to create marketing or support content for it, it’s all right there in the same repo. I just create the markdown files I need, commit them to the repo, and I’m done.
The thought of switching between a static content site or something like Webflow just seems silly for a small team or project.
And people could not resist the urge to add styling and heterogeneous layouts all over. Weird typography, layouts, alignments and extra padding here and there.
A secret reason why I like to push markdown workflows is that you can't deviate from the site's overall look and feel. Focus on your content, hit publish and it will look good. Even if we change the site's look down the road, the content is still going to be fine.
Thanks for the warning, I won't try to tell you otherwise hahaha! Besides, if you like marquees who am I to say you're wrong!? I have exactly 0 design or taste bones in my body and think lime green text on a blue background is the pinnacle of human beauty.
Wikipedia only recently perfected their Visual Editor in a way that made me feel comfortable using it, but it's still a situational decision whether I use VE or edit the source directly.
And at $dayjob I am accustomed to poring over raw Markdown, due to the lack of a viable WYSIWYG application.
I agree completely with this thought. So many people jump right to WordPress because it can be "easy" (and as an EE, even figuring out the details of Hugo was pretty confusing for what basically amounts to an evening hobby). When I first started putting together my website [0] I considered the tradeoff between static and "dynamic", but constrained myself to the former because I didn't want to pay extra for hosting costs. The Fastmail file/website hosting has been absolutely incredible for this purpose.
All-in-all, doing a static site has allowed me to learn so much more about html and other basic web technology than I think I would have ever gotten from a Wordpress or similar site. The biggest downside thus far is that I wanted to start a sort of Wiki-style knowledge base for users to contribute in my particular niche, but of course the static aspect precludes any user editing. I use that area of my site as my own knowledge-base still, putting info there that I reference all the time at my day job. For fun the other night, I got sucked down the rabbit hole of adding Disqus comments too. I don't expect anyone to ever use them, but instead was just learning about how to do it.
[0] The site is at https://ShieldDigitalDesign.com if anyone is curious to see what a dumb EE can do with a Hugo-generated site hosted with Fastmail.
Hugo’s documentation site is poor. WordPress installation is a beast and involves getting PHP, Apache, and MySQL installed, which are each nontrivial. Hugo only seems more intimidating than WordPress because of the size of the documentation gap.
Yeah, I'd agree with that. Even when just setting up the Disqus comments the other day I found the docs lacking. This would probably be less of an issue if making Hugo config changes were something I did more than a couple of times per year.
Products like [decap CMS](https://github.com/decaporg/decap-cms) try to bridge that gap, but I agree that this space needs to be further developed. In fact I think there needs to be a bunch more work to allow mere mortals to use version control and branch workflows in day to day work.
Tina does a good job of making branching workflows more accessible with its recent editorial workflow features (backed by a GitHub repo and PR reviews).
I haven't used it, but Publii[0] might be along the lines of what you're thinking of. I ran across it in a previous HN discussion, and it seems to be a static site generator with a pretty user-friendly graphical interface.
I was just thinking about how much trouble it would be to onboard an employee. Markdown is not rocket science, but getting the SSG to run? Committing files? Oh boy.
On the other hand, it's great for me, because I don't need to babysit a database or maintain hosted software. I can go on vacation and not worry about anything.
Adobe actually had a product like that about 20 years ago that the agency I worked with then used with their clients. Clients could just directly edit their content in a wysiwyg format, hit a publish button and it would upload. Can’t remember what it was called!
[ IMHO <cough> it could use a section on TV Series and Movies set in Berlin past and present with a quick rating on how realistically they portray actual Berlin ... but that's just me :-) ]
This really should have been the case once we moved to SSDs on servers. For 95+% of all websites, we likely have enough RAM to cache 70% of the hot content, with the remaining 30% served from SSDs at 10,000 IOPS of random reads.
Unless you constantly fiddle with the site design, the site and HTML generation should really be on devices and the whole thing should take less than a second to generate.
Unfortunately, as other comments have said, the current way of GitHub / version control / text editor is very much tech/programmer focused. We need something like a hosted version of WordPress, back to the old days of Dreamweaver / FrontPage.
It's such a tragedy Intel completely dropped the ball on Optane drives. With them having an order of magnitude higher random IOPS, it would have been incredible for servers!
They claim that but it's not true. It will deliver significantly less than that in 4k random read.
I don't have the link to benchmarks at hand right now, but there was a review running CrystalDiskMark on these drives and the 4k random performance of the Optane drive was more than 10x the 980 pro. For servers delivering tons of small-ish files, that's exactly the use case.
Those numbers are quoted with high Queue Depth. So not always applicable depending on circumstances. But even in the worst case scenario of QD1 it is still plenty for 95%+ of sites.
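As a rough back-of-envelope (every number here is an assumption, not a benchmark):

```python
# All numbers are assumptions: a drive sustaining 10,000 random 4K
# reads per second, an average page weighing ~20 KB (about five 4K
# reads) on a cache miss, and 70% of requests served from RAM.
iops = 10_000
reads_per_page = 5                  # ~20 KB page / 4 KB per read
cache_hit_rate = 0.7

uncached_pages_per_second = iops / reads_per_page
effective_pages_per_second = uncached_pages_per_second / (1 - cache_hit_rate)
print(int(uncached_pages_per_second), int(effective_pages_per_second))
```

Even ignoring the RAM cache entirely, that's thousands of uncached page loads per second, which is more traffic than the vast majority of sites ever see.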
It has (used to have? Can't find them on the site now) pre-packaged binaries that you drop into a folder structure generated by the technically-minded person. The content editor can simply double-click that binary, which opens the backend of the CMS in the web browser, make their changes, and click deploy.
I suppose you could build this on top of Django as well, with django-distill, putting the distill-deploy step into the Django admin and then compiling that to a binary.
> Now it's just a static file server. If it was just the blog it would be even easier to host. The website is more reliable and requires virtually no maintenance.
Same here, I'm using Sphinx and loving it !
I very much appreciate the ability to get all nitty gritty whenever I want to implement something "special" like the Favorite Git Aliases post which when edited gets turned into a .bash_aliases file and pushed to Gitlab and mirrored to GitHub! :D
This is very interesting. Impressive how much efficiency and performance is gained. This also means that fewer resources (electricity) are consumed, that less bulky hardware is used and, most importantly, that security is improved. A static website is more difficult to attack and the only possible flaw is in the web server and not in the code of the website.
I usually use Hugo. It is very complete and rich in features. I have also created a multilingual website using it.
Have you considered integrating Hotwire's Turbo into your static website? You could see an improvement in the responsiveness of navigation and a reduction in load on both the server and the client side.
If necessary, Turbo can also be integrated with Mercure for live page streaming.
> This also means that fewer resources (electricity) are consumed
Most servers in the world use a constant amount of electricity, regardless of load. Generally the way to make DCs more green is to build them around a renewable power plant with batteries (which they already need). Save for the fact that the whole thing is made of rare earth metals and lead, you've got yourself something pretty green.
> Most servers in the world use a constant amount of electricity, regardless of load.
That seems unlikely. Even if you disable frequency scaling, hlt uses less energy than a full pipeline of avx. Disks use more energy when reading or writing than idle (although for spinning disks, if they're not idle enough to spin down, the difference isn't that much)
Datacenters do tend to have a lot of batteries, but the runtime expectation is pretty small; 5-ish minutes takes a lot of batteries and is enough for the onsite backup generator to start and warm up. It's been a while; maybe those aren't lead acid anymore, but they used to be. Stationary, so weight doesn't matter, volume isn't super important, and cost is important, which leads towards lead acid, IMHO.
I have a robot vacuum cleaner that runs linux, and it CAN run a Caddy server. I managed to get a website running on it last time. Now that my websites are static I'm tempted to host my personal blog on it.
I try to keep things simple. Simple websites are fast and reliable. Turbo seems to be a departure from that. Since the heaviest part of my pages is the text, it would not help much.
In its simplest form, Turbo does it all for you. All you have to do is add the JavaScript files; when the user visits a page, Turbo intercepts the request, passes it to the server, retrieves the HTML response, extracts the content of the body tag, and surgically inserts that content into the page. I'm using it on a couple of sites (including mine, a static one built with Hugo) and web applications, and the difference in speed and responsiveness is remarkable.
Of course you can go further. I'm now building a PWA using CodeIgniter and Turbo. The big advantage of this approach is that you can use "classic" technologies (HTML5 + backend) and achieve similar responsiveness to the modern Javascript frameworks, reusing all the existing backend code and logic.
I read your Ursus page with high interest. Do you plan on sharing the source/scripts/github? Do you plan on monetizing it? Personally, it sounds like something that would be great to check out.
Affiliate marketing. I am just very careful about which products I mention and how. People need a bank account, health insurance, liability insurance, etc. I get a cut when I refer them to a business. I work with pretty much every business though, so I don't need to sell anything.
I did the same with my blog. It was custom written in PHP and was fairly stable, no reddit/hn hug ever killed it but I wrote a wrapper to make it static and now I can host it on a Raspberry Pi with nginx. Even thought about making it solar powered because of the low energy needed.
An edited site could also (mostly) be static if edits pushed / queued a job to publish the updated static content. Ideally a casual visitor would never see non-static content, other than possibly dynamic addins that mix the data client side if they have an active account.
CMV: if you're running code on the server anyway, there's no advantage to having all the precomputed header+content+footer combinations. You may as well run a script that combines the generic header + specific content + generic footer at request time. The advantage of static sites is that you don't have to worry about any server scripting at all.
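That combine-at-request-time script really is tiny. A sketch (the file names `header.html`, `footer.html`, and the `pages/` directory are assumptions about the layout):

```python
from pathlib import Path

def render_page(site: Path, name: str) -> str:
    """Combine generic header + specific content + generic footer
    at request time instead of precomputing every combination."""
    header = (site / "header.html").read_text(encoding="utf-8")
    footer = (site / "footer.html").read_text(encoding="utf-8")
    content = (site / "pages" / f"{name}.html").read_text(encoding="utf-8")
    return header + content + footer
```

Hook that into whatever request handler you run; the tradeoff versus fully static is exactly the one described above, since you now need a script runtime on the server.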
You might be reading too much into the infrastructure. I was speaking generally.
E.g. the workflow might be updating a database on the local system and publishing the set of affected pages. Or it could be modifying comments within source code files and a Makefile-like regeneration of document files that are then rsynced to a host elsewhere.
If there are user specific portions for some reason those would act more as a real application in sideband to the data, rather than a synchronous (and render blocking / load slowing) detraction from the experience.
One benefit is that Python scripts are easier to run on a dev machine than nginx. These days I rarely run the whole website with Docker, just the Python static file server with `ursus -s`. It made a huge difference on my slow machine: I could listen to music and work at the same time without bringing the machine to a crawl.
It's very small, the CSS is inlined before the content, everything loads in a single request, and I use fonts and images sparsely. CloudFlare makes it fast everywhere.
But then formatting is awful in markdown, especially with all the template and tag syntax you add there, which sometimes takes more space than the actual words it marks up. And because of all the tags you can't search/find/replace across the entire content: how do you find a "__bold__ in a phrase" or "bold in a __phrase__" reliably?
I spend most of my time editing markdown, and this was never an issue.
On the other hand, now I can use any search and editing utility I want, not just whatever my CMS provides. A good example is adding a space after the § character, which I use a lot. It would take five seconds to fix it across the website.
A good text editor theme makes a big difference. I fine-tuned an existing Markdown theme for Sublime Text to better highlight the markup, especially headings.
Weird that broken search basics are not an issue. You extol the virtues of site search in your blog, and that engine is capable of matching "shcfua" to "Schufa". Markdown, on the other hand, fails at search long before getting to fuzziness.
> § character, which I use a lot. It would take five seconds to fix it across the website
True, and then forever to hunt down those few instances where § was used in a quote or something and should not have been fixed
> I fine-tuned an existing Markdown theme for Sublime Text to better highlight the markup, especially headings.
I "highlighted" most of the text markup to make it invisible. Bold text is already visually bold even in Sublime, so why would I need an extra __indicator__? And size is the best highlighter for headings, which plain text editors don't do.
But yeah, I wish there were a Sublime-level tool for rich text so you wouldn't have to compromise like that...
Search can be anything you want. It's just text. You are free to use the tools that work for you.
I see the diff in Sublime Merge before committing, so I'm far less likely to break something. I used to make so many errors because the WYSIWYG editor caught a random keypress.
Personally, I want those indicators to be there. I'm editing the text and the formatting. I want both to be on the screen.
Frankly I don't need to convince you that this is better. I work with this 40 hours a week and it's better for me. Feel free to take a different approach, but also accept that this approach is tested and true for me.
But it's not "just text", that's the whole point of "markup", and that's also the reason the diffs would fail outside of simplistic changes. To search properly you'd need to strip that markup, so it can't be "anything I want". Which text editors offer that?
Also, formatting is there on the screen - in the form of formatting!
The markup is part of the text. Making text bold is a change to the text. It's part of the diff. The text without the markup has no value. I'm not sure I understand what you're getting at.
I meant searching for a "bold in a phrase" phrase when it can have various markup in arbitrary places (also, it's not just __, you might have some other valid marker)
But anyway, you can't really use regex here, as it's too complicated and error-prone for such a frequently needed operation. You need a better mechanism, like an index or built-in functionality that ignores all markup.
Look where? It can be any other marker, it can be nested, and it can be in any position. You can't craft a regex on the fly to capture that; your regex will be longer than the phrase.
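FWIW, a crude way to get markup-insensitive matching without a full parser is to strip the emphasis markers before searching. A sketch (deliberately naive: it will also eat literal underscores, e.g. in snake_case identifiers):

```python
import re

# Strip common Markdown emphasis markers so a phrase can be matched
# no matter where the markup falls. The two-character markers come
# first in the alternation so ** isn't split into two * matches.
MARKUP = re.compile(r"\*\*|__|\*|_|`")

def plain(text: str) -> str:
    return MARKUP.sub("", text)

def contains_phrase(text: str, phrase: str) -> bool:
    return phrase in plain(text)
```

With this, `contains_phrase("a __bold__ in a phrase", "bold in a phrase")` and `contains_phrase("bold in a **phrase**", "bold in a phrase")` both match, wherever the markers land.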
The biggest difference between a static site versus a dynamic one is security surface area.
A static site web server can only ever, worst case, be induced to serve the wrong files. And you can mitigate that by making sure the only files you put on the server in the first place are ones you want to serve.
A dynamic site can be induced to run code. It can be induced to return the wrong data from the database it has access to (passwords for example). It can be induced to modify data.
WordPress breaches happen all the time. Nginx breaches don’t.
I just want to straighten the perspective on things a bit.
Technically there's no web site that doesn't "run code". There's always code involved. It's code all the way down, from the web server to the file system drivers, the operating system and so on.
While it's true that static files reduce attack surface, we need to think deeply why is that, and design intelligent dynamic systems that have the security of static ones.
And ultimately it's about input, and how input is treated. It doesn't matter how complex the code you run is: if it accepts no input, you can't attack that system. Of course, with no input you can't even figure out which page to show, so there's always input, even for static sites. But I'm trying to point our attention to what the major distinction is here.
I think they were merely implying that the website codebase doesn't have anything executable. I can't imagine anyone reading HN thinks that serving websites, generally, doesn't require a giant pile of code at many levels. Sure it doesn't eliminate the server-side security concerns of the entire deployment, but probably would from the entire website codebase; many people run static websites in read-only caches. At that point, the security concerns are more in the system/ops/network realm than the site itself.
Sure. Heartbleed for example was a data exposure vulnerability that leaked data from the HTTPS stack, which exposed private keys and plaintext from other inflight requests. Any real world system that handles HTTP requests, at minimum, has to do some reading of config data, some reading of data to serve up, and some writing to log files.
The point is that a static web server does that and only that. A dynamic web server is one that does that and more.
Just to build on this, there are RCEs that involve overflowing headers; Go just had one not that long ago. There's plenty of inputs on a GET request. You still need to do proper security on a static site.
That's because the nginx inputs have a much more homogeneous type than your random django-based system.
I'm not sure there's anything you can learn from that. Some use-cases just lead to more secure software than others, and this is a completely non-actionable piece of information.
As they said, it's attack surface. The static site does one thing; the Django site does more things. This is the use-case difference you're talking about: the more your code does, the more likely it is to have vulnerabilities. And it's actionable: make your code do as few things as possible.
Perhaps we could describe a dynamic website as being a static website + extra (i.e. the dynamic stuff).
As mentioned in the article, every dynamic website will also need some static assets (the .js files, images, etc.)
So, it seems like a dynamic website will always have greater attack surface area than a static one, by definition. It will have all of the static site surface area, plus some more.
----
I wonder if there is room for a language that is not as general as JS, but can still be used to add dynamism to a website. So, you don't have to be stuck in the "it's Turing complete, so we can't be sure it's well-behaved before running it" situation.
Perhaps CSS is kinda turning into that, these days.
Though maybe CSS will become Turing complete too, at this rate (:
A static web server can theoretically be induced to execute code because it still has parsers.
While I do agree a static site has a smaller attack surface, I think it’s more that nginx has way more eyes on it and a much slower development rate than your average custom configuration of WordPress plugins.
Because if you were to build your own static web server, your first version would probably be more susceptible to attack than a stock WordPress installation.
This was my thinking as well. A website running application code will also require maintenance vis a vis updates to mitigate security holes. And that never stops.
You could theoretically never need to “update” a static site. There’s not even really a concept for it. Unless the target changes HTML versions, I guess.
From the perspective of a developer, I think the distinction is pretty clean and lies in the abstraction provided by the web (by which I mean hypermedia served over HTTP/S).
The semantics of that abstraction are that requests come in to a specific path with headers and a body, and then responses are sent back out with headers and a body. The intermediate networking details like TLS are hidden. In fact, more and more things are being hidden from the developer, with most modern frameworks automatically managing authorization sessions and request headers. Within that abstraction, the standard "static doesn't depend on request/state, dynamic does" is pretty clean, but it does rely on the semantics provided by the abstraction.
It's a little bit like TCP being a connection-oriented protocol over a fundamentally packet-based underlying infrastructure. I can use TCP for applications that require stream-like or packet-like data transfer. Then you could argue that "in reality, the distinction doesn't exist because it's built on IP which is packet-like". Sure — but you'd be on the wrong layer of abstraction.
That's not to say the article doesn't make a good point. As developers, we should certainly always keep in mind the underlying statefulness of "stateless websites". It's always good to know your abstractions a few layers deeper than you think you need.
My personal website is an odd mix of static and dynamic; that is, it's mostly static, but the blog portion is dynamic. When you hit a blog URL, it fetches a Markdown file from the disk, converts to HTML, and then jams that HTML into a template which provides the rest of the page, brings in CSS, etc.
It's still fast and efficient; when one of my blog posts was #1 on HN last week, my friend texted me "good luck, hope you have Cloudflare set up". I don't, but the load average on my 2 core, 1GB memory VPS never exceeded 0.15.
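The render-on-request flow is small enough to sketch. The converter here is a deliberately tiny stand-in (headings and paragraphs only); a real site would use an actual Markdown library, and the template string is made up:

```python
import html
from pathlib import Path

# Toy Markdown conversion: headings and paragraphs only. This is a
# stand-in to illustrate the flow, not a real Markdown implementation.
def md_to_html(md: str) -> str:
    out = []
    for block in md.strip().split("\n\n"):
        if block.startswith("# "):
            out.append(f"<h1>{html.escape(block[2:])}</h1>")
        else:
            out.append(f"<p>{html.escape(block)}</p>")
    return "\n".join(out)

# Hypothetical page template with a {content} placeholder.
TEMPLATE = "<html><body>{content}</body></html>"

def render_post(path: Path) -> str:
    """Read a Markdown file from disk and jam the converted HTML
    into the surrounding template, per request."""
    return TEMPLATE.format(content=md_to_html(path.read_text(encoding="utf-8")))
```

Since the whole pipeline is a file read plus some string work, it's easy to see why a 2-core VPS shrugs off a front-page spike.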
Odd today, but you're essentially describing the idiomatic, pre-framework use-case that drove the design of PHP. "It's mostly HTML, but when you get to this line in this one file, run it as code [to e.g. parse some other file from some other format] and embed the result in the output."
Made perfect sense in 2001 for building a static website where you happened to want to have e.g. a comments section on each blog post. And this kind of PHP was cheap enough that even your ISP was usually willing to let you slap some of it up on your /~userdir/ and point the public Internet at it.
I started out hacking up the source of HTML files I designed on Geocities and uploading them to Freewebs (so I could remove the linkback/ads), and it worked really well for a while. Then I wanted dynamic stuff, and PHP was perfect. Eventually I accepted the stigma it had, even though I never really bumped into the ugly parts or even treated it as a normal programming language.
I think after bouncing around with Node and the flavor-of-the-week static site generators, I might be going back to PHP for my personal sites.
Which is insane. If you told me I'd be heading in this direction anywhere between 2 and 15 years ago, I'd think you were lying.
All I really need are template files. I could use iframes but that stinky technology actually deserves its stink.
I think people underestimate how fast computers are now. You can easily withstand Hacker News with a small VPS as long as you have a sane architecture that isn’t too database heavy.
Yeah, if you launch Threads you need whatever it is Facebook does to scale. But for a regular read only site? You don’t need to spend very much at all.
That is an odd mix (without additional detail at least). I’m curious about the rationale for dynamically rendering the Markdown. I can think of a few reasons (dynamic content templated in, reduced build time/complexity come to mind), but maybe it’s something(s) I haven’t considered.
It was basically a pragmatic choice. I figured the following:
- For most of the site, hand-jamming HTML works fine, because I have a small number of pages outside the blog.
- For the blog, I wanted easy authoring, thus Markdown. I figured if I rendered it on each request, I wouldn't need to make a separate tool to regen stuff, I could just build it all into the server; any changes I made to the markdown would be instantly reflected on the site.
I wrote this in Go at a conference in 2011. Hugo didn't exist yet or I probably would have just used that.
edit: I initially wrote this on Plan 9, and it's so old that there's still a mkfile invoking go/8g and go/8l individually to build it. It's 265 lines of code, which does the static content, generates the blog stuff (including creating the blog "archive" page on-demand), and manages multiple domains so I don't have to invoke e.g. apache to use the same host for multiple sites.
As an "outsider" who dabbles in wasm, I never understood why running custom code on the server to generate "dynamic" webpages would be better than a dumb server which just serves files, with the dynamic part running in the browser instead (other than that '90s web browsers sucked for this type of stuff).
E.g. nothing will ever beat a dumb file server with a cdn in front when it comes to simplicity and scalability.
Depends how much you want to avoid showing a spinner during first load _and_ when moving between pages.
And how much skill you have to not mess up basic browser functionality.
As a reference point, GitHub still breaks the back button for me like 40% of the time when I'm navigating their source code (using Chrome on OS X, because I only need to do this on my work laptop). I don't even know how they manage to do this.
In my experience, sites that generate HTML server-side tend to feel faster and more reliable than anything that depends on rendering client-side. Loading a booru with 50+ images per page with a cold cache in my browser, always feels quicker and more pleasant than loading text-only GitHub pages that should already be cached everywhere and haven't been modified for days.
You might want to store user-submitted content in a database, for example. Or you might want to do authentication. You might want to provide full-text search in a multi-gigabyte dataset. You might want to provide an interface to something which can't be accessed from a web browser. You might want to validate user-provided input.
All of those require custom code running on a server. So now you have to take whatever data you have on the server, translate it into a well-defined custom transfer protocol, send it to the client, translate it back, and finally turn it into html. And the same for the other direction, but now you suddenly have to do input validation on both the client and the server to prevent anyone creating a custom client from doing anything malicious.
Or you can just turn it directly into html on the server and be done with it. It's a lot less work, and provides basically the same experience for most applications.
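As a sketch of that last option, rendering straight to HTML on the server collapses the whole transfer-protocol dance into one step; the function and data below are made up for illustration, and escaping once at render time also covers the input-validation concern for output:

```python
import html

def render_posts(titles):
    # Escape user-provided input once, at render time, on the server.
    items = "".join(f"<li>{html.escape(t)}</li>" for t in titles)
    return f"<ul>{items}</ul>"

# A malicious title arrives inert in the final HTML:
print(render_posts(["Hello", "<script>alert(1)</script>"]))
```

No client-side deserialization, no second validation layer: the browser only ever sees finished markup.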
Yeah, depends on the type of app / webpage of course. E.g. a blog doesn't need any of that, but a webshop does. But there's also a middleground where most of the code runs in the browser, while only some small parts need to run on the server (like login/auth, or a "cloud drive" storage). IMHO this type of web application is currently the most interesting.
PS: how complicated that is all depends on the library ecosystem IMHO.
Also users haven’t had “a browser” for a decade. Between personal and work I have four, technically five, and then some apps on my Apple TV (YouTube in particular). User accounts let me share state between them. There’s no protocol yet that lets you do that on the client, though Apple might be working that way.
If you haven’t used the device integration features of OS X and iOS you are missing out. Cut and paste is very, very useful.
Like with anything, that might be a good strategy in some situations, but not in all. For example:
* If you have some secrets that need to stay hidden (e.g. database passwords, API keys, encryption keys, etc) then that needs to be processed by code that lives on your servers. If all your code runs on the client, then there is always the chance for an attacker to discover those secrets.
* It is often easier to present a smaller, limited API to the client to reduce the surface area available to attacks. For example, if you let the client connect directly to your database, you need to make sure that that database is correctly secured, that the correct permissions are there, and that nothing can go wrong. But if you just give them a list of books, or films, or whatever your app does, then it will be harder for them to break through that defence. (And if they do manage that, you should still have the database set up as securely as possible to provide an additional layer of defence.)
* A site often feels more responsive if the initial load is served as one single chunk of ready-to-use data (as opposed to serving the application, showing a loading indicator, making the necessary data fetches, then showing the data). Even if they take the same time the first option often intuitively feels quicker to the user.
* Leading on from that, it can often be quicker to load things up front, just because the server that's responding to the user is probably sitting next to your database and any other servers it needs to talk to. These network calls are also much more reliable in that sort of context. If you push all of the processing to the user, you're going to have to handle the slower and more unreliable calls.
* The server will probably be a much more consistent platform than user browsers. Browsers are a lot better these days, but there are still lots of slight differences that you'll need to be aware of. If you have control of a server, though, you can specify the exact versions of every tool, runtime, etc that you need, and be able to update things more deterministically.
Fwiw, these aren't all true all of the time, and there will be exceptions. I work primarily on frontend applications, and there's a lot of value in a well-designed application that does the majority of work, if not all, in the browser. But this is typically only true for fairly complex web apps, or projects where some amount of rendering has to happen in the browser whatever happens.
Dynamic pages do have some overhead, as discussed in the article. But it is still vastly superior to running the site in the browser.
Because the generation is done once, and it is darn quick. After that, all of the numerous advantages of a static page still apply from the user's perspective. The resource use is many orders of magnitude less and the page is much more responsive. The user experience is vastly superior, not that we've cared about that in the last decade.
I've had sites where I just wrote in Markdown and rsynced it to a webserver. All pages were rendered dynamically. The advantage was that I set it up once and then literally never had to care about the website again; I just wrote in my beloved Markdown.
It was great. Until the webhost took away PHP anyway.
GH Actions (or any other CI service) can do the same workflow though. Commit and push markdown, a GH Action starts server-side, converts the markdown to HTML with a static site generator of choice, and uploads it to the webhoster.
If you use Jekyll as the static site generator and GH Pages for hosting, it even works out of the box (no GH Actions tinkering needed, just flip a switch in the GH project settings).
But arguably, a local static site generator workflow works just as well (convert to html locally and upload those files to web hoster instead).
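A minimal sketch of such a workflow file; the generator and deploy target here are placeholders, not prescriptions:

```yaml
# .github/workflows/publish.yml (sketch only; the build and deploy
# commands depend on your generator, host, and how the runner is set up)
name: publish
on:
  push:
    branches: [main]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build the site
        run: hugo --minify   # or jekyll build, eleventy, etc.
      - name: Upload to web host
        run: rsync -az public/ deploy@example.com:/var/www/site/
```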
Because you can't guard your competitive advantage as easily when you need to send your entire codebase to the client, and it turns out companies like that stuff.
What's the "competitive advantage" of a webpage though? It's usually just some code snippets copied from StackOverflow and a database duct-taped together, right? ;)
With modern frameworks it is not even that. What you get is a compiled, minimized page where all of the technical interesting things are lost after the build step.
The only thing you can make use of is the design and the actual content.
So taking a fairly simple forum that I run as an example, on every page load you'd pull down 100MB or so of database, and then query that down to just the one chunk of a few hundred bytes to display a post, in the browser?
You wouldn’t pull a 100mb database to query in the browser; that’s not how static sites work. Instead, you’d rebuild the relevant pages whenever the content in the DB changes.
Now, for a forum, I’m not sure this is a good idea, unless you have a super fast optimised build process. Even then, the delay in build is not what people expect from a forum. But the OP is not suggesting pulling a 100mb db down into the browser.
Forums didn't have multiple HTML pages in the early 90s; they stuck everything into a database like they do now, and pulled it out with PHP3, or even Perl (the one I wrote started as PHP/FI, was then converted to PHP3, and is now entirely lost).
Big fan of NextJS and its static page export for that reason. I build and deploy just static .html/.js/.css to whatever CDN or static webserver I choose. Since all individual pages/routes are pre-rendered during build, the first load is blazing fast and indexable by search engines. The way NextJS chunks the .js code and pre-loads it also contributes to the fast-load experience. For rich functionality you can interface with any REST API you like, and be as dynamic as you want. Plus, the MDX plugin makes it easy to create completely static areas/content-focused sites within the same project.
Unfortunately, I get the feeling that with the release of the v13 app router, they don't give the static export feature enough love. Features from the pages router were dropped for static export (shallow routing, static rewrites/redirects). I fear they are dropping static export altogether in the future, as you don't need their commercial Vercel offering at all when using static export.
This is incredibly complex for many static site use cases (i.e. blogs, documentation, marketing pages). Most don’t need JavaScript let alone React/JSX/Middleware/SSR/et al. I can’t imagine the long-term maintenance of something with so many moving parts and npm dependencies. Not saying there’s not a use case, but we should KISS before jumping to projects with a 1.8 GiB Git checkout & 828,128 LoC just to make a landing page.
It is definitely only geared toward creating a webapp, not a website. You can have static content (with MDX) for certain parts of your app, and restrict e.g. dynamic JavaScript stuff to your user's account/login section. But everything is within a single codebase.
I personally don't care about the size of my node_modules folder; storage is cheap. Plus, with a yarn.lock or package-lock.json you don't have to care about npm dependencies: if you don't upgrade any, your simple `yarn install && next build && next export` will always produce the same result for years to come. The actual export output of NextJS is amongst the smallest you can get, and you also get a nice summary of how big the initial load is for every route.
Everything is chunked per route; all JS chunks and assets contain their hash in the filename. That is why you can serve all of /_next/static with `Cache-Control: immutable`. This means if you don't update a section of your app, there is no need for the client to even send GET requests with `If-Modified-Since`: no additional roundtrip for any chunks. Preloading can be configured; by default it's on-link-hover, that is, if your user hovers over a hyperlink/route, NextJS will already fetch the required chunks for that route before the click happens.
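Serving those hashed assets with a long-lived cache can be sketched in nginx (paths assume Next's default output layout; this is a fragment, not a full config):

```nginx
# Hashed chunk filenames never change their content, so they can be
# cached forever; only the un-hashed HTML entry points revalidate.
location /_next/static/ {
    add_header Cache-Control "public, max-age=31536000, immutable";
}
```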
Yes, NextJS is not competing with Hugo or other static site generators. It is aimed at tech-savvy people, and only for webapps that will require programming JS.
Agreed, but I think most people spend most of their time working on things more complicated than a blog. It seems there's a lot of excitement for purely static sites, which is fine, but I would call it hype at this point, and caution someone jumping into a new project against using a static site generator where it doesn't fit.
Actually it's the contrary: they started caring more, though all scenarios might not be covered yet.
See this example from Dan Abramov: https://gist.github.com/gaearon/9d6b8eddc7f5e647a054d7b33343...
It's not related to Vercel business model but to the fact that this use case is less common (in proportion) when using Next.js.
I am not sure what you mean by static rewrites, isn't that covered by middlewares?
The link you posted from gaearon uses the (now deprecated) pages router, directly supporting my argument.
I did a migration from pages router to app router for my static-export project. Btw both are still present in v13, you can pick whichever you want, but long-term the pages router will be gone.
With static redirects/rewrites I mean the feature in the pages router where you put the "redirect"/"rewrite" statement in your next.config.js. These are respected and work in a static export environment. If you try to perform a static export using the app router, it will error and tell you that you have to remove "redirect"/"rewrite" from next.config.js; it is no longer supported.
Yes, you could use a middleware for redirects, but that is exactly my point: middlewares do not work with static exports. They are middlewares for the server-side Next server, which is not present in a static export (and using them requires running your own server again, or using Vercel's offering).
The same holds true for shallow routing for static exports, it is a feature still present in pages router, but not in app router. There is a GH issue of a lot of users complaining about this, but no feedback from the maintainers whether this is planned to bring back: https://github.com/vercel/next.js/discussions/48110
Indeed it uses the pages router; my point was more about the timing, because it was posted recently, while Lee Rob was tweeting about next export being re-enabled (in 13.4 I think).
For shallow routing you are probably right (I don't use it much so I can't tell, though I do have a few criticisms regarding the app router too), but regarding rewrites: a URL rewrite has to happen on a server, so I still don't get your point. What was removed in v13 is the shortcut via next.config.js, not the ability to do rewrites/redirects, which can be done easily using middlewares or whatever your host provides. It's also recommended to set up i18n yourself in a similar fashion instead of using the built-in config, something that I've been advocating for a long time (https://github.com/vercel/next.js/discussions/17631#discussi...).
gaearon's example, or the comments below it, shows how to handle dynamic route rewrites via nginx, for instance.
Rewrites and redirects have not needed to be executed server-side ever since the HTML5 History API. Of course you can redirect client-side, as in NextJS: router.push(). And the fact that redirects/rewrites work for the pages router also contradicts that point.
But yes, you will need to add support within the webserver, like nginx. A simple modification to `try_files` will do, and it is probably present in most deployments anyway (because you will want URL routes like example.com/news instead of example.com/news.html). I think they just didn't consider that feature in the design of the app router. And I also don't think it is technically impossible to apply client-side redirects/rewrites within the app router.
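A sketch of that `try_files` change (nginx, assuming the default static-export output where routes map to `.html` files):

```nginx
# Resolve an extensionless route like /news to the exported news.html,
# then to a directory index, and finally to a 404.
location / {
    try_files $uri $uri.html $uri/index.html =404;
}
```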
I used m4 some time in the 90s to generate my site, before I'd heard the term "static site generator".
Then I switched to PHP, and then Python, and now back to static using Jekyll.
It's just so much better, when it's possible to use. Aside from SSL certs, everything can be fixed on your own timeline. Compare that to PHP breaking something on upgrade, where you need to fix it immediately, and the site is down until you're done.
Static site, even if the generator breaks, fails... well... static. If you don't need to post anything new, then you don't have a problem.
And if the server explodes, you can just ask a friend to host some files. Not "hey, you run PHP version X with settings Y, right? And postgres?".
(Some friends maybe even say "no, I don't want PHP on my machine")
I still use m4 for such tasks. I feel like it's only 1 step up from having a sed "s/VERSION/1.2.3/g" in terms of complexity and as long as everything can be done with external shell commands, you can avoid installing python, etc.
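In that spirit, a toy sed-based "generator" really is just one step up from a single substitution; the file names here are illustrative:

```shell
# Toy static "generator": a template plus sed substitutions,
# no Python or node_modules required.
cat > template.html <<'EOF'
<h1>TITLE</h1>
<p>Version VERSION</p>
EOF

sed -e 's/TITLE/My Site/' -e 's/VERSION/1.2.3/g' template.html > index.html
cat index.html
```

Swap sed for m4 when you need includes and macros rather than plain substitution.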
I've been exploring an architectural pattern for a few years now which I think gives you the best of both worlds: it lets you run dynamic server-side code, but in a way that's both extremely inexpensive to scale up and that is self-healing if anything breaks.
The key idea is that you deploy a full read-only copy of your site's data as an asset bundled into your application.
This only works for sites that don't constantly have updates, since you need to re-deploy the entire site when anything changes (just like a fully static site).
The benefits: you can deploy to inexpensive dynamic scale-to-zero hosting (like Vercel), you can handle any amount of traffic by running multiple copies of your app, and if the app crashes your host can just restart it automatically.
I am not sure how this differs from a static site generator that also has some backend functionality? E.g. I've seen static sites with server-side search etc, comments systems etc which submit each post/comment as a separate flat file and then regenerate static pages automatically from those etc.
Is it more that your approach stores stuff in sqlite and builds from that, rather than markdown files? That seems to be the only meaningful difference I can see compared to normal static sites with backend/server-side functionality?
The key idea here is that your server-side stuff is completely read-only - so comment systems aren't supported - because the server-side data is treated as a deployment asset.
Have you used platforms like Vercel? Their one big limitation is that you can't use persistent storage with them - if you want to talk to a database of some sort you need to add an extra database vendor.
The baked data pattern works around that by bundling read-only data (usually a SQLite file) with your deployment.
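A minimal sketch of that idea in Python (the file name and schema are hypothetical): content is written to a SQLite file at build time, then opened read-only at serve time, so the running app can never mutate state and can be restarted or scaled out freely.

```python
import os
import sqlite3

# --- Build/deploy step: bake the content into a SQLite file that
# ships with the application as a read-only asset.
if os.path.exists("content.db"):
    os.remove("content.db")
bake = sqlite3.connect("content.db")
bake.execute("CREATE TABLE posts (slug TEXT PRIMARY KEY, title TEXT)")
bake.execute("INSERT INTO posts VALUES ('hello', 'Hello, world')")
bake.commit()
bake.close()

# --- Serve step: open the baked file read-only; any write attempt
# from the app would raise sqlite3.OperationalError.
db = sqlite3.connect("file:content.db?mode=ro", uri=True)
title = db.execute(
    "SELECT title FROM posts WHERE slug = 'hello'"
).fetchone()[0]
print(title)
```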
Oh ok so you still have a "dynamic" site that is database driven, but that database is read-only and stored in a pre-generated sqlite file that is deployed with the rest of the site assets?
I don't want to piss on your parade here - perhaps I just don't get this pattern - but isn't this all of the downsides of a dynamic site but without any of the benefits? You are still running slow server side stuff that relies on doing per-request database stuff and all the complexity and security headaches that go along with it (upgrading PHP or Wordpress or whatever), but with none of the benefits of server side functionality and databases (i.e. it is all read-only).
Tangentially, I've been exploring something kind of fascinating...
So, in compiled binary executables it was not uncommon to include encoded binary resources, like files or images. Not tons of them, because they would explode the size of the executable.
So here's the interesting part - we decided to build executables in a way that data in those executables could not be changed (makes sense since it's compiled and the compiled code runs on the machine, and for security reasons).
That said, think about what a container (e.g. docker) is. A running container is kind of like a packaged executable, except it also has a filesystem.
So if you pack data (which can be modified at runtime!) into a container, it's a similar concept to an embedded resource in an executable, except it can change.
Now, in a container, any changes made to data at runtime inside the container won't persist unless the container is given persistent storage.
What I've been wondering lately is why we didn't invent some kind of single "executable plus volatile data space embedded within the executable", so that programs and data (say, a database) could couple together into a single file.
Just a musing but tangentially related to your "baked data" - basically, embedded resources in executables just embeds the encoded data right into an executable.
For scripting languages, of course, we can just make a script file that contains a variable with the encoded data as base64 directly.
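For example, a toy sketch in Python with made-up data, where the "resource" lives inside the script itself:

```python
import base64

# The embedded resource: data baked into the source file as base64.
EMBEDDED = "aGVsbG8sIHdvcmxk"

print(base64.b64decode(EMBEDDED).decode())  # → hello, world
```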
For the latter 2, it's only good for relatively small static data of course, but it would be interesting to build tech that somehow lifted that constraint of executables.
> That said, think about what a container (e.g. docker) is. A running container is kind of like a packaged executable, except it also has a filesystem.
> So if you pack data (which can be modified at runtime!) into a container, it's a similar concept to an embedded resource in an executable, except it can change.
> Now, in a container, any changes made to data at runtime inside the container won't persist unless the container is given persistent storage.
> What I've been wondering lately is why we didn't invent some kind of single "executable plus volatile data space embedded within the executable", so that programs and data (say, a database) could couple together into a single file.
Things like this have existed before, and I presume still exist. For instance, before Mac OS X, applications for the Macintosh were (vaguely) similar to "application bundles". Leaving aside the technical differences of implementation, classic Macintosh applications had a "resource" store that included code and 'static' data (pictures, text, whatever) but could be modified by the running program. Changes could be persisted without using any other files. In practice, changes were often persisted elsewhere, because if a fault occurred during modification the executable could be corrupted, and because this made things like resetting to defaults easier (by deleting or moving an external preferences file, for example).
I had not; thanks for sharing, this is really interesting! I see "Self-Modifying PKZIP Object Store" and this is very much what I had in mind. Neat, will have a look!
I’ve been following this pattern and now it has a name! I host my projects page like this (https://usmanity.com/projects). I didn’t want to edit an html file each time I wanted to add new projects to the list or change the details on an existing one so I used Notion and I bake the data before committing to GitHub.
Back in the day, Drupal had a module called “Boost” that sort of did this. Once you activated it, it baked out every page of the site to HTML in a directory, and altered .htaccess to send all traffic there. When you updated content, it rebaked everything.
I think one big missing part still with static sites is how you host the CMS to edit it.
Correct me if I'm wrong but Decap CMS (previously Netlify CMS) runs in the browser and makes reads/edits via GitHub which can then trigger rebuilds and deploys, but it still needs a small server/proxy I think because CORS stops your browser communicating directly with the GitHub API. Netlify hosts a GitHub backend that proxies requests for you but now you're tied to Netlify and their pricing plan changes.
Is there a simple solution here with minimal configuration? I guess you could use a browser extension to selectively relax CORS but that's not ideal.
It feels like a Git-based static site generator with a CMS that has Markdown editing and live previews that runs in the browser with minimal hosting/server restrictions would work for a huge number of small websites and blogs.
What I would like is a good way to create structured data admin systems that can include HTML content, and then have my static generator access that to build the pages via a JSON feed or whatever. For example, having records for individual products including a body description which can be in HTML format. This means that other people can update the products, but we still have a static website. I thought Airtable would be suitable but surprisingly it doesn't support HTML fields very well.
Not sure if this helps but Decap-CMS lets you define your own records with fields, like a shopping product record with a price field and Markdown/HTML description, where the content is saved to Git in simple JSON/YAML data files. You can then edit the records via the browser web interface, which commits changes to Git and your static site generator would build the site from the data files.
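As a hedged sketch, the relevant fragment of a Decap `config.yml` for such a product collection might look like this (collection and field names are illustrative):

```yaml
# Fragment of config.yml; backend/media settings omitted.
collections:
  - name: products
    label: Products
    folder: data/products
    create: true
    format: json
    fields:
      - { name: title, label: Title, widget: string }
      - { name: price, label: Price, widget: number }
      - { name: description, label: Description, widget: markdown }
```

Each saved record becomes a JSON file in Git, which the static site generator reads at build time.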
> I think one big missing part still with static sites is how you host the CMS to edit it.
This problem is completely solved by Surreal CMS [1]. Make your website any way you want, connect Surreal by FTP, and let the user/client edit what you permit them to edit. At $12 per month, the cost is dirt cheap for never having to worry about it, and for giving non-tech users a total WYSIWYG editor.
>I don't understand why people complain about the lack of tooling, when practically perfect tooling exists.
Tooling can't be called perfect if you have no control over it. Having your workflow depend on software that you don't own and can't control is anything but perfect.
My experience is limited, but I once saw the "Front Matter" extension for VS Code, which seemed powerful enough to serve as a CMS and even edit your site's templates.
Currently this extension works in a locally installed version of VS Code on your laptop, but doesn't work on GitHub Codespaces.
If we could somehow get that to work (within the free tier of GitHub Codespaces, with reasonable limits on usage time), I think it would be a winner: a completely online, version-controlled setup with no dependency on a dev environment installed on your own machine, while serving static sites off, say, just S3, and still getting a full CMS experience.
The very important thing about static sites is that you can much more easily "fire and forget" a static site: put it up on an S3 bucket site or whatever and you are pretty much in the clear about worrying about it.
Put up a "fire and forget" site using any kind of PHP or (perish the thought) WordPress (assuming self-hosting), and if you don't check that site a few times a year, it will be all Russian porn ads before you know it.
And if you don't get hacked, you risk that your site is simply offline due to something breaking down (maybe the DB needs to be restarted, maybe the webhost changed PHP version, etc). I've got a couple of static sites and it's so nice to know they are always up and never need to be fixed. My dynamic sites, on the other hand, need alerts to check if they go down.
Outdated PHP is a big thing. So many times I've pulled a PHP file from an old server and it fails to run on the new one because some methods were deprecated back in twenty-dickety-two. Which isn't a criticism of the language, just how things go. You either have to rewrite it for a modern version or run an unsupported version of PHP.
The vast majority of the internet is files of text and images that don’t need to be updated that often.
Further, rendering of text and images is handled by client-configured applications that change, with the content fitting into client-specified configurations.
Given that, if you want your content to be fast to access and simple to maintain over the longest period then:
1. Provide the smallest possible amount of bytes to the client that completes the communication task
2. Format the data in a way that will remain supported by the most clients over the longest term
It’s fun to play with different setups for a variety of use cases and server side rendering, but history shows that the most reliable design pattern prioritizes clients being able to consume and render data as quickly and clearly as possible.
I’ve never found a simpler way to do that than a file server hosting static .html files with text, svgs and pngs.
Seems obvious enough: a static website runs no server-side code to generate content, because the content is already complete. Perhaps more debatable, but I would also say no client-side code should be used. Just HTML, CSS, and static media files.
FWIW for my personal site, I run my own web server, which is incapable of running server-side code. It just copies files to sockets, nothing else.
I think the key point that the original article was making was that there is server-side generation going on: some server still needs to load the file and turn it into an HTTP response, with all the correct headers, metadata, etc, and with the correct failures when the file can't be found. This server-side code needs to be implemented (and configured, etc) somewhere.
That said, as the second article points out, static vs dynamic can still be a useful abstraction layer, as static servers like nginx and Apache are so solid that you can usually just treat them as part of the basic HTTP infrastructure.
At my agency we probably build two custom sites a month. We used to do this with PHP WordPress templates, then we moved to Nuxt server side rendered on Node (Heroku), but now we are all static site Nuxt on Netlify and it’s SO MUCH better. Headless WordPress as backend. As long as you/client can deal with the build times it’s fantastic from a speed, SEO and uptime POV. Also just way cheaper to host it. Incremental Static Generation seems like the ideal solution, looking forward to that with Nuxt 3.
You are actually running a Wordpress site with a reverse proxy cache with forever TTLs. The fact that the cache is implemented as HTML on a file system is incidental. You could deliver the same speed, SEO, and uptime with any such technology, such as putting a regular Wordpress site behind Cloudflare or Fastly with forever TTLs and running a cache warming script.
Recently, while talking to a friend who hosts her website on Wix, I noticed that she was using it as a portfolio site, and I asked what Wix-specific features she was using. Her answer was "domain management", and this got me thinking: if the steps were pretty straightforward, could someone who is not a web dev or technical get a website up on GitHub? So I started putting together a template that looks very similar to her website and also includes instructions on how to host it on GitHub Pages for $0/year, versus the roughly $100/year she is currently paying Wix. The template is here: https://github.com/usmanity/rose
There’s no harm at all in putting effort into building a tool which might suit the former, on top of the latter. That’s more or less how all software for non-technical people is built: simplifying a more complex technical solution to make it accessible for the intended audience.
My partner (who is very non-technical) went through registering a domain, making a Github account, installing VS Code, slapping some HTML together by hand, and putting most of these things together before she even mentioned to me that she wants a website.
Of course it took us a bit more work to polish it, but 90% of the work was hers (I helped with things like setting up Hugo, Netlify, writing some template code, fixing the CSS on mobile, etc).
The biggest pain point of making a static website is just wrangling the generator. We wanted an image gallery with a couple selected works showcased on the front page, and the combination of various Hugo features to achieve that exact effect turned out to be non-obvious. There was probably a way to make it simpler, but we haven't found the time to find it.
I think most non-technical people can easily get the first 90% of the work done by themselves with the currently available tools, it's just the other 90% where you find yourself struggling ;)
I'm thinking of taking what should be a simple, fast business website, where the only dynamic bit is a contact form, off Squarespace, due to the ridiculous need for the current site to run about 40 JavaScript scripts just to load.
I can definitely hand-generate the small amount of content in it and could work out the contact form easily enough, but having no experience with websites, I don't know where to start with things like hosting, etc.
In practice most static websites aren't static because they include a bunch of stuff from elsewhere. But if you stay away from that pitfall and focus for a bit it is perfectly possible to have a really static website, my blog has been based on that for a couple of years now and I've never looked back after making that decision. It doesn't even have an analytics tracker, no outside fonts or anything that could slow it down. The end result is lightning fast performance, minimal memory footprint for the pages and rock solid because serving files over the web is a 'solved problem'. As an extra bonus by not including a bunch of eye candy and other unnecessary items I don't waste resources on the side of the viewer(s), and there is next to no chance of this setup developing a security hole in a framework.
Highly recommended if you want a low maintenance rock solid transmit only website.
Besides my blog, Pianojacq.com is set up much along the same lines, but there is a major difference: it interacts heavily with peripherals (MIDI) and needs a database (locally, not remotely). The end result is a highly interactive but still 100% static website.
For convenience I prefer WordPress. If I'm maintaining a site for someone else, I am more likely to remember how to use it in 5 years if it is WordPress rather than some static site generating script. That is just me though.
For a personal blog the idea of going real old school and editing the files (maybe a simple bash script to bulk update menus etc.) is appealing though. Since my metric is “can I come back in a few years and figure out what I did”
I love WordPress but wish it had Pods built in and a visual builder more like Elementor, where you can design everything including templates and content types.
I think the worst part of dynamic (JS) websites is the air gap between client and server. It's so painfully obvious compared to a static site. The data flow feels like client->transform->api->transform->server->db->server->transform->api->transform->client; the song and dance just gets old after a while.
I stupidly started my API interface as just a million unstructured JSON endpoints, each one with its own requirements / json structure / cache strategy / etc. The amount of work that essentially feels like boilerplate is mind-numbing.
I haven't looked around for better alternatives very much, but I would be greatly interested in technology that makes the client/server boundary feel nonexistent, without needing to totally upend your current stack with a brand new framework like Next or something. Just a drop-in library that makes the server feel like another clientside API. I have no idea how that would be done without excessive coupling of the client/server though.
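For what it's worth, the core of that wish can be sketched in a few lines. Everything below is hypothetical (a JSON round trip stands in for real HTTP transport): server functions are registered by name, and the client gets a proxy object so the call site reads like a local API:

```python
# Sketch of a "server as local API" shim. A JSON round trip simulates
# the serialization boundary that real HTTP transport would impose.
import json

_handlers = {}

def rpc(fn):
    """Register a server-side function under its own name."""
    _handlers[fn.__name__] = fn
    return fn

class ServerProxy:
    """Client-side object: attribute access becomes a remote call."""
    def __getattr__(self, name):
        def call(*args, **kwargs):
            # Serialize exactly as an HTTP client would...
            request = json.dumps({"method": name, "args": args, "kwargs": kwargs})
            # ...then dispatch as the server would on the other side.
            req = json.loads(request)
            result = _handlers[req["method"]](*req["args"], **req["kwargs"])
            return json.loads(json.dumps(result))
        return call

@rpc
def add(a, b):
    return a + b

server = ServerProxy()
print(server.add(2, 3))  # looks like a local call, crosses a JSON boundary
```

The coupling problem the commenter worries about shows up immediately: the proxy only works because both sides agree on names and JSON-serializable types, which is exactly the cooperation the replies below point out.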
There have been lots of attempts to blur client and server. The latest Next is one, Meteor is several years old now and does something similar iirc, and a few other less popular ones. I don't think a drop in lib is possible by its nature tho, client and server have to cooperate.
I've found random endpoints works reasonably well tho. You can change the server response and it only affects one client page. Easy to maintain.
If you want more structure, you can also try GraphQL.
This is my approach too for my photography portfolio. It's undoubtedly more work for a single edit/update/addition, but over time the simplicity means I'm not doing maintenance or migrating content between different tools, so it's less work in the long run.
Before CloudFront I had a setup that used custom HTTP server code to capture requests directly as S3 objects, then wait on an SQS queue for the response which would indicate the location of an S3 object containing the entire response verbatim. (Yes, each server could handle exactly one request at a time, call it a learning experience.) This was the closest I ever felt I got to a “static” website.
The goal was to eliminate redundant computation, and it did that, but there are much better approaches these days. What I ended up with was just bespoke, persistent caching.
I agree with the author, and I think it mostly comes down to the original article author being young and not really having much perspective on the reliability, stability and backwards-compatibility of HTML/CSS/HTTP and traditional file servers over time. The fact that you can pick up an HTML file from 1993, trivially drop it on any type of web host, and have it be accessible on any browser on any platform more or less exactly as it looked then is not to be underestimated. Compiling and running any source code from 1993 that hasn't seen maintenance since then is going to range from slightly annoying to a monumental effort requiring essentially a full rewrite. This is exemplified by this comment:
> There have now been a significant number of attempts to "platformitize" dynamic computation — "serverless" programming like AWS Lambda, WASM runtimes like those provided by CloudFlare and Fastly, full container deployment via Fly.io and cloud providers. None of these have an API nearly as stable as the filesystem API used to host static files, but it's a matter of time before APIs end up well-established.
Containers and WASM are certainly interesting vectors of standardization, but even if everything goes fully to plan for the most bullish proponents, you still have a much more complex environment where incentives to break backwards compatibility make it exceedingly unlikely that you'll be able to just grab a container and a WASM binary and drop them into any commodity host 30 years from now. Of course HTML/CSS rendering could break too, but the incentives there are skewed more in favor of backwards compatibility. Also, because it's just a simple, human-readable file format, slight breakage doesn't render the archive useless. Heck, even the destruction of the internet and all servers doesn't render the file useless: you just need to be able to decode ASCII/UTF-8 to extract value.
I think a better thesis for the original author would have been that static site generators are not necessarily simpler than dynamic sites generated at request time. I agree with that wholeheartedly, but the beauty of the SSG is that if at any time I want to stop maintaining the code, I have a ready-made servable version of the site that is trivially archivable in perpetuity. Standardized data formats expressed in simple files will always be more durable than code.
If you're looking for a very simple shell script that generates nice and easily extensible static pages, you can check out mine:
https://github.com/henriqueleng/shite
I've worked a lot with both SSR and SSG, and I think SSG is the correct choice for most sites. There may be reasons to use SSR, such as when it's the only alternative to problematic client-side rendering, or in some cases for security.
I'm skeptical that cheap and easy is inherent to static sites. I think a dynamic setup can have both of those qualities, but there isn't yet CMS/blog software that hits the sweet spot: self-contained and simple to set up, manage, and host.
Dynamic sites are a strict superset of static sites. A static website will always be easier to host, and it will always have a smaller attack surface simply because less code is needed - especially custom one-off code.
There is no custom executable to run, there is no need for authentication, there is no need to write files to disk, there is no need to parse user-provided content, there is no need to access a database, there are no custom libraries to update... A static website can literally be "upload and forget", and that simply isn't the case for dynamic sites.
Write your own. You know what subset of features you care about, only implement what you need how you need it, and don't worry about making it extensible for others. You'll never need to read any documentation or worry about breaking changes when you update it, and you'll know exactly what it is doing.
I wrote my blog's static site generator 10 years ago and I'm still using it. If I want something it doesn't do, I change it. It's still a single-file perl script and it's easier (for me) than any other option.
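As an illustration of how small such a generator can be (this is a Python sketch with invented conventions, not the commenter's Perl script): treat the first line of each text file as the title, blank lines as paragraph breaks, and pour the result into one template:

```python
# Minimal single-file static site generator, sketched in Python.
# The layout, file extension, and template are all invented.
from pathlib import Path
import html

TEMPLATE = ("<html><head><title>{title}</title></head>"
            "<body><h1>{title}</h1>{body}</body></html>")

def build(src: str, out: str) -> list:
    """Turn every .txt post in src into an .html page in out."""
    outdir = Path(out)
    outdir.mkdir(parents=True, exist_ok=True)
    pages = []
    for post in sorted(Path(src).glob("*.txt")):
        lines = post.read_text().splitlines()
        title, body = lines[0], "\n".join(lines[1:])
        # Blank lines separate paragraphs; escape everything else.
        paragraphs = "".join(
            f"<p>{html.escape(p)}</p>" for p in body.split("\n\n") if p.strip()
        )
        page = outdir / (post.stem + ".html")
        page.write_text(TEMPLATE.format(title=html.escape(title), body=paragraphs))
        pages.append(page.name)
    return pages
```

The appeal is exactly what the comment describes: there is nothing here you could fail to understand in five years, and any missing feature is a three-line change rather than a plugin search.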
Indeed. These days we have 24x7 net connections on both our mobiles and our home LAN/desktop/whatever, but you still can't get a single, 1-click, drop-dead simple program that lets you both author and serve web pages and images right from your own device. Thanks largely to dynamic IPs, NAT, firewalls, ISP upstream throttling and developer apathy (since their livelihood may depend on the complexity of all this), Joe wanting to publish his thoughts or photos still either has to sell his soul to FB/Insta/whatever, fork over significant cash for the likes of WordPress/Squarespace, or spend days teaching himself the nitty-gritty of buying a domain, fiddling with DNS, setting up dynDNS, connecting all this to GitHub Pages or whatever, and fronting the whole thing with Cloudflare.
Exposing your home devices to the internet is a really bad idea, because it exposes you to doxxing, DDoSing, whatever security issues your 5-year-old smartphone may have, and a plethora of other risks.
People used to do this in the 90s and early 2000s, but it turned out to be a hilariously bad idea security-wise. Even the enthusiasts these days use a dedicated server somewhere in a datacenter. Services like Github Pages and Cloudflare exist for a reason, and it isn't because anyone is trying to gatekeep web hosting.
Some of us have been doing this since the 90's and don't see an end in sight. The issues are not bad compared to your average customer service call. I prefer knowing what is going on with the technology I manage. Most websites are way overcomplicated on the backend. It's very easy to self-host.
Other than DDoS, all the other issues you mentioned apply equally to any internet-facing app like WhatsApp etc. A static file server can be hardened to the point where only DDoS remains a significant threat, and that hardly applies to the average Joe. If my little site of a few boring pages and pics gets DDoS'ed, then I can move to WordPress, Cloudflare or whatever.
There's also a nice middle ground: you can just proxy your site through Cloudflare. This masks your IP and protects against DDoS, the only caveat being that you need to move your DNS to Cloudflare.
My usual model is to configure the servers so everything is hosted locally and rsynced anywhere else. I'm going through this exercise right now: I start by getting everything working locally, and then I can publish anywhere.
I love having a completely independent working local copy. My environment is perfect for my needs or I change it.
Basically my deployment plans and disaster recovery are indistinguishable.
This isn't the only reason. People who are trying to get attention will flock to where attention is directed. For better or worse, centralized platforms are a good place to do that. It's the same reason protestors, charity drives, people handing out free tickets to a new concert venue, do this in downtown parks and public squares, not their front yards. Popular photographers hold shows in galleries, not their own living rooms.
I built a little micro CMS backend for myself, which I'm working on open sourcing, that is pretty much this. It parses markdown files with front matter into HTML on startup and stores them in an in-memory SQLite database. So you have no real content rendering overhead, and you get little extra bonuses like full text search with FTS4 by default. I use it with SvelteKit and it's pretty cool; it strikes a nice balance and is hilariously fast.
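A minimal sketch of that load-into-FTS-at-startup idea (the front matter format and table schema here are assumptions, and a real version would run the body through an actual Markdown renderer rather than storing it raw):

```python
# Sketch: load documents with "key: value" front matter into an
# in-memory SQLite FTS4 table at startup, then serve full-text search
# from memory. Front matter format and schema are invented.
import sqlite3

def parse(doc: str):
    """Split '---' front matter from the body; return (meta, body)."""
    meta, body = {}, doc
    if doc.startswith("---\n"):
        head, _, body = doc[4:].partition("\n---\n")
        for line in head.splitlines():
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta, body.strip()

def load(docs):
    """Build the in-memory search index from raw document strings."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE VIRTUAL TABLE posts USING fts4(title, body)")
    for doc in docs:
        meta, body = parse(doc)
        db.execute("INSERT INTO posts VALUES (?, ?)", (meta.get("title", ""), body))
    return db

db = load(["---\ntitle: Static sites\n---\nServing plain files is fast."])
hits = db.execute("SELECT title FROM posts WHERE posts MATCH 'files'").fetchall()
print(hits)  # [('Static sites',)]
```

Since the whole corpus is rebuilt on every startup, there is no migration or cache invalidation to think about, which is where the "hilariously fast" simplicity comes from.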
You should try one of the free CDNs like Cloudflare or Netlify. They're usually free for small sites, integrate with static generators and git, and make the whole thing very simple.
There are so many out there these days. It seems more likely to me that you just haven't found a static site generator which suits your workflow and technology preferences.
I wrote about how it feels and why it works here: https://nicolasbouliane.com/projects/ursus