(An Opinionated) Site Colophon

Escaping the blog format: the how, and more importantly, the why of this website.

A small, round photo of a smiling man with a mop of dark hair and a pointy nose.

by Brett Coulstock.


When you design a website, what you wind up with is dependent on what problem you're trying to solve, and what technology you're using to build it.

The website of someone who is obsessively interested in trilobites, and wanting to share that interest, is going to be fundamentally different to the website of someone who is developing their “personal brand” to position themselves as a “passionate” UX-designer-with-a-newsletter.

I had a blog for a while, from about to about . It was mostly a showcase for my graphic design work, and a record of some of the holidays we'd taken. It became stagnant after we started building a house, an activity that absorbs all your free time.

For the last few years I'd been thinking about starting it afresh. To have a blog is to have something to say, or something to show, and I started to feel I had some writing I wanted to share.

Escaping the Blog

I decided to go with a static site generator. I like Python, so I picked up Pelican.

And I found myself fighting it.

It wasn't working for me. I mean, Pelican worked fine, and did what it said on the tin. I recognised my frustrations were not about the technology but about structure and process, so instead of switching to another static-site generator or Wordpress or similar, I examined my needs and questioned my assumptions to find out why it wasn't working, and why it was so hard.

One of the big things I realised I was grappling with was longevity. Not in the Cool URIs Don't Change sense, or the This Page is Designed to Last sense, or even the Write Everything in Plain Text sense, but in the basic sense of having a site that is not perceived as stale, broken or dead.

The blog is the default shape of the personal website, and has been for a long time. When I started thinking about resurrecting this site, I operated on the unquestioned default assumption that it would be a blog.

But there's this push-pull thing that happens with blogs. They're animals that need care and attention. Post too often and it winds up being low-value chicken-feed; don't post often enough and it will feel neglected, as if whatever excited impetus you had for starting it has disappeared.

I don't want to read anything by anyone who wasn't excited to write it.

It would be ideal if I could leave the site to its own devices for any length of time — a week, a month, a year, or more — and it would still keep its value, still feel like something alive.

I don't want it to become a chore.

I really don't want the hassle of software upgrades, or learning yet another new platform.

So I decided to build my website, my personal home page, around these principles:


Avoid a chronological feed of posts or articles.

All content pages should have a date.

The site should have an RSS feed to alert interested readers to new content.

High Value Content

Only write if you have something to say, a point to make or information to communicate.

Don't write “posts”, write articles.

You're not looking for friends, or user-engagement, or a job. You're trying to create something of value.


Keep the look of the site clean and simple, as “timeless” as possible.

A basic design, eschewing the design-trends of the day, can be progressively redesigned over time, instead of jumping from one trend to the next.

The aesthetics belong in the .css file.


Keep the technology simple and low friction. For me, this means writing HTML and CSS by hand with a little assistance from PHP. None of that has changed in 15 years. It'll all still work in another 15, easy.

Keep the dependencies low. No JavaScript. No web-fonts. No external CSS or other libraries. No cookies. No tracking. No comments.


Don't bother with taxonomies like categories and tags — they're for bloggers to index their chronological sites.

The index page of the site will be the index page of the site.

Help search-engines and others index your site by creating uncluttered, standards-compliant pages with rich metadata.

I love the web, the old web, the web of documents. That's the web I want to help build. The one that's like being in a huge library, the one that's full of essays from everyone about everything, their lived experiences, their cool ideas.

Not the forever doom-scrolling meme-laden jittery clickbait prole-feed of social media.

The quiet web. The small web.

(I hesitate to say the indie web although many of our goals and values overlap significantly; I've just seen too many indie web-sites that start with a 👋 waving-hand emoji and maybe some 🚀rocket-ships — what does that even mean? — and are breathlessly excited to let me know that they are “passionate” about helping companies to “craft experiences”).

As an aside, so far I have found one other site that largely follows the same principles I arrived at: datagubbe.se by Carl Svensson. It's content-rich, linked, and separated into categories on the index page.

What I really love about Carl's site is that the dates are vague yet evocative: “Early 2023”, “Summer 2021”. My only quibble is that seasonal dates depend on which planetary hemisphere you happen to occupy. But in a similar spirit I changed my approach to displaying dates: just the month and year, rather than any specific day. The full date is still in the time tag on the page and in the RSS file for interested parties. Anyway, it's fascinating to see how two sites arrived at such parallel thinking and evolution.
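For illustration, the month-and-year rendering is a small transformation of the ISO date already stored in the metadata. A sketch in Python (my site does this in PHP; the function name here is made up):

```python
from datetime import date

def display_date(iso: str) -> str:
    """Render an ISO date like '2023-05-12' as 'May 2023',
    keeping the full date in the <time> datetime attribute."""
    d = date.fromisoformat(iso)
    month_year = d.strftime("%B %Y")  # month name + year only
    return f'<time datetime="{iso}">{month_year}</time>'
```

The precise date stays machine-readable; only the human-facing text is softened.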

Under the Hood

So I do web-design like it's (arbitrary year chosen for rhyming purposes).

This site is hand-coded, and it's pure HTML, which is how I write most documents.

However, after enduring the tedium of hand-managing the metadata (and I'm a huge fan of metadata) and all the duplication that happens if you add OpenGraph, Twitter and Dublin Core, I added a little — strictly local — helper code.

I store the basic metadata in a JSON file, then I assemble the header and footer of the page with PHP, which is then rendered out to a complete HTML page using wget, driven by a Python script, and then uploaded with FTP.
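The driver's shape can be sketched in Python. This is an illustration, not my actual script: the paths, the base URL, and the exact wget flags are assumptions.

```python
import json
import subprocess

def published_pages(pages: dict) -> list:
    """Filenames of pages flagged published in the metadata."""
    return [f["file"] for f in pages.values() if f.get("published")]

def build_site(meta_path: str, base_url: str, out_dir: str) -> list:
    """Read the page metadata, then fetch each published page's
    PHP-rendered output as a static HTML file via wget."""
    with open(meta_path) as fh:
        pages = json.load(fh)
    built = published_pages(pages)
    for filename in built:
        subprocess.run(
            ["wget", "-q", "-O", f"{out_dir}/{filename}",
             f"{base_url}/{filename}"],
            check=True,
        )
    return built
```

The point is how little there is to it: filter on a flag, fetch, write.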

The Metadata Format

For example, the entry for the page Doctor Who - The Sontaran Experiment Audio Description looks like this:

  "dw-sontaran": {
    "date": "2023-05-12",
    "file": "doctor-who-the-sontaran-experiment-1975-audio-description.html",
    "title": "Doctor Who: The Sontaran Experiment (1975) Audio Description Scripts",
    "desc": "Unofficial audio-description scripts for the two-part 1975 Doctor Who serial The Sontaran Experiment in CSV, HTML and SRT format.",
    "keywords": "tv, script, 1975, audio description, accessibility, science fiction, bbc, doctor who",
    "type": "video.tv",
    "licence_url": "http://creativecommons.org/licenses/by/4.0/",
    "licence_name": "Creative Commons Attribution 4.0 International License",
    "print_title": "Doctor Who: The Sontaran Experiment (<time>1975</time>) Audio Description Scripts",
    "print_desc": "Unofficial audio-description scripts for the two-part <time>1975</time> Doctor Who serial The Sontaran Experiment in <abbr>CSV</abbr>, <abbr>HTML</abbr> and <abbr>SRT</abbr> format.",
    "categories": ["accessibility"],
    "published": 1
  }

That's pretty much all the metadata needed, other than things that don't change which are hard-coded, such as author name, locale, and encoding.
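The duplication that made hand-managing this tedious is mechanical, which is exactly what makes it automatable: one title/description pair fans out into several vocabularies. A Python sketch of that fan-out (an abbreviated tag set; the real code is PHP):

```python
from html import escape

def meta_tags(fields: dict) -> str:
    """Fan one title/description pair out into standard, OpenGraph,
    Twitter and Dublin Core meta tags (abbreviated set)."""
    t = escape(fields["title"], quote=True)
    d = escape(fields["desc"], quote=True)
    tags = [
        f'<meta name="description" content="{d}">',
        f'<meta property="og:title" content="{t}">',
        f'<meta property="og:description" content="{d}">',
        f'<meta property="og:type" content="{fields.get("type", "article")}">',
        f'<meta name="twitter:title" content="{t}">',
        f'<meta name="twitter:description" content="{d}">',
        f'<meta name="DC.title" content="{t}">',
        f'<meta name="DC.date" content="{fields["date"]}">',
    ]
    return "\n".join(tags)
```

Write the title once in JSON; every vocabulary gets it for free.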

For the actual document, at the top of the file is this fragment, which extracts the data associated with the cited key from the JSON file and assembles the doctype, head and the first section of the body, mostly the h1 tag and subheading.

$fields = get_json("dw-sontaran");

A similar fragment at the bottom assembles and includes the footer.
Everything in between is just plain HTML. The visible page title, the h1 tag and subheading are also automatically generated.

So when I get round to adding JSON-LD metadata, I won't have to redo every single page's metadata.

And lastly, another little piece of PHP generates the RSS feed XML file from the JSON data, based on the value of the published key.
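The RSS step follows the same pattern: filter on the published flag, emit XML. A hedged Python equivalent (the real generator is PHP, and the channel wrapper is omitted here):

```python
from xml.sax.saxutils import escape

def rss_items(pages: dict, base_url: str) -> str:
    """Build RSS <item> elements for every published page.
    Note: a strict feed would convert the ISO date to RFC 822
    format for pubDate; that step is skipped in this sketch."""
    items = []
    for fields in pages.values():
        if not fields.get("published"):
            continue
        items.append(
            "<item>"
            f"<title>{escape(fields['title'])}</title>"
            f"<link>{base_url}/{fields['file']}</link>"
            f"<description>{escape(fields['desc'])}</description>"
            f"<pubDate>{fields['date']}</pubDate>"
            "</item>"
        )
    return "\n".join(items)
```

One JSON file feeds the pages, the index, and the feed alike.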

That's my “content management system”, that's my “static site generator”. It's very small, simple and maintainable; no larger and no more complex than it needs to be.

Filed under: Articles / Essays