/ astro


This weekend I replaced chromakode.com’s backend with a new static site generated by Astro.

When I first stumbled upon it, I found Astro’s marketing page quite impenetrable. It was unclear to me what it was. In brief:

Astro is a templating/component language for mostly-static content sites which uses JSX-style syntax and composition. It provides an integrated build system for CSS / JS / etc as well as support for lazy-loading JS-heavy components.

It’s a wonderful feeling to stumble upon a great new tool for a familiar problem area. For me, Astro ticked a lot of boxes, particularly:

  • A component system which scopes and dedupes CSS / JS, enabling separation of concerns and granular reuse between templates.
  • Built-in pagination.
  • Markdown flat file data source (my dir of .md files was drop-in!)
  • Low config with good defaults, yet few limitations on route structure and layout.

So, this weekend I took Astro for a spin in a long-overdue rework of my personal site.

First impressions using Astro

It’s hard to assess any tool without building a real app, so here’s my anecdata. Adopting Astro took me about 8 hours: 80% of the time was spent reworking old logic/layout, 20% learning and using Astro. This is a terrific result. It took only an hour or two of tinkering to learn Astro’s concepts. From then on I made rapid progress. Here’s the resulting source.

To get this good of a component system, it used to be necessary to pull in JSX / React. YMMV, but after nearly two decades of cranking on web templating systems (even writing my own!), I personally find the ergonomics of JSX to be unmatched; particularly how un-magical the semantics are. The surprising and hard-to-explain thing about Astro is how it’s not React: it’s a lower-level static templating system that you integrate your client stack on top of.

Astro’s component system is eminently familiar if you’re coming from a React or Vue background, and that’s a very good thing. Astro Components render entirely on the backend but seamlessly integrate build-time logic. Effectively, Astro is a backend demake of popular React static site generation patterns. In this respect, it reminds me a lot of Gatsby.
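
For the unfamiliar, here’s a minimal sketch of what an Astro Component looks like (illustrative, not a component from my site):

```astro
---
// Everything in this fence runs at build time; none of it ships to the client.
const title = 'Hello, Astro';
---
<h1>{title}</h1>
<style>
  /* Scoped by default: compiled to match only this component's <h1> */
  h1 { color: rebeccapurple; }
</style>
```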

Astro’s DX is impressive for a project which had its first public release a little over a year ago. astro.new and its StackBlitz integration were a fantastic way to peek at project layout before starting my own. There’s a CLI flow for installing addons which walks step-by-step through each specific code change it will make. Build times are nearly instant thanks to Vite under the hood, though I ran into cases where restarting the dev server was required to reflect structural changes to Astro Components.

Mostly, the assumptions baked into Astro saved me time, but I did run into a puzzle with Markdown-driven pages that ate a couple hours. Each Markdown file specifies a “layout component” which wraps the rendered Markdown and outputs an HTML file. I also wanted to render full blog posts in the index page. Rendering them using Astro.glob() yielded a list of entire pages, multiple <head>s and all. I couldn’t remove <head> from the layout component, because the single page render needed it. I suspect it’s possible to opt out of Astro’s default Markdown routing via a parameterized route, but this feels like fighting the tool.
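
For illustration, the index pattern I was reaching for looked roughly like this (a sketch against the v1-era Astro.glob() API; the paths and frontmatter field are my own invention):

```astro
---
// Each glob result is a full page module: rendering its Content gives me the
// post body, but the post's layout component still wraps it in a complete
// HTML document, <head> and all.
const posts = await Astro.glob('../posts/*.md');
---
{posts.map(({ url, frontmatter, Content }) => (
  <article>
    <a href={url}>{frontmatter.title}</a>
    <Content />
  </article>
))}
```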

Overall, I’m genuinely excited about Astro’s future. I see it replacing many cases where I’d reach for Next or CRA. It maps exceedingly well to the set of challenges I’ve experienced building contentful sites. A testament to that is the low complexity of building this site with Astro. Simplicity pays huge dividends: there’s less to learn, less to build, and, most importantly, fewer details a maintainer (me) will have to piece together many years from now.

Farewell for now, “been”

This post also heralds the end of been / wake, my homegrown lifestream collector. It had a long, 12-year run. Born in the era of FriendFeed, been’s purpose was to hoover up my various activity feeds across the web and archive them into CouchDB. wake was a simple Flask web frontend which rendered been events (including markdown files) into a blog-style feed.

My hope was that by archiving activity that mattered to me (e.g. Last.fm scrobbles, Github commits, reddit comments, tweets) into been, I’d gain a durable backup of my activity across the web. In practice, I never used the data for much, and the relevance of these data sources faded. The writing has been on the wall for years: one after another, feeds broke as services discontinued them or added API key requirements.

This is perhaps emblematic of a web that’s become less syndicated and much more centralized, and social spaces which are less public. I’m looking forward to the next swing of the pendulum.

/ checkbox


Hi xkcd visitors! Today we released “Checkbox”, our 2021 April Fool’s project. I proposed the original concept for this one and built the frontend. The backend and editor tooling were made by davean and Kevin Cotrone, with content written by a collection of amazing folks credited in the header of the comic, and the one and only Randall Munroe.

This year was a doozy. We specced and scrapped several different ideas in the months leading up to today. We finally settled on today’s concept just 3 days ago. The need to do something simple was a really useful constraint, and we leaned into the idea of making something primitive but deep. I’m really proud of how it turned out!

Here are a few stories from development.

Morse all the way down

If you take a peek at the JS code, comic.js, you’ll find that it’s written in morse code!

".---- -.-. -.. - .. ... ...-..- -.-.-- ..--- . ---.. ..- -.-- -. ----- -----
----- ---.. ---.. ..-. ..--.- -.... .---- --... ---.. .--- -..- ..-. ....-
-..-. -....- .-.-.- --- ... ..--- --. -.-.-. .---- .-.-. -..- -. .--.-. -----
.-. ...-- -.--. -.- -.-. --.- ..- -....- .--. ...-..- .... --.. .---- ..-.
-.-.-. ---.. . -.-. -... --. ----- --- ...- .. ...- .-... ..--.- ---... -.-.--
.. ..... .-- -.--. ----. -.-- -. .--. .---- .--- -...- ... -..-. ---.. -..
-.-.-- .... -... -. ..... -..-. . - .-..-. --..--".split(";D").map(morse.run);

When you paste the code into a morse translator, unfortunately the result is not very helpful:


What’s going on here?

Initially, I’d hoped to faithfully translate the JS code to morse and back. When I started implementing this, I noticed that my character table did not contain curly brace characters. Hmm, this was going to be a problem.

I considered adding new characters to the morse table, but this seemed like it could become tedious, and ran the risk of overlapping with other more obscure morse sequences I wasn’t aware of. My next idea was to interpret sequences of 8 dots and dashes as binary bytes, which seemed like a pretty good solution, though a bit verbose. I took a break from thinking about this, and then had an idea: it’s pretty common to encode arbitrary data into letters and numbers, so would something like base-36 work?

Now we were getting somewhere! As a slight enhancement over base-36, I pulled in the excellent base-x library, which accepts an arbitrary list of characters to encode/decode binary data into. I threw our 56-character morse code table into it, and we were in business!

So, the JS code is actually encoded twice: once in base-56, and then in morse code. If you count UTF-8 — which actually came into play when I needed to add an emoji to the source! — it’s running through three layers of encoding. :)
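
A minimal sketch of that double encoding, with an abbreviated stand-in alphabet and a hand-rolled codec in place of base-x (not the comic’s actual code):

```javascript
// Sketch: bytes -> base-N text -> morse. The alphabet and morse table here
// are abbreviated stand-ins, not the comic's actual 56-character table.
const ALPHABET = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789';
const MORSE = {
  A: '.-', B: '-...', C: '-.-.', D: '-..', E: '.', F: '..-.', G: '--.',
  H: '....', I: '..', J: '.---', K: '-.-', L: '.-..', M: '--', N: '-.',
  O: '---', P: '.--.', Q: '--.-', R: '.-.', S: '...', T: '-', U: '..-',
  V: '...-', W: '.--', X: '-..-', Y: '-.--', Z: '--..',
  0: '-----', 1: '.----', 2: '..---', 3: '...--', 4: '....-',
  5: '.....', 6: '-....', 7: '--...', 8: '---..', 9: '----.',
};

// Treat the bytes as one big number and rewrite it in base alphabet.length,
// the same trick base-x uses (leading zero bytes omitted for brevity).
function baseEncode(bytes, alphabet) {
  const base = BigInt(alphabet.length);
  let n = 0n;
  for (const b of bytes) n = n * 256n + BigInt(b);
  let out = '';
  while (n > 0n) {
    out = alphabet[Number(n % base)] + out;
    n /= base;
  }
  return out || alphabet[0];
}

function baseDecode(str, alphabet) {
  const base = BigInt(alphabet.length);
  let n = 0n;
  for (const c of str) n = n * base + BigInt(alphabet.indexOf(c));
  const bytes = [];
  while (n > 0n) {
    bytes.unshift(Number(n % 256n));
    n /= 256n;
  }
  return Uint8Array.from(bytes);
}

// Final layer: spell each base-N digit as its morse sequence.
const toMorse = (text) => [...text].map((c) => MORSE[c]).join(' ');
```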

A UI challenge disguised as a statistics problem

A major design challenge from the outset of this project was accurately interpreting users’ morse code input. Morse code is based on tapping out a rhythm against a base time unit: a dot lasts one unit, a dash lasts three, and the space between words lasts seven. Humans are not great timekeepers, especially unpracticed ones — and our assumption was that the majority of our users would be tapping out morse for the first time. Given a sequence of checkbox presses, how do we determine the tempo they intended to be morsing at? This is a pretty interesting and hard-to-solve problem!

My initial prototype of the checkbox interaction was in the form of an Observable notebook. I wanted to get something put together quickly to make sure the experience of tapping out a conversation with a checkbox was compelling, and most importantly, actually usable! The experience would be ruined if it wasn’t possible for a new user to learn how to communicate with it.

This initial prototype uses the average time period between presses and releases to determine the base tempo. With a little tuning, it worked, but it had two significant issues. First, it has no way of knowing whether the initial presses it’s seeing are dots or dashes: if the input skews towards dashes, the threshold will be off, and dashes will be interpreted as dots. Second, as more dots and dashes are observed and the threshold is adjusted, the morse decoding changes over time (you can see this in the Observable prototype).

The first thing I tried to increase input accuracy was to tune the prototype with really lenient thresholds. I increased the pause time allowed between characters because in my own testing, I found myself looking back and forth between a morse chart and the screen, and I was feeling rushed. I later discovered this is called Farnsworth Timing, and it was invented in the 1950s! Even so, the prototype was undeniably unreliable and a little frustrating to use, even though I could watch the debug data as I tapped.

Originally, I didn’t envision anything in the comic besides the checkbox — the utter minimalism was a part of the gag. However, actually using the checkbox convinced me we’d need to somehow show users their input. Otherwise, it’d be too easy to typo a letter unknowingly and get a wrong response from the right intent — a super frustrating experience. I didn’t want users to have to reach outside the confines of the comic (e.g. looking at the console, or a user script) to use it. Whatever we did, it had to be ergonomic. The fact that we might not be able to interpret a morse input until more characters had been entered made giving users feedback on their typos much more difficult.

On the final day, as I prepped the JS code, this problem remained unresolved. I asked davean for his algorithm wisdom on a quick and adequate way to calculate the time period. He suggested using k-means clustering to find the dot, 3x dot, and 7x dot clusters in the input data. He had also considered implementing fuzzy matching on the server side to allow for input errors, but there simply wasn’t enough time to do that justice in a couple days.
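
In case it’s useful, here’s what that suggestion might look like as a minimal 1-D k-means over press durations (an illustrative sketch, not code from the comic):

```javascript
// 1-D k-means: locate the "dot" and "dash" clusters among observed press
// durations, from which the user's intended tempo could be inferred.
// Assumes k >= 2 and a non-empty input; this approach never shipped.
function kmeans1d(xs, k, iters = 20) {
  // seed centers evenly across the observed range
  const lo = Math.min(...xs);
  const hi = Math.max(...xs);
  let centers = Array.from({ length: k }, (_, i) => lo + ((hi - lo) * i) / (k - 1));
  for (let it = 0; it < iters; it++) {
    const sums = Array(k).fill(0);
    const counts = Array(k).fill(0);
    for (const x of xs) {
      // assign each duration to its nearest center
      let best = 0;
      for (let j = 1; j < k; j++) {
        if (Math.abs(x - centers[j]) < Math.abs(x - centers[best])) best = j;
      }
      sums[best] += x;
      counts[best] += 1;
    }
    // move each center to the mean of its assigned durations
    centers = centers.map((c, j) => (counts[j] ? sums[j] / counts[j] : c));
  }
  return centers.sort((a, b) => a - b);
}
```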

It was a couple hours before midnight on Mar 31, and I was finally ready to start on a k-means implementation. I felt that the prototype algorithm would be good enough for the first few hours of the comic being up, and I could follow up with a better recognizer when it was ready. And yet, the issue of indicating typos still daunted me. Even with a better way of analyzing the timings, until the user had input at least a dot and a dash, the best we could do was to apply a default tempo and guess. If we couldn’t reliably show users the impact of their actions, we couldn’t effectively teach them morse code.

With an hour or so left I decided to focus on adding some kind of “HUD” that would indicate to users if they had just input a dot or a dash. To do that, I needed a deterministic recognizer. So in the end, I just set a fixed tempo. These two choices together were a breakthrough! The problem all along was that users might not have a consistent timing in their input — but the UI could show exactly when a dot became a dash, or when a pause between presses formed a space. Even the animations could help reinforce the rhythm, by making them multiples or divisions of the dot period. The characters slide to the left in sync with the ideal tempo, so it feels very natural to tap in sync with that.
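
A sketch of what a fixed-tempo recognizer like this can boil down to (the thresholds and event shape here are illustrative guesses, not the comic’s actual tuning):

```javascript
// Classify each press and gap against static thresholds instead of
// estimating the user's tempo statistically. All values are illustrative.
const DOT_MS = 150;           // assumed base tempo (one "dot" unit)
const DASH_MIN = 2 * DOT_MS;  // presses at least this long read as dashes
const CHAR_GAP = 3 * DOT_MS;  // gaps at least this long end the character
const WORD_GAP = 7 * DOT_MS;  // gaps at least this long insert a word space

// events: [{type: 'press' | 'gap', ms: number}, ...] in chronological order
function recognize(events) {
  let out = '';
  for (const { type, ms } of events) {
    if (type === 'press') {
      out += ms >= DASH_MIN ? '-' : '.';
    } else if (ms >= WORD_GAP) {
      out += ' / ';
    } else if (ms >= CHAR_GAP) {
      out += ' ';
    }
    // shorter gaps just separate dots/dashes within one character
  }
  return out;
}
```

Because the thresholds never move, the same input always yields the same output, which is exactly what makes a live HUD trustworthy.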

I’m really proud of how this final interaction turned out. The HUD takes an incomplete sequence of presses, extrapolates the effect of the user pressing at the current time, and encodes as morse. A crucial win is that this not only makes it crystal clear when your dot becomes a dash, but also when a gap between presses becomes a space, which we can indicate before the user touches the UI again. This works great for users composing their messages using a code table or pencil and paper, because they can see the screen fill in with their intended patterns.

Another cool outcome: the comic is far more lenient than traditional morse code. A user merely needs to wait until the preview displays what they want. It’s not necessary to have consistent timing: you can input consecutive dots much faster than equal timing, and dashes can be any length as long as they reach the minimum threshold. This flips the task from keeping time to a WYSIWYG model, of sorts.

I think it’s really interesting how this initially appeared to be a statistics and data analysis problem but turned out to be solved in a much better way by UI. This is particularly exciting to me as a generalist, because I often have an internal dialogue running like:

“What if I was better at this specific discipline? Does knowing a little about a lot of things really make a difference when the complexity I can tackle in any particular field is limited?”

This is an example where working different angles of the same problem helped me to understand which problem was really necessary to solve, which resulted in a better solution.

Unable to send the letter “E”?

Shortly before launching, we discovered a really strange bug: if you tried to send just the letter “E”, you wouldn’t see the intended response. The server did not recognize the input, even though we specifically handled “E”. Meanwhile, two “E”s, or any other combination of letters, worked normally.

So we started digging into both the client and the backend to see what was going on. We took great pains to make the API for this project use morse code in the transport. If you take a look at the network inspector, you’ll notice that the URLs requested have morse code in them:

chromakode@atavist:~$ curl 'https://xkcd.com/2445/morse/.../...._..' -v
> GET /2445/morse/.../...._.. HTTP/1.1
< HTTP/1.1 200 OK
< Content-Type: text/x-morse;charset=utf-8
-.. -... ...-- ...-- ....- .- .- -... -....- ----. ..--- .- .---- -....- .---- .---- . -... -....- ---.. ----- ----- .---- -....- ---.. -.-. .---- -.... ....- ..... ....- ..-. -... ----- ..--- .- / .... . .-.. .-.. --- -.-.-- / .- -. -.-- -... --- -.. -.-- / --- ..- - / - .... . .-. . ..--..

The letter “E”, as it happens, is the simplest character to enter in morse code: it’s a single ”.” — hence how we discovered this relatively obscure bug so quickly.

Back to the problem at hand. We noticed that the client wasn’t sending the morse for the letter “E” along. Instead, the comic would request as if nothing had been entered at all: an empty path. Investigating a step further, I verified that the client was generating and requesting the right URL with the morse letter “E” in it! What was going on?

Then, an even stranger thing happened. I copied and pasted the correct URL into my browser and pressed enter, and right before my eyes, it deleted the ”.” from the end of the URL and returned a different result. This immediately set off alarm bells in my mind: maybe the period at the end somehow made it a malformed URL. Could it be a quality-of-life feature to correct typos when a URL is at the end of a sentence? No, that didn’t add up, because it had originally failed when requested by JavaScript, a context in which URLs are not human input.

Ok, next idea. Was there a spec somewhere that described URLs ending in a period? My googling eventually led me to this StackOverflow post by user “speedplane” (which at the time of writing has one vote):

“As other answers have noted, periods are allowed in URLs, however you need to be careful. If a single or double period is used in part of a URL’s path, the browser will treat it as a change in the path, and you may not get the behavior you want.”

Evidently, a bare . or .. in a URL will be interpreted by browsers much like UNIX filesystem paths: as referring to the current and parent directory respectively. This appears to be particularly relevant for relative URLs. So in addition to “E”, the letter “I” (morse: ”..”) would also be affected.

We fixed this issue by adding an extra separator character before the period, making the URLs /_. and /_..
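
You can reproduce the underlying behavior with the WHATWG URL parser directly (the paths here mirror the bug for illustration):

```javascript
// Dot segments are normalized away during URL parsing, which is why a bare
// morse "E" (".") vanished from the path before it ever reached the server.
const e = new URL('https://xkcd.com/2445/morse/.');
console.log(e.pathname);      // "/2445/morse/" — the trailing "." is gone
const i = new URL('https://xkcd.com/2445/morse/..');
console.log(i.pathname);      // "/2445/" — ".." pops the "morse" segment
const fixed = new URL('https://xkcd.com/2445/morse/_.');
console.log(fixed.pathname);  // "/2445/morse/_." — the "_" prefix survives
```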

If you’re interested in learning more about how this was made, we’ll be releasing the code on https://github.com/xkcd when it’s ready. You can also “check” out the comics from previous years there. I’m partial to Alto, our infinite menu system. You can find more of my work on github.com/chromakode and @chromakode.

/ slow-news

Slow news

For the past 3 years, I’ve been addicted to the news. My usual days are punctuated by checking the NPR Newscast (via the wonderful NPR One) every couple hours, reading articles linked through Twitter, Facebook, and Reddit, and listening to topic-focused news podcasts throughout the day as I exercise or do chores.

I’ve been reflecting on what motivates me to devote so much time and attention to news. I want to know what is going on so that I can feel connected to and orient myself within the world. The past few years have brought a dismaying array of social, economic, and political challenges that seem to only accelerate. Often, catching up with the news compensates for a feeling of restlessness from the mounting problems my country and planet face.

In America, the dominant sources of news (media corporations and social media) are monetized based on advertising. Advertising demands attention. The more frequently and longer you engage with a broadcast medium, the more ad impressions result. Likewise, on social media, the more time you spend scrolling through feeds, the more sponsored content you’ll consume.

I recall hearing a story about how food science is engineering delicious-tasting, empty food. Bland crackers and chips are enhanced with seasonings that create addictive, habit-forming snacks. Our taste buds love the flavor of these snack foods, but they provide little nutritional value. Reflecting on social media, I fear a similar effect is happening: we have engineered our information streams to be addictive streams of interesting but worthless information.

From 2011-2014 I worked at Reddit. I recognized the role addiction played in the product, and rationalized that we were channeling that addiction into teaching people things and exposing them to new ideas. I was addicted myself, and often interrupted my coding or thinking to feed my own info addictions. When I catch myself boredly scrolling through my feeds searching for something interesting, I sometimes wonder how I have come to so devalue my attention and focus.

Whether the addictive qualities of social media were intentional or an emergent quality, they are now actively cultivated because they produce business value. Designs and algorithms are optimized to increase the amount of attention users give to media products, and changes that reduce engagement are rolled back.

I believe that this is causing us to create information streams that are fundamentally unhelpful. Social media doesn’t benefit if it provides information that is useful; the incentive is only to take our time. The design of infinite, randomly assorted feeds engages us all in the Sisyphean task of trawling for what matters to us. There will always be more to see. There is no goal to satisfy that will end our search. We have optimized ourselves out of a sense of agency.

A similar dynamic exists with broadcast news. The more frequently we tune into our news sources, the more ad impressions will result. To keep us coming back, there has to always be a new story, a new update, new information to disseminate. So there always is. Similar to how an audio compressor makes everything sound loud by increasing the volume of the quiet parts, news keeps our attention by providing information with a steady level of significance.

As I’ve reflected on this, I’ve come to refer to this kind of news as “fast news”. Fast news is Twitter, hourly radio broadcasts, cable news, and frequently newspapers. Fast news is pieces of stories as they develop in real-time, constantly becoming superseded by new information. Fast news is zoomed in on somebody somewhere else right now. Similar to social feeds, it’s addictive because it’s endless and inconsistent: the next important event may be just around the corner.

There is a serious problem with fast news: it turns us into spectators. With a focus on the immediate, fast news presents us with information that we can barely react to before it changes again. Fast news surfaces facts before they can be processed into patterns. Every story has a long tapestry of events and dynamics that preceded it, which we miss when we constrain our view to what’s happening now. Similar to engineered foods, fast news feeds our desire for information without serving the reason we consume it: a need for information that is actionable and enables us to direct our lives.

Being able to receive and act on news as it happens can have tremendous value, such as in the case of natural disasters or organized protests. However if we look at the amount of time we invest in consuming fast news compared to how many choices it presents us with, there is an obvious disconnect. Ironically, by the time we receive fast news, it’s usually too late to do anything about it.

An alternative could be news that exposes opportunities we can participate in. Behind every disaster is a years-long recovery effort. Behind every political move are larger influences, inspirations, and public drifts in sentiment. Slow news changes the focus from individual events to ongoing topics. By weaving events together into continuous stories about larger causes, those causes become things we can debate, influence, and change. Slow news is not about describing the present, it’s about understanding the future.

Unlike fast news, the amount of slow news to consume is finite. If little in the big picture has changed, there is not much new to speak of. This presents a challenge: without a constantly-changing perspective, why will an audience return to it? A successful slow news source must justify itself by the practical value of the information it provides, not by how interesting it is.

Changes in the world, even from acute events, are usually the result of a long ongoing effort. It’s not possible to orient this effort at the pace of fast news. An ongoing value of slow news is to provide focus and context in the face of change. When our perspective is zoomed out, and not changing so quickly, more attention can be paid to the actionable aspects of ongoing, slower transformations.

How we think about ourselves and the world fundamentally changes how we function, and our information sources play an outsized role in this. I am seeking to reduce the priority of fast news in my life while staying informed during these changing times. What sources are you following to accomplish this? Are there any that you would consider to be slow news?

I’d be interested to hear your thoughts. Drop me a line at slow.news@chromakode.com.