/ astro-view-transitions

Astro View Transitions

Astro 3.0 has been released!

When reading through the changelog, I’ll admit I got a bit nerd-sniped by the new View Transitions system and spent some time tonight figuring it out instead of hacking on Coalesce.

A few notes from implementing it here:

Transitions feel really novel on the web

It’s amazing what a big difference a smooth crossfade can make. After using the web for almost 25 years, I’ve become accustomed to the flash-and-stutter of transitions between pages. It’s part of what makes UIs feel like the web. I can still hear the IE 6 “click” sound that played as pages flashed to white on navigation.

I love how page transitions can be declared with just a little CSS. This is a really clever API. Tons of creative possibilities here.
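As a sketch of what that little CSS can look like (the selector and name here are placeholders, not taken from my site):

```css
/* Pair elements across pages by giving them the same name; the
   browser snapshots the old and new elements and animates between
   them. Only one element per page may use a given name. */
.post-thumbnail {
  view-transition-name: hero-image;
}

/* Tweak the default full-page crossfade via the generated
   ::view-transition pseudo-elements. */
::view-transition-old(root),
::view-transition-new(root) {
  animation-duration: 300ms;
}
```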

This could significantly change the way we design web pages

Without the customary flash to indicate that a page transition has taken place, web designs will need to find new ways to indicate to users that they’ve moved. Native apps tend to use persistent navigation affordances like tabs and sidebars to address this. I suspect we’ll see more content-heavy site designs incorporate visual cues for where you are in the hierarchy (listings vs. articles).

View transitions are already changing the way I think about designing page layouts. For example, on my site, the color bars at the top have position: sticky so that they shrink out of the way when you scroll. This is also a cue that you’re at the top of the page. With a crossfade transition, though, it’s not always visually clear that you’ve warped to the start of a different scrollable space.

Chrome is leading with great DX

The Chrome DevTools support for debugging view transitions is brilliant. I love how the ::view-transition pseudo-elements appear in the element inspector. As with many DevTools features, my only lament is that it’s buried deeply enough that many developers won’t know it exists.

Image loading is a source of jank

Transitioning between images has some pitfalls: if the image displayed on the destination page isn’t cached, it won’t have loaded by the time the transition runs, causing jank. The departing image fades into a white rectangle, and the real image pops in afterward.

This is a bit awkward with thumbnail galleries, since the thumbnail needs to be the same image as the one on the destination page (ideally you’d want a small, quick-to-download thumbnail). I experimented with using a <picture> element in hopes that a high-res version would be swapped in after the transition, but there was still a white flash.

The best solution for now was to settle on a medium-sized image used for both the thumbnails and the destination pages.

Sometimes transitions themselves are jank

I haven’t figured out the best way to “key” transition targets selectively: I’m finding widespread use of view-transition-name creates undesired transitions in some cases.

For example, the thumbnail transitions in the video look great when both the start and destination positions are onscreen, but often they aren’t. For instance, my front page shows a fullscreen splash, which causes the thumbnail to zip offscreen. It’d be useful if there were a way to bail out of a transition based on distance, or when the destination is outside the viewport bounds.

Another case where I get undesired transitions is clicking links in the post index at the bottom of the front page. Because there exists a thumbnail with matching view-transition-name way up at the top of the page, there’s an extraneous animation when landing on the post.

I tried using both CSS and JS to control view-transition-name, restricting it to when a thumbnail was clicked. The problem was that this broke back-button transitions, since the front page doesn’t have a matching view-transition-name on initial load. Making this more stateful or requiring more JS feels like a step in the wrong direction, so hopefully a viewport-bounds restriction will become viable in the future.
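If a bail-out mechanism never lands in the platform, the viewport check could be approximated in script. A hypothetical, minimal helper (the function and names are mine, not part of any API) that decides whether an element’s bounding rect touches the viewport, which a script could consult before assigning view-transition-name:

```javascript
// Hypothetical helper: true if the rect is at least partially
// inside a viewport of the given size, so offscreen thumbnails
// could be skipped when assigning view-transition-name.
function intersectsViewport(rect, viewport) {
  return (
    rect.left < viewport.width &&
    rect.right > 0 &&
    rect.top < viewport.height &&
    rect.bottom > 0
  );
}

// In the browser you'd call it like:
//   intersectsViewport(el.getBoundingClientRect(),
//                      { width: innerWidth, height: innerHeight })
```

This only handles the “destination is out of viewport bounds” half of the problem; a distance-based threshold would need the start rect too.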

/ zfs-recovery-with-zrepl

Disaster recovery with ZFS and zrepl

A laptop with a blue BIOS screen reading: Default Boot Device Missing or Boot Failed. The photographer is making a silly surprised face in the screen reflection.

(Originally shared on Mastodon)

Last week, I pulled open my laptop to send a quick email. It had a frozen black screen, so I rebooted it, and… oh crap.

My 2-year-old SSD had unceremoniously died. This was a gut punch, but I had an ace in the hole. I’m typing this from my restored system on a brand new drive.

In total, I lost about 10 minutes of data. Here’s how.

Preserving data with zrepl

I don’t back up my drives, I replicate them.

Last winter, I set up my first serious home network storage. Part of this project was setting up periodic backups of the computers I do creative work on. After surveying the options, one approach stood out: ZFS incremental replication.

One of the flagship features of ZFS is the ability to take efficient point-in-time snapshots while it’s running. You can then send only the changed data to other machines.

To automate taking snapshots and sending them to my NAS, I’m using a really cool piece of software called zrepl (by @problame). I configured it to snapshot and send my entire filesystem every 10 minutes.

Since the snapshots are incremental, this is fine to run in the background on my home network to keep the replica up to date. The last run took 14 seconds to transfer and sent about 64 MiB.
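For reference, a zrepl push job along these lines looks roughly like this (pool, job, and host names are placeholders, and the pruning policy is illustrative; check the zrepl docs for the real knobs):

```yaml
jobs:
  - name: laptop_push
    type: push
    # Where the NAS-side sink job listens (placeholder address)
    connect:
      type: tcp
      address: "nas.example.internal:8888"
    # Replicate the whole pool, recursively
    filesystems:
      "rpool<": true
    # Snapshot every 10 minutes
    snapshotting:
      type: periodic
      interval: 10m
      prefix: zrepl_
    pruning:
      keep_sender:
        - type: not_replicated
        - type: last_n
          count: 24
      keep_receiver:
        - type: grid
          grid: 1x1h(keep=all) | 24x1h | 30x1d
          regex: "^zrepl_"
```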

Screenshot of a terminal running the zrepl status screen, showing a finished replication run with a full progress bar and a list of datasets replicated, all marked done.

Restoring from ZFS snapshots

Restoring the system was a learning process, and unfortunately quite manual. I let the 625 GiB ZFS receive operation run overnight.

My snapshots are encrypted by the original computer (this is cool because the NAS can’t read them!). So I also needed to restore the encryption “wrapper key” to be able to use the backups.
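In sketch form, the restore amounted to something like this (dataset names, the snapshot name, and the key location are placeholders, not my actual setup):

```shell
# On the new drive (from a live USB environment): receive the
# whole replicated dataset tree streamed from the NAS.
zfs send -R tank/laptop/rpool@zrepl_latest | zfs receive -F rpool

# Restore the wrapper key file from separate storage, then load
# it so the encrypted datasets can be mounted.
zfs load-key -L file:///mnt/keys/rpool.key rpool
zfs mount -a
```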

Initially, when I tried to load the key, decryption failed, and my stomach dropped. When plans don’t work, there’s a moment of clarity where all of the process failures become glaringly obvious. Thankfully, after peering closer at the output, I realized I’d copied over the key file for the wrong system. A moment later I had confirmation that I could decrypt the data. Such a happy moment!

To rebuild my system, I followed the OpenZFS guide for setting up an Ubuntu 22.04 root filesystem from scratch via live USB.

This was a priceless resource for getting back up and running. It’d intimidated me in the past, but it’s so thorough, and I learned a ton going through the process. It’s the best hands-on guide I’ve seen for modern partitioning and chrooting in a Debian environment.

The end result was a beautiful moment: my laptop booted back up to right where I’d left it. Even my browser tabs restored my unfinished work from the previous night.

Planning for mayhem

There’s this classic series of Chromebook ads from 12 years ago where computers are repeatedly destroyed in elaborate ways. Each time, the host gets a new laptop and picks up where they left off, with no data lost.

That ad has been in my imagination for over a decade. I finally achieved my dream of having a similar disaster recovery plan. And it worked!

Setting ZFS up initially had a really high starting cost: it took a full filesystem swap. Maintaining it requires a fair amount of knowledge and manual work. But it certainly has unique benefits.

This is the first time I can recall losing an SSD in over 15 years of using them. It was fantastic luck that I’d set up replication before my first one failed. 😇

If you’re curious, the offending drive was a WD_BLACK SN850 from my original Framework order. I’d heard unsettling stories on the Framework forums of this drive spontaneously dying or becoming unbootable. I guess it was my turn to roll some unlucky numbers.

A photo of the SN850 which failed after less than 2 years.