/ leaps


There’s a lesson I seem to repeatedly forget, applicable to both life and design, in the nature of leaps forward. By “leap” here, I mean an evolutionary change in the quality of an experience or approach that solves many problems at once. The big “ah-has” that transform entire problem spaces and change the way we think about what’s possible.

A leap typically takes things we knew would be good and opens them up in ways we couldn’t imagine before. For example, the potential of smartphones was understood long before they became widely available. Authors and researchers imagined portable computers and ubiquitous connectivity decades before they became everyday utilities. Sci-fi predicted things like the internet, Google, widespread social networking, and cryptocurrency. What it didn’t predict is what comes after: the results of the leap.

As designers and engineers, we are constantly asked to quickly sort through possibility spaces, looking for the good or elegant options. When searching a possibility space, we often rely on heuristics, basing our perception of options on experience or existing data that models expectation. Leaps hide in the options where those models fail.

Sometimes you can’t measure an experience after a leap based on what has come before it.

Why did social networking take hold when it did? Why did YouTube and Netflix start to work after so many video streaming services failed? Why are peer-to-peer sharing economies like Uber or AirBnB working now, rather than 10 years ago? Many people felt certain all of these things would eventually work, but why now? It seems easier to look back and remember the reasons these things wouldn’t work, rather than notice the changes which allowed them to.

Perhaps the reason leaps are so difficult to pin down is that their effectiveness comes from many little changes, rather than the few big ones humans find easier to reason about. Many small reductions in friction that touch our lives, and the lives of others, in little ways throughout the day.

I often make the mistake of forgetting about leaps when I judge a new idea. While it’s necessary to rapidly weed out bad ideas when searching, I also miss good ideas because I don’t see the leap. And there’s the crux of the problem: many of those little reductions in friction can’t be noticed until you try.

A leap of faith? Or rather intuition?

/ deja-dup-s3-hangs

Fixing hanging deja-dup S3 uploads

After configuring deja-dup to back up to S3, I hit a snag: the process seemed to hang during the upload phase.

To obtain more information, I found that you can enable verbose output via an environment variable (why it isn’t a verbose command-line parameter is a mystery to me):

DEJA_DUP_DEBUG=1 deja-dup --backup

The first S3 upload would start and hang, eventually printing the error:

DUPLICITY: . Upload 's3+http://[...].vol1.difftar.gpg' failed (attempt #1, reason: error: [Errno 104] Connection reset by peer)

It turns out that this is a transient error for new S3 buckets while the DNS changes propagate through AWS (reference). Indeed, curling the bucket returned a temporary redirect, which was probably not being handled properly by deja-dup/duplicity/python-boto. After waiting about an hour, the problem resolved itself and my backup went smoothly.
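Since the failure is transient, an alternative to waiting it out is to simply retry the backup with a delay between attempts until the redirect clears. A minimal retry wrapper might look like the sketch below; the attempt count and sleep interval are arbitrary, and the deja-dup invocation in the comment is just the one from earlier in this post:

retry() {
  # retry ATTEMPTS DELAY COMMAND [ARGS...]
  # Re-run COMMAND until it succeeds, waiting DELAY seconds between tries.
  local attempts=$1; shift
  local delay=$1; shift
  local n=1
  until "$@"; do
    if [ "$n" -ge "$attempts" ]; then
      echo "giving up after $n attempts" >&2
      return 1
    fi
    echo "attempt $n failed; retrying in ${delay}s" >&2
    n=$((n + 1))
    sleep "$delay"
  done
}

# For example, try every 5 minutes for an hour:
# retry 12 300 deja-dup --backup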

As a side note, after tinkering with the IAM profile a bit, this is the minimal set of permissions I could find for the duplicity account:

{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::BUCKETNAME"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::BUCKETNAME/*"
    }
  ]
}

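A malformed policy is an easy way to waste a round trip through the IAM console, so it can be worth checking that the JSON at least parses before pasting it in. A quick local sanity check (the file name is just an example, and BUCKETNAME is a placeholder):

# Write the policy to a scratch file (BUCKETNAME is a placeholder).
cat > duplicity-policy.json <<'EOF'
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::BUCKETNAME"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::BUCKETNAME/*"
    }
  ]
}
EOF

# Exits nonzero if the JSON is malformed.
python -m json.tool < duplicity-policy.json > /dev/null && echo "policy OK"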
/ imaging-a-linode

Downloading an image of a Linode drive

Recently, before rebuilding samurai, I wanted to download the old drive image as a last resort in case I’d forgotten to snag any files. I was a bit disappointed that there was no way to simply download the raw image using the web interface, but there are dark arts of dd that can fill this gap. Linode provides their own instructions for the procedure, but I discovered a few tricks worth saving:

  1. Running a separate rescue distro per Linode’s suggestion and then opening it up to root ssh seemed a bit ugly to me. However, imaging the system live and in-place could lead to corrupt images if the disk is written to. Remounting the live root filesystem read-only with mount was not possible, but there is an old Linux SysRq trick you can use to remount all mounted filesystems read-only:

    $ echo u > /proc/sysrq-trigger

    While it’s still a good idea to stop any nonessential services, this allowed me to proceed with the system online and using my existing ssh keys. I also lived dangerously and kept my soon-to-be-doomed nginx process running during the imaging. >:)

  2. Since most of my drive was empty zeroes, passing the data through gzip was a massive savings in both transfer and disk usage at the destination:

    $ ssh samurai "dd if=/dev/xvda bs=64k | gzip -c" | dd of=./samurai.img.gz

  3. Un-gzipping the image file the straightforward way leads to a problem: gunzip does not appear to support sparse files. In my case, I didn’t have 50GB of scratch space to hold the uncompressed contents of my 1.5GB gzipped image, so I needed a way to decompress to a sparse file. There’s a nice trick using cp and /proc’s file descriptor mapping to pull this off:

    $ gunzip -ck ./samurai.img.gz | cp --sparse=always /proc/self/fd/0 samurai.img

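If you want to convince yourself the sparse-restore trick works before pointing it at a real 50GB image, the whole pipeline can be rehearsed on a small local file. The sketch below fakes a mostly-empty disk with a zero-filled file, compresses it, and restores it sparsely; the file names and sizes are arbitrary:

# Fake a "disk" that is mostly zeroes: 16MB of nothing.
dd if=/dev/zero of=disk.img bs=1M count=16 status=none

# Compress it; long runs of zeroes squash down to almost nothing.
gzip -c disk.img > disk.img.gz

# Decompress into a sparse file via /proc's fd mapping.
gunzip -c disk.img.gz | cp --sparse=always /proc/self/fd/0 restored.img

# Apparent size matches the original...
ls -l restored.img
# ...but far fewer blocks are actually allocated on disk.
du -h restored.img

The same shape applies to the real image: swap disk.img.gz for samurai.img.gz and restored.img for samurai.img.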
So there you have it! Gzipped online disk imaging with a sparse output file. Note that the image itself won’t be bootable, as Linodes boot a kernel specified outside of their drive storage. You can read the data by mounting the image loopback via the instructions Linode provides.

I’m sure this is all common knowledge in sysadmin circles, but I wasn’t able to find all of the pieces in one place during my googling. Hopefully this recipe proves useful. :)