Hey, I'm Max.

This is my journal and sketchbook.

/ practicing-urgency

Practicing urgency

Sunset from Japantown, San Francisco, on October 3rd 2022

“What would you like to be remembered for after you’re gone?”

I tried to stifle a laugh as I gazed bleary-eyed at my work laptop. It was a new colleague’s first day, and someone had offered an icebreaker at our morning standup.

I’d spent the night awake on the couch. Hours sitting, my whole body shaking, as I processed the news from the prior afternoon: my MRI had found something. The scan couldn’t identify it, but there appeared to be some kind of growth in my head. More scans in my future.

“I would haunt my wife, she’d love that.”


About 6 weeks ago, I was wheeled into an operating room at UCSF Parnassus for endovascular surgery. The operation would investigate and hopefully cure a rare condition called a DAVF.

This concluded 8 months of doubt and slow panic, searching for answers for a sound I started hearing in February. I’d calm myself by enumerating how low the probability of a bad outcome was. For the vast majority of people, hearing internal sounds is temporary and unremarkable. For a fraction of those, the sounds persist and there’s a diagnosable cause. A low percentage of those require intervention; the majority turn out to be benign.

We decided, with the help of the incredible team at UCSF, that it was worth it for me to have the surgery. The alternative was to wait and watch for the rest of my life. Thanks to advances in the past 30 years, the operation would be minimally invasive and had a high success rate.

I was cured that day. When I put in earplugs, I found that the rhythmic pulsing I’d heard constantly for the last 8 months was gone. Pure joy.

A photo of me in the ICU, smiling, after hearing silence for the first time in 8 months

The following morning, I learned my case had turned out to be higher risk than anticipated. With each pulse, blood was flowing backwards towards my brainstem. We were lucky to have caught my DAVF early and taken action before it could present dangerous symptoms.

What started as an innocuous rustling sound turned out to be the herald of a potentially life-threatening condition. I was in that tiny slice of the fraction of the percentage. But somehow, amazingly, I’d found a path through and had taken it.


“There is a tiny risk,” the neuro IR resident told me the day before the operation, “that you don’t wake up from this.”

We make choices every day. Not always so dramatic, but nonetheless vastly important. We are constantly choosing who we want to be by how we spend our time.

This summer, prior to hearing my MRI results, I was carrying several grindy projects which were sapping my energy. “I just have to get through the remaining unpleasantness,” I’d think, “and eventually I’ll be done.” That weight of upcoming grind became a fixation. I’d lose time thinking about doing all the things, and how much was remaining, instead of doing them. My coach Wen calls this an “energy leak”.

And then suddenly, it became less clear what the value of my time was going to be. I couldn’t afford to spend it on things that made me unhappy. As if the value somehow changed due to scarcity. Isn’t that funny? The past year mattered too, and the one before that. My change in expectations merely brought the present into sharper focus.

This year taught me to live with urgency, and showed me how that urgency can manifest in multiple ways.


I started by dropping obligations I could live without in the months leading up to my surgery. That was easy to do. I doubled down on the things most precious to me: time with my wife and friends, and what I’d be most devastated to lose, the capacity and presence of mind to be creative.

This was my first lesson of urgency: to take every opportunity to do what I treasure. Given the choice of a last meal between a protein bar and your favorite foods, which would you choose?

While I’ve always been inspired by (and paid lip service to) the trope of living each day like it’s your last, it’s hard to actually put yourself in those shoes. I began to worry about what would happen if I became physically unable to continue my projects. The best shot I had was now.

I picked up a stalled project I’d been dreaming of for over a year, a journaling tool called TIME THIEF. Because I didn’t know if I’d be able to continue someday, I took the most direct path towards the project standing on its own. This was jet fuel! In two fast months I finished an initial version I was satisfied with, which I still use daily. More importantly, that time was spent doing what I love.


Recently, a long-time mentor of mine explained something about himself I’ve always been too polite to point out: he’s an extremely impatient person. He admitted this when I noted he’d accomplished several huge long-term goals in the past few years.

This is another thing I’ve learned about urgency: it’s worth wanting things badly enough to be impatient for them. It’s often what it takes to push through to the finish line, even when the path forward is arduous.

The same mentor got me started running marathons. On training runs, I’ve noticed a paradox: when my goal is to get to the finish line, I’m slow. I end up focused on making it to the halfway point, then the quarter point, 1/8th… until finally the run is over — the whole time waiting to get it over with. Oddly, my best runs have been when I set my sights to run beyond the finish line. It seems like that would leave energy in the tank, but empirically I finish faster. And it feels better!

I’ve realized something as I take stock of the tasks I deferred until “everything turned out okay”: I’d been working to get to the line, not for what comes after. When the finish line becomes the center of attention, it becomes an obligation instead of an accomplishment (oh hey, there’s Goodhart’s Law popping up again — no relation!)

Time is too precious to waste, but my goals, even the unpleasant ones, have meaning. That doesn’t mean a return to grinding. Working with urgency means using every tool possible to achieve the goals: reducing scope, delegating, even moving goalposts. So I can get to what comes after.

Wonderfully, here I am today, and everything turned out okay. Back to status quo. Back to energy leaks. The grinds I triaged earlier this year remain. I told myself I’d return to them when everything went well.

I couldn’t be more excited to do so. Impatient, even!


The night after I got my MRI results, I asked myself what I’d do if there was a magic wand that made everything I was worried about go away.

I realized I didn’t need to escape from anything. I didn’t need to go thru-hiking for 3 months to find myself (not for lack of inspiration!)

I love what I’m doing. I’m excited about my future. All I wished for was more time.

It looks like I got my wish, and I’m not going to waste it.

A selfie a week after my operation

/ pulsatile-tinnitus-sounds

Synthesizing what my Pulsatile Tinnitus sounds like

In February 2022, I began to hear my heartbeat pulsing in my right ear.

I was sitting at the same desk where I’m typing this post today. At first, I thought I heard something faintly brushing in the room behind me, barely at the edge of perception.

Here’s what it sounded like: (🎧 headphones recommended!)

Later that night, I put in earplugs and noted that the sound was still there. After curious googling, I learned this is called Pulsatile Tinnitus, and found the delightfully named whooshers.com.

Many months later, I met an amazing team who specialize in Pulsatile Tinnitus at UCSF, where I was diagnosed with a likely DAVF. If you look closely, you might be able to spot it in these holographic explorations of my MRI scans. I’m very lucky that these sounds have been my only symptom, and I’m scheduled for an operation which could fix the underlying cause.

Since my situation is relatively rare, I wanted to take this opportunity to preserve my qualitative experience of these phantom sounds before they’re (hopefully!) gone. I hope this can provide a data point for the kinds of sounds one can experience.

What does my Pulsatile Tinnitus sound like?

I’ve read a broad variety of descriptions online, ranging from hissing, gurgling, beeping, and whooshing to whining and whistling. What does a whistle sound like in this context? It’s all very subjective, so having audio to listen to makes a huge difference. Some folks have been able to create recordings of their Pulsatile Tinnitus sounds. Mine were not detectable via stethoscope.

I wanted to have my own basis for expressing what I’m hearing, because it’s… really quite odd.

What follows is an attempt to portray my subjective experience of Pulsatile Tinnitus using Bespoke, a marvelous synthesizer created by Ryan Challinor. Bespoke’s node-based approach to synths is perfect for exploratory sound design. Coincidentally, I first learned of Bespoke while taking part in Synthruary, mere days before my onset.

Playing an unseen instrument

What really got my attention was when, lying in bed, I began to hear a whining sound:

I KNOW. Wild right?

Thankfully, it’s not very loud. Most background noise can drown it out. If you’re wearing headphones, adjust the volume so that the bassy whooshing noise is barely audible.

Getting this going in Bespoke for the first time was quite an emotional moment for me. After nearly 8 months hearing such a weird, ineffable phenomenon, I finally had something I could point to and say: this is it!

I’d say this is about 90% similar to what I experience. It can be tricky to pin down all of the details, because I’m working from memory, but: qualitatively, this is very close. Initially, I was steeling myself for the frustration of spending hours fumbling around not hitting the mark, but I was shocked how quickly this came together. It’s kinda interesting how everything can be reflected by such elemental synths: just 2 shaped noise generators and an oscillator.
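For the curious, here’s a rough sketch of that structure in Python, since it’s easier to share inline than a Bespoke patch: filtered noise and a wobbly oscillator, both gated by a heartbeat envelope. Every frequency, rate, and filter value below is a placeholder I made up for illustration, not a measurement of the real patch.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import lfilter

SR = 44100       # sample rate (Hz)
BPM = 70         # assumed heart rate; varying this adds realism
DURATION = 8.0   # seconds

t = np.arange(int(SR * DURATION)) / SR

# Heartbeat envelope: an exponential-decay pulse repeating at the heart rate.
beat_period = 60.0 / BPM
beat_phase = (t % beat_period) / beat_period
envelope = np.exp(-8.0 * beat_phase)

# Shaped noise: low-passed white noise for the bassy whoosh.
rng = np.random.default_rng(0)
whoosh = lfilter([0.02], [1.0, -0.98], rng.standard_normal(len(t)))

# Oscillator: a quiet whine with a slight pitch wobble (placeholder values).
freq = 900.0 * (1.0 + 0.01 * np.sin(2 * np.pi * 3.0 * t))
whine = 0.1 * np.sin(2 * np.pi * np.cumsum(freq) / SR)

mix = envelope * (3.0 * whoosh + whine)
wavfile.write("pulse.wav", SR, (mix / np.abs(mix).max() * 0.8).astype(np.float32))
```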

Variations on whooshes

In practice, what I hear has many modalities. The core sound is always a whoosh, and it presents in a few ways.

Often, the whoosh is longer (less staccato). This one I could actually A/B effectively, because it’s what I was hearing at the time I was tweaking the synth. At the correct volume it’s very close.

Another common motif is a whoosh with descending pitch:

Notice how you can kind of hear the whistling in there? I think that this is what becomes the louder, more resonant whistle. Occasionally I hear it abruptly change from a whistle to just whooshes, typically with one or two clicks when it happens.

Oscillations

Here are a few variations on the whistles and whines. Sometimes the whoosh is more fuzzy and bubbly. The pitch range of the whine seems to vary quite a bit too; for instance, lower:

It can also be higher pitched. One thing I tried to represent here is how the pitch of the whine can be wobbly (courtesy of the pink “unstablepitch” node Bespoke provides):

Perhaps the biggest flaw in these examples is that they don’t capture how dynamic the sound is. It doesn’t remain in exactly one pitch or timbre for as long as these samples do. I did include some organic variation (e.g. by varying the tempo, my “heart rate”), but in practice, this is a weird, varying, chaotic system. I’ve thought about building a bigger synth which transitions between these sounds, but I feel that would veer further into the realm of artistic interpretation, since it’d be harder for me to compare with the real thing.

Bespoke save files

Here are the synth files, which can be opened in Bespoke.

In the order which they appear above:
initial.bsk, whistle.bsk, current.bsk, descending.bsk, whine.bsk, whine-high.bsk

Like most other content on this blog, these synths are licensed Creative Commons BY-SA 4.0.

/ brain-holograms-with-blender

Brain holograms with Blender and Looking Glass

Getting a cranial MRI has been on my bucket list since I was a young kid — probably inspired by watching PBS science specials. This summer, out of necessity, my opportunity came knocking.

Looking Glass is a novel display technology capable of glasses-free 3D images. It works by using fancy optics to display a different image depending on the angle it’s viewed from, similar to the 3D dinosaurs and skulls which adorned stickers and trapper keepers in my childhood.

The Looking Glass presented an opportunity to experience something utterly sci-fi to my child self: the possibility to look inside my own brain in 3D. When I received my scans, I knew what I had to do. The video above is a Looking Glass Portrait displaying a rendering of my head, arterial blood vessels, and brain.

Another thing I was really into as a kid was Blender. I discovered it when I was 11 years old, trawling the web for a 3D modeling program I could learn. I played with Blender for hours a day during the early 2000s, and even emailed Ton Roosendaal once. He replied! But that’s a story for another day.

Around 20 years later, these worlds have collided again. Blender is positively space age compared to my childhood tool, capable of raytracing volumes using photorealistic physically based rendering techniques. Pretty much the only thing that hasn’t changed is Blender’s logo. Still slaps.

I couldn’t find an up-to-date tutorial for using OpenVDB volumes in Blender 3.2, so here’s a quick guide to the best approach I could find.

Background

Personally, I’m not trying to achieve anything medically useful here; I just want something cool to look at to remind myself I’m made out of meat.

My MRIs arrived as zips of DICOM files containing various sequences of image slices with different parameters. These image sequences can be turned into 3D surfaces using fancy algorithms, or rendered as voxel volumes. It’s the raw voxel data that’s of interest to me here, with the value of each voxel relating to the kind of material at each position in space.
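If you want to peek at that voxel data before touching any of the tools below, a few lines of Python will do it. This is just an illustrative sketch using pydicom; the directory name is a placeholder, and real series sometimes need more careful sorting and rescaling.

```python
import numpy as np
import pydicom
from pathlib import Path

# Hypothetical path to one extracted DICOM series (files may not
# actually carry a .dcm extension, depending on the exporter).
series_dir = Path("mri/T1_series")

# Read each slice, then sort by position along the scan axis so the
# voxel grid assembles in the right order.
slices = [pydicom.dcmread(p) for p in series_dir.glob("*.dcm")]
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))

# Stack the 2D pixel arrays into a 3D voxel volume.
volume = np.stack([s.pixel_array for s in slices]).astype(np.float32)
print(volume.shape, volume.min(), volume.max())
```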

Unfortunately it’s not currently possible to import DICOM into Blender directly, or even convert DICOM directly into the industry-standard OpenVDB format. We have to do it in two hops, via a world tour of open source scientific imaging software.

Step 1: Slicer -> VTK

First, I loaded my DICOM data into 3D Slicer 5.0.3. Slicer actually has its own Looking Glass integration, but at the time of writing it has a bug preventing rendering at the Looking Glass Portrait’s aspect ratio.

Slicer has some amazing volumetric rendering and slicing abilities hidden behind an inscrutable (at least to this layperson) UI. Once a DICOM volume is loaded into Slicer, it’s possible to export the data in VTK, a scientific visualization format. We’ll turn this into OpenVDB in the following step.
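The export can also be scripted from Slicer’s built-in Python console. A minimal sketch, assuming the series is already loaded via the DICOM module (the node name and output path are placeholders):

```python
# Run inside 3D Slicer's Python console.
import slicer

volume_node = slicer.util.getNode("MRI_head")       # hypothetical node name
slicer.util.saveNode(volume_node, "/tmp/head.vtk")  # write out legacy VTK
```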

Aside: Slicer has an amazingly-named extension called HD Brain Extraction which uses PyTorch to segment the brain out of a full head MRI scan. I used this to export a separate brain-only volume to color distinctly in my renders. Like all good research-grade software, it blocks the Slicer UI thread while it downloads a gigabyte of Python ML files.

Step 2: ParaView -> OpenVDB

I used a prerelease of ParaView 5.10.2 due to graphics driver bugs, but stable should work too. ParaView can import the VTK file and render its own volume visualization. Of note here is the histogram view, which can be used to find the minimum and maximum voxel values; we’ll use those when shading the volume in Blender.

Once the VTK is loaded into ParaView, it can be exported as an OpenVDB file. There’s no need to include the color or alpha dimensions in the export; they’re derivatives of the density values.
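This step can be scripted too. Here’s a sketch using pvpython, assuming your ParaView build includes the OpenVDB writer (paths are placeholders):

```python
# Run with ParaView's pvpython interpreter.
from paraview.simple import LegacyVTKReader, SaveData

reader = LegacyVTKReader(FileNames=["/tmp/head.vtk"])
SaveData("/tmp/head.vdb", proxy=reader)  # extension selects the OpenVDB writer
```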

Step 3: Shading and rendering in Blender

In Blender, create a volume and import the OpenVDB file (you can also drag and drop the .vdb file directly into Blender). For me, this created a humongous volume which needed to be scaled down to fit a normal viewport. I actually had multiple volumes to import (including an MRA with contrast!), so if you have multiple volumes in the same coordinate space, be sure to scale them all together.
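If you prefer scripting to clicking, the import and rescale can be done from Blender’s Python console. A sketch, with a placeholder path and an arbitrary scale factor:

```python
import bpy

# Import the OpenVDB file as a Volume object.
bpy.ops.object.volume_import(filepath="/tmp/head.vdb")
vol = bpy.context.object

# Shrink the humongous volume down to viewport scale. If you import
# multiple volumes (e.g. an MRA), apply the same scale to each so they
# stay registered in the same coordinate space.
vol.scale = (0.01, 0.01, 0.01)
```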

Next, a “Principled Volume” material can be applied to the volumes. Now the fun begins: interpreting the density data artistically using Blender’s shader nodes system.

After a bunch of trial and error, I ended up using the following basic pattern for shader nodes:

Blender shader pipeline

The “Map Range” node here rescales the density values to [0, 1]; I got its min and max from the histogram in ParaView. That normalized value then feeds a color ramp, and a curve for the “density” param of the volume, which controls how many particles are rendered in each voxel (think of it like thickness of smoke).
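That node pattern can also be rebuilt in script form. This is a sketch of the same structure, not a drop-in recipe; the From Min/Max values are placeholders standing in for your own histogram numbers.

```python
import bpy

mat = bpy.data.materials.new("MRI Volume")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

info = nodes.new("ShaderNodeVolumeInfo")       # raw voxel density
map_range = nodes.new("ShaderNodeMapRange")    # rescale to [0, 1]
map_range.inputs["From Min"].default_value = 0.0     # histogram minimum
map_range.inputs["From Max"].default_value = 1200.0  # histogram maximum
ramp = nodes.new("ShaderNodeValToRGB")         # color ramp
curve = nodes.new("ShaderNodeFloatCurve")      # density curve
principled = nodes.new("ShaderNodeVolumePrincipled")
output = nodes.new("ShaderNodeOutputMaterial")

links.new(info.outputs["Density"], map_range.inputs["Value"])
links.new(map_range.outputs["Result"], ramp.inputs["Fac"])
links.new(map_range.outputs["Result"], curve.inputs["Value"])
links.new(ramp.outputs["Color"], principled.inputs["Color"])
links.new(curve.outputs["Value"], principled.inputs["Density"])
links.new(principled.outputs["Volume"], output.inputs["Volume"])

# Assign the material to the imported volume object.
bpy.context.object.data.materials.append(mat)
```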

This is only a starting point, though. You can play with nonlinear color ramps, density curves with holes, and plugging into different volume parameters. Hint: Principled Volume shaders are also used to render fire! I’ve been experimenting with fancier shader setups which zero the density above a threshold Z value, enabling animations of volumes being sliced through.
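As a sketch of that slicing trick, a few extra nodes appended to the script above gate the density by Z position (the threshold is a placeholder you could keyframe to animate the slice):

```python
# Zero the density above a threshold Z value (object space).
geo = nodes.new("ShaderNodeNewGeometry")
xyz = nodes.new("ShaderNodeSeparateXYZ")
below = nodes.new("ShaderNodeMath")
below.operation = "LESS_THAN"
below.inputs[1].default_value = 0.5   # placeholder Z threshold
gate = nodes.new("ShaderNodeMath")
gate.operation = "MULTIPLY"

links.new(geo.outputs["Position"], xyz.inputs["Vector"])
links.new(xyz.outputs["Z"], below.inputs[0])
links.new(curve.outputs["Value"], gate.inputs[0])
links.new(below.outputs["Value"], gate.inputs[1])
links.new(gate.outputs["Value"], principled.inputs["Density"])  # re-link density
```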

Here’s a screencap of the full workflow in Blender.

Rendering a hologram

Looking Glass provides a Blender Addon which renders quilts. Quilts are image mosaics of each distinct viewpoint shown by the display. You can even render quilt animations and view them in 3D on the device! Having both spatial and time data to play with really opens up a mind-boggling range of effects. It can also take a mind-boggling amount of time to render, since each quilt consists of ~50-100 smaller individual renders. I’m 5233 CPU-hours into raytracing my first 250-frame animation, and still less than halfway done. 😂

3D brain render quilt

Looking towards a 3D future

I hope you’ve enjoyed geeking out with these tools as much as I have. Beyond rendering medical data, some of the coolest things I’ve seen on the Looking Glass are shared on its nascent blocks.glass gallery. Several of my favorites experiment with fog and volumetric light, producing effects I’ve personally never seen on any kind of display before.

If you have any questions, feedback, or cool images of your own to share, I’d love to see them! Please drop me a line at holobrains@chromakode.com.