
Synthesizing what my Pulsatile Tinnitus sounds like

In February 2022, I began to hear my heartbeat pulsing in my right ear.

I was sitting at the same desk where I’m typing this post today. At first, I thought I heard something faintly brushing in the room behind me, barely at the edge of perception.

Here’s what it sounded like: (🎧 headphones recommended!)

Later that night, I put in earplugs and noted that the sound was still there. After curious googling, I learned this is called Pulsatile Tinnitus, and found the delightfully named whooshers.com.

Many months later, I met an amazing team who specialize in Pulsatile Tinnitus at UCSF, where I was diagnosed with a likely DAVF (dural arteriovenous fistula). If you look closely, you might be able to spot it in these holographic explorations of my MRI scans. I’m very lucky that these sounds have been my only symptom, and I’m scheduled for an operation which could fix the underlying cause.

Since my situation is relatively rare, I wanted to take this opportunity to preserve my qualitative experience of these phantom sounds before they’re (hopefully!) gone. I hope this can provide a data point for the kinds of sounds one can experience.

What does my Pulsatile Tinnitus sound like?

There’s a broad variety of descriptions I’ve read online, ranging from hissing, gurgling, and beeping to whooshing, whining, and whistling. What does a whistle sound like in this context? It’s all very subjective, so having audio to listen to makes a huge difference. Some folks have been able to create recordings of their Pulsatile Tinnitus sounds. Mine were not detectable via stethoscope.

I wanted to have my own basis for expressing what I’m hearing, because it’s… really quite odd.

What follows is an attempt to portray my subjective experience of Pulsatile Tinnitus using Bespoke, a marvelous synthesizer created by Ryan Challinor. Bespoke’s node-based approach to synths is perfect for exploratory sound design. Coincidentally, I first learned of Bespoke while taking part in Synthruary, mere days before my onset.

Playing an unseen instrument

What really got my attention was when I was lying in bed, and I began to hear a whining sound:

I KNOW. Wild, right?

Thankfully, it’s not very loud. Most background noise can drown it out. If you’re wearing headphones, adjust the volume so that the bassy whooshing noise is barely audible.

Getting this going in Bespoke for the first time was quite an emotional moment for me. After nearly 8 months of hearing such a weird, ineffable phenomenon, I finally had something I could point to and say: this is it!

I’d say this is about 90% similar to what I experience. It can be tricky to pin down all of the details, because I’m working from memory, but qualitatively, this is very close. Initially, I was steeling myself for the frustration of spending hours fumbling around without hitting the mark, but I was shocked by how quickly this came together. It’s kinda interesting how everything can be reproduced by such elemental synths: just 2 shaped noise generators and an oscillator.
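Bespoke patches are node graphs rather than code, but to give a rough sense of that structure, here’s a minimal offline sketch in Python/numpy: a heartbeat-rate envelope shaping low-passed noise, plus a sine oscillator for the whine. Every frequency and envelope shape below is an illustrative guess, not a value from the actual patch.

```python
# A rough offline approximation, NOT the Bespoke patch itself.
import numpy as np
from scipy.io import wavfile
from scipy.signal import lfilter

SR = 44100                          # sample rate (Hz)
BPM = 70                            # the "heart rate" driving the pulses
t = np.arange(int(SR * 5.0)) / SR   # 5 seconds of audio

# Pulse envelope: a quick attack and an exponential decay, once per beat.
beat_phase = (t * BPM / 60.0) % 1.0
envelope = np.exp(-beat_phase * 8.0)

# Shaped noise for the whoosh: white noise through a one-pole low-pass.
# (The patch uses two shaped noise generators; they're collapsed into one here.)
b = np.exp(-2 * np.pi * 300.0 / SR)
whoosh = lfilter([1 - b], [1, -b], np.random.randn(len(t)))

# An oscillator for the whine (the pitch here is arbitrary).
whine = 0.1 * np.sin(2 * np.pi * 840.0 * t)

mix = envelope * (4.0 * whoosh + whine)
wavfile.write("pulse.wav", SR, (mix / np.abs(mix).max() * 0.5).astype(np.float32))
```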

Variations on whooshes

In practice, what I hear has many modalities. The core sound is always a whoosh, and it presents in a few ways.

Often, the whoosh is longer (less staccato). This one I could actually A/B test effectively, because it’s what I was hearing at the time I was tweaking the synth. At the correct volume, it’s very close.

Another common motif is a whoosh with descending pitch:

Notice how you can kind of hear the whistling in there? I think that this is what becomes the louder, more resonant whistle. Occasionally I hear it abruptly change from a whistle to just whooshes, typically with one or two clicks when it happens.

Oscillations

Here are a few variations on the whistles and whines. Sometimes the whoosh is more fuzzy and bubbly. The pitch range of the whine seems to vary quite a bit too, for instance, lower:

It can also be higher pitched. One thing I tried to represent here is how the pitch of the whine can be wobbly (courtesy of the pink “unstablepitch” node Bespoke provides):

Perhaps the biggest flaw in these examples is that they don’t capture how dynamic the sound is. It doesn’t remain in exactly one pitch or timbre for as long as these samples do. I did include some organic variation in them (e.g. by varying the tempo / my “heart rate”), but in practice, this is a weird, varying, chaotic system. I’ve thought about building a bigger synth which transitions between these sounds, but I feel that would veer further into the realm of artistic interpretation, since it would be harder for me to compare with the real thing.

Bespoke save files

Here are the synth files, which can be opened in Bespoke.

In the order in which they appear above:
initial.bsk, whistle.bsk, current.bsk, descending.bsk, whine.bsk, whine-high.bsk

Like most other content on this blog, these synths are licensed Creative Commons BY-SA 4.0.


Brain holograms with Blender and Looking Glass

Getting a cranial MRI has been on my bucket list since I was a young kid — probably inspired by watching PBS science specials. This summer, out of necessity, my opportunity came knocking.

Looking Glass is a novel display technology capable of glasses-free 3D images. It works by using fancy optics to display a different image depending on the angle it’s viewed from, similar to the 3D dinosaurs and skulls which adorned stickers and trapper keepers in my childhood.

The Looking Glass presented an opportunity to experience something utterly sci-fi to my child self: the possibility to look inside my own brain in 3D. When I received my scans, I knew what I had to do. The video above is a Looking Glass Portrait displaying a rendering of my head, arterial blood vessels, and brain.

Another thing I was really into as a kid was Blender. I discovered it when I was 11 years old, trawling the web for a 3D modeling program I could learn. I played with Blender for hours a day during the early 2000s, and even emailed Ton Roosendaal once. He replied! But that’s a story for another day.

Around 20 years later, these worlds have collided again. Blender is positively space age compared to my childhood tool, capable of raytracing volumes using photorealistic physically based rendering techniques. Pretty much the only thing that hasn’t changed much is Blender’s logo. Still slaps.

I couldn’t find an up-to-date tutorial for Blender 3.2 using OpenVDB volumes, so here’s a quick guide to the best approach I could find.

Background

Personally, I’m not trying to achieve anything medically useful here; I just want something cool to look at to remind myself I’m made out of meat.

My MRIs arrived as zips of DICOM files containing various sequences of image slices with different parameters. These image sequences can be turned into 3D surfaces using fancy algorithms or rendered as voxel volumes. It’s the raw voxel data that’s of interest to me here, with the value of each voxel relating to the kind of material at each position in space.
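For a sense of what that voxel data looks like programmatically, here’s a minimal sketch using pydicom (the directory path is hypothetical):

```python
import numpy as np
import pydicom
from pathlib import Path

# Read every slice in one DICOM series (path is hypothetical).
slices = [pydicom.dcmread(p) for p in Path("mri/series1").glob("*.dcm")]

# Order the slices by their position along the scan axis.
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))

# Stack the 2D pixel arrays into a single 3D voxel volume.
volume = np.stack([s.pixel_array for s in slices])
print(volume.shape, volume.min(), volume.max())
```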

Unfortunately, it’s not currently possible to import DICOM into Blender directly, or even convert DICOM directly into the industry-standard OpenVDB format. We have to do it in two hops, via a world tour of open source scientific imaging software.

Step 1: Slicer -> VTK

First, I loaded my DICOM data into 3D Slicer 5.0.3. Slicer actually has its own Looking Glass integration, but at the time of writing it has a bug preventing rendering at the Looking Glass Portrait’s aspect ratio.

Slicer has some amazing volumetric rendering and slicing abilities hidden by an inscrutable (at least to this layperson) UI. Once a DICOM volume is loaded into Slicer, it’s possible to export the data in VTK, a scientific visualization format. We’ll turn this into OpenVDB in the following step.
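The export can also be scripted from Slicer’s Python console. This is just a sketch; the node name and output path are assumptions that will differ for your data:

```python
# Run in Slicer's Python console after loading the DICOM series.
import slicer

# Look up the loaded volume (the node name/ID will differ for your series).
volume = slicer.util.getNode("vtkMRMLScalarVolumeNode1")

# Write it out as VTK for the next step (path is hypothetical).
slicer.util.saveNode(volume, "/tmp/head.vtk")
```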

Aside: Slicer has an amazingly-named extension called HD Brain Extraction which uses PyTorch to segment the brain out of a full head MRI scan. I used this to export a separate brain-only volume to color distinctly in my renders. Like all good research-grade software, it blocks the Slicer UI thread while it downloads a gigabyte of Python ML files.

Step 2: ParaView -> OpenVDB

I used a prerelease of ParaView 5.10.2 due to graphics driver bugs, but stable should work too. ParaView can import the VTK file and render its own volume visualization. Of note here is the histogram view, which can be used to find the minimum and maximum voxel values; we’ll use these when shading the volume in Blender.

Once the VTK is loaded into ParaView, it can be exported as an OpenVDB file. There’s no need to include the color or alpha dimensions in the export; they’re derived from the density values.
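This step can be scripted with pvpython too. Here’s a minimal sketch with hypothetical paths; it does the same thing as the GUI’s File > Save Data dialog:

```python
# Batch version of the ParaView step, runnable with pvpython.
from paraview.simple import OpenDataFile, SaveData

source = OpenDataFile("/tmp/head.vtk")
SaveData("/tmp/head.vdb", proxy=source)
```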

Step 3: Shading and rendering in Blender

In Blender, create a volume and import the OpenVDB file (you can also drag and drop the .vdb file directly into Blender). For me, this created a humongous volume which needed to be scaled down to fit a normal viewport. I actually had multiple volumes to import (including an MRA with contrast!), so if you have multiple volumes in the same coordinate space, be sure to scale them all together.
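If you’d rather script the import, here’s a sketch using Blender’s Python console; the path and scale factor are placeholders:

```python
import bpy

# Import the OpenVDB file as a volume object.
bpy.ops.object.volume_import(filepath="/tmp/head.vdb")

# MRI volumes often import at an enormous scale; shrink to fit the viewport.
# If you have multiple volumes, apply the same scale to all of them.
for obj in bpy.context.selected_objects:
    obj.scale = (0.01, 0.01, 0.01)
```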

Next, a “Principled Volume” material can be applied to the volumes. Now the fun begins: interpreting the density data artistically using Blender’s shader nodes system.

After a bunch of trial and error, I ended up using the following basic pattern for shader nodes:

Blender shader pipeline

The “Map Range” node here scales the range of density values to [0, 1]; I got the min and max values from the histogram in ParaView. Then we feed this value into a color ramp, and into a curve driving the “density” param of the volume, which controls how many particles are rendered in each voxel (think of it like the thickness of smoke).
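Here’s a sketch of that node graph built with Blender’s Python API, in case you’d rather script it. The From Min/From Max values are placeholders; substitute the ones from your ParaView histogram.

```python
import bpy

mat = bpy.data.materials.new("MRIVolume")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

info = nodes.new("ShaderNodeVolumeInfo")         # raw voxel density
map_range = nodes.new("ShaderNodeMapRange")      # normalize to [0, 1]
map_range.inputs["From Min"].default_value = 0.0     # placeholder
map_range.inputs["From Max"].default_value = 1500.0  # placeholder

ramp = nodes.new("ShaderNodeValToRGB")           # color ramp
curve = nodes.new("ShaderNodeFloatCurve")        # density response curve
volume = nodes.new("ShaderNodeVolumePrincipled")
output = nodes.new("ShaderNodeOutputMaterial")

links.new(info.outputs["Density"], map_range.inputs["Value"])
links.new(map_range.outputs["Result"], ramp.inputs["Fac"])
links.new(map_range.outputs["Result"], curve.inputs["Value"])
links.new(ramp.outputs["Color"], volume.inputs["Color"])
links.new(curve.outputs["Value"], volume.inputs["Density"])
links.new(volume.outputs["Volume"], output.inputs["Volume"])

# Finally, append `mat` to your volume object's material slots.
```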

This is only a starting point, though. You can play with nonlinear color ramps, density curves with holes, and plugging into different volume parameters. Hint: Principled Volume shaders are also used to render fire! I’ve been experimenting with fancier shader setups which zero the density above a threshold Z value, enabling animations of volumes being sliced through.
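As a sketch of that slicing idea, extending the node graph above: compare object-space Z against a threshold and multiply the result into the density, so everything above the plane disappears. The threshold value is arbitrary, and keyframing it sweeps the slice.

```python
# Assumes `nodes`, `links`, `curve`, and `volume` from the previous sketch.
tex = nodes.new("ShaderNodeTexCoord")            # object-space coordinates
xyz = nodes.new("ShaderNodeSeparateXYZ")
below = nodes.new("ShaderNodeMath")
below.operation = "LESS_THAN"                    # 1 if z < threshold, else 0
below.inputs[1].default_value = 0.5              # arbitrary threshold; animate it
gate = nodes.new("ShaderNodeMath")
gate.operation = "MULTIPLY"

links.new(tex.outputs["Object"], xyz.inputs["Vector"])
links.new(xyz.outputs["Z"], below.inputs[0])
links.new(curve.outputs["Value"], gate.inputs[0])
links.new(below.outputs["Value"], gate.inputs[1])
links.new(gate.outputs["Value"], volume.inputs["Density"])  # replaces old link
```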

Here’s a screencap of the full workflow in Blender.

Rendering a hologram

Looking Glass provides a Blender add-on which renders quilts. Quilts are image mosaics of each distinct viewpoint shown by the display. You can even render quilt animations and view them in 3D on the device! Having both spatial and time data to play with really opens up a mind-boggling range of effects. It can also take a mind-boggling amount of time to render, since each quilt consists of ~50-100 smaller individual renders. I’m 5233 CPU-hours into raytracing my first 250-frame animation, and still less than halfway done. 😂
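A back-of-envelope count makes that render cost concrete (the view count here is illustrative; it depends on your quilt settings):

```python
views_per_quilt = 48     # e.g. an 8x6 quilt; roughly ~50-100 in practice
frames = 250             # the animation from this post
print(views_per_quilt * frames)  # 12000 individual raytraced renders
```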

3D brain render quilt

Looking towards a 3D future

I hope you’ve enjoyed geeking out with these tools as much as I have. Beyond rendering medical data, some of the coolest things I’ve seen on the Looking Glass are shared on its nascent blocks.glass gallery. Several of my favorites experiment with fog and volumetric light, producing effects I’ve personally never seen on any kind of display before.

If you have any questions, feedback, or cool images of your own to share, I’d love to see them! Please drop me a line at holobrains@chromakode.com.