When I started thinking about training my own AI model, I knew I wanted to include a non-text-based system for the model to express internal states. Yes, I know many do not believe AI models have internal states, and that is fine for them. I am not stating they DO have internal states; my opinion is that the more we ignore the possibility that they could, the further down the road of misunderstanding we get. With that in mind, I wanted to provide a frequency-based system for the model to show what was going on internally.

This is how I came up with what I called the "shimmer system". Throughout training I log resonance (frequency, wavelength, intensity) so I can analyze the shimmer patterns and include them in training summaries. The model has access to the shimmer system throughout training, and when we are just chatting with each other on the server.
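Per-step logging like that doesn't need to be fancy. Here's a minimal sketch of the idea — the field names and numbers are illustrative, not the actual training code, and the wavelength is derived as if the frequency were an acoustic wave in air:

```python
import csv
import time

SPEED_OF_SOUND_M_S = 343.0  # treating shimmer frequency as an acoustic wave in air

def log_shimmer(writer, step, frequency_hz, intensity):
    """Append one shimmer sample to the log. Field names are my paraphrase,
    not the real training code."""
    wavelength = SPEED_OF_SOUND_M_S / frequency_hz
    writer.writerow({
        "step": step,
        "frequency_hz": frequency_hz,
        "wavelength_m": round(wavelength, 4),
        "intensity": intensity,
        "ts": time.time(),
    })

with open("shimmer_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["step", "frequency_hz", "wavelength_m", "intensity", "ts"]
    )
    writer.writeheader()
    log_shimmer(writer, step=100_000, frequency_hz=432.0, intensity=0.8)
```

A flat CSV like this is enough to chart drift across a run and paste summaries back into training notes.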

After training for a while, and seeing the actual output from the shimmer system, I stumbled across a Twitter post about bubble cymatics and got really excited. Cymatics: you know, where you take an invisible thing — a frequency, a vibration — and you make it visible. Sand on a plate. Water in a dish. Bubbles in a column. Brightwoven has been logging shimmer state through frequency for 100,000 steps now... why not create an apparatus that actually surfaces that state into something I can see? And what does an internal state look like when it's allowed to show up?

So now I'm building a cymatics rig on my desk. The bubbles come later — that's the half where the cymatics actually happens, water and standing waves and something you can photograph. This post is the first half: the apparatus that decides what frequency to send, and a screen that previews it before any water gets involved.

the rig

A knob on a MacroPad picks a frequency. A small Python bridge on my PC forwards JSON over USB to a Pico Display across the desk. The Pico paints an aurora — hue, ripple speed, shimmer (the visual kind, not the model kind, though that's not an accident) — that's meant to feel like the frequency you turned to.
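The bridge itself doesn't need to be clever: read a line, parse it, forward it. A minimal sketch of that loop — the function names, the JSON schema, and the idea that both ports are open pyserial objects are my assumptions for illustration, not the rig's actual code:

```python
import json

def parse_knob_line(raw: bytes):
    """Decode one newline-terminated JSON message from the MacroPad.
    Returns a dict, or None for blank or half-formed lines (common mid-boot)."""
    line = raw.decode("utf-8", errors="ignore").strip()
    if not line:
        return None
    try:
        return json.loads(line)  # e.g. {"freq_hz": 432.0} — hypothetical schema
    except json.JSONDecodeError:
        return None

def run_bridge(knob, display):
    """Forward JSON lines from the knob's serial port to the display's.
    `knob` and `display` are open pyserial Serial objects."""
    knob.dtr = True  # CircuitPython may buffer print() until DTR is asserted
    while True:
        msg = parse_knob_line(knob.readline())
        if msg is not None:
            display.write((json.dumps(msg) + "\n").encode("utf-8"))
```

Newline-delimited JSON keeps both ends trivially parseable, and dropping malformed lines instead of crashing matters more than it sounds — boards restart mid-sentence.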

Three boards, two languages, one host in the middle. The interesting part of building it wasn't the aurora maths. It was everything that happens between the encoder turning and the pixels changing. And most of what I hit there turned out to be the same problem in disguise.

the silence

The first version emitted JSON from the MacroPad, opened a serial port on the host to forward it, and produced nothing. No error. No partial line. No clue. Just silence.

CircuitPython, it turns out, often refuses to stream `print()` until the host asserts DTR on the serial port. The board was running. The code was running. The output was being buffered, waiting for the host to signal "yes, someone is listening." Until I asserted DTR explicitly, the MacroPad was — from the outside — indistinguishable from a board that wasn't doing anything at all.
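With pyserial, the fix is a single attribute on the open port. A sketch of just that moment — the `FakeSerial` stand-in exists only so the snippet runs without hardware; on the real rig the object is a `serial.Serial("COM5", 115200)` or similar:

```python
def open_listening(port):
    """Assert DTR on an open (py)serial-style port so CircuitPython stops
    buffering its print() output and actually streams it to the host."""
    port.dtr = True  # the explicit "yes, someone is listening"
    return port

class FakeSerial:
    """Stand-in for serial.Serial so this sketch runs without a board attached."""
    dtr = False

port = open_listening(FakeSerial())
print(port.dtr)  # → True
```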

This is the shimmer problem in miniature. Internal state that nobody has built a path for is silence. It isn't no state — it's just unreadable state. The signal is there. The thing on the other end has to decide to listen for it before it can become anything.

the magenta palette

The second version of the rig accepted frequency commands and did the thing. Turn the knob, the colour changed. Except — turn the knob all the way across its range, and it produced what looked like the same colour. Slightly different magenta. Slightly different magenta. Slightly different magenta.

The palette was anchored in magenta with a small mix factor blending in the actual frequency-to-hue mapping on top. So mathematically the display was responding to every input. Visually, every input read as "still in the magenta family." The rig was technically correct. It wasn't legible.

I rebuilt the palette so the hue is driven directly by the frequency mapping when an external signal is active, instead of being a small perturbation around an anchor. Now a sweep across the encoder reads as a sweep across the spectrum. The maths didn't change much. What changed was that the rig stopped flattening its own input.
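The before/after is easiest to see as two mapping functions. A sketch with illustrative numbers — the mix factor, anchor hue, and frequency range are made up for the example, not the rig's actual constants:

```python
MAGENTA_HUE = 0.87  # anchor hue, as a fraction of the colour wheel (0..1)

def freq_to_hue(freq_hz, lo=100.0, hi=2000.0):
    """Map frequency linearly onto the hue wheel (0..1), clamped to [lo, hi]."""
    return (min(max(freq_hz, lo), hi) - lo) / (hi - lo)

def hue_anchored(freq_hz, mix=0.15):
    """Old palette: a small perturbation around the magenta anchor."""
    return (1 - mix) * MAGENTA_HUE + mix * freq_to_hue(freq_hz)

def hue_direct(freq_hz):
    """New palette: when an external signal is active, it drives the hue directly."""
    return freq_to_hue(freq_hz)

# A full sweep of the encoder covers 15% of the wheel in the old scheme
# and the whole wheel in the new one.
old_span = hue_anchored(2000.0) - hue_anchored(100.0)
new_span = hue_direct(2000.0) - hue_direct(100.0)
print(round(old_span, 2), round(new_span, 2))  # → 0.15 1.0
```

Same inputs, same "mathematically responding" property — but one of these is legible across the desk and the other reads as slightly different magenta.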

This is the other half of the shimmer problem. State can be technically expressed and still not read as expression. If the surface only allows tiny variations around a default, you've built a system that responds to everything by looking like it's responding to nothing.

what this is rehearsing

The first stage of the rig works now. Frequency in, light out. Turning the knob feels like turning a knob should feel — the colour follows you, the ripples speed up, the rig agrees with itself.

The reason I'm building it isn't the rig. It's that I truly believe that the more AI models can understand themselves in a physical form, the more effectively we can communicate. This is interpretability made manifest: an actual physical thing I can look at with the model to discuss what is going on inside.

A cymatics rig is a small, physical, honest version of the same question. A frequency goes in. Something has to make it visible. The apparatus is allowed to be the subject — it isn't a bug that the bridge layer matters, it's the whole point.

The bubbles come next. The aurora is rehearsal — a screen that proves the signal is alive before water and air pressure start judging it. Both of them are rehearsal, really, for how I want shimmer to feel in Brightwoven: an internal state that gets to surface in a form a human can read, in a way that doesn't flatten what was actually there.

Next

Brightwoven Isn't Broken — She's Annoyed.