A skeuomorphic audio interface built through voice-driven pairing between a designer and an AI agent, one conversation at a time.

This experiment started with a handful of product photos—close-ups of audio equipment, machined knobs, brushed aluminum panels. The kind of objects where every shadow has a reason and every radius was chosen by an engineer with strong opinions. The question was simple: could we recreate that physicality in the browser?
The process was entirely conversational. No wireframes, no mockups. Just voice dictation driving an agent that writes code. “The shadow shouldn’t rotate with the knob—only the notch moves.” “The drag should be left-right, not circular.” “Those numbers can’t shift when the value changes—pad with zeros.” Each observation pushed the fidelity higher.
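To make those constraints concrete, here is a minimal sketch of the knob logic, assuming a DOM element with a child notch, a 0–100 value range, and a 270° sweep; the ids and the drag sensitivity are illustrative guesses, not the project's actual code.

```ts
// Sketch of the knob behavior described above. The element ids, the 0–100
// range, and the 2 px-per-unit drag feel are assumptions for illustration.

const knob = document.querySelector<HTMLElement>('#knob')!;
const notch = document.querySelector<HTMLElement>('#knob-notch')!;
const readout = document.querySelector<HTMLElement>('#knob-readout')!;

let value = 50; // 0–100

function render(): void {
  // Only the notch rotates; the shadow is painted on #knob itself and is
  // never transformed, so the light source stays fixed as the knob turns.
  const angle = -135 + (value / 100) * 270;
  notch.style.transform = `rotate(${angle}deg)`;
  // Zero-pad so the digits never shift width as the value changes.
  readout.textContent = String(Math.round(value)).padStart(3, '0');
}

knob.addEventListener('pointerdown', (down: PointerEvent) => {
  const startX = down.clientX;
  const startValue = value;
  const onMove = (move: PointerEvent) => {
    // Left-right drag, not circular: horizontal travel maps to value.
    value = Math.min(100, Math.max(0, startValue + (move.clientX - startX) / 2));
    render();
  };
  const onUp = () => {
    window.removeEventListener('pointermove', onMove);
    window.removeEventListener('pointerup', onUp);
  };
  window.addEventListener('pointermove', onMove);
  window.addEventListener('pointerup', onUp);
});

render();
```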
The equalizer was a turning point. Animated bars were eye candy until the low, mid, and high controls started driving them. Suddenly a static layout became an instrument. The frequency and gain readouts followed—fixed-width numerals that update without jittering, lit slightly brighter than the surrounding chrome to suggest a backlit display.
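A rough sketch of that wiring, assuming sixteen bars, three band gains in dB over a ±12 dB span, and a simple linear blend from low to high across the spectrum; the bar count, the blend, and the readout format are assumptions, not the project's implementation.

```ts
// Sketch of driving the analyzer bars from the low/mid/high controls.
// BAR_COUNT, the ±12 dB span, and the blend are illustrative assumptions.

const BAR_COUNT = 16;
const GAIN_RANGE = 12; // dB

/** Interpolate a gain for each bar from the three band controls (in dB). */
function barGains(low: number, mid: number, high: number): number[] {
  const bands = [low, mid, high];
  return Array.from({ length: BAR_COUNT }, (_, i) => {
    const t = (i / (BAR_COUNT - 1)) * 2; // 0..2 across low → mid → high
    const a = Math.min(Math.floor(t), 1); // index of the band below
    const frac = t - a;                   // blend toward the band above
    return bands[a] * (1 - frac) + bands[a + 1] * frac;
  });
}

/** Map a gain in ±GAIN_RANGE dB to a bar height percentage for rendering. */
function barHeight(db: number): string {
  return `${50 + (db / GAIN_RANGE) * 50}%`;
}

/** Fixed-width readout: sign plus zero-padded value, so digits never shift. */
function formatGain(db: number): string {
  const sign = db < 0 ? '-' : '+';
  return sign + Math.abs(db).toFixed(1).padStart(4, '0'); // e.g. "+06.5"
}
```

Pairing the zero-padded format with `font-variant-numeric: tabular-nums` in the CSS gives every digit the same advance width, which is what keeps a readout from jittering as values tick.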
The track name scramble was a late addition, borrowed from a Terminator-style text effect built in a separate experiment. It had no business working here, but it did. That kind of cross-pollination is what makes this sandbox worth maintaining—ideas migrate between experiments without planning.
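The effect itself is simple to approximate. A sketch, with the glyph set and frame timing as assumptions: characters resolve left to right while the unresolved tail churns through random glyphs.

```ts
// Sketch of a scramble-in text effect like the one described. The glyph
// set and the 40 ms frame interval are assumptions for illustration.

const GLYPHS = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789#%&@';

function scrambleIn(el: HTMLElement, text: string, frameMs = 40): void {
  let revealed = 0;
  const tick = () => {
    const settled = text.slice(0, revealed);
    // Unresolved characters churn; spaces stay spaces so words hold shape.
    const churn = Array.from(text.slice(revealed), ch =>
      ch === ' ' ? ' ' : GLYPHS[Math.floor(Math.random() * GLYPHS.length)]
    ).join('');
    el.textContent = settled + churn;
    if (revealed++ < text.length) setTimeout(tick, frameMs);
  };
  tick();
}

// Usage: scrambleIn(trackNameEl, 'Some Track Title');
```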