Operation Stylophone

True to my rules of last week, my first milestone was to reach a stage where I could play a keyboard and get OSCar making its first noise. Voilà:

This isn’t impressive at all: I was calling it my ‘Stylophone’ milestone as there’s no voice management; no sense of what a program is; no envelopes, LFOs, or anything. I detect a new note from the keybed, set up all the timers, program in a wavetable, and out comes a waveform.
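The note-to-waveform path described above can be sketched in a few lines of C. This is a minimal illustration of the idea, not the ported firmware: the table length matches the 256-sample tables mentioned later, but the names and the naive ramp waveform are mine.

```c
#include <stdint.h>
#include <stddef.h>

#define TABLE_LEN 256

static uint8_t wavetable[TABLE_LEN];

/* Fill the table with one cycle of a naive sawtooth: a ramp from 0 to 255. */
static void build_saw(void)
{
    for (size_t i = 0; i < TABLE_LEN; i++)
        wavetable[i] = (uint8_t)i;
}

/* Step through the table one sample per timer tick; the uint8_t phase
   wraps automatically at 256, restarting the cycle. */
static uint8_t next_sample(void)
{
    static uint8_t phase;
    return wavetable[phase++];
}
```

Called at a rate of 256 times the note frequency, this loop alone is enough to get a pitched tone out of a DAC, which is roughly all the ‘Stylophone’ milestone amounts to.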

It’s not in key yet, because the pitch logic isn’t complete. It isn’t even coming out of the line output: you can literally see me pressing a 3.5mm jack against test pins so the voltage there goes straight to a loudspeaker. Without the mixer, VCFs, or VCAs working, the proper line output socket is currently just decorative (the horrible HF whistle you can hear at the start is a consequence of this hack: the loudspeaker signal isn’t properly ground-referenced).

If I weren’t an experienced designer of synths I’d be a bit embarrassed to have this little to show from a week’s work: people tend to talk up their achievements online and I think I owe it to the world to post something honest. It’s been a slow week, but I’m having to make sure that everything gets done the OSCar way.

The OSCar hardware is the only reason why any of this is remarkable. The sounds are coming from a platform that very closely resembles the original synth. The firmware you can hear running has very carefully been ported and unit-tested to prove that it does exactly what an OSCar would if you probed the correct pins. The two oscillators you hear are exactly the original’s organ and sawtooth waveforms.

This week in a progress-o-gram

The section in ‘kbd’ that I’ve finished annotating is the original voice manager: the logic that describes how the notes being played get allocated to OSCar’s two voices (or one voice, depending on which mode you’re in). When you stop playing a note, the same routine checks through all the notes that are still held down, and hands one back to the freed voice. Then it tells the envelope generator whether it should retrigger, depending on which mode it’s in.
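The allocate-on-press, hand-back-on-release behaviour can be sketched like this. To be clear, this is a hypothetical reconstruction of the general technique, not the annotated firmware: the names, the lowest-note-wins policy, and the crude voice-stealing are my own assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_VOICES 2
#define NO_NOTE    0xFF

static uint8_t voice_note[NUM_VOICES] = { NO_NOTE, NO_NOTE };
static bool    held[128];   /* keybed state, indexed by note number */

/* Assign a newly pressed note to a free voice; crudely steal voice 0
   if both are busy. */
static void note_on(uint8_t note)
{
    held[note] = true;
    for (int v = 0; v < NUM_VOICES; v++) {
        if (voice_note[v] == NO_NOTE) { voice_note[v] = note; return; }
    }
    voice_note[0] = note;
}

/* On release, scan the keybed and hand a still-held note (lowest first,
   an assumption) back to the freed voice. */
static void note_off(uint8_t note)
{
    held[note] = false;
    for (int v = 0; v < NUM_VOICES; v++) {
        if (voice_note[v] != note) continue;
        voice_note[v] = NO_NOTE;
        for (int n = 0; n < 128; n++) {
            bool in_use = false;
            for (int w = 0; w < NUM_VOICES; w++)
                if (voice_note[w] == n) in_use = true;
            if (held[n] && !in_use) { voice_note[v] = n; break; }
        }
    }
}
```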

Particularly if you’ve ever played an OSCar in duo mode, you’ll know that the voice manager is far from perfect. If you take your finger off the second note, the voice manager doesn’t tidy up the voices at all, so it just keeps playing until you either take your hand away completely or play another note. I was seeing bugs just by inspecting the code, and I know that Chris must have been aware of them, but there’s only so much you can fix with six bytes free. Now I have a decision to make about how to fix the voice manager while respecting the sound and feel of the original instrument.

The part marked ‘sums’ (terrible name, Chris) that I’ve turned green this week is the timing logic. OSCar uses seven digital timers internally, three of which are allocated to each of its two voices. It gets so complicated because OSCar uses fixed-size wave tables, so the sample rate changes with every note: two timers have to be maintained to keep the note’s pitch correct. One of these works at the note frequency, and another works at a multiple of it, counting down from 256. Some logic elsewhere in the circuitry uses this counter to convert the right sample from each wave table at the right time.
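The fixed-table-length constraint is worth making concrete: with a 256-sample table, the sample clock has to run at 256 times the note frequency, so the timer reload value shrinks as pitch rises. A small sketch, with an entirely made-up master clock frequency (the real OSCar values aren’t given here):

```c
#include <stdint.h>

#define TABLE_LEN     256u        /* fixed-size wave table */
#define MASTER_CLK_HZ 4000000u    /* hypothetical timer clock, not the real value */

/* The sample clock must tick TABLE_LEN times per waveform cycle, so the
   per-sample timer period in master-clock ticks falls as pitch rises. */
static uint32_t sample_timer_reload(uint32_t note_freq_hz)
{
    uint32_t sample_rate = TABLE_LEN * note_freq_hz;  /* one full table per cycle */
    return MASTER_CLK_HZ / sample_rate;               /* ticks between samples */
}
```

At 110 Hz this hypothetical timer reloads every 142 ticks; an octave up, only every 71, which is why the scheme runs out of headroom as the pitch climbs.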

When the pitch gets above about 400Hz, it becomes too high for the hardware to traverse the wave table quickly enough. The third timer then comes into play to begin skipping samples on purpose, effectively shortening the wave table from 256 samples to 128, 64, and so on.

All of this is governed by some reasonably subtle logic Chris wrote so that the sample-skipping compromise happens in a different place depending on whether you’re ascending or descending in pitch, which makes the change in timbre less obvious.
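That direction-dependent switching is a form of hysteresis, and it can be illustrated like this. The threshold values and structure here are my own guesses for the sake of the example, not anything taken from the firmware:

```c
#include <stdint.h>

#define UP_THRESHOLD_HZ   420u   /* illustrative values only */
#define DOWN_THRESHOLD_HZ 380u

static uint8_t skip_shift = 0;   /* table step = 1 << skip_shift */

/* Halve the effective table (skip alternate samples) when ascending past a
   threshold; restore resolution at a lower threshold when descending, so
   the timbre change never flutters back and forth around one pitch. */
static void update_skip(uint32_t freq_hz, uint32_t prev_freq_hz)
{
    if (freq_hz > prev_freq_hz && freq_hz > (UP_THRESHOLD_HZ << skip_shift))
        skip_shift++;            /* ascending: 256 -> 128 -> 64 samples */
    else if (freq_hz < prev_freq_hz && skip_shift > 0 &&
             freq_hz < (DOWN_THRESHOLD_HZ << (skip_shift - 1)))
        skip_shift--;            /* descending: switch back later */
}
```

The gap between the up and down thresholds is the whole trick: an ascending glide and a descending glide cross the compromise point at different frequencies, so the ear never hears the table length toggling at a single pitch.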

So much for a week’s progress. The difficulty of all this integration is one excuse I might give myself; the other is that this was the first attempt to dovetail the ported synth code with a new hardware abstraction layer, so it took a few attempts to settle into a pattern that looked like it’d scale manageably to the rest of the code.

In completing Operation Stylophone, I also reached a realisation about how to progress from here. The original mission was to read and annotate all the firmware first: a kind of ‘breadth-first’ strategy. This was invaluable for mapping out the synth and scoping the work but, once I’d done about two thirds of it, it was clear that this isn’t the right way to continue. There are two reasons.

1. There’s only so much detail I can hold in my head. Revisiting the timer logic had me looking through and revising my notes just so I could remember what I’d found out a few months ago. It’ll be more efficient to write C against routines I’ve only just annotated, while they’re still fresh in my memory.

2. I glossed over important details the first time around. The code is packed with twists and turns that manage edge cases. Occasionally I’d miss these, either by mistake or because I was too tired or bored to traverse them, so again I really ought to be writing and testing working C as I go.

Now I’ve got a Stylophone to improve, there’s a choice about where to go next: my rules just suggest I should serve playability, and pretty much anything I do from here will improve that.

Right now, it makes most sense to work on the voice manager, and then do something sensible and rudimentary with the VCA and VCF, so I can use the line socket, listen to both voices at once, and assign them notes more sensibly.
