Blog

  • Little green wires

    I fell into the gap at Waterloo Station yesterday. A couple of people got off the Bakerloo Line train before me and stood, blocking the doors, staring blankly at the arrows on the walls until they knew which of the two possible directions to take. I, too, was preoccupied with the direction signs, but needed to snake past this couple before the train doors slammed in my face. Distracted by such concerns and obstacles, and wary of running late for my appointment, I let my foot plop between the train carriage and the platform, and down I went.

    Fortunately nothing got hurt except my dignity, and that’s just scar tissue these days. But the point is that things that cannot happen to somebody with a brain do happen, and usually because the brain has to be elsewhere.

    Designing hardware involves thousands of decisions and deliberate actions competing for attention, and fills the brain at all times. Most projects therefore involve a period of crossing what I called in my 2021 ADC talk the ‘Valley of Despair’. It’s a first encounter with the physical world that keeps you humble as an engineer because: one, you’ve made stupid mistakes no competent engineer would ever make; two, it’s still going to take you a large, unspecifiable amount of time and ingenuity to find them.

    Pay no attention to the wire about to fall out of its crimp connector. I’ll worry about that when notes fail.

    If you’re lucky, you can just desolder a component and put something else in its place. Otherwise, out comes the scalpel, usually, and the green 30-gauge wire, always. The green wire count on Issue 1 of OSCar Rebirth is only four. But the time I invested in chasing the problems, making running repairs to the two main boards I’m looking after, testing alternative fixes, and getting the PCB CAD data ready for the next go, runs to probably twelve to sixteen hours.

    Animated GIF of fiddling between revisions 1 and 2 of the PCB (internal layers not shown). All in all, the first guess wasn’t bad.

    Interesting production engineering side note

    If you squint at the GIF above, you’ll notice that I renumber the components between revisions. I can do this because I hand-populated these boards and will probably hand-populate the next ones. Once the design work has gone to a factory, they get very upset if you do this and you quickly learn not to. Component numbers, or designators, are the primary keys in many of their databases. When you arbitrarily change them, some poor sod has to match every component that’s changed, by hand, to their pick-and-place data and procurement spreadsheets.

    There’s no consequence at all for changing numbers on a whim when you’re hand-building, of course, except that you’ve got to make sure you’re working with the right revision of schematic drawing. In fact, an orderly X-Y numbering regime makes placement and debugging easier, so you should.

    Giving away early, untested draft data to a factory is therefore something to do carefully and reluctantly: after a few revisions in which renumbering is prohibited, finding a resistor on a PCB starts to feel like looking for a window on an advent calendar. You soon need machine assistance, and that means yet another bloody screenful of data to juggle on the computer when you’re already performing a complex task.

    I’ve been proclaiming for years that I can tell, from a long glance at a company’s PCBs, how they operate politically: which business processes rule in their executive suite, how eagerly their designers are looking for a new job, what sort of business strategy has been imposed on them this year, and whether they’re in any form of trouble. Obviously, if you can also see the product the board goes inside and have almost completed a third decade in the industry, that gives you a lot more clues, but the organisation of component designators is a tell to recognise. In it, you can see the autonomy of their R&D team, how well their systems work internally, and how much pressure they’re under to rush a design to production. And if the components are labelled off a hierarchical design (R132_4), you’ll know it’s the Wild West there, and nobody from production even dares to talk to R&D anymore. Factories hate it.


    There is a class of problem that is too taxing to repair with a scalpel and wires. Either you have to scrap the circuit board, or bodge it with yet another circuit board, or just live with the consequences. One such fault found its way into this design, and popped into my head about two days after sending the first PCB for manufacture. Fortunately it was one I could live with, so I did: the control voltages for the first-issue board top out at 3.3V rather than 5V, which limits the range of control over the filters and amplifiers. There are a few ways to fix this, but the most economical and least disruptive is simply to add the right sort of buffer chip. It’s U3 in the flickbook GIF above, just above the hatched area. They’re made in the billions, have been in production for about forty years, and cost next to nothing. Best of all, this actually simplified the layout and allowed me to replace eight discrete resistors with two packs, so it barely adds cost. Just don’t ask me to wire one into a rev. 1 prototype, unless you’re sponsoring me for charity.

    I’m going to blame Altium CircuitStudio for two of the green wires. See how pin 14 doesn’t quite connect with the wire on the schematic? Neither did I.

    The package is supposed to warn you about wire terminations missing the grid, so this should never happen. When it forgets, it makes errors that much harder to find, because you come to rely on the schematic corresponding to the PCB. In Chris’s day, when both were drawn by hand, you’d trust it far less.

    The same part in Chris Huggett’s schematic. Soft pencil on yellowed A3 cartridge paper, 1983. Badly photographed in lowish light and scaled up by a blogger, 2026.

    So that was about four hours.

    The long, horizontal wire was a simple naming error, which meant that two areas of the design that were supposed to connect across pages didn’t. That can happen to anybody while you’re having ideas and making and reverting changes: a stupid mistake, but forgivable if it happens only once in a design. And the fourth piece of green wire is there because ST later thought better of the USB peripheral inside its microcontrollers. To announce your presence as a USB device, you connect the D+ line to +3.3V through a 1.5 kΩ resistor. Only then will the host start trying to talk to you. Newer microcontrollers build that resistor onto the chip, so you can switch it in or out of the circuit in software without wasting a precious external pin and having to place a physical resistor on the board. This one, unfortunately, doesn’t, and I got caught out.
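    For the curious, the arithmetic behind that pull-up is simple enough to sketch. The 1.5 kΩ pull-up and the host’s 15 kΩ pull-down are the USB 2.0 figures; the 2.0 V logic-high threshold is my rounded illustration, and the code below is only a model of the potential divider, not anything from this firmware:

    ```c
    /* Back-of-envelope model of USB full-speed device detection.
       Resistor values are the USB 2.0 spec figures; the 2.0 V
       threshold is a rounded illustration. */
    #include <assert.h>

    #define VDD        3.3      /* device-side supply, volts   */
    #define R_PULLUP   1500.0   /* device pull-up on D+, ohms  */
    #define R_PULLDOWN 15000.0  /* host pull-down on D+, ohms  */

    /* Idle voltage on D+ once the device connects its pull-up:
       a plain potential divider, about 3.0 V here. */
    static double usb_idle_dplus(void) {
        return VDD * R_PULLDOWN / (R_PULLUP + R_PULLDOWN);
    }

    /* The host reads anything above roughly 2.0 V as 'a full-speed
       device has attached' and begins enumeration. */
    static int host_sees_device(void) {
        return usb_idle_dplus() > 2.0;
    }
    ```

    On chips with the resistor built in, all of this convenience comes down to a single register write to switch the pull-up in or out; on mine, it’s a physical part and, now, a length of green wire.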

    But anyway, after a weekend of finding these problems, the VCFs and VCAs are working, so I can use the line output and filters. It’s starting to sound more like a synth.

    A demo like this is also a stark reminder of what else is missing. For what they are, the filters sound right, but they’re stuck in series low-pass mode, and the control mappings aren’t in place yet, so in the video I’m just setting control voltages directly. Also, the VCAs cannot be dimmed down to silence, which means I’ll be looking for the workarounds in Chris’s firmware that achieve this by (probably) turning down the DACs and the noise generator when the volume is low, and finally zeroing the wavetable completely so that volume zero means zero.

    Anyway, next I’m porting the voice manager while it’s fresh in my mind so that I can play the thing properly. Then, I suppose I ought to work on the filter switching, control voltages and modulators so that those start to sound right. All of which means starting to sketch out the program data and patch management for this synth, which is not going to be quite the same as the original OSCar. These days, MIDI has to work much harder, and we can and should store more than 36 patches.

    The PCB still isn’t signed off completely: there’s the trigger input to test, but that’s comparatively low-risk and the whole synth can be demonstrated with that feature missing.

    Epilogue: I have nobody to blame but myself for falling into the gap. But thousands of other passengers a year are also blaming themselves after coming a cropper on the London Underground. The posters that warn people of danger, all of which I’ve read, aren’t much good if everything about station design, from signage to acoustics to crowd management, bombards passengers with noise, then forces them to make split-second choices or else become an obstacle for others. Would it be unreasonable to fit an electronic sign near every train door, so you know which way to walk to your exit or interchange before you’re even at the station? They can have that idea for free.

  • Operation Stylophone

    True to my rules of last week, my first milestone was to reach a stage where I could play a keyboard and get OSCar making its first noise. Voilà:

    This isn’t impressive at all: I was calling it my ‘Stylophone’ milestone as there’s no voice management; no sense of what a program is; no envelopes, LFOs, or anything. I detect a new note from the keybed, set up all the timers, program in a wavetable, and out comes a waveform.

    It’s not in key yet, because the pitch logic isn’t complete. It isn’t even coming out of the line output: you can literally see me pressing a 3.5mm jack against test pins so the voltage there goes straight to a loudspeaker. Without the mixer, VCFs, or VCAs working, the proper line output socket is currently just decorative. (The horrible HF whistle you can hear at the start is a consequence of this hack: the loudspeaker signal isn’t properly ground-referenced.)

    If I weren’t an experienced designer of synths I’d be a bit embarrassed to have this little to show from a week’s work: people tend to talk up their achievements online and I think I owe it to the world to post something honest. It’s been a slow week, but I’m having to make sure that everything gets done the OSCar way.

    The OSCar hardware is the only reason why any of this is remarkable. The sounds are coming from a platform that very closely resembles the original synth. The firmware you can hear running has been very carefully ported and unit-tested to prove that it does exactly what an OSCar would if you probed the correct pins. The two oscillators you hear are exactly the original’s organ and sawtooth waveforms.

    This week in a progress-o-gram

    The section in ‘kbd’ that I’ve finished annotating is the original voice manager: the logic that describes how the notes that are being played get allocated to OSCar’s two voices (or one voice, depending on which mode you’re in). When you stop playing a note, the same routine checks through all the notes that are still held down, and lends one back to the voice. Then it tells the envelope generator if it should retrigger, depending on which mode it’s in.

    Particularly if you’ve ever played an OSCar in duo mode, you’ll know that the voice manager is far from perfect. If you take your finger off the second voice, the voice manager doesn’t tidy up the voices at all, so it just continues to play until either you take your hand away completely or play another voice. I was seeing bugs just by inspecting the code, and I know that Chris must have been aware of them, but there’s only so much you can fix with six bytes free. Now I have a decision to make about how I fix the voice manager while respecting the sound and feel of the original instrument.
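    To make the idea concrete, here’s a minimal sketch in C of what the ‘lend a held note back’ behaviour might look like with the tidy-up included. The allocation policy (last-note priority, steal the second voice) and every name in it are my own invention for illustration; it is emphatically not Chris’s routine, which I’ll be fixing on its own terms:

    ```c
    /* Toy two-voice allocator: last-note priority, with the
       'lend a held note back on release' tidy-up. Illustrative
       only -- not the OSCar's original logic. */
    #include <assert.h>

    #define MAX_HELD   16
    #define NUM_VOICES 2
    #define NO_NOTE    (-1)

    static int held[MAX_HELD];         /* keys down, oldest first   */
    static int n_held = 0;
    static int voice[NUM_VOICES] = { NO_NOTE, NO_NOTE };

    static void note_on(int note) {
        if (n_held < MAX_HELD)
            held[n_held++] = note;
        /* Take a free voice if there is one; otherwise steal the
           second (newest) voice. */
        for (int v = 0; v < NUM_VOICES; v++) {
            if (voice[v] == NO_NOTE) { voice[v] = note; return; }
        }
        voice[NUM_VOICES - 1] = note;
    }

    static void note_off(int note) {
        /* Forget the released key. */
        for (int i = 0; i < n_held; i++) {
            if (held[i] == note) {
                for (int j = i; j < n_held - 1; j++)
                    held[j] = held[j + 1];
                n_held--;
                break;
            }
        }
        /* Free the voice, then lend it the newest held key that
           isn't already sounding -- the tidy-up the original skips. */
        for (int v = 0; v < NUM_VOICES; v++) {
            if (voice[v] != note)
                continue;
            voice[v] = NO_NOTE;
            for (int i = n_held - 1; i >= 0; i--) {
                int sounding = 0;
                for (int u = 0; u < NUM_VOICES; u++)
                    if (voice[u] == held[i])
                        sounding = 1;
                if (!sounding) { voice[v] = held[i]; break; }
            }
        }
    }
    ```

    Play three keys and release the newest, and the second voice picks up the most recent key still held, rather than droning on as the original does.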

    The part marked ‘sums’ (terrible name, Chris) that I’ve turned green this week is the timing logic. OSCar uses seven digital timers internally, three of which are allocated to each of its two voices. It gets so complicated because OSCar uses fixed-size wave tables, so the sample rate changes with every note: two timers have to be maintained to keep the note’s pitch correct. One of these works at the note frequency, and another works at a multiple of it, counting down from 256. Some logic elsewhere in the circuitry uses this counter to convert the right sample from each wave table at the right time.

    When the pitch gets above about 400Hz, it becomes too high for the hardware to traverse the wave table quickly enough. The third timer then comes into play to begin skipping samples on purpose, effectively shortening the wave table from 256 samples to 128, 64, and so on.

    All of this is governed by some reasonably subtle logic Chris wrote so that the sample-skipping compromise happens in a different place depending on whether you’re ascending or descending in pitch, which makes the change in timbre less obvious.
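    The table-shortening rule itself is easy to sketch. The 102.4 kHz ceiling below is an assumed figure, chosen simply so the first halving lands at 400 Hz to match the description above; the real hardware limit, and Chris’s ascending/descending hysteresis, are not modelled:

    ```c
    /* Sketch of the fixed-size wavetable compromise: halve the table
       until the sample clock fits under an assumed hardware ceiling.
       Numbers are illustrative, not measured from the OSCar. */
    #include <assert.h>

    #define FULL_TABLE      256
    #define MAX_SAMPLE_RATE 102400.0  /* assumed ceiling, Hz: 400 Hz x 256 */

    /* Effective table length for a note: 256 samples at low pitch,
       dropping to 128, 64, ... as the note rises. */
    static int table_length(double note_hz) {
        int len = FULL_TABLE;
        while (len > 1 && note_hz * len > MAX_SAMPLE_RATE)
            len /= 2;
        return len;
    }

    /* The faster of the two per-voice timers runs at the sample
       clock: note frequency times the (possibly shortened) table. */
    static double sample_clock_hz(double note_hz) {
        return note_hz * table_length(note_hz);
    }
    ```

    Under these assumed numbers, a 392 Hz note still gets the full 256-sample table, while 440 Hz drops to 128 and 880 Hz to 64, which is the shape of the compromise described above.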

    So much for a week’s progress. The difficulty of all this integration is one excuse I might give myself; the other is that this was the first attempt to dovetail the ported synth code with a new hardware abstraction layer, so I took a few attempts to get into a pattern that looked like it’d scale manageably to the rest of the code.

    In completing Operation Stylophone, I also reached a realisation about how to progress from here. The original mission was to read and annotate all the firmware first: a kind of ‘breadth-first’ strategy. This was invaluable for mapping out the synth and scoping the work but, once I’d done about two thirds of it, it was clear that this isn’t the right way to continue. There are two reasons.

    1. There’s only so much detail I can hold in my head. Revisiting the timer logic had me looking through and revising my notes just so I could remember what I’d found out a few months ago. It’ll be more efficient to write C routines against routines I’ve only just annotated, to keep them fresh in my memory.

    2. I glossed over important details the first time around. The code is packed with twists and turns that manage edge cases. Occasionally I’d miss these, either by mistake or because I was too tired or bored to traverse them, so again I really ought to be writing and testing working C as I go.

    Now I’ve got a Stylophone to improve, there’s a choice about where to go next: my rules just suggest I should serve playability, and pretty much anything I do from here will improve that.

    Right now, it makes most sense to work on the voice manager, and then do something sensible and rudimentary with the VCA and VCF, so I can use the line socket, listen to both voices at once, and assign them notes more sensibly.

  • Staying motivated

    It does seem odd to make part 2 of this OSCar series about how not to give up. Especially as this is one of my actual jobs.

    Here’s the thing: I spend a lot of time working with just myself and the ghost of Chris Huggett. This project will have taken me over a year by the time the synth is ready to demonstrate (hopefully at Superbooth in May). Even Paul Whittington, who is in charge of the OSCar brand, spends most of his time running PWG and dealing with the problems of supply chains, distributors, retailers, and customers. Being a company director is, after all, a picaresque adventure which is largely out of your control but always your responsibility. Anyway, long story short, his focus is not always on making the next product.

    Meanwhile, I go for weeks without having anything playable, or even explicable, to show for my efforts. This stage of the project, with a desk full of hardware with a few desultory blinking lights and some quickly-moving voltages, is largely about getting the psychological barriers out of the way.

    This post is not:

    1. The usual ‘go out and walk occasionally’ post: I went and took a walk a fortnight ago.
    2. A ‘beast mode’ post about how to distil measurable productivity from every mean minute of your life on this planet. Honestly, those people think they’re winning, but in any healthy society they would envy the homeless.
    3. A post exploring technical best practice. That theme, I hope, will be mostly implicit in everything I write.

    If this is about anything, it’s accepting the fact that most of the things that make our planet a nice place to live were embroidered on a fabric of pissing about in company time. (You’ll find this idea expressed in myriad ways by everybody whose books you like to read, whose music you like to listen to, and whose company you enjoy.) So this, and writing about it, is basically how I keep myself in a frame of mind to finish a long, solo project that began with the intention of bringing people some joy. If you’re serious about that, it can feel like a zero-sum game.

    Designing and building the hardware part was fun in its own right: I’m lucky to appreciate this stage of proceedings as a miniature challenge, on a par with a cryptic crossword. There are many right ways to translate a design to a new platform, and this choice is quite satisfying.

    The doldrums for me began when I had a large amount of 1980s-era firmware to port before my synthesiser would do anything at all. I can’t just replace it with bits of the Mantis that I’d already written myself: authenticity to the original OSCar is the principal point of the exercise. First, appraise it critically and completely. Then, translate it into C for a new platform.

    Firmware is where all the complexity is hidden, and people care about whether it’s done just right. The problem is that it wasn’t, really: it was written by an electronic engineer in 1983. A masterpiece in terseness it may be: scarcely a byte is wasted and the EPROM has only six left. But bugs jump out from inspection as well as in use: some weren’t noticed; others might have been, but what can you fix with six bytes free? And the user interface is of its time, which means it’s appalling.

    But owning the firmware is tough. It’s high-difficulty, high-stakes, and needs to be approached with the kind of reverence that requires leaving the desk occasionally to drink tea while subjugating one’s ego. What follows is a few personal rules I made to make it more bearable.

    Rule one. De-scope features that aren’t important until the end

    The sequencer and arpeggiator are big parts of OSCar, but the synth is playable and demonstrable without them. 90% of the fun is in the way the thing plays, sounds, and responds as a simple instrument. We could take the prototype to Superbooth, let people play it, and those features would be missed, but not desperately. So I’ve decided to afford them no thought until I have to.

    Meanwhile, the cassette interface probably won’t make the final unit because there’s no point in having it there. More on that in a later post.

    Rule two. Playability always wins

    This is really an extension of rule one. Once I’d brought up the basic hardware, and proved to myself (for example) that the power supplies, timers, USB, and MIDI all work, the question was how to port Chris’s firmware. There’s thousands of lines of it, and it all has to end up inside somehow.

    What, then, to do next? If I’m optimising for my own motivation, the right answer is whatever gets us closest to having an enjoyable musical instrument. So I started with the wave tables and oscillators, just so I can bodge a connector onto a test pin and hear the synth making a noise. Then I got the pitch conversions working so those waveforms can be made to respond to musical pitch. Next, I’ll get the key scanner logic going so I can play the keyboard. That will provide a very bare experience with no presets, no filters or envelopes, no pitch bend, and no real voice management. It’ll sound like a Stylophone. But it also gets me to a stage where I can play to people, get some pleasure out of my work, and hear all subsequent progress.

    There are a few awkward corners of hardware that still aren’t completely tested or don’t work properly. But, as no deadline looms, even locking the PCB design is a secondary priority to having something that plays. At worst, I’ll find something off with the filters, and have to spend a day or two fiddling with resistor values and green wire. But a confrontation with my failure at the hands of the physical world is less humiliating if I can suck it up while playing a line of Bach. Also, having something playable makes debugging somewhat easier.

    Rule three. Make progress visible

    OSCar lends itself well to a progress-o-gram. I last messed with these when I was writing my PhD thesis at the end of 2004:

    It gave me visual evidence of the thesis word count growing. I posted it on this blog so people would hold me to account. (It was eventually submitted on 30th March 2005 at 45,508 words, 65 figures, and 108 references. Not a long thesis, but it got me there.)

    With OSCar, there are 8 kilobytes of Z80 code to inspect, take ownership of, and port onto the modern platform. (8 KiB doesn’t sound like a lot, but written in assembler for an 8-bit CISC processor, believe me, it is.) Porting the firmware is a process of two distinct stages. The first job is assimilation. I start with a combination of my own source code (begun as an automatic disassembly of the ROM and, with the help of the hardware schematics, worked out line by line over several hundred hours) and Chris’s source code, which we recovered several months later. The exercise is to determine exactly what’s going on inside the synth.

    You’d think it’d be easier to junk my own disassembly and just continue with Chris’s source code, but in practice I spend little time with his work. The reasons can be explained in a later post.

    A couple of things make progress-o-grams easier than they used to be. First, we have Python for this kind of scripting nonsense. Back in the early 2000s, I had to write my own GIF compressor in Object Pascal. We also have Claude so, rather than scrape the resources together to generate my own chart, I can just write a spec and see what it comes out with.

    Here, then, is where we stood a couple of days ago:

    The module names are from Chris’s source code: my version, of course, came out as one big lump. The striped areas are the ones I’m leaving until last because of rule one. The reason why I went through the whole cassette interface is because we have one old cassette, full of drop-outs, that contains original presets followed by a couple of demo songs by an unknown band, and then an album by The Police. At the time, we thought this was our last stand.

    When users bought the MIDI retrofit, they were provided with a pre-recorded cassette that allowed them to load the original factory presets onto a new battery-protected RAM chip, because there was no longer any room for them in ROM. In a world where the compact cassette was the only reusable storage medium for music, it’s unlikely that anyone besides the Oxford Synthesiser Company bothered to preserve this data for posterity.

    At least a complete copy of the instructions survives

    I decoded the old cassette routines, just so I could recover what I could from the cassette. It is, to say the least, a nonstandard modulation scheme. Again, that’ll be another post.

    Fortunately, on a floppy disk backup of Chris’s dated 1983, I found the source file for the original ROM-based presets that we’d thought to be lost to history. Problem solved.

    But the graph is important, because what often looks like endless, contextless, meaningless toil suddenly turns into painting a wall. As I scroll between different date-stamped versions, the wall turns red to orange to yellow to green, and I gain some small hit of dopamine that cannot be found in the minutiae of the envelope gating and triggering routines.