Follin from Grace

Now, I work in the synthesiser business because I wanted to be a musician, but also because I wanted to mess with electronics. This thing that I call a career is largely a consequence of being too scared to commit fully to either track, and I’m compelled to try to make a living from what I’d happily do for free. Even so, occasionally I get bored and research things like this.

In the last few months I happened across mentions of a familiar but rather obscure type-in music demo by a composer called Tim Follin, originally published in a British magazine in 1987. I thought this was an ancient curio that only I’d remember vividly from the first time around, with the unprepossessing title Star Tip 2. But apparently it’s been discovered by a younger generation of YouTubers, such as Cadence Hira and Charles Cornell, both of whom went to the trouble of transcribing the thing by ear. (You should probably follow one of those links or you won’t have the faintest grasp of why I’ve bothered to write this post.)

My parents — lucky me! — subscribed me to Your Sinclair at some point in 1986. It was the Sinclair magazine you bought if you wanted to learn how the computer really worked. So I was nine years old when the August ’87 issue flopped onto the doormat. One of the first things I must have done was to type in this program and run it. After all, I had my entire adult trajectory mapped out at that age. Besides, it’s a paltry 1.2 kilobytes of machine code. Even small fingers can enter that in about an hour and a half, and you wouldn’t normally expect much from such a short program.

When I got to the stage where I dared type RANDOMIZE USR 40000, what came out of the little buzzer was a thirty-eight second demo with three channels of audio: a prog rock odyssey with detune effects, crazy virtuosic changes of metre and harmony, even dynamic changes in volume. All from a one-bit loudspeaker. It seemed miraculous then and, judging by the breathless reviews of latter-day YouTubers with no living experience of 8-bit computers, pretty miraculous now. And the person who made all this happen — Tim Follin, a combination of musician and magician, commissioned by an actual magazine to share the intimate secrets of his trade — was fifteen years old at the time. Nauseating.

After three and a half decades immersed in this subject, my mathematics, Z80 assembler, music theory, audio engineering, and synthesis skills are actually up to overthinking a critical teardown of this demo.

The code is, of course, compact. Elegant in its own way, and clearly written by a game programmer who knew the processor, worked to tight time and memory budgets, and prioritised these over any consideration for the poor composer. This immediately threw up a problem of authorship: if Tim wrote the routine, surely he would have invested effort to make his life simpler.

Update: I pontificated about authorship in the original post, but my scholarship lagged my reasoning. I now have an answer to this, along with some implicit explanation of how they worked and an archive of later code, courtesy of Dean Belfield here. Tim and Geoff Follin didn’t work alone, but their workflow was typically crazy for the era.

From a more critical perspective, the code I disassembled isn’t code that I’d have delivered for a project (because I was nine, but you know what I mean). The pitch counters aren’t accurately maintained. Overtones and noise-like modulation from interactions between the three voices and the envelope generators are most of the haze we’re listening through, and the first reason why the output sounds so crunchy. And keeping it in tune … well, I’ll cover that later.

Computationally, it demands the computer’s full attention. The ZX Spectrum 48K had a one-bit buzzer with no hardware acceleration of any kind. There is far too much going on in this kind of music to underscore a game in play. Music engines like these played title music only, supporting less demanding tasks such as main menus where you’re just waiting for the user to press a key.

The data

In the hope that somebody else suffers from being interested in this stuff, here’s a folder containing all the resources that I put together to take apart the tune and create this post:

Folder of code (disassembly, Python script, output CSV file, MIDI files)

The disassembly

The playback routine is homophonic: it plays three-note block chords that must retrigger all at once. No counterpoint for you! Not that you’d notice this restriction from the first few listens.

Putting it in the 48K Spectrum’s upper memory means it’s contention free. The processor runs it at 3.5MHz without being hamstrung by the part of the video electronics that continually reads screen memory, which made life a lot easier both for Tim and for me.

So there are the note pitches, which appear efficiently in three-byte clusters, and occasionally a special six-byte signal beginning 0xFF to change the timbre. This allows the engine to set:

  • A new note duration;
  • An attack speed, in which the initial pulse width of 5 microseconds grows to its full extent of 80 microseconds;
  • A decay speed, which is invoked immediately after the attack and shortens the pulse width again;
  • A final pulse width when the decay should stop.

This arrangement is called an ADS envelope, and it’s used sparingly but very effectively throughout. In practice, notes cannot attack and decay slowly at different speeds in this routine, because it alters the tuning too much.
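As a sketch of how such a stream might be decoded, here’s a minimal Python parser. The exact ordering of the bytes inside the six-byte command is my assumption for illustration (including a spare sixth byte); the disassembly in the folder below is authoritative.

```python
def parse_stream(data):
    """Walk a byte stream of three-byte chords and 0xFF-prefixed
    envelope commands. The byte layout here is a guess at the
    scheme described in the post, not a copy of the Z80 routine."""
    events = []
    i = 0
    while i < len(data):
        if data[i] == 0xFF:
            # Hypothetical six-byte command: marker, note duration,
            # attack speed, decay speed, final width, spare byte.
            _, dur, atk, dec, final, _ = data[i:i + 6]
            events.append(("env", dur, atk, dec, final))
            i += 6
        else:
            events.append(("chord", *data[i:i + 3]))
            i += 3
    return events

demo = bytes([0xFF, 8, 2, 4, 10, 0,   # set the envelope
              60, 64, 67])            # then one block chord
```

Running `parse_stream(demo)` yields one envelope event followed by one chord event, which is the general shape of the data: mostly chords, with the occasional timbre change.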

Multichannel music on a one-bit speaker

The duty cycle of a pulse wave measures its symmetry: the fraction of each period that the wave spends in its high state. A square wave has high and low states of equal length, so a 50% duty cycle.

80 microseconds, the longest pulse width used in Star Tip 2, is a very low duty cycle: it’s about three and a half audio samples at 44.1kHz, and of the order of 1–2% for the frequencies played here. Low-duty pulse width modulation (PWM) of this kind is the most frequent hack used to give the effect of multiple channels on a ZX Spectrum.
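The arithmetic is easy to check in a couple of lines of Python:

```python
PULSE_S = 80e-6  # the widest pulse in Star Tip 2, in seconds

# At CD rate, an 80 microsecond pulse spans about 3.5 samples.
samples = PULSE_S * 44_100

def duty(freq_hz, width_s=PULSE_S):
    """Duty cycle of a fixed-width pulse repeating at freq_hz."""
    return width_s * freq_hz

# A bass note around 250 Hz gives a duty cycle of 2%.
low_duty = duty(250)
```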

There are many reasons why. Most importantly, it is simple to program, as shown here. You can add extra channels and just ignore existing ones, because the active part of each wave is tiny and seldom interacts with its companions. Better still, you can provide the illusion of the volume changing by exploiting the rise time of the loudspeaker and electronics. In theory, all a one-bit speaker can do is change between two positions, but making the pulse width narrower than about 60 microseconds starts to send the speaker back before it has had time to make a full excursion, so the output for that channel gets quieter.
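A toy model of the technique in Python (emphatically not the original Z80 logic): each channel contributes one narrow pulse at the start of its own period, and the channels are simply ORed into a single bitstream. Because the pulses are so narrow, collisions between channels are rare.

```python
def onebit_mix(freqs, pulse_s=80e-6, sr=1_000_000, dur_s=0.01):
    """Render several low-duty pulse channels into one one-bit
    stream by ORing. A sketch of the idea, not the real engine."""
    n = int(sr * dur_s)
    out = [0] * n
    width = int(pulse_s * sr)      # 80 samples at 1 MHz
    for f in freqs:
        period = sr / f            # samples per cycle of this voice
        for i in range(n):
            if (i % period) < width:
                out[i] = 1         # OR with whatever is there already
    return out

# Three channels coexist without fighting over the output bit.
stream = onebit_mix([233.1, 246.9, 261.6])
```

Adding a fourth voice is just another pass of the loop; none of the existing channels needs to know about it, which is exactly the property the paragraph above describes.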

The compromise is the second reason for the crunchy timbre of this demo: low-duty PWM makes a rather strangulated sound that always seems quite quiet. This is because the wave is mostly overtones, which makes it harmonically rich in an almost unpleasant way. Unpicking parallel octaves by ear is almost impossible.

The alternative to putting up with this timbre is to use voices with a wider pulse width, and just let them overload when they clash: logically ORing all the separate voices together. When you are playing back one channel, you have the whole freedom of the square wave and those higher-duty timbres, which are a lot more musical.

This is computationally more involved, though, as you have to change your accounting system to cater for all the voices at once. Worse, you strengthen the usual modulation artifacts of distortion: sum and difference overtones of the notes you are playing.
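You can demonstrate those sum and difference tones numerically. Below, two idealised square waves at 300Hz and 400Hz are mixed once by linear addition and once by ORing; a naive DFT shows a 100Hz difference tone in the OR mix that is absent from the sum. (The frequencies and window length are chosen for tidy arithmetic, not taken from the tune.)

```python
import cmath

def square(freq, sr, n):
    """Idealised unipolar square wave: n samples at rate sr."""
    return [1 if (i * freq / sr) % 1 < 0.5 else 0 for i in range(n)]

def dft_mag(x, k):
    """Magnitude of DFT bin k, normalised by the window length."""
    n = len(x)
    return abs(sum(v * cmath.exp(-2j * cmath.pi * k * i / n)
                   for i, v in enumerate(x))) / n

sr, n = 12_000, 240                        # a 20 ms analysis window
a = square(300, sr, n)
b = square(400, sr, n)
or_mix = [x | y for x, y in zip(a, b)]     # one-bit OR mixing
add_mix = [x + y for x, y in zip(a, b)]    # linear addition

# Bin 2 is 100 Hz: the difference tone appears only in the OR mix.
```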

So, on the 48K Spectrum, you have to choose to perform multichannel music through the washy, nasal timbre of low-duty PWM, or put up with something that sounds like the world’s crummiest distortion pedal. (Unless, like Sony in the late Nineties, you crank up the output sample rate to a couple of megahertz or so, add a fierce amount of signal processing, really exploit the loudspeaker excursion hack so it can play anything you want it to, and call the result Direct Stream Digital. But that’s a different story. A couple of academics helped to kill that system as a high-fidelity medium quite soon afterwards by pointing out its various intractable problems. Still, when it works it works, and Sony gave it a very expensive try.)

There’s one nice little effect that’s a consequence of the way the engine is designed: a solo bass B flat that first appears in bar 22, about 27 seconds in. The three channels play this note in unison, with the outer voices detuned up and down by about a twelfth of a semitone. We’re used to this kind of chorus effect in synth music, but the result is especially gorgeous and unexpected on the Spectrum’s speaker.

You don’t get many cycles in 0.2 seconds for bass notes, but here are the three detuned voices in PWM, with the pulse troughs drifting apart over time.
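For a feel for the numbers: assuming the note is B♭1 at about 58.3Hz (the octave is my guess), a detune of a twelfth of a semitone either way separates the outer voices by roughly half a hertz, which is the rate of that slow drift.

```python
BASS_BFLAT = 58.27   # Hz, B-flat 1 in equal temperament (assumed octave)
TWELFTH = 100 / 12   # a twelfth of a semitone, about 8.3 cents

def detune(freq, cents):
    """Shift a frequency by a signed number of cents."""
    return freq * 2 ** (cents / 1200)

up = detune(BASS_BFLAT, +TWELFTH)
down = detune(BASS_BFLAT, -TWELFTH)
beat = up - down     # drift rate between the outer voices, in Hz
```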

A tiny bit of messing with the code

I didn’t do much with this code on an actual ZX Spectrum, but it’s possible to silence different voices on an emulator by hacking the OUT (254), A instructions to drive a different port instead. You can’t just swap them for no-operations, because that alters the loop timing, and with it the speed and pitch. POKE 40132,255 mutes channel one; 40152,255 mutes channel two; 40172,255 mutes channel three.

Python data-to-MIDI conversion

Are you running notes with a slow attack or decay? If so, the chord loop runs somewhat faster than it does when your code bottoms out in the envelope sustain stage. Are you now playing a higher-pitched note? Your chord loop now runs a little slower on average because the speaker needs to be moved more often, so all your other notes go slightly flatter.

The pitch of every note in this piece of code depends on everything else. Change one detail and it all goes out of tune. I had visions of Tim Follin hand-tuning numbers in an enraging cycle of trial and error, which would somewhat have explained his devices of ostinato and repetition. But it turns out from his archive, now online thanks to Dean Belfield, that he possessed some compositional tools that were jointly maintained by his associates. Having written the calculator in the opposite direction, I can confidently say: rather them than me.

To get the note pitches out accurately, you need a Spectrum emulator, so I wrote the timing part of one in Python. It counts instruction cycles, determines the average frequency of chord loops given the configuration of envelopes and pitches for every set of notes, and uses these to extract pitches and timings directly. The Python data-to-MIDI script takes the data straight from the Spectrum source code, and uses MIDIUtil to convert this to MIDI files.
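The core arithmetic is simple once the cycle counting is done: at 3.5MHz, a chord-loop period measured in T-states maps straight to a frequency, and thence to a fractional MIDI note number. A simplified sketch of that step (the real script also has to average over the envelope stages):

```python
import math

CLOCK = 3_500_000  # 48K Spectrum Z80 clock, in Hz

def tstates_to_midi(tstates_per_period):
    """Map one waveform period, measured in T-states, to a
    fractional MIDI note number (69 = A4 = 440 Hz)."""
    freq = CLOCK / tstates_per_period
    return 69 + 12 * math.log2(freq / 440.0)
```

A period of exactly CLOCK / 440 T-states lands on MIDI note 69; every T-state of slop in the loop nudges the result microtonally, which is where the detail in the next section comes from.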

The code generates three solo tracks for the three voices, follin1.mid to follin3.mid, and all three voices at once, as follin123.mid. The solo voices include fractional pitch bend, to convey the microtonal changes in pitch that are present in the original tune. (By default, the pitch bend is scaled for a synthesiser with a full bend range of 2 semitones, but that can be changed in the source code. Pitch bend is per channel in MIDI, not per note, so that data is absent from the three-voice file.)
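For anyone reimplementing this, the fractional pitches translate to pitch bend roughly as follows. This is a sketch of the idea only; MIDIUtil does the actual byte packing in the real script, and the 2-semitone full bend range matches the default mentioned above.

```python
BEND_RANGE = 2.0  # synth's full-scale pitch bend, in semitones

def bend_value(note_float):
    """Split a fractional MIDI note into (integer note, 14-bit
    pitch bend value), where 8192 means no bend."""
    note = round(note_float)
    frac = note_float - note                   # -0.5 .. +0.5 semitones
    bend = 8192 + round(frac / BEND_RANGE * 8192)
    return note, max(0, min(16383, bend))
```

So a note 25 cents sharp of A4 becomes note 69 with a bend of 9216, an eighth of the way up the bend range.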

The MIDI files also export the ADS envelopes, as timed ramps up and down of MIDI CC7 (Volume) messages. Because the music is homophonic, they are identical for every voice, meaning that the composite track can contain them too.

Microtonal pitch

There are some interesting microtones in the original data. Some of the short notes in the middle voice are almost 30 cents out in places, but not for long, and it seems to work in context. As does the quarter-tone passing note (is it purposeful? Probably not, but it all adds feeling) at bang on 15 seconds.

Cadence Hira’s uncanny transcription is slightly awry in bar 10 here: the circled note is actually a B-half-sharp that bridges between a B and what’s actually a C in the following bar. Meanwhile the bass channel is playing B throughout the bar. But she did this by ear through a haze of nasty PWM and frankly it’s a miraculous piece of work. Two things count against you transcribing this stuff. First, the crunchy timbre issues we’ve already discussed that stem from the way this was programmed. Second, the sheer speed of this piece. Notes last 100ms: your ear gets twenty or thirty cycles of tone and then next! If you’re transcribing by ear you have to resort, as Cadence has, to filling in the gaps with your music theory knowledge of voice leading.

Most of the other notes are within 5 cents of being in tune. Once you expect them, you can just about hear the microtones in any of the multiple recordings of the original (example), but only if you slow them down.

A big CSV file is also generated for the reader’s pleasure, with:

  1. The time and duration of every note in seconds;
  2. The envelope settings for each note;
  3. All the interstitial timing data for the notes in T-states (more for my use than anybody else’s);
  4. Exact MIDI note pitches as floats, so you can inspect the microtonal part along with the semitones.

Because the detail in the original tune was quite unclear, it wasn’t possible to be sure whether the quirks in tonality were intentional. Originally I did not bother to include microtones, but added them to the MIDI a week after posting the first draft.

I’ve now satisfied my initial hunch that the microtones aren’t deliberate, but a phenomenon of the difficulty of keeping this program in tune. They also happen to be quite pleasant. But going beyond conventional tonality was not essential to appreciating the music, or to recreating the intentions of the composer.

Getting the timing right

Including exact note timings has proved interesting for similar reasons to tonality: the whole notes (semibreves, if you must) are about a sixteenth-note (a semiquaver, if you insist) shorter than intended because of the disparity in loop speeds. That is definitely unintentional because the specified durations are round numbers, with no compensation for different loop speeds. Again, the feeling of a slightly jagged metre works well in context.

But the care I have taken in accountancy seems to have paid off: you can compare this MIDI against an emulated Spectrum and the notes match, end to end, with a total drift of less than ten milliseconds.

The exact reconstruction: mix 1

As a way of avoiding real work, here’s an ‘absolutely everything the data gives us’ reconstruction of Star Tip 2 — microtones, envelopes and all — played through a low-duty pulse wave on a Mantis synth.

In other words, this is simply the original musical demo on a more advanced instrument: one that can count cycles and add channels properly. The only thing that I added to the original is a little stereo width, to help separate the voices.

That B flat unison effect (above) is unfortunately a wholly different phenomenon when you’re generating nice band-limited pulses, mixing by addition, and not resetting phase at the beginning of every note. It’s gone from imitating something like a pulse-width modulation effect (cool) to a moving comb filter (less cool).

The quick and dirty reinterpretation: mix 2

This was actually my first attempt. My original reluctance to do anything much to the tune means I didn’t labour the project, using plain MIDI files with no fancy pitch or articulation data.

But, because I’m sitting next to this Mantis, I put a likely-sounding voice together, bounced out the MIDI tracks, rode the envelope controls by hand, and ignored stuff I’d have redone if I were still a recording musician.

Adding a small amount of drive and chorus to the voice creates this pleasing little fizz of noise-like distortion. It turns out that some of that is desirable.

Epilogue

Now I’ve brought up the subject of effects and production, neither of these examples is finished (or anywhere near) by the production standards of today’s computer game soundtracks. But I’m provoked by questions, and I hope you are too. First, philosophy: could either mix presume to be closer to the composer’s intentions than the original? Then, aesthetics: does either of these examples sound better than the other?

This brings me to the main reservation I had about starting this project in the first place: that using a better instrument might diminish the impact of the work. Star Tip 2 continues to impress because of its poor sound quality, not in spite of it. Much of its power is in its capacity to surprise. It emerged from an unassuming listing, drove the impoverished audio system of the ZX Spectrum 48K to its limit, and was so much better than it needed to be. But the constraints that dictated its limits no longer exist in our world. An ambitious electronic composer/engineer would need to explore in a different direction.

Exploring and pushing technical boundaries, then, is not the only answer. An equally worthy response would be a musical one. Twenty-four years after Liszt wrote Les Jeux d’eaux à la Villa d’Este for the piano, Ravel responded with what he’d heard inside it: Jeux d’eau. One great composer played off the other, but the latter piece turned out to be more concisely expressed, more daring, and quickly became a cornerstone of the piano repertoire. It’s hardly an exaggeration to say that it influenced everybody who subsequently wrote for the instrument. (Martha Argerich can pretty much play it in her sleep, because she’s terrifying.)

I’m not a composer, though, and definitely not one on this level. A magnificent swing band arrangement of a subsequent Tim Follin masterpiece? Somebody else’s job. Designing the keyboard used in that video? Creating the Mantis synth used above? Wasting a weekend on an overblown contemplation of a cool tune I typed in as a child? Definitely mine.

ADC Open Mic 2023 : Where the ideas come from

When an artist produces a work about an artist producing a work, it’s hard not to detect a cry for help.

My dearest cousin Geoffrey,

I have run out of the food of inspiration, and am now digesting myself. Am going quite spare. Any crumb of an idea that you might spare me, might spare me.

It occurs to me that this postmodern fad for self-reference in literature might be getting rather stale.

Yours etc.,
my dearest cousin Geoffrey.

The question ‘Where do your ideas come from?’ is an inside joke among writers. An oyster can’t tell you how to make a pearl. The grit gets in, who knows how, and the rest is nature.

But a creative endeavour needs an irritant or stimulus of some kind: the thing that gets you to the point where gradually revealing the work is enough to propel you forward.

Chuck Close was (until recently) a painter, but he might be just as famous for giving the world an aphorism: ‘Inspiration is for amateurs: the rest of us just show up and get to work’. It suggests, if you’re suitably attuned, that you can just pluck a starting point out of background noise.

Here’s one starting point:

A couple of years ago, people couldn’t go on holiday, so they spent their holiday money on guitars and microphones and plug-ins. And all was well, as long as our loved ones stayed alive, and we didn’t need microchips to build hardware with.

This year, the music tech budget is right back on holidays. Or it’s blown on something frivolous and stupid, like not freezing to death in winter. Everybody here is working hard to get back to where we were. And we’ll probably end up there anyway, but not by being complacent or losing ground.

A big consequence of having a slow year is that it makes public companies cheaper to invest in. Year by year, it’s getting more probable that any given person in this room has spent part of their waking life in the service of a private equity company, who have decided, in their own language, to go long.

These companies are like buy-to-let landlords in London, who convert every cubic foot of enclosed air into a mezzanine with a mattress on it. Like landlords, there are many exceptions, but not enough to soften the stereotype.

Inside a PLC, everything long-term, everything commercially risky, every corridor and stationery cupboard, and all but the most predictable project with the nearest horizon is under pressure to deliver a rapid return, or else be dissolved and absorbed into a more immediate hit.

Quarterly growth targets. They’re what inspired so many of us to get into audio.

The pressure towards banal uniformity in everything, everywhere, is well documented. This is just one of many causes, and ought to make us a bit angry. But I’m done as an employee, and I’m done being angry about things I can’t change, and experimental data suggests that being able to sit on a stage and whine about banal uniformity is a hard-won and delicate privilege.

Besides, when short-sightedness and inertia overtake a competitor it’s a great day for me.

But, as deadly sins go, anger is a powerful creative force. There’s always been money in anger, as demagogues and journalists know, and now there’s a whole corner of the tech world that exploits it to the extent that it’s called the rage economy.

Especially there, though, we end up with banal uniformity. For all the buttons it pushes in our brain stems, social media is flatly unsatisfying and becoming more so. A restaurant where all the meals are free because someone’s chewed them already. And then taken a cocktail stick and written ‘Try Grammarly’ in the dribble.

Who else here has deactivated a social media account in the last few weeks?

[hard to tell with the lights shining in my face and the auditorium unlit, but perhaps 10-15% of hands go up]

I’m left with LinkedIn and my God.

‘My company has a new product out. In case you missed my previous fifteen posts over the last two weeks, here’s another silent video of me playing with it in my home studio.’

‘Do you want a solution that’s truly unique and crafted to your business? We’ve got a warehouse full of the bastards.’

Is there a word for grudging capitalism? An acceptance of our fate in a bigger machine, like a stoned Karl Marx? A bearded Victorian bellowing ‘Workers of the world! Could we just keep money but, like, stop being such dicks about it?’

The second big trend that’s changing our world is jumping out of every surface here so enthusiastically that I barely need to introduce it. In common with the other innovations that have turned our industry inside out since the last time I laced a tape, machine learning changes the nature of inspiration, expression, and the art itself.

The tools for wielding it are getting so accessible that we’re running out of excuses not to play with them.

Computers aren’t creative in the same way we are, and may never be. But it doesn’t matter. Crafters romanticise the process, but success is mostly about the product. Other animals display whimsical creativity; we’re just the only primate that can hold a paintbrush properly. There’s no reason why creativity has to be the sole preserve of organic chemicals either.

So I’m going to leave you with this. I asked a friend what I should say tonight, and she then asked ChatGPT for ‘a short closing speech on the theme “keeping your enemies closer”. Intended for an audience of audio engineers who are very intelligent. Make it funny.’

For some reason, ChatGPT responded in the voice of P T Barnum on crack, getting hung up on alliteration. I’m going to spare you most of the words. Rest assured that writers are safe for a couple more months at least. It seems weird to give the final say to a large language model, but the closing words deserve centre stage, and we should probably get used to this.

May our mixes be clear,
and our enemies near
— but not too near: we don’t want feedback.

Thank you.

ADC Open Mic 2022: It’s About Time

How doth the little busy bee
Improve each shining hour,
And gather honey all the day
From every opening flower!

In works of labour or of skill,
I would be busy too;
For Satan finds some mischief still
For idle hands to do.

Isaac Watts (1674–1748)

Few subjects are as universal, or as ancient, as the desire to take control of time.

Stoic philosophy is two millennia old, and one of its obsessions is to juxtapose the grand arc of time against the human miniature. This is a major theme of the earliest chapters of the Bible, too: we are encouraged to make our stay here really count for something. Even if 15% of it is supposed to be kept fallow, and the purpose of the remaining 85% isn’t particularly clear.

For the last three hundred years, ever more gigantic systems of productivity and habitation have transformed the way we live. There are plenty of good books about that too. Five minutes isn’t long, so let’s pretend I’ve cited them.

Productivity, anyway, has become a science. Ideally, the more value you produce, the higher your reward.

Kind of. Hike across the landscape of anybody’s waking life, and you’ll find a few seams of riches and vast plains of desert. Salary reviews are a propaganda minister’s idea of a guided tour. A person’s most marketable skills can often turn out to be entitlement and suspicion.

Anyway, Productivity As Science! It’s also why, ever since this world required us to work alongside machines, we’ve been comparing ourselves unfavourably and unhappily with them.

We grasp at ways to be more mechanical. And measure and tinker to maximise speed. We Bullet Journal and step-count and Pomodoro and Asana and Huel and hack our sleep cycles in a quest to become ever leaner and more deterministic.

And all because time is precious and non-renewable. But the balancing side is equally important and we’ll —

This minute is sponsored by Grammarly. Suppose you want to send an all-important email. Can you feel that rising anxiety? Yeah. That’s how keeping you away from your content is meant to feel. Snap! goes a little neuron. Now every time you feel powerless, we’ll be here! A corner of your brain that is forever Grammarly.

These squalid little businesses, bullying and cajoling their ways into your mind and wallet. Your lizard brain, knowing it’s being gamed, literally soaking in its own fury until you’re completely beside yourself and wallop! You fill an important email with elementary grammatical errors.

So buy Grammarly today! Embrace the drip-drip privatisation of your insecurity. The erosion of your human agency. The certainty that your cold, dead computer will one day write better prose without your help, and become the chief editor of your every waking thought.

Grammarly.

— The other is equally important and we’ll get to it now. It is the cry for unstructured time; for slowness and chaos and intentional waste.

A whole other cluster of books dives more deeply into this seam of philosophy, promising to break us out of the battery farm. Books like In Praise of Slow, and Four Thousand Weeks. Best to read those on company time. And Samuel Beckett’s absurdist masterpiece, Waiting for MIDI 2.0.

But everybody will at some point feel the urge to rebel against order before it becomes a prison.

Because you can train your self-discipline to ever-greater feats of endurance. You can lighten its burden by doing things you actually enjoy. But you can’t drown out the countermanding voices forever.

When you have expended your reserves of self-control — and you will — what remains is pure id: the need to rebel, to slack off, and to reclaim whatever you’ve denied yourself. The roads not taken will burst forth in unrestrained song, and there you’ll be in the glass office again listening to the lecture about the importance of ‘attitude’ and ‘culture being a two-way street’ that comprises one very long uninterrupted sentence.

Aah, you can leave ROLI, but it never leaves you.

Elsewhere, headquarters of big tech companies now look like kindergartens, full of whimsical interior design and toys and sugary food. The idea is to tug us back from the grindstone, and into the proper middle-distance.

You cannot both floor the pedal and appreciate the scenic route, or daydream on the same afternoon when you’re shipping a beta, or form memories and nurture friendships while the world outside passes in a dark blur. Imagination is fragile, but it’s probably why we’re here.

My grandfather was more successful than I am. He used to urge me that rest is just as important as work. I wondered why he thought I’d need that advice as a twenty-year-old undergraduate. But, if you end up self-employed, there is nobody to insist that you take leave, and nobody to cover the cost. It turns out to be a failure of character if you don’t reach into your own pocket occasionally, and buy yourself some stillness.

To battle hard for marginal gains is a fool’s errand. Eking 20% more code from a working day isn’t going to throw you into the next orbit of wealth or wisdom. But widening your social circle and deepening your well of experience might.

So, note to self: words like ‘harder’ and ‘faster’ can be left to Daft Punk. Better days and better people often begin with ‘no’.

And Paul, I’m sorry I haven’t finished that demo yet, but I wrote this monologue for ADC. It’s not what you asked for but you might like it anyway.

Thanks and apologies as always,

Ben.