So, a while back I saw a few videos of Dan Tepfer’s where he was exploring ideas like algorithmic music and visual representations of music. One video that really grabbed my attention was his exploration of canons and automated canon playback, which he posted on Instagram:
At the time, I was struggling to write a prolation canon for this string quartet I’ve been slowly working on, off and on, for the last month or so.1 Sadly, the rules Tepfer mentions apply only to simpler sequential canons, not to prolation canons, so I suspect I’ll need to work out the prolation canon slowly, relying on common sense and my ear.
However, the software Tepfer was using caught my attention. In the course of the video, he specifically mentions SuperCollider, which I soon learned he was also using for other experiments and creations in algorithmic music, such as his Natural Machines project:
That excited me, because for a while now I’ve been saying that we ought to be able to do some pretty cool things with machine/human interactive performance. Obviously, if you’re an amazing pianist like Tepfer, you can do all kinds of things in solo interaction with software, but I’m curious just how far the technology could be pushed in terms of getting an algorithm to “listen” and “respond” to live musical performance by one or more human beings.
Anyway, SuperCollider’s free, so I’ve installed it and started working through some tutorials while I wait for my copy of The SuperCollider Book to arrive. So far, I’ve just started exploring, so I have nothing amazing to show for it yet, though I can twiddle some oscillators in real time and output the audio to Audacity. The sounds on this track, in fact, were so complex that Audacity struggled to play them back without lagging (that, or my poor old Mac’s memory is just overtaxed), and I had to export them to MP3 to actually hear the output properly again without (as much) weird distortion and choppiness:
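For the curious, the kind of real-time oscillator twiddling I mean looks something like this in SuperCollider. This is just a minimal sketch: the frequencies, the LFO rate, and the detuning factor are arbitrary numbers I’ve picked to fiddle with, not anything principled:

```supercollider
s.boot;  // start the audio server first

// Two slightly detuned sine oscillators, swept by a slow LFO.
// Evaluate the block, listen, change a number, evaluate again.
(
{
    var sweep = SinOsc.kr(0.3).range(330, 550);  // slow sweep between 330 and 550 Hz
    var left  = SinOsc.ar(sweep, 0, 0.2);
    var right = SinOsc.ar(sweep * 1.01, 0, 0.2); // detuned copy, for a beating stereo pair
    [left, right]
}.play;
)
```

Each re-evaluation starts a fresh synth on the server (so the old one keeps sounding until you stop it), which is exactly the knob-twiddling, see-what-happens workflow I’m describing.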
(My son’s commentary sums up that track: “It’s like a dream inside a robot’s brain!” It’s his first brush with old-fashioned synth noise, and he responded by dancing around a little, then working the keys on my saxophone and thumping the drum pads on my e-drum kit!)
This isn’t much like writing music in the traditional way: it’s more like inputting numbers and then seeing what the computer does with them—a bit like how one might mess with knobs and switches on a modular synth to see what the machine spits out. Still, it’s fun to adjust numbers and see what happens, and moreover, I suspect it’s a stepping stone towards more interesting things. (I can already see how one could build ways for it to “listen” for incoming MIDI data and “respond” with algorithmically manipulated versions of the same.)
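To make the “listen and respond” idea concrete, here’s a minimal sketch of what I have in mind. The transposition up a fifth is an arbitrary choice, just to show a response that differs from the input, and it assumes a MIDI keyboard is already connected:

```supercollider
MIDIClient.init;
MIDIIn.connectAll;

// For every incoming note-on, answer with a short sine pluck
// transposed up a fifth -- a trivial "algorithmic response".
MIDIdef.noteOn(\respond, { |vel, num|
    {
        SinOsc.ar((num + 7).midicps, 0, vel / 127 * 0.3)
            * EnvGen.kr(Env.perc(0.01, 0.5), doneAction: 2)
    }.play;
});
```

Swap the `num + 7` for anything else—inversion, delay, a whole generated phrase—and you start to see how a real listening-and-responding system could grow out of something this small.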
For now, that’s the step that starts what I imagine could be quite a long journey: SuperCollider’s an entire programming language for algorithmic music (as well as the environment and associated audio software in which the language runs), so there’s a lot to dig into. We’ll see how far I get with it, I guess. The book is almost 800 pages long, though how much of that is reference material, I’m not sure. (As usual, I was able to find a second-hand copy for cheap, but it’ll need to cross an ocean before I get to dig into it.)
Yes, I realize those are string orchestra patches. I don’t have a good sample-based VST for solo strings.↩