The word “infinite”, as in “infinite possibilities”, comes up quite often when a new synth, DAW, or any other piece of musical equipment is presented in ads, online videos or at trade shows. What the marketing departments usually mean by this hyperbole is that the device offers even more sonic possibilities and options, letting you create any sound imaginable, and that you get a huge quantity for the price you pay. Ultimately it’s like saying you get double the potatoes for the same price, which is a lot easier than venturing into the land of subjective taste and perception, and which ultimately seems to get the manufacturers their expected ROI.
The question is: do we really need more options? Is having “infinite possibilities” really a desirable thing? In this article (which was inspired by this thread over at Muff Wiggler, which in turn was sparked by this blog post) I’ll be looking at this complicated topic from different angles. Of course I can’t give a final answer to the many questions that arise when dealing with the notions of “infinite” and “options” (that would probably be beyond my possibilities). This post is rather a collection of notes, quotes and thoughts aimed at opening up a space for discussion and, hopefully, at giving you some inspiration for your musical work.
Too Many Options
When talking about infinite possibilities, I am instantly reminded of an old freeware VST plugin called Autogun, which only features one control: a preset selector.
Enter an amazing and unexplored sonic universe, armed only with our free Autogun, search 4294967296 (Four Billion Two Hundred Ninety Four Million Nine Hundred Sixty Seven Thousand Two Hundred Ninety Six) presets for their acoustic potential.
The plugin reduces software synths to their very essence: being a preset-choosing interface, which, let’s face it, is the only thing some people do with a softsynth anyway. Why get your hands dirty when you have an almost infinite pool of ready-made sounds to choose from?
Autogun is “infinite possibilities” in its worst incarnation (though I hope Image-Line was being a bit ironic when they released it) and does a great job of illustrating what so many people don’t like about modern DAWs with their countless, preset-packed software synths and effects. You get too many options, and each of these options is rigidly pre-defined by somebody else’s aesthetic vision.
When reading articles and books, or talking with musicians, it quickly becomes obvious that having more options is a crucial point in electronic music, one that relates closely to the discussion about limitations and creativity. I used to be quite involved in the chipmusic scene a couple of years ago, and a recurring theme there was (and still is, I guess) that people got into it because of the limitations. When you pick up something like a Game Boy and use it to make music, you’ll have to deal with a whole set of very strict limitations: only 4 monophonic voices, one of which can only produce noise, a very limited set of available waveforms, a very limited tracker-based interface to write the music, and so on. Many people getting into chipmusic were coming from laptop-based production and were kind of annoyed by the overabundance of choices and options.
Anders Carlsson (aka Goto80) puts it like this in the article The Chip Sect on his blog Chipflip (a recommended read btw, even if you totally hate the very sound of chiptune):
The chipscene is a sort of bounded culture, where the members choose to live in celibacy from the temptations of technology. A community of people based on abstinence rather than affluence. They dare to say no.
People on the outside cannot understand why you would want to limit yourself to old machines. For the outsiders, more options means more freedom. For the insiders, more options means more angst. More confusion.
Now of course one could argue that a lot of chipmusic sounds the same, and that this is probably due to the limitations of the hardware used. The same could be said about many other musical genres and styles, though. Great musicians will always make great music, even if they only have a frying pan and a paper clip at their disposal. Fact is: there is a strong link between limitations and music, but of course it’s more complicated than this.
Brian Eno has an interesting explanation of why technical limitations are a good thing (from The Revenge of the Intuitive; thanks to Muff Wiggler user nostalghia for pointing us to it):
The trouble begins with a design philosophy that equates “more options” with “greater freedom.” Designers struggle endlessly with a problem that is almost nonexistent for users: “How do we pack the maximum number of options into the minimum space and price?” In my experience, the instruments and tools that endure (because they are loved by their users) have limited options. Software options proliferate extremely easily, too easily in fact, because too many options create tools that can’t ever be used intuitively. Intuitive actions confine the detail work to a dedicated part of the brain, leaving the rest of one’s mind free to respond with attention and sensitivity to the changing texture of the moment. With tools, we crave intimacy. This appetite for emotional resonance explains why users – when given a choice – prefer deep rapport over endless options. You can’t have a relationship with a device whose limits are unknown to you, because without limits it keeps becoming something else.
In the article titled Collateral Damage (published in The Wire in 2013) Mark Fell also deals with this topic. The piece starts with an anecdote about Thomas Dolby and his approach to creating sounds on a synthesizer:
Back in the early 1980s, the synth pop guru Thomas Dolby was asked on British television to describe his ideal synthesizer. Although I can’t find any evidence of this on YouTube, I have a vague recollection that his reply was something like: “I sit at the synthesizer, I imagine any sound, the synthesizer makes the sound and then I play it.”
From this description we can assume that the sounds Dolby’s synth was capable of were limited only by the imagination of its owner. One might wonder why companies still try to sell synths as offering ever more sonic potential, if Dolby’s instrument from the 1980s was already capable of producing anything its master ordered. Fell then moves on to Phuture’s making of “Acid Trax” (commonly regarded as the first Acid House record) using the technically limited TB303 bass synth:
The story goes that neither of them knew how to use the Roland TB303, which was in those days a more or less ignored little synthesizer known for its astonishingly bad imitation of the bass guitar. Pierre explains how he couldn’t figure out how to work the 303 – it didn’t come with a manual – so he just started to turn the knobs. The result became the sonic signature of Acid House – not just the familiar squelchy Acid sound (which often steals the limelight in the Acid House story) but also the repeating musical sequence, the use of accents, portamento and varying note lengths.
Dolby’s approach to synthesizing sound is easily associated with how avant-garde composers like Stockhausen would have approached the matter. Academic electronic music has always been preoccupied with creating the previously unheard in a very deterministic way. Despite this, many of these sounds have a very consistent and recognizable aesthetic, which can be clearly associated with the technical limitations of the time, but also with the artistic vision of the composers and the cultural environment they were operating in. Even if we move to more contemporary expressions of the same approach, to musicians using “open canvas” programming systems like MaxMSP or Pure Data, we still find clearly distinguishable genres (though they might not always be labelled as such, or even have a name) and aesthetic currents. This makes me wonder whether those infinite possibilities, which you can indeed have with certain programming languages, are really so important for musical creativity, or whether there are more important factors at play.
Infinite is in the Details
Let’s take a step back and look at a classic instrument like the violin. We all have a very precise and distinct idea in our minds of how a violin sounds. Does that mean the violin only produces that one sound? Of course not. The more you listen to an instrument, the more it looks like a fractal: what looks like a simple structure from afar gets more and more complex the closer you get. But being a fractal also means there are repeating patterns our brain can cling to, so even in the finest details, even if you pluck it, scratch it or use it in more improper ways, a violin will sound like a violin. So on one hand you have the limitations deriving from the instrument’s physical characteristics (number and length of strings, construction and material of the body, etc.), and on the other hand your possibilities increase exponentially the more you explore it. With kvsu I recently worked on a project called acousmatic strings, where we attached exciter speakers to the bodies of stringed instruments (a classic string quartet), making them resonate via frequencies created in Csound. It’s pretty amazing how much everything sounds like a violin, even if you pass totally alien sounds through it.
It should be noted that this is not just about making new, unconventional sounds with old instruments. Sure, playing multiphonics on a wind instrument or preparing a piano is extremely interesting and worth all the experimentation, but what I’m talking about here are all those subtle variations that make up the instrument’s sonic palette and which, depending on the instrument, can be quite numerous.
This discourse of course doesn’t just apply to traditional instruments. For example, there’s a lot of depth to certain hardware synths. If you’ve ever put your hands on some of the classics, you’ll know that you can spend days playing the same note and still get something new and surprising out of it. Even simple, subtractive architectures can offer quite a wide range in this regard. That’s probably part of the analogue fascination which is so often talked about. To put it a bit philosophically, a potentiometer on an analogue synth has an infinite number of positions, all differing from each other. Synths have a huge number of such controls, and sometimes these also interact with each other. What you get is a very complex system which relies on a large number of finely tunable variables.
Of course we are only able to perceive differences up to a certain degree, so most of the time having a “big enough” number of variations, rather than a virtually infinite one, doesn’t make any difference, which also explains why digital synths can be just as exciting and interesting, if their architecture is sufficiently complex and capable. Actually, to get back to the aforementioned Game Boy, even that extremely limited instrument, with its inexpressive interface and limited set of sonic possibilities, still offers enough to spend months in exploration (depending on the software you run, of course). There is a whole set of non-linearities deriving from digital aliasing and low-bitrate artifacts which offer a lot of material for experimentation.
Playing an instrument is all about experience, exercise and control (or the deliberate and conscious lack thereof), which in turn means that you need to learn to hear and feel the instrument. It’s a very complex cognitive process. Still, as the “Acid Trax” anecdote above shows, innovation often happens by chance and even in spite of the musician’s knowledge of the instrument. Limitations are of utmost importance here.
Mark Fell, in the final part of his article, uses a sports analogy to better illustrate the relation between creativity and technical limitations:
Imagine, for example, that we could change the rules of football midway through a match. Would this lead to a better game? Would fans cheer as much if a player randomly decided that stuffing the ball up his shirt and walking into the net constituted a goal? No. In football, the laws of physics, the rules of the game, the technologies, their size, shape, weight, etc, combine to keep the system in a state of equilibrium and give it significance.
In fact, we need our technical limitations, just as the football player needs the rules of the game. Changing those rules might make the game more exciting, but it will also create confusion and force people to re-learn their skills (which can be either good or bad). Change the rules too much and you might end up with a game that is totally boring, or impossible to play. Innovation for the sake of innovation can be a deceptive kind of business.
Open-ended and Modular
As previously mentioned, commercial keyboard-based synths usually offer a very wide range of timbres, registers and dynamics. Still, in most cases the architecture is limited by design to certain combinations, and the interface is optimized to produce some sounds more easily than others. These restrictions, together with some recognizable “quirks” in how the hardware operates, give certain instruments a strong character. You will probably be able to recognize a Minimoog when you hear one, despite it being capable of a very wide array of sounds.
The modular synth is by definition a much more open instrument. This of course gives you more options, more possibilities, though by its very nature it also imposes many restrictions. For instance, you can’t really save patches, polyphony is often hard to achieve, the functionality of each module is limited by the surface of its panel, and both the available rack space and your budget determine what your instrument will be like.
Lachenmann once said: “composing music means building an instrument” (translating freely from memory here, since I couldn’t find an official translation anywhere online). While Lachenmann of course didn’t mean this literally, with the modular we physically build our own, personal synthesizer. While the modules are usually designed by somebody else, your choice of them and the way you patch them together is what really determines the instrument. The character of the modular, what we could call its sound, will be strongly determined by the player’s “patching style”.
The player’s approach and technique always influence the sound, to the point where you can recognize a musician like Miles Davis just by hearing a couple of notes. But of course with more traditional instruments you’ll always be bound to their physical (or electronic) set of characteristics. The modular dissolves many of these bindings, shifting the sound towards the patterns residing in the player’s brain. It’s also a shift from just performing on an instrument to both building and playing the instrument. A great modularist will wisely choose the best modules to fit these patterns, and will develop clear patching and playing strategies to make them produce a very precise set of sounds. It’s about experience, exercise and control, but also about limitations, about creating that intimacy with the instrument: the modular synth being a hardware (often analogue) instrument balances out its open-ended design. You don’t have to be content with a completely pre-defined architecture, but at the same time you don’t have to fight for intimacy in a system with too many options. That’s probably one of the reasons this instrument fascinates so many people.
Musical programming environments like MaxMSP are even more open-ended and ultimately shift the focus from interacting with an instrument to designing everything from scratch. When you can build your musical piece from the ground up, including the instruments, the tuning system and the way the music is generated, this of course means having every possible option at your disposal, but it’s also a lot of work, and you need to create the necessary intimacy yourself. Interestingly, since these programming environments are very complex, the musician’s knowledge of them will often be somewhat incomplete, forcing them into a more trial-and-error-oriented approach. Despite working with what is almost the ultimate instrument, one that will generate any sound you can imagine, we’re sometimes a lot closer to practices like circuit bending, where interesting sounds are deliberately left to chance discovery, or to Phuture stumbling upon the “Acid” sound.
As previously mentioned, musicians have always been very interested in expanding the universe of conceivable sounds, in creating the “new and previously unheard”, in expanding the possibilities of music, be it electronically or mechanically/acoustically produced. But even the longing for a musical revolution can be linked to the fact that there are forces these musicians can push against, and technical boundaries that still enable them to carve out that little space of intimacy in which to create. Once all of these limitations have been removed, what do we have left? We need an opposing force to push against, and we can only find new solutions by modifying the old ones. This is probably why so many MaxMSP / Pure Data / SuperCollider patches end up being iterations on accepted and consolidated models, and why so many Eurorack systems are built to mimic old Buchla/Serge synths from the 60s and 70s.
I guess this is just how humans work, and since music is a very human form of expression, I don’t see why we should want it to be any different.
Photos by Elizabeth Busani and Hannes Pasqualini