
Interview with Angelo Bello

13/07/2018 | articles, interviews

Angelo Bello is a sound artist from New York who focuses on algorithmic and computer music composition. He has extensively explored the New GENDYN Program, Peter Hoffmann’s implementation (developed in the late 1990s) of Iannis Xenakis’ GENDYN algorithm. In this interview for Elli, Angelo talks about his approach to technology and shares his perspective on aspects of computer music and algorithmic composition today.
This fall he will release a four-track EP on Elli of music composed exclusively with the New GENDYN Program.

How did you begin with electroacoustic/computer music?
As a teenager I was lucky to have a reel-to-reel tape deck with sound-on-sound capability that permitted me to experiment – the usual expansive echo effects and layering of tracks. The tape deck practically transformed my Yamaha combo organ into a string section (I had no synthesizer at the time). That experience of transforming one instrument into another through the tape deck probably informed my later interest and work. Before that, I had begun lessons on the church organ at around the age of ten – this came after I found a small chord organ with some sheet music in a neighbor’s garage and taught myself how to play it. I convinced my parents (through much begging) to buy a proper instrument, which ultimately was an Allen church organ that had once served a church at a local university. In college, I began to seriously study contemporary music. My major was in engineering (digital electronics and RF microwaves), and if I wasn’t in a classroom, I was spending my time at the electronic music lab, which fortunately had one of the original Moog modular synthesizer models – it practically covered one of the walls of the studio, an amazing instrument. It was in this lab that I experimented mostly with tape manipulations, while I naively applied – as I understood them at the time – concepts and techniques I had read about in Iannis Xenakis and John Cage: random events, stochastically defined organization, chance operations, etc. – a lot of splicing. At that time, I began to compare the methodologies of (primarily) these two composers.

When did you begin working in the algorithmic or computer music side?
It wasn’t until I decided to move to Paris in 1995 to study at Xenakis’ facility Les Ateliers UPIC that I began to seriously focus on computer music. Relatively strong computing power was by then readily available on a personal and individual basis on the desktop – I would haul my PC around the streets of Paris to and from the UPIC studios inside a suitcase. The Internet was exploding and I discovered resources online. My algorithmic development environment of choice was – and still is – Matlab, in which I developed various modules to manipulate and process sound. However, the UPIC system and the GENDYN program both had a major impact on my way of working and composing with computable sound.

Can you describe the UPIC system and the GENDYN Program?
The UPIC system and the GENDYN algorithm were both conceived by Iannis Xenakis and developed by the engineers and programmers at the CEMAMu, as part of Xenakis’ research program to investigate the scope and possibility of computers in contemporary music composition. These were systems of nonstandard sound synthesis and composition that were radically different from the methods employed by the computer music world at that time. The UPIC system used a drafting table to transform the lines and curves of a drawing directly into sound via a connected oscillator unit. Later, once computing power became less expensive, the drafting table was replaced by a desktop PC, and additional sound processing features were incorporated into the system, such as a robust frequency modulation capability. It was on this PC-based version of the UPIC system in the 1990s that I spent many hours over the course of three years. I was able to generate rich and complex timbres from the system via the feedback FM option. I would “perform” entire works in real time and capture them on tape. These captured timbres were used as transfer functions and source material for further processing, or as entire and complete works in and of themselves.
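The basic idea of UPIC’s drawing-to-sound mapping can be illustrated in a few lines of code. This is only a loose sketch of the principle, not the UPIC software itself: it treats a drawn line as a series of (time, frequency) breakpoints and renders it as a frequency-swept sine wave. The function name, sample rate, and breakpoint representation are all my own assumptions.

```python
import math

def upic_line(points, sr=8000):
    """Render a drawn line, given as (time_sec, freq_hz) breakpoints,
    as a sine wave whose frequency is linearly interpolated between
    successive breakpoints (a rough analogue of one UPIC arc)."""
    out, phase = [], 0.0
    for (t0, f0), (t1, f1) in zip(points, points[1:]):
        n = int((t1 - t0) * sr)
        for k in range(n):
            # interpolate frequency along the drawn segment
            f = f0 + (f1 - f0) * k / n
            phase += 2 * math.pi * f / sr
            out.append(math.sin(phase))
    return out

# e.g. a line rising from 440 Hz to 880 Hz and back over one second
samples = upic_line([(0.0, 440.0), (0.5, 880.0), (1.0, 440.0)])
```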
The GENDYN algorithm was a way of working more rigorously within the purely computational domain, capable of creating an entire sonic composition out of whole cloth. To simplify: Xenakis specified parameters within the code of an executable program that defined various probability distribution functions, which ultimately controlled the timbre and large-scale form of an acoustic event on a sample-by-sample basis. You would run the algorithm as an executable at the command prompt, let the program run, and then, after many hours of processing time, review the results resting on your hard drive in the form of a data file. In the late 1990s, Xenakis’ algorithm was translated from the original Basic into C++, with real-time synthesis capabilities and an interactive GUI, by musicologist and computer scientist Peter Hoffmann. I continue to use this version of the GENDYN algorithm, known as the New GENDYN Program.
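For readers unfamiliar with the algorithm, the core of Xenakis’ dynamic stochastic synthesis can be sketched as follows. This is a simplified illustration, not the CEMAMu code or Hoffmann’s New GENDYN Program: a waveform is defined by a handful of breakpoints whose amplitudes and durations each take a bounded random walk from one waveform period to the next. Here the steps are drawn from a uniform distribution, whereas Xenakis used a range of probability distributions, and all parameter names and defaults are my own.

```python
import random

def mirror(x, lo, hi):
    # reflect x back into [lo, hi] - the "elastic barriers"
    # that keep the random walks bounded
    while x < lo or x > hi:
        x = 2 * lo - x if x < lo else 2 * hi - x
    return x

def gendyn_sketch(n_periods=10, n_breakpoints=12,
                  amp_step=0.1, dur_step=2.0, seed=0):
    rng = random.Random(seed)
    amps = [0.0] * n_breakpoints   # breakpoint amplitudes in [-1, 1]
    durs = [20] * n_breakpoints    # samples between breakpoints
    out = []
    for _ in range(n_periods):
        prev = amps[-1]
        for i in range(n_breakpoints):
            # random-walk each breakpoint's amplitude and duration
            amps[i] = mirror(amps[i] + rng.uniform(-amp_step, amp_step),
                             -1.0, 1.0)
            durs[i] = int(mirror(durs[i] + rng.uniform(-dur_step, dur_step),
                                 4, 40))
            # linear interpolation between successive breakpoints
            for k in range(durs[i]):
                out.append(prev + (amps[i] - prev) * k / durs[i])
            prev = amps[i]
    return out
```

Because both the amplitudes and the durations drift stochastically, pitch and timbre evolve together from period to period, which is what gives GENDYN output its characteristic gliding, unstable quality.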

Can you talk a bit about why you chose to focus on Xenakis’ compositional tools?
Xenakis’ tools have kept me interested all these years (since 1995) because they force you to create your own worlds and frameworks. I like that space where I get to make decisions with latitude and definitiveness. He intended for these tools to be used by other composers, and, I believe, designed them knowing that composers might ask themselves this very question: “Why am I using the tools of another composer?” My own answer is that Xenakis himself would often say he wanted to purge himself of even his own way of thinking, of his own work and his own past (he called this state of working “ek-stasis”). This, in his mind, could result in an originality and freshness. The UPIC system presents the composer with a completely blank surface – in the original incarnation, it was a blank piece of paper on a drafting table; in the later versions, a blank white computer screen – waiting for the lines and curves. With the GENDYN program, the challenge is to understand what you are confronted with – which is nothing but abstract mathematical relationships encoded in software – and then to create something. The way I understand this confrontation: it seems to me that Xenakis is asking you to be truthful and original at the same time. His tools do permit this to happen, much as a blank sheet of music stave paper confronts a composer of the écriture culture of musical composition. With the GENDYN, he invented a way of synthesizing sound from literally nothing but geometry and mathematics: equations and relations that are abstract descriptions of natural physical phenomena, in the form of probability distributions, points and lines. Further, these already abstract relationships are then encoded in binary logic in a computer program.
The composer defines the input parameters – the initial conditions – for the aggregation of these multiple and varied mathematical relationships, and then lets the system present the output as a completed sonic organism. Peter Hoffmann’s implementation of the GENDYN algorithm was itself a landmark for me. Like the UPIC that I focused on previously, it became a platform for learning, as well as a platform for a disciplined, formal, structured and generative approach to the craft/art of composition.

Can you describe what informs your work today?
My early training on the organ was incomplete. After my experience at the UPIC Studios, where I immersed myself in Xenakis’ world and tools, I felt there was a gap in my understanding of composition, or at least a disconnect with species counterpoint. As I immersed myself in the GENDYN, I felt I needed to immerse myself in counterpoint and fugal analysis. I then began seeing parallels between the structure of the GENDYN method and traditional counterpoint. This is where I am now with the GENDYN.
Then of course, the connection between JS Bach and Xenakis was clear. Christoph Wolff explains in his biography of Bach that “What Bach dubbed musical thinking was, in fact, nothing less than the conscious application of generative and formative procedures – the meticulous rationalization of the creative act.” Wolff could have been describing Iannis Xenakis with this statement. When the choice of options available to an individual is so wide open as to be almost infinite, it seems to me that we humans are drawn to define our own set of rules and parameters within which to work. Hoffmann explains further that “Art gives us the chance to come to terms with ourselves as being human beings. Art is a prominent discipline of human self-reflection and self-awareness. Algorithmic composition (not only the composition with the help of a computer, but composition by the computer) is an extreme, perhaps the most extreme form conceivable to practice art in this understanding” (my emphasis). I think part of self-awareness as humans is the ability to consciously self-govern or self-regulate, and art – in this case algorithmic music/composition – is a space where we exercise that aspect of self-awareness.

How does your own work with the UPIC and GENDYN differ from that of Xenakis’?
In 1997 I created a piece called Maya for UPIC. The piece is a recording of an actual performance, in November of 1997, on an UPIC system that had been reconfigured and transformed from an image playback system into a real-time, algorithmically defined composition instrument. It incorporated what can be called massively parallel iterative feedback frequency modulation. With the UPIC system, one can simultaneously activate as many as 64 digital oscillators in an instant of time. These 64 oscillators can be either modulators or carriers in an FM synthesis scenario; it’s possible to modulate a carrier with itself, or with many other oscillators, and vice versa. I created an elaborate construction of linked oscillators across a Page of the UPIC, resulting in an onslaught of sound that was feeding off of itself, and I was able to adjust specific parameters in real time (such as the attenuation of one modulator among the 64 oscillators). This resulted in a form of chaotic sound synthesis with a swarming whirlwind of tones and pitches across the stereo field. In effect, I created an algorithm on a system whose original purpose was to create curves and arcs proceeding from left to right along a time line – the system would then read and perform a drawing or picture, in a manner of speaking. In my case, there was no time line – there really was no picture. I simply had horizontal lines on the screen representing operators, carriers and modulators, upon which I placed the playback cursor, which would remain immovable. I then adjusted parameters to affect the output during performance.
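The feedback FM principle – an oscillator’s own output fed back into its phase – can be shown in miniature. This single-oscillator sketch is nothing like the 64-oscillator UPIC network described above; it only illustrates the self-modulation idea, and the function and parameter names are my own. As the feedback amount grows, the output drifts from a sine toward increasingly chaotic spectra, which is the behavior being exploited at much larger scale in the piece.

```python
import math

def feedback_fm(freq=220.0, beta=1.5, sr=44100, n=1000):
    """One sine oscillator whose previous output sample modulates
    its own phase; beta sets the feedback depth."""
    out, prev, phase = [], 0.0, 0.0
    for _ in range(n):
        s = math.sin(phase + beta * prev)  # self-modulated sine
        out.append(s)
        prev = s
        phase += 2 * math.pi * freq / sr
    return out
```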
With regards to the GENDYN algorithm, Xenakis consciously chose to let the algorithm itself generate an entire composition out of whole cloth. After many iterations of heuristic listening he selected one among them and called it Gendy 3. In my case, because I’m working with Hoffmann’s implementation, which affords me many capabilities that weren’t available to Xenakis when he developed the original algorithm, I’ve evolved over the course of a few years toward working in a way that allows me to layer sounds and events on a wide and massive scale. I’ve basically implemented a form of what I call dynamic stochastic granular synthesis: hundreds of thousands to millions of granular timbral events for a given instant of time, each a fraction of a second in duration, dispersed over time horizontally (a few seconds to a few minutes), as well as vertically in as many as 500 or more tracks. Each granular timbral event is unique in the sense that it is defined stochastically – the character of each micro-event is dependent on the probability distribution functions that define its synthesis, its entire “DNA” makeup. It wouldn’t be wrong to compare this scenario to that of snowflakes (where each is unique), or human fingerprints, where each is unique based on the DNA makeup of each individual. These masses of stochastic grains can be pitched, percussive or noise-like in nature.
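The general shape of a stochastic grain cloud can be sketched at a much smaller scale than the hundreds of thousands of events described above. This is a generic granular-synthesis illustration, not Bello’s actual GENDYN-based process: each grain’s onset, duration and pitch are drawn from probability distributions (here uniform and log-normal), and the grain count, envelope and all names are my own choices.

```python
import math
import random

def granular_cloud(n_grains=2000, sr=44100, total=2.0, seed=1):
    """Scatter short sine grains with stochastically chosen onset,
    duration and frequency into a mono buffer of `total` seconds."""
    rng = random.Random(seed)
    buf = [0.0] * int(sr * total)
    for _ in range(n_grains):
        onset = int(rng.uniform(0, total - 0.05) * sr)
        dur = int(rng.uniform(0.005, 0.05) * sr)      # 5-50 ms grains
        freq = rng.lognormvariate(math.log(440), 0.5)  # pitch spread
        for k in range(dur):
            env = math.sin(math.pi * k / dur)  # half-sine grain envelope
            buf[onset + k] += 0.01 * env * math.sin(2 * math.pi * freq * k / sr)
    return buf
```

In the work described here, each grain’s “DNA” would come from GENDYN-style probability distributions rather than the fixed distributions used in this sketch.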
Also with Hoffmann’s implementation of the GENDYN, I could synthesize steady pitched tones that would not vary over time – they aren’t glissandi. If you’ve heard the GENDYN algorithm, you’ll know that the sounds typically vary dramatically in pitch, glissandi, timbre and intensity. By fixing the synthesized pitches, I was then able to create a tempered GENDYN which could be tuned to any conceivable key, scale, whatever. This ability, coupled with the stochastic nature by which the GENDYN distributes events and timbres, offers a very rich platform to create.
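The fixed-pitch idea reduces to simple arithmetic: holding a waveform’s period length constant fixes its pitch, and equal temperament maps a semitone offset to that period length. A minimal sketch of the mapping, assuming a 44.1 kHz sample rate and A4 = 440 Hz (the function name is mine):

```python
def tempered_period(semitone_offset, sr=44100, ref=440.0):
    """Return the fixed waveform period length, in samples, for an
    equal-tempered pitch `semitone_offset` semitones from `ref`.
    Holding this length constant in a GENDYN-style synthesis
    fixes the pitch while the amplitudes still random-walk."""
    freq = ref * 2 ** (semitone_offset / 12)
    return round(sr / freq)

tempered_period(0)    # A4 (440 Hz) -> 100 samples per period
tempered_period(12)   # A5 (880 Hz) -> 50 samples per period
```

Quantizing period lengths this way is what allows the otherwise gliding synthesis to be tuned to any key or scale, as described above.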