Recording nasty hellscapes with self-oscillating pedals into Logic

I’ve been feeling a bit stuck on how to start my piece – I have a general idea of the elements I want to include, but not a concrete idea of how I want it to sound. I felt a bit daunted by starting with the text-to-speech bot, so instead I’ve experimented with some sonic textures, using the self-oscillating guitar pedals that I mentioned in a previous blog. These sounds may end up as a ‘bed’ for the bot voice to sit on top of, though I haven’t quite worked this out yet.

The whole process of recording with these pedals is a bit of trial and error, as they often behave very unpredictably. I started off by creating a low bass drone with my chorus pedal. I did this by setting the bass knob to full and gradually bringing the treble knob up and down, whilst keeping the depth and rate quite low. This is a snippet of how it sounded:

I quite like the sound by itself, but find it hard to imagine how I could place it in a mix with other textures.

The next sound I tried was a similar chorus setting, but with my pitch shift pedal set to a fifth above, with the original signal blended in:

This creates a really gritty, low rumbling sound. It sort of sounds like the two pedals are fighting each other, having an argument. Also reminds me of the TARDIS sound from Doctor Who a bit? I definitely think this could be a texture I use in my piece.

The next sound I experimented with was very abrasive and harsh most of the time. It was a mix of my distortion, pitch shift and reverb pedals. A lot of the time it produced a really nasty high-pitched noise (you may want to turn your volume down before playing):

I don’t think I would use most of these sounds, although I do like the sound of the reverb coming in and washing everything out around 10 seconds in. This was a long take, and I did manage to get quite a usable sound in the end, one I could layer on top of the TARDIS-esque sound for some extra texture:

Again, very gritty and very fizzy but it does mix well with the other sound:

I also tried making some sounds with just my distortion and reverb pedals, and whilst the sounds weren’t bad, I don’t really see how I can make them fit with the other sounds I’ve made so far. I find the texture a bit one-dimensional; not bad, but not what I want, I don’t think:

One sound I made that I do like, but don’t quite know how to bring into the piece, is a short glitchy sound made by feeding my wah pedal into the chorus pedal:

Sticking with the Doctor Who theme this blog post seems to have taken on, it sounds a bit like a sonic screwdriver. The sound itself is good, but again I don’t know how I would slot it into the piece.

I also experimented with recording my zither to see if I could bring in some acoustic textures, but even after adding some plugins, it didn’t sound good enough to my ears.

I think some of the sounds I’ve come up with have been interesting, although I still don’t feel confident about where to take the piece. I don’t really know where I will fit the voice in, and I’m not sure about a structure or timeline for the piece at all. Next time I work on it, I’ll experiment with chopping up some of the text-to-speech bot’s sentences, instead of just reading the texts out in full, to see if that adds an interesting dynamic. I could also reverse the soundfile, add strange modulation or experiment with it in a number of other ways to see what works.
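As a very rough sketch of the sort of chopping and reversing I have in mind (just a plan at this stage – I would most likely do it in Logic, but something like the pydub Python library could do the same job; the file names below are placeholders, not real files from my sessions):

```python
# Rough sketch: chop a text-to-speech recording into fragments and
# reverse every other one. File names are placeholders.
from pydub import AudioSegment

voice = AudioSegment.from_file("bot_voice.mp3")

# Slice the recording into 500 ms fragments
fragments = [voice[i:i + 500] for i in range(0, len(voice), 500)]

# Reverse alternate fragments and stitch everything back together
chopped = AudioSegment.empty()
for n, frag in enumerate(fragments):
    chopped += frag.reverse() if n % 2 else frag

chopped.export("bot_voice_chopped.wav", format="wav")
```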

Either way, I’m happy to have experimented some more, improvising with this very unpredictable technique. My favourite sound was the very bassy, rumbling ‘TARDIS’ concoction made with the pitch shift and the chorus, and maybe this could be because it’s the sound that had the least human input? I just turned the pedals on, recorded the sound for 4 minutes and listened to them fight, as it were. That’s probably why I didn’t come across any particularly obnoxious sounds that time round, and I will most likely try to build the piece around this.

Experiments with text-to-speech bots

I’ve never used artificial intelligence to help me make my pieces in the past. For this project I wanted to experiment with using AI-generated text-to-speech voices to read out texts from collaborators asking me to do things for free or just generally demanding quite a lot for no pay. To test out voices I used an example from a night where I had just got off stage at a gig (at around 10pm), had already been in rehearsals for the whole day and had landed back in England from playing a gig in France at 3am the night before – I was absolutely worn out and needed to go home (and I had a large amount of equipment on me). I kept getting texts from one of the bands I’m in asking me to come straight to their rehearsal for an hour after I got off stage. I had already explained the situation to them – that I was very worn out, couldn’t make it and had a lot of stuff to carry – but they kept pestering me and asking me to come. It wasn’t the end of the world, but it was a bit annoying, especially as I had already explained why I couldn’t make it.

I wanted to use different voices for the people texting me and for myself, so I looked at websites that could do this. I (mistakenly) thought that it might be free or cheap to access these resources; however, I soon discovered that a lot of these text-to-speech bots were actually quite expensive to use (Google’s was £300 per year). I found a site called murf.ai which had a few free voices, so I started playing around with them. However, after a while I realised I had to pay to download them. I tried screen recording the site, but the sound wouldn’t record.

Instead I just decided to use the Google Translate text-to-speech voice. This isn’t ideal as I wanted multiple voices and this only comes with one, but it is free and easy to use, and I could screen record it to capture the audio. I used one of the texts I received that night as a practice example:

This was ok, but I wanted to really emphasise the length of the last word, “meee”. So I put spaces in between the ‘e’s to make it more drawn out:

This had the desired effect. I will experiment with both and put them into my piece to see if they sound good. I’m still not 100% sure about this aspect of my piece – I don’t know if I want to include the bot voice, or any voice at all. Something purely instrumental may work better. However, I will give it a try as it could add something interesting.
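If I do end up needing a lot of these lines, something like the gTTS Python package (which wraps the same Google Translate voice – this is an assumption on my part, not what I actually used) could generate the audio files directly rather than me screen recording each one. A rough, untested sketch with placeholder text:

```python
# Rough sketch: generate the Google Translate voice with gTTS instead of
# screen recording. The message text below is a placeholder, not a real text.
from gtts import gTTS

# Drawing out the last word by spacing the letters, as I did on the website
message = "please come to the rehearsal, it would really help m e e e"

tts = gTTS(message, lang="en")
tts.save("bot_line.mp3")  # an mp3 I could drop straight into Logic
```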

Making the next track for the game

To make my next track for the game, I started off with a three-note sequence using the ‘Lap Steel: Resonance Chaos’ preset made by Spitfire Audio. When I’m stuck for ideas, I often like to use the LABS sounds by Spitfire Audio, as they’re good for generating soundscapes to build ideas off.

I then added a counter-atmosphere made with the Hackney Angels preset, which I tweaked somewhat.

After the 40-second mark in the piece, I decided to bring in a melody on the electric guitar. Instead of just playing all of the notes in one take, I recorded each note one at a time and faded them in and out, so that they cascade like a sonic waterfall. This adds a lot more depth to the melody in my opinion, even if it was a pain to automate!

I think the heavily distorted sound blends well with the lap steel preset, as they both have some similar sonic qualities. When I brought the guitar melody in, I also decided to change up the notes on the LABS tracks, to make them more varied, instead of just having the same repeating loop. You can hear this most clearly at the end of the track above.

Automation for the guitar tracks

Unlike the guitar sound I used in the previous track I made for the game, this time I didn’t really use any guitar-specific pedal or amp simulations. The sound is just the guitar plugged straight into Logic, run through a couple of Logic’s stock distortion plugins (which aren’t really designed for guitar) and the Space Designer convolution reverb plugin. I prefer this drier sound when working on atmospheric pieces – I’m not sure why, but when I’ve used it in the past it’s just seemed to blend into mixes better.

The two distortion plugins I put onto each guitar track
Space Designer Reverb for the guitar tracks

The rest of the track is quite simple. I kept all of the sonic elements the same, but added in more melodic ideas on the guitar, whilst occasionally returning to the original motif I started with. I think I am happier with this track than the other one I made, as I feel like I put more time and effort into it. I also prefer the more chilled-out, ambient aura of this piece to the anxious mood of the last one.

Cybernetic Serendipity – A look into historical examples of machine-generated art

As I have been interested in using AI-generated voices as part of my piece, I’m well aware of the current controversy surrounding artificial intelligence and art, particularly around plagiarism and the concern that it will take away artists’ jobs in the future. For a while I haven’t been sure where I stand on this debate – at first I was quite excited when I saw projects like Night Cafe start to spring up, as I had seen some friends use them to good effect for their album and single covers, and I did actually like the style of a lot of the works that were generated. However, I can obviously see the implications of this – my friends would use this art, and one of the designers they had previously used for their covers would miss out on work. It does cause concern for me about my future as an artist as well. There are instances of it being used in a creative way, though, such as the company PlantWave creating technology that allows the inner workings of a plant to be translated into sound and music. I have worked with a producer who used this device, and he used it in a genuinely creative way, in the same way someone might use a loop to build a song on top of – as a tool. If AI is used in this way, I don’t see as much of a problem with it, and that’s why I’m happy to use it for my piece, even though I can see the moral problems with how the technology can be used.

In this blog post, I want to look at an older example of machines being used to make art and sound. In a recent lecture, we were shown the Cybernetic Serendipity exhibition which was held at the ICA in 1968:

This was a revolutionary exhibition that included pieces of art where most of the main content was generated by machines. Of course, a lot of them may seem rudimentary now, but at the time they would have been groundbreaking, and there are definitely some obvious precursors to AI-generated art – for example the Harmonograph by Ivan Moscovich, which took input from whoever had an idea for an image. As curator Jasia Reichardt says in the above video, “it enables people who can’t draw to make pretty pictures”. This is very similar to sites like Night Cafe, which allow users to input any words or scenes they like, and the AI will come up with an often surreal, but interesting, result.

The drawing by the harmonograph shown in the video

Another example is the work of the Computer Technique Group, who took an image of John F. Kennedy and had a machine draw the picture in six different ways, with different versions showing lines drawn towards the ear, or the eye.

John F. Kennedy, with lines pointing towards his ear

I had not previously been aware of this exhibition, and I find it interesting to see all these pieces of computer-generated art and look at how they contrast with today’s AI-generated works. These images are all quite simple and black and white – that may be due to the limitations of the technology at the time. Today’s AI-generated visual art can be extremely complex and detailed, due to its near-unlimited access to the history of art and its ability to copy styles.

An AI piece of art, inspired by Van Gogh’s ‘Starry Night’ – but much more complex

I personally do find this a little scary, especially hearing recent examples of AI-generated music. However, I think, and I hope, that people search for connections to other people through art, and therefore art made by machines will not supersede art made by humans. That said, I can still see this technology taking away the jobs of certain types of creators in the future, for instance those who make foley or soundtracks for films. If an AI bot can make those sounds anyway, why pay for someone to do it? This also links back to what Rod Eley said in ‘The History of the Scratch Orchestra’ about the “machines and their machine-minders”. The machine-minders are still as intelligent as they ever were, but the machines are growing smarter and smarter.

Visiting Practitioner – Audrey Chen

Audrey Chen is a second-generation Taiwanese-American artist currently living in Berlin, who recently gave a talk at UAL as a guest lecturer, as part of the ongoing sound arts lecture series. Chen specialises in vocal improvisation using extended technique, which she often blends with analogue synthesisers. She spoke about her history and upbringing as part of a family of scientists and doctors, and told us how she broke the mould of her family by first training as a classical musician, and then venturing into the world of avant-garde music. When she mentioned this it resonated with me – I come from a family of people with ‘straight-laced’ jobs who sent me to private school when I was younger, in order for me to follow the same mould. Chen mentioned a level of resistance from her family, which I also experienced when I was younger and first had ambitions of making a career in music. Whilst Chen comes from a completely different background to my own, I could definitely see a couple of parallels in our upbringings, which I appreciated hearing about.

Something else Chen mentioned is something I might take into consideration when starting projects in the future. In the past, when she was a single parent, she had to tour with her son, and financial constraints meant that most of the time she had to perform by herself out of necessity, so that she didn’t have to split the fees with another performer – this allowed Chen to make her living as an artist. I think what I can learn from this is to be more selective (and maybe even selfish!) when joining or starting new projects. The fewer people involved in a performance, the more revenue I get to take home at the end, so I can sustain a living more easily in the creative arts. This has definitely been a problem when I have been involved with larger bands in the past – if you are splitting a £300 fee between 6 people, you each get £50, but if it were just two people it would be £150 each, which is a much more significant amount of money. I will keep this in mind in the future.

When she played us her projects, I particularly appreciated the work of her collaboration ‘BEAM SPLITTER’, with trombonist Henrik Munkeby Nørstebø:

Part of the reason I enjoy this project probably harks back to my own experiences with the trombone – it was the first instrument I ever learnt, though I was never very inspired by it, due to the mundane catalogue of ‘classical hits’ one would have to play and learn in lessons – ‘Ode to Joy’, the ‘William Tell Overture’ and the like. Totally uncreative. I think this is the experience a lot of people have with music when they take up classical instruments at a young age. I enjoy watching the videos of this project as it shows a new side to the instrument, a kind of extended technique on the trombone which I feel blends well with Chen’s vocal technique and analogue synthesisers.

Overall, I found this to be a useful and inspiring talk – I would have loved to have caught Chen’s performance at Cafe OTO later that day, but I had a rehearsal I couldn’t move. However, seeing her perform in the lecture was certainly interesting. Next time she is in London I will try to see one of her performances.

Foley Recording

For the foley recording, we decided to all go into the foley studio at LCC to record the tracks together, instead of just one or two of us being lumbered with the task. We were given a list by Ana of all the sounds she required for the game, so on paper it was a very straightforward process. However, we hit a problem: half the time we had booked in the studio was spent trying to figure out why none of the signal from the foley room was making its way to the composition studio, where our DAW was set up for recording. After an hour and a half, we decided it was best to just hire out other equipment, as we couldn’t get anything to work and we didn’t want to waste all of our time. Theo from our group hired out a Zoom H5 recorder and a MixPre-6 kit. We couldn’t get the MixPre-6 to work, so we just plugged a stereo shotgun microphone into the H5 so we could quickly get our recordings done.

Zoom H5 Recorder
Sanken CMS-50, the stereo shotgun mic that we used for recording the foley

We managed to record a few sounds, such as footsteps, fire, water, speech, fish sounds and more, and overall I’d say this foley recording session was a success, even though we did run into difficulty at the start. Once we managed to get our setup running, we could get through the sounds on our list very efficiently, and it was useful having a pre-made list of sounds to go through, as it meant we weren’t stuck for ideas halfway through and we knew exactly what we had to do.

The plan now is for another member of the group to mix the sounds we recorded and send them to Ana to put into the game.

Initial Ideas for my Sound Piece

A couple of days ago, I had a constructive talk with Annie where we discussed the direction I might take my essay. We talked about how I could take the ideas presented in ‘Stockhausen Serves Imperialism’ and develop them into something more relevant today. I do a lot of session work that can be low-paid (and often unpaid), and this is quite frustrating, as it means I have to work a part-time bar job to supplement the earnings I get from playing music (London being a very expensive city to live in doesn’t help much either). Together with going to university, I often find it can be very difficult to divide my time between these commitments. A lot of the time it can prove unwise to ask for more money from these session jobs, as there is always someone willing to undercut your price or do it for free, so you have to accept not really being compensated very much at all. Annie and I discussed how I could write about the struggle of musicians under capitalism in my essay, and look at examples of artists trying to fight this – she showed me Terre Thaemlitz’s project ‘Soulnessless’, which is a case study I could potentially use.

I’d like the sound piece I make for this unit to reflect the ideas presented in the essay, and Annie suggested I could maybe make something that involves me reading or singing out messages from people I work with asking me to do bits of work for free, or just generally asking too much of me.

This would be an interesting route to go down, although I may want to use AI text-to-speech generators as well as, or instead of, my human voice. This is for two reasons:

  1. Although I am used to singing in a more conventional musical context, I still don’t know how confident I would feel using my voice to read out these messages – I’m not a great actor. Maybe this sounds like a silly reason, but it is partly behind my choice.
  2. Conceptually, I think the idea of a robotised voice taking the place of my own human voice relates quite nicely to the ideas presented in Rod Eley’s essay, when he talks about “machines and their machine-minders” taking over the jobs of musicians. This is also a discussion that is very relevant today, due to recent developments in AI technology and the popularisation of sites such as ChatGPT.

So, I may use these robotised voices to read out the messages. It’s certainly an interesting concept, and maybe I will try to merge it with an underlying soundscape of chaos created by the self-oscillating pedals I discussed in a previous blog, along with other sound files, to represent the cluttered nature of my life at the moment as a working musician and student in London during the cost of living crisis.

Stockhausen Serves Imperialism

Having struggled to think of a topic for the upcoming essay, I had a flash of inspiration during a recent trip to Donlon Books on Broadway Market, next to London Fields. I’ve always wanted to visit this shop, but this was my first time in there, and it’s a shame I hadn’t been in before! They had a shelf full of books about sound art, and a great selection of books on other cultural subjects as well. Since I was looking for some inspiration for the essay, I had a browse through the titles on the shelf. There were plenty of books by John Cage, David Toop and many other authors, but the title that stuck out to me was ‘Stockhausen Serves Imperialism’ by Cornelius Cardew. I read the description on the back of the book and was further intrigued.

The back of the book that I saw in the shop

I thought that a Marxist analysis of two of the main cis-white-male figureheads of sound art (Karlheinz Stockhausen and John Cage), whom students are taught about at the start of the course at LCC, sounded like a good springboard for finding a topic, so I bought the book and have been slowly making my way through it over the past few days (I find academic reading very challenging, as my ADHD makes it hard for me to focus on it for long periods of time).

The book is a collection of essays, mainly by Cornelius Cardew but also by Rod Eley and John Tilbury, and it covers a range of topics: critiques of Cage and Stockhausen, critiques of Cardew’s own earlier work (Cardew had previously served as an assistant to both Stockhausen and Cage before turning on their ideals), and the essay I have just finished, Rod Eley’s ‘History of the Scratch Orchestra’. The Scratch Orchestra was a project led by Cardew from 1969 to 1974 that mainly relied on improvisation and graphic scores. The idea that anyone could join the orchestra, regardless of ability, is certainly an intriguing one, and one has to admire Cardew for following through with, and committing to, this concept.

Some parts of Eley’s essay can be a little excruciating – for instance, when he makes the claim that the orchestra were victims of the “same social and cultural oppression experienced daily by black people”, because a few critics from a class of society they weren’t even trying to make music for (the bourgeoisie) wrote bad reviews of their shows. This claim, in my opinion, is just downright racist and strange.

However, he does talk about some interesting ideas in the essay – the idea that sound reproduction and sound amplification are the enemies of the working musician, as they take away jobs in scenarios that might otherwise have been filled by a band or small orchestra (such as weddings, restaurants and hotel lobbies), is thought-provoking. He also talks about the homogenisation of music culture through these technological developments – pop music means millions of people dancing and listening to the same few bands, and the job of musicians in live contexts is being taken away by DJs who play records instead – Eley calls them ‘machine minders’.

There are thought-provoking ideas in this essay that I might use to develop my topic for this unit, though I definitely do find parts of this book problematic – something to write about?

Turning Pedals into Synthesisers

Recently, I came across a video by music YouTuber Michael Banfield on turning guitar pedals into oscillators/synthesisers:

I’d never heard of this technique before, and as a guitarist with an abundance of pedals, I had to try it out.

To make this work, one has to either find an amplifier with two inputs, or use a signal splitter to plug into one input. Plug one cable into one input, then run the other end into the input socket of the first pedal in your chain. Then take the output from the last pedal in your chain and run it into the other input, going back into the amplifier. This creates the feedback loop that the sounds come from – the technique is called ‘no-input’, and is more commonly done with mixing desks; I hadn’t realised that guitar pedals could achieve a similar result.

Here’s my first experiment with this technique:

I’ve used the distortion pedal as the main oscillator, which I am then running through my other pedals. I find the effect the pitch shift pedal has particularly interesting – it has a very warped, distorted sound that is very aggressive. I also tried experimenting with using the tremolo pedal to modulate the rhythm of the note being produced, making the circuit sound almost like a sequenced synth:

Here’s an example of me trying to make a noisy drone sound:

I find this technique very engaging to experiment with – as a guitarist I use pedals day-in and day-out for my work, but using them in this new format is helpful for discovering new sounds I might not have made otherwise. I would like to attempt to use this technique in my piece somehow.

Making Headway on First Track for the Game

My first finished track for the background music in the VR experience is the one I mentioned in one of my earlier blogs about our first crit. Unfortunately, because I couldn’t access my laptop before the session, I hadn’t been able to upload it to the Google Drive for feedback; however, I have now finished it.

This track was primarily based on a session I booked for myself at the synthesis workbench at LCC, and started off as a repeating sequence on the Make Noise 0-Coast synth:

A section of the sequence

I recorded the sequence playing for about three minutes whilst gradually adjusting the controls on the synth, adding and taking away the percussive elements that make up the sound.

I then layered the London Atmos 2 patch from Native Instruments over the sequence to provide some ambience, and added some distortion on top.

Sequence with ambience on top

This was about as far as I got before the first crits.

When I came back to the piece later on, I decided to keep it mostly the same throughout. There are a couple of reasons for this:

  1. I want to keep the tracks fairly minimal – this is so they don’t overpower the other aspects of the game
  2. It’s best if they are simple so they can be looped easily, as you don’t know how long a player will stay on a certain part of the experience

The only other element I added was an occasional piece of guitar feedback that comes in and out of the piece:

This sort of gives a sound reminiscent of radio static, which I feel reflects the mood of confusion and overwhelm that is going to be a part of the experience.

I used a lot of fuzz for this sound

Now that the piece is finished, I am happy with it. Obviously this is only the first track and I aim to make more for the game, but I think the use of low, repetitive synths with a clicky, percussive noise gives off an anxious feeling, which is what I wanted to get across with my sonic contributions, as it represents a lot of my feelings towards university – being neurodiverse, I can often feel very overwhelmed, anxious and stressed about my work. For my next piece I will try to make something that contrasts with this.