Category Archives: Contemporary Issues in Sound Art

Final Reflections

Overall, I have found it difficult to make a piece that links to my chosen essay topic for this unit. My attempts to link it directly to the topic with the AI text-to-speech bot didn’t go as I would have liked, though this might be because I was always a little skeptical that it would work. I wanted to try it because it felt like it would make the piece more relevant; however, I feel as though I approached it with the mindset that it wasn’t going to work, which probably hindered the idea.

I’m glad that I got the opportunity to experiment with a method of working that I haven’t previously used in my pieces for the course – improvisation with self-oscillating pedals, a new technique that felt relevant to my practice outside of university as a session guitarist. Interacting with my pedals in a way that let me see them in a completely new dimension, even though I previously thought I had explored all of their sonic capabilities, was definitely an eye-opening and engaging process for me. Having said that, I do feel as though the execution of the final piece wasn’t up to standard, and this probably has something to do with the lack of creative inspiration I had around the topic at the time of making it.

Turning the piece into an imagined soundscape that might form part of a broader hypothetical gallery exhibition definitely did help inspire me a bit more, though – perhaps because by far the unit I have enjoyed most on the course so far was the gallery unit we did last term. If I’d had more time, I would have tried to arrange and assemble a small mock-up of what I think the gallery space might look like; however, I had the idea of making the sound work part of a wider exhibition piece fairly late in the process.

Whilst I am happy with some aspects of the piece, there are definitely things I might change if I were to do it again. I would maybe record some more improvisations and perhaps mix all of the elements together more creatively. I will allow myself some forgiveness here, as it is my first time mixing improvisations together. Due to the unpredictable nature of pedals used in the way I used them for this project, it is difficult to plan out what sounds one wants from them before the record button is pressed. I think I’m happy to leave this project in the past now, as an experiment with a technique I had never used before, and one that I may take forward into my professional practice when writing or producing with other artists as a session musician. Ultimately, I’m not sure whether the piece really got across what I originally wanted to convey, and there are definitely other pieces I’ve made in the past that I’m more proud of, but I’m glad I tried something new.

Further developments on my piece

After a period of experimenting with layering the text-to-speech voice on top of the pedal improvisations, and using a few vocal transformer effects to artificially double-track the voice, I’ve decided that the two elements – the pedal improvisations and the bot voice – are not going to work well together in the piece. The sonic textures clash, but not in a particularly interesting way; it just sounded amateurish. I attempted for a while to make a piece solely from various ways of chopping up the text-to-speech voice, but I wasn’t very inspired by it.

I’ve instead decided to make the track wholly from my experiments with the self-oscillating pedals and imagine it as part of a wider hypothetical exhibition piece with props that help show the message I originally wanted to convey in this piece. As an artist and musician in London, it’s easy to get overwhelmed by the intensity of the city, and I think the internal chaos of the sounds of the pedals wrestling against each other gets that point and atmosphere across well enough that I don’t need to add the text-to-speech voice on top of it.

I started by using the TARDIS-like sound as a base to add other elements of my improvisations on top of. This sound, to me, represents the low level of anxiety I usually feel when I’m in London – a sort of rumbling of the soul which can leave me feeling overwhelmed at times. At around the 20-second mark in the piece I began to add the gritty, fizzy sound that I posted in the last blog. The first two sounds complement each other well, creating a low and rumbling, yet also gnarly and anxious sound that reflects the baseline level of anxiety I often feel in the city:

Next I added the sonic-screwdriver-esque sound that came from the wah and chorus pedals, which I posted in the last blog. The desired effect was to make a sound that feels like a robotic fly buzzing around the listener’s ears, which is why I repeated the sound and had it pan from left to right.
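For anyone curious what that repeated left-to-right sweep looks like in signal terms, here is a minimal sketch in Python with NumPy. This is purely an illustration (I did the panning with automation in my DAW, not with code), using a placeholder sine tone in place of the actual recording and standard equal-power panning:

```python
import numpy as np

def autopan_left_to_right(mono, sr, sweep_seconds=0.5):
    """Repeat an equal-power pan sweep from hard left to hard right."""
    n = len(mono)
    t = np.arange(n) / sr
    # position goes 0 -> 1 over each sweep, then jumps back to the left
    pos = (t / sweep_seconds) % 1.0
    angle = pos * (np.pi / 2)          # 0 = hard left, pi/2 = hard right
    left = np.cos(angle) * mono        # equal-power gains: cos/sin
    right = np.sin(angle) * mono
    return np.stack([left, right], axis=-1)

sr = 44100
# placeholder "robotic fly" tone standing in for the pedal recording
buzz = np.sin(2 * np.pi * 3000 * np.arange(sr) / sr)
stereo = autopan_left_to_right(buzz, sr, sweep_seconds=0.5)
```

Equal-power (cosine/sine) gains keep the perceived loudness roughly constant as the sound travels across the stereo field, which is why it is the usual choice over a simple linear crossfade.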

In the last blog I posted a sound which I described as a “really harsh high-pitched noise”. Originally, I didn’t want to use it, as it sounded far too abrasive to have any real use in the piece. However, coming back to it, I felt that placing it above the sounds I already had made it sound like strange, uncanny TV static, which I think has its uses in the piece – so I added almost the entirety of that improvisation as well. There are elements of it that wash out really nicely into a big, spacey white-noise-and-reverb kind of sound. I got this effect by just turning the reverb knob on my Holy Grail pedal up to the maximum setting. There are also glitchy pitch-changing noises that come from me twisting the harmony knob on my pitch shift pedal. Here’s how it sounded on top of the other tracks in the piece:

When mixing the piece, I’ve tried to keep effects added in my DAW after the original improvisations very minimal, as I want to stay true to the original improvisations I made with the pedals. The only exception was one track, where I added a reverb to a chainsaw-like sound I had made with my distortion and chorus pedals. This was the original:

And this is with the reverb:

Reverb settings for the track

I did this because it helped the sound slot into the mix better, whereas before it had sounded far too harsh to add to the piece. I turned the dry signal all the way down and only kept the wet signal up.
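As a rough illustration of what “dry all the way down” means in signal terms: a dry/wet control is just a weighted sum of the untouched signal and the effected signal, so zeroing the dry weight leaves only the reflections. The sketch below is hypothetical – the `crude_reverb` function is a stand-in, not the plugin I actually used:

```python
import numpy as np

def crude_reverb(x, sr, delay_s=0.05, feedback=0.6, n_echoes=8):
    """Very rough reverb stand-in: a decaying series of delayed copies."""
    d = int(delay_s * sr)
    tail = np.zeros(len(x))
    for k in range(1, n_echoes + 1):
        start = k * d
        if start >= len(x):
            break
        tail[start:] += (feedback ** k) * x[: len(x) - start]
    return tail

def dry_wet_mix(dry_sig, wet_sig, dry=0.0, wet=1.0):
    """dry=0.0, wet=1.0 reproduces the 'wet only' setting described above."""
    return dry * dry_sig + wet * wet_sig

sr = 44100
impulse = np.zeros(sr)
impulse[0] = 1.0  # a single click standing in for the harsh chainsaw sound
wet_only = dry_wet_mix(impulse, crude_reverb(impulse, sr), dry=0.0, wet=1.0)
```

With `dry=0.0`, the original click disappears entirely from the output and only the softer, delayed echoes remain, which is the same reason the wet-only setting tamed the harshness of the chainsaw track.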

Most of the piece consists of interactions between all of the improvisations. Some elements come and go, whilst the low rumbling sound always stays as a reminder of the underlying stress beneath my life in London. As I mentioned before, my intention whilst mixing the piece has been to keep as many DAW-based effects off the piece as possible. I think my intentions and emotions I put into the improvisations I made should be sufficient, and I don’t really want to alter them at all.

I’ve given some consideration towards the name of this piece; as an homage to Terre Thaemlitz’s ‘Meditation on Wage Labor and The Death of the Album’ that I’ve written about in my essay, I think I may call it ‘Meditation on the Fact that I’m Trying To Meditate but the Restless Nature of the City That I Feel I Have to Live in to Forge a Career in Art and Music Gives me this Horrid, Rumbling Feeling Throughout my Body.’

In terms of a visual aspect to the hypothetical exhibition I am imagining, I think I would like to build a very small space (enough only for one person) for the listener to crouch inside with the speakers. I’ve tried brainstorming a few ideas for how I might paint or decorate the space, however I think I would just leave it completely dark and close the listener in. This would add to the overwhelm that I am trying to get across, sort of similar to a sensory deprivation chamber but not quite to that extreme. If I had more time, I might mix the piece in 5.1 sound to add to the intensity in the space.

Recording nasty hellscapes with self-oscillating pedals into Logic

I’ve been feeling a bit stuck on how to start with my piece – I have a sort of general idea as to the elements I want to include, but not a concrete idea of how I want it to sound. I felt a bit daunted by starting the piece with the text-to-speech bot, so instead I’ve experimented with some sonic textures, using the self-oscillating guitar pedals that I mentioned in a previous blog. These sounds may end up as a ‘bed’ for the bot voice to sit on top of, however I haven’t quite worked this out yet.

The whole process of recording with these pedals is a bit of trial-and-error, as they can act very unpredictably. I started off by creating a low bass drone with my chorus pedal. I did this by setting the bass knob to full and gradually bringing the treble knob up and down, whilst keeping the depth and rate at quite a low level. This is a snippet of how it sounded:

I quite like the sound by itself, but find it hard to imagine how I could place it in a mix with other textures.

The next sound I tried was a similar chorus setting, but with my pitch shift pedal set to a fifth above, with the original signal blended in:

This creates a really gritty, low rumbling sound. It sort of sounds like the two pedals are fighting each other, having an argument. Also reminds me of the TARDIS sound from Doctor Who a bit? I definitely think this could be a texture I use in my piece.

The next sound I experimented with was very abrasive and harsh for most of the time. It was a mix of my distortion, pitch shift and reverb pedals. A lot of the time it seemed to produce a really nasty high-pitched noise (you may want to turn your volume down before playing):

I don’t think I would use most of these sounds, although I do like the sound of the reverb coming in and washing everything out around 10 seconds in. This was a long take, and I did manage to get quite a usable sound in the end that I could use on top of the TARDIS-esque sound for some extra texture:

Again, very gritty and very fizzy but it does mix well with the other sound:

I also tried making some sounds with just my distortion and reverb pedal, and whilst the sounds weren’t bad, I don’t really see how I can make them fit with the other sounds I’ve made so far. I find the texture a bit one-dimensional; not bad, but not what I want, I don’t think:

One sound I made that I do like but feel as though I don’t know how I would bring it into the piece is a short glitchy sound I made by feeding my wah pedal into the chorus pedal:

Again, sticking to the Doctor Who theme this blog post seems to be taking, it sounds sort of like a sonic screwdriver. The sound itself is good, but again I don’t know how I would slot it into the piece.

I also experimented with recording my zither, to see if I could bring in some acoustic textures, but even after adding some plugins, it didn’t sound right to my ears.

I think some of the sounds I’ve come up with have been interesting, although I’m still not confident about where to take the piece. I don’t really know where I will fit the voice in, and I’m not sure about a structure or timeline for the piece at all. I’ll experiment with chopping up some of the text-to-speech bot’s sentences next time I work on the piece, to see if that adds an interesting dynamic instead of just reading the texts out in full. I could also reverse the sound file, add strange modulation, or experiment with it in a number of other ways to see what works.

Either way, I’m happy to have experimented some more, improvising with this very unpredictable technique. My favourite sound was the very bassy, rumbling ‘TARDIS’ concoction made with the pitch shift and the chorus, and maybe this could be because it’s the sound that had the least human input? I just turned the pedals on, recorded the sound for 4 minutes and listened to them fight, as it were. That’s probably why I didn’t come across any particularly obnoxious sounds that time round, and I will most likely try to build the piece around this.

Experiments with text-to-speech bots

I’ve never used artificial intelligence to help me make my pieces in the past. For this project I wanted to experiment with using AI-generated text-to-speech voices to read out texts from collaborators asking me to do things for free, or generally demanding quite a lot for no pay. To test out voices, I used an example from a night where I had just got off stage at a gig (at around 10pm), had already been in rehearsals the whole day, and had landed back in England from playing a gig in France at 3am the night before – I was absolutely worn out, needed to go home, and had a large amount of equipment on me. I kept getting texts from one of the bands I’m in asking me to come straight to their rehearsal for an hour after I got off stage. I had already explained the situation to them – that I was very worn out, couldn’t make it, and had a lot of stuff to carry – but they kept pestering me and asking me to come. It wasn’t the end of the world, but it was a bit annoying, especially as I had already explained why I couldn’t make it.

I wanted to use different voices for the people texting me and for myself, so I looked at websites that could do this. I (mistakenly) thought that it might be free or cheap to access these resources; however, I started to discover that a lot of these text-to-speech bots were actually quite expensive to use (Google’s was £300 per year). I found a site called murf.ai which had a few free voices, so I started playing around with them. However, after a while I realised I had to pay to download them. I tried screen recording, but the sound wouldn’t record.

Instead, I just decided to use the Google Translate text-to-speech voice. This isn’t ideal, as I wanted multiple voices and this only comes with one; however, it is free and easy to use, and I could screen record it to capture the audio. I used one of the texts I received that night as a practice example:

This was OK, but I wanted to really emphasise the length of the last word, “meee”. So I put spaces in between the ‘E’s to make it more drawn out:
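The same trick can be written down programmatically. Here is a small, hypothetical Python helper (I just typed the spaces by hand, so this is only a sketch of the idea) that spaces out a word’s trailing repeated letters so a TTS engine reads them as separate, elongated syllables:

```python
def space_out_tail(word: str) -> str:
    """Insert spaces between a word's trailing repeated letters,
    e.g. 'meee' -> 'me e e', nudging a TTS voice to draw the word out."""
    i = len(word) - 1
    # walk backwards over the run of repeated final letters
    while i > 0 and word[i - 1] == word[-1]:
        i -= 1
    head, tail = word[: i + 1], word[i + 1 :]
    return head + "".join(" " + c for c in tail)

print(space_out_tail("meee"))  # -> me e e
```

Words without a repeated final letter pass through unchanged, so the helper could be mapped over a whole message safely.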

This had the desired effect. I will experiment with both and put them into my piece to see if they sound good. I’m still not 100% sure about this aspect of my piece; I don’t know if I want to include the bot voice, or any voice at all. Something purely instrumental may work better. However, I will give it a try, as it could add something interesting.

Cybernetic Serendipity – A look into historical examples of machine-generated art

As I have been interested in using AI-generated voices as part of my piece, I’m well aware of the current controversy surrounding artificial intelligence and art, particularly around plagiarism and the concern that it will take away artists’ jobs in the future. For a while I wasn’t sure where I stood on this debate. At first I was quite excited when I saw projects like NightCafe start to spring up, as I had seen some friends use them to good effect for their album and single covers, and I did actually like the style of a lot of the works that were generated. However, I can obviously see the implications: when my friends used this art, one of the designers they had previously hired for covers missed out on work. It also causes concern for me about my own future as an artist. There are instances of this technology being used creatively, though, such as the company PlantWave creating technology that translates the inner workings of a plant into sound and music. I have worked with a producer who used this device, and he used it in a genuinely creative way – in the same way someone might use a loop to build a song on top of, as a tool. If AI is used in this way, I don’t see as much of a problem with it, and that’s why I’m happy to use it for my piece, even though I can see the moral wrongs of how the technology can be used.

In this blog post, I want to look at an older example of machines being used to make art and sound. In a recent lecture, we were shown the Cybernetic Serendipity exhibition which was held at the ICA in 1968:

This was a revolutionary exhibition that included pieces of art where most of the main content was generated by machines. Of course, a lot of them may seem rudimentary now, but at the time they would have been groundbreaking, and there are definitely some obvious precursors to AI-generated art – for example, the Harmonograph by Ivan Moscovich, which took input from whoever had an idea for an image. As curator Jasia Reichardt says in the above video, “it enables people who can’t draw to make pretty pictures”. This is very similar to sites like NightCafe, which allow users to input any words or scenes they like, and the AI will come up with an often surreal but interesting result.

The drawing by the harmonograph shown in the video

Another example is the machine by the Computer Technique Group, who took an image of John F. Kennedy and asked a machine to draw the picture in six different ways, with different versions showing lines drawn towards the ear or the eye.

John F. Kennedy, with lines pointing towards his ear

I had not previously been aware of this exhibition, and I find it interesting to see all these pieces of computer-generated art and look at how they contrast with today’s AI-generated works. These images are all quite simple and black and white, which may be due to the limitations of the technology at the time. Today’s AI-generated visual art can be as complex and detailed as possible, due to its near-unlimited access to the history of art and the ability it has to copy styles.

An AI piece of art, inspired by Van Gogh’s ‘Starry Night’ – but much more complex

I personally do find this a little scary, especially hearing recent examples of AI-generated music. However, I think – and hope – that people search for connections to other people through art, and therefore art made by machines will not supersede art made by humans. That said, I can still see this technology taking away the jobs of certain types of creators in the future, for instance those who make foley or soundtracks for films. If an AI bot can make those sounds anyway, why pay someone to do it? This also links back to what Rod Eley said in ‘The History of the Scratch Orchestra’ about the “machines and their machine-minders”. The machine-minders are still as intelligent as they ever were, but the machines are growing smarter and smarter.

Visiting Practitioner – Audrey Chen

Audrey Chen is a second-generation Taiwanese-American artist currently living in Berlin, who recently gave a talk at UAL as a guest lecturer, as part of the ongoing sound arts lecture series. Chen specialises in vocal improvisation using extended technique, which she often blends with analogue synthesisers. She spoke about her history and upbringing in a family of scientists and doctors, and told us how she broke the mould by first training as a classical musician and then venturing into the world of avant-garde music. When she mentioned this it resonated with me – I come from a family of people with ‘straight-laced’ jobs who sent me to private school when I was younger, in order for me to follow the same mould. Chen mentioned a level of resistance from her family, which I also experienced when I was younger and first had ambitions of making a career in music. Whilst Chen comes from a completely different background to my own, I could definitely see a couple of parallels in our upbringings, which I appreciated hearing.

Something else Chen mentioned is something I might take into consideration when starting projects in the future. In the past, when she was a single parent, she had to tour with her son, and financial constraints meant that most of the time she had to perform by herself out of necessity, so that she didn’t have to split the fees with another performer – this allowed Chen to make her living as an artist. I think what I can learn from this is to be more selective (and maybe even selfish!) when joining or starting new projects. The fewer people involved in a performance, the more revenue I get to take home, so I can sustain a living more easily in the creative arts. This has definitely been a problem when I have been involved with larger bands in the past – if you are splitting a £300 fee between six people, you each get £50, but if it were just two people it would be £150 each, which is a much more significant amount of money. I will keep this in mind in the future.

When she played us her projects, I particularly appreciated the work of her collaboration ‘BEAM SPLITTER’, with trombonist Henrik Munkeby Nørstebø:

Part of the reason I enjoy this project probably harks back to my own experiences with the trombone – it was the first instrument I ever learnt, but I was never very inspired by it, due to the mundane catalogue of ‘classical hits’ one had to play and learn in lessons – ‘Ode to Joy’, the ‘William Tell Overture’ and the like. Totally uncreative. I think this is the experience a lot of people have when they take up classical instruments at a young age. I enjoy watching the videos of this project, as it shows a new side to the instrument – a kind of extended technique on the trombone which I feel blends well with Chen’s use of vocal technique and analogue synthesisers.

Overall, I found this to be a useful and inspiring talk – I would have loved to have caught Chen’s performance at Cafe OTO later that day, but I had a rehearsal I couldn’t move. However, seeing her perform in the lecture was certainly interesting. Next time she is in London I will try to see one of her performances.

Initial Ideas for my Sound Piece

A couple of days ago, I had a constructive talk with Annie where we discussed the direction I might take my essay. We talked about how I could take the ideas presented in ‘Stockhausen Serves Imperialism’ and develop them into something more relevant today. I do a lot of session work that can be low-paid (and oftentimes unpaid), which is frustrating, as it means I have to work a part-time bar job to supplement the earnings I get from playing music (London being a very expensive city to live in doesn’t help much either). Together with going to university, I often find it very difficult to divide my time between these commitments. A lot of the time it can prove unwise to ask for more money for these session jobs, as there is always someone willing to undercut your price or do it for free, so you have to accept not being compensated very much at all. Annie and I discussed how I could write about the struggle of musicians under capitalism in my essay and look at examples of artists trying to fight this – she showed me Terre Thaemlitz’s project ‘Soulnessless’, which is a case study I could potentially use.

I’d like the sound piece I make for this unit to reflect the ideas presented in the essay, and Annie suggested I could maybe make something that involves me reading or singing out messages from people I work with asking me to do bits of work for free, or just generally asking too much of me.

This would be an interesting route to go down, although I may want to use AI text-to-speech generators as well as, or instead of, my human voice. This is for two reasons:

  1. Although I am used to singing in a more conventional musical context, I still don’t know how confident I would feel using my voice to read out these messages – I’m not a great actor. Maybe this sounds like a silly reason, but it is partly behind my choice.
  2. Conceptually, I think the idea of a robotised voice taking the place of my own human voice relates quite nicely to the ideas presented in Rod Eley’s essay, when he talks about “machines and their machine-minders” taking over the jobs of musicians. This is also a discussion that is very relevant today, due to recent developments in AI technology and the popularisation of sites such as ChatGPT.

So, I may use these robotised voices to read out the messages. It’s certainly an interesting concept, and maybe I will try and merge it with an underlying soundscape of chaos created by the self-oscillating pedals I discussed in a previous blog, and other sound files, to represent the cluttered nature of the way I live life at the moment as a working musician/student in London during the cost of living crisis.

Stockhausen Serves Imperialism

Having struggled to think of a topic for the upcoming essay, I recently had a flash of inspiration during a trip to Donlon Books on Broadway Market, next to London Fields. I’ve always wanted to visit this shop, but this was my first time there – it’s a shame I haven’t been in before! They had a shelf full of books about sound art, and a great selection of books on other cultural subjects as well. Since I was looking for inspiration for the essay, I browsed through the titles on the shelf. There were plenty of books by John Cage, David Toop and many other authors, but the title that stuck out to me was ‘Stockhausen Serves Imperialism’ by Cornelius Cardew. I read the description on the back of the book and was further intrigued.

The back of the book that I saw in the shop

I thought that a Marxist analysis of two of the main cis-white-male figureheads of sound art (Karlheinz Stockhausen and John Cage) that students are taught about at the start of the course at LCC sounded like a good springboard for ideas for me to find a topic from, so I bought the book and have been slowly making my way through it over the past few days (I find academic reading very challenging due to my ADHD making it hard for me to focus on it for long periods of time).

The book is a collection of essays, mainly by Cornelius Cardew but also by Rod Eley and John Tilbury. It covers a range of topics, from critiques of Cage and Stockhausen, to critiques of Cardew’s own earlier work (Cardew had previously served as an assistant to both Stockhausen and Cage before turning on their ideals), to the essay I have just finished, Rod Eley’s ‘History of the Scratch Orchestra’. The Scratch Orchestra was a project led by Cardew from 1969 to 1974 that relied mainly on improvisation and graphic scores. The idea that anyone could join the orchestra, regardless of ability, is certainly an intriguing one, and one has to admire Cardew for following through with, and committing to, this concept.

Some parts of Eley’s essay can be a little excruciating – for instance, when he makes the claim that the orchestra were victims of the “same social and cultural oppression experienced daily by black people”, because a few critics from a class of society they weren’t even trying to make music for (the bourgeoisie) wrote bad reviews of their shows. This claim, in my opinion, is just downright racist and strange.

However, he does discuss some interesting ideas in the essay. The idea that sound reproduction and sound amplification are the enemies of the working musician, as they take away jobs that might otherwise have been filled by a band or small orchestra (weddings, restaurants, hotel lobbies), is thought-provoking. He also talks about the homogenisation of music culture through these technological developments – pop music becomes millions of people dancing and listening to the same few bands, and the job of musicians in live contexts is taken over by DJs who play records instead. Eley calls them ‘machine minders’.

There are thought-provoking ideas in this essay that I might use to develop my topic for this unit, however I definitely do find parts of this book problematic – something to write about?

Turning Pedals into Synthesisers

Recently, I came across a video by music YouTuber Michael Banfield on turning guitar pedals into oscillators/synthesisers:

I’d never heard of this technique before, and as a guitarist with an abundance of pedals, I had to try it out.

To make this work, one has to either find an amplifier with two inputs, or use a signal splitter to combine two cables into one input. Plug one cable into the first input, and run its other end into the input socket of the first pedal in your chain. Then take the output from the last pedal in the chain and run it back into the amplifier’s other input. This creates a feedback loop, which is where the sounds come from. The technique is called ‘no-input’ and is commonly done with mixing desks – I hadn’t realised that guitar pedals could achieve a similar result.
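The behaviour can be sketched digitally too. Below is a minimal (and very idealised) Python model, assuming we treat the pedal chain as a gain stage plus a soft-clipping nonlinearity; none of this corresponds to my actual pedals, it just illustrates why the rig sings on its own:

```python
import math

def soft_clip(x):
    """Stand-in for the pedal chain's nonlinearity (e.g. a distortion stage)."""
    return math.tanh(x)

def no_input_loop(gain=2.5, seed=1e-4, steps=200):
    """Crude model of the no-input trick: the chain's output is fed
    straight back into its input. With loop gain > 1, the tiny seed
    (circuit noise) grows until the clipping stage limits it."""
    x = seed
    out = []
    for _ in range(steps):
        x = soft_clip(gain * x)  # one trip around the feedback loop
        out.append(x)
    return out

samples = no_input_loop()
```

Because the loop gain exceeds 1, a near-inaudible seed blows up into a self-sustaining signal whose level is capped only by the clipping – which matches the experience of the pedals howling the moment the loop is closed, with no instrument plugged in.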

Here’s my first experiment with this technique:

I’ve used the distortion pedal as the main oscillator, which I am then running through my other pedals. I find the effect the pitch shift pedal has particularly interesting – it has a very warped, distorted sound that is very aggressive. I also tried experimenting with using the tremolo pedal to modulate the rhythm of the note being produced, making the circuit sound almost like a sequenced synth:

Here’s an example of me trying to make a noisy drone sound:

I find this technique very engaging to experiment with – as a guitarist I use pedals day-in and day-out for my work, but using them in this new format is helpful for discovering new sounds I might not have made otherwise. I would like to attempt to use this technique in my piece somehow.

Considering topics for my essay

Originally, I wanted to write about the fixation on youth within popular music. This is a topic that has interested me for some time, and it can be linked to many different cultural theories and phenomena. It’s really strange how anyone who gains popularity from their music after the age of 30 seems to be portrayed as someone who has worked extremely hard, fought their corner for years and pushed through countless failures to get where they are – as if it’s a real testament to their work ethic that they’ve managed to become popular at such an ‘old’ age (see Jarvis Cocker’s prologue to Pulp’s performance of ‘Common People’ at Glastonbury 1995). Obviously this is true – they will probably have worked very hard at their craft and taken rejection after rejection to get where they are today. But why is this seen as an anomaly? In any other industry, to be in one’s mid-30s is to be considered relatively young, but with a good amount of experience.

There seems to be a particular fixation on youth – on one’s 20s and even teens – within popular music that I find quite damaging. As a session musician, I have worked with people in their late 20s who seem to be of the mindset that time is running out, that they must act quickly or they won’t succeed in their careers. They look up to the figures who have ‘made it’ past the age of 30 as if there is some sliver of hope on the horizon, and they constantly compare their age to other people’s and wonder whether they are doing the right things. I find this mindset very damaging, and it must be fed by some greater cultural issue – youth and attractiveness are marketable, which is most likely why this phenomenon occurs. Within other industries it really doesn’t exist so much. Film directors or actors may not have to worry about it, and there is definitely not the same fixation on youth that I have seen in the music industry.

I am interested in this topic, however I don’t feel it relates to the unit brief of ‘Contemporary Issues in Sound Art’ enough for me to write about it now. It would be good to potentially find a topic that relates more to the world of sound arts, as I feel like the link is a bit tenuous at the moment.