In order to produce a convincing body of work that contributes effectively to a wider discourse within Sound Arts, I need to reflect and ask myself – where am I coming from? Who am I? What do I represent? Why does what I have to say matter?
I come into the world of Sound Arts as a multi-instrumentalist who plays in 6 bands and gigs regularly across London. I’m very much a musician originally, one who has dipped their toes into this world to develop my practice.
At The George Tavern with my fretless bass, earlier this month.
I like to inject humour into my work wherever possible (a practice I’ve tried to develop over the past year). I suppose this may come from some of the trauma I experienced when I was younger – nothing crazy, but my parents’ long and messy divorce, a bit of neglect on their part, getting bullied at school, etc. A lot of the time I enjoy making art for people to laugh along with and not take too seriously, whilst still getting across the message that art with humour is just as valid and real as art that covers more serious issues.
I also believe I have undiagnosed ADHD – I have recently been in contact with the disability services at UAL to access support, and I have been on an NHS waiting list for a diagnosis for around a year. My GP said I most likely have it, but it’s incredibly difficult to get diagnosed through the system once you turn 18 – he actually apologised to me that he couldn’t really do anything to help.
Something that interests me is the role that neurodiversity (specifically ADHD) plays in arts education, and whether universities are doing enough to assist the learning processes of students with dyslexia, dyscalculia, Asperger’s, ADHD etc.
I recently found an interesting article written by Luca M. Damiani, a lecturer at LCC, that discusses the relationship between art and neurodiversity:
He specifically talks about his experiences with Asperger’s syndrome and is interested in “investigating how art and design expand perceptions of and give voice to neurological diversity”. I might like to cover something like this but from the perspective of a student with undiagnosed ADHD. I think it’s important to give voice to people with neurodiverse conditions and this is probably where I want to situate myself for this project.
For my first foray into the world of audio papers and content, I will be writing about an episode of the ‘Art + Music + Technology’ podcast, hosted by the late Darwin Grosse, and reviewing it in terms of the ‘Manifesto for Audio Papers’ as written by Sanne Krogh Groth and Kristine Samson for Seismograf.
The episode I chose to listen to was an interview with Greg LoPiccolo, a games developer previously involved in the production of the Guitar Hero and Rock Band video game series. LoPiccolo’s new project is a platform called ToneStone, which expands on the concept he was trying to develop with his previous efforts – that is, “to use technology to make music accessible to a mass audience”. He says that whilst his previous projects were more “like karaoke”, ToneStone lets the user be more creative by allowing them to create their own music out of loops, encouraging the kind of creativity seen in games of other genres, such as Roblox or Minecraft. LoPiccolo sees ToneStone as a games-design approach to learning music: in a video game, the player won’t know how to play at all at first. They start off with easy challenges and, as they learn, the game becomes more complex. The same is true with ToneStone – more possibilities are added as the user progresses. I think this is quite a good approach to helping people begin to produce and write their own work. I never had anything like this when I was beginning to make music, and software such as GarageBand or Logic Pro was a little daunting to me as a 10-year-old. The primary audience of this product is children and teenagers, and it’s commendable that LoPiccolo is attempting to demystify music making for a younger audience.
Being quite a typical podcast format, the episode can be somewhat difficult to relate to the manifesto for audio papers. However, it does adhere to a few of the principles mentioned – for instance, it certainly follows statement 5 (“The audio paper is multifocal, it assembles diverse and often heterogeneous voices”). It is narrated from two perspectives, the interviewer’s and the interviewee’s, unlike the singular narrative of a traditional written paper. It also “evokes affects and sensations”, as written in statement 4 of the manifesto. Whilst it consists only of the sound of two people having a conversation, the listener can pick up a lot more from it than from reading a transcript. You get the small pauses, the tone of the voices, even the breaths – it’s a more unique, human experience than reading. Other than that, I feel as though I am straining to find more statements to compare the podcast to. One could say it’s slightly idiosyncratic in that you’re listening to an unrehearsed conversation as opposed to a carefully thought-out, pre-planned written paper; however, I feel this is a bit of a stretch (maybe I’m wrong – I do find the language used in the manifesto slightly impenetrable at times).
Whilst I feel it’s difficult to compare this podcast to the manifesto for audio papers, it’s a good starting point to begin to analyse audio content that dissects the world of sound art. I’ve learnt how to critically listen to a spoken audio production, which I hadn’t done before, and I think this will help me when making my audio paper.
Bibliography
“Podcast 379: Greg LoPiccolo” (2022) Art + Music + Technology. Available at: https://artmusictech.libsyn.com/podcast-379-greg-lopiccolo (Accessed: 2022).
Groth, S.K. and Samson, K. (2016) Audio papers – A Manifesto, SEISMOGRAF.ORG. Available at: https://seismograf.org/fokus/fluid-sounds/audio_paper_manifesto (Accessed: December 6, 2022).
Recently, I have been considering which one of the two options I would like to hand in for. Before starting my second year at LCC, I was convinced that I would mostly be focusing my attention on Sound for Screen, as for my final project last year I rescored 5 minutes of Ingmar Bergman’s ‘Persona’ and it was by far the most I had enjoyed making practical work in my time at the university. I thought that I could work really hard on this subject throughout second year, and then potentially go on to do a Diploma in Professional Studies next year, focusing my work around creating sound for films.
However, now I have been back for a few weeks, I think my plan has changed. I have really enjoyed getting stuck into electronics, building synthesisers and making crazy sounds. I have found the whole process really rewarding, and although in many ways it can feel as though knowledge of this field doesn’t come naturally to me, I can see that when I do make something I am proud of the results and want to experiment more.
I also want to make this course more relevant to my practice outside of University. I am a multi-instrumentalist who is in 6 bands (a couple are passion projects, the others are session work) and I’m very interested in making technology that can complement my skillset as a musician. Although I know these things aren’t directly related to my studying Sound Arts, I think it’s important to shape what I learn on this course around what I am doing outside of it, and what I intend to keep doing after I (hopefully) graduate. Being able to craft my own creative electronic circuits is a hugely inspiring goal, and being able to play music on instruments I’ve made would be an impressive feat, especially as someone who has always felt scared to dip their toes into the world of electronics.
In addition to this, I believe I have a better view as to how my project would translate to a gallery exhibition for the next element if I were to hand in a piece of electronics, rather than 5 minutes of a film I’d rescored. This, of course, isn’t to say that the latter couldn’t be done – it’s just that I feel more inspired to make my own instruments for people to play. I am still fixated on using the children’s toys in some form for the exhibition – maybe I could turn the cuddly toy into a touch-sensitive synth? Perhaps the more you gouge its eyes out the more it screams? Something horrifying but stupid like that could be quite effective…
I recently watched the 2011 film ‘We Need to Talk About Kevin’ (directed by Lynne Ramsay). It’s based on the 2003 novel of the same name by Lionel Shriver, and follows the events before, and the aftermath of, a fictional school shooting by a teenager called Kevin Khatchadourian. It uses a non-linear plot that gradually unravels throughout the course of the film, and I found that the sound design by Paul Davies (also responsible for the sound on ‘You Were Never Really Here’) plays an essential part in this.
The story is mainly told from the perspective of Kevin’s mother, Eva. The very first shot is a slow zoom on a curtain next to an open door, being blown gently by the wind. We can hear the sound of sprinklers, and although the audience doesn’t know it yet, we will see this shot and hear this sound again later in the film, where it serves as a motif to highlight Kevin’s horrific actions. This shot is later shown to be part of a scene where Eva walks into the family home after witnessing the aftermath of Kevin’s attack, and finds her husband and daughter also dead in the garden, with the sprinkler left on.
The opening shot of ‘We Need to Talk About Kevin’
The sprinkler sound appears many times throughout the film. For instance, when Eva finds that Kevin has painted and drawn all over the walls and surfaces of her study, the sound comes back. It seems to recur whenever Kevin does something he knows he is not supposed to do, and is definitely linked to his actions.
Another recurring sound is one of people screaming. It first happens towards the end of the slow zoom, before the film cuts to Eva (presumably before Kevin was born) at La Tomatina festival in Buñol, Valencia; a bizarre festival where large crowds of people gather together and throw tomatoes at each other. The imagery and sound work together really well in this scene, both foreshadowing later events in the film. There is the obvious visual symbolism of people drowning in blood, however the sound design is really what tells the story in this shot. It transforms from the happy sounds of the crowd having a good time, to a more sinister sound, washed in reverb, of people screaming again. This is the sound that comes up later in the film when Eva arrives at the school to find teenagers being taken out on stretchers after Kevin has shot them with his bow and arrow.
La Tomatina shot
These are only two examples, from the first minute of the film, of sounds that recur and help move the plot along, and I think the sound design is essential to making the film what it is.
I recently read an article by Brett Ashleigh on the Screen Queens website that argues that the sound design tells the story through a feminine perspective (‘écoute féminine’, as Ashleigh describes it), which mirrors the plot as it is shown mostly through Eva’s eyes. Ashleigh argues that the sound design disengages from the traditional patriarchal linear structure which an audience would usually expect, and instead uses a feminist approach to complement the film. What she means by this is that it “has the ability to display a narrative that depends on emotional and affective techniques rather than those based in language”.
In a way, I can understand where she is coming from – the story is very non-linear and does rely on emotion-based storytelling rather than one straight narrative. It is also shown from the feminine perspective of Eva, rather than the masculine perspective of Kevin. I can certainly see that the film can be said to be made from a feminine perspective – however, does that mean the sound design is too? I find it slightly confusing to assign linear storytelling to the patriarchy and then assign emotion-based, non-linear storytelling to feminism. Why are these different types of narrative being assigned different genders? I never really feel like I get a solid answer from Ashleigh’s article – she does make many points as to how the film puts you in Eva’s perspective, and how the soundtrack helps emphasise this, but we’re never told exactly why a non-linear storyline is particularly feminine. I don’t really like to speak on behalf of women, but I find it a bit patronising to put these two types of storytelling into different (gendered) boxes.
However, as I did mention, Ashleigh does give a few good reasons as to why the sound design can be considered part of a larger feminist piece of art. For example, when Eva walks into her house, hears the sprinkler and goes outside to see her murdered husband and daughter, the sound of the sprinkler remains outside of the real world of the film to an extent. As Ashleigh describes, “we are once again reminded that we are witnessing Eva’s subjective memory, portraying things not as they truly were, but as she has orchestrated in her mind”. This means that through the sprinkler sound the film is putting us in Eva’s shoes, making the plot “essentially female”. So, whilst I disagree with some parts of Ashleigh’s analysis, I can agree that the soundtrack is part of a film which shows the childhood of a school shooter through the feminine perspective of his mother. However, I still disagree that certain types of narrative can be inherently feminine or masculine. (It’s a film I don’t like, but Quentin Tarantino’s ‘Pulp Fiction’ uses a very non-linear narrative, and would we argue it is from a particularly feminine perspective?)
Bibliography
Ashleigh, B. (2016) A feminist approach to sound in we need to talk about Kevin, Screen Queens. Available at: https://screen-queens.com/2016/11/17/a-feminist-approach-to-sound-in-we-need-to-talk-about-kevin/ (Accessed: October 9, 2022).
For my next step into DIY electronics, I’ve made a simple synthesiser using a few components connected together on a piece of technology called a breadboard. A breadboard is a very simple piece of kit, which consists of a number of contact points connected together by metal strips. Below is an example of the breadboard I have been working with:
As you can see, there are two lines of contact points at both the top and bottom of the breadboard. These are connected by horizontal strips of metal. The contact points in the middle of the breadboard are connected by vertical strips of metal, with a break in the middle to separate the two halves of the board.
This circuit was very easy to build, and is borrowed from the 2006 book ‘Handmade Electronic Music’, by Nicolas Collins. All it consists of is an Integrated Circuit (IC), a resistor, a capacitor, a few jumper cables, a 9-volt battery and an audio cable to connect it to an interface (via a couple of crocodile clips). Here’s the finished circuit:
In the circuit you can see that the positive and negative (ground) terminals of the battery are connected to the top and bottom rows of the breadboard respectively. The positive terminal is then connected to pin 14 (top left) of the IC via a jumper lead, and the ground is connected to pin 7 (bottom right). The resistor is connected to pins 1 and 2 of the IC, whilst the capacitor is connected to pin 1 of the IC and the ground of the circuit. A jumper cable connected to the ground is then connected to a crocodile clip, which in turn is connected to the audio cable. Another jumper cable is connected to pin 2 of the IC, and then to the audio cable via a crocodile clip.
Whilst all of the physical information about the synthesiser can be engaging, what I find most interesting about it is how it sounds! The circuit will usually just make a basic square wave, however I’ve added a light-dependent resistor into the circuit so that the pitch of the wave can be altered. It alters the pitch by varying the resistance depending on how much light gets in – the more light comes through, the less resistance is put into the circuit, which allows a larger amount of electrical current to flow through. More light therefore = higher pitch, and vice versa. This means the instrument can be played in a similar way to a theremin. This is how it sounds in its most basic form:
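To get a feel for why more light means a higher pitch, here’s a small Python sketch of the relationship. It assumes a simple RC relaxation oscillator of the kind in Collins’s book, where pitch is roughly inversely proportional to resistance times capacitance; the constant 1.2 and the LDR resistance figures are my own ballpark assumptions, not measurements from this circuit:

```python
# Rough pitch estimate for a Schmitt-trigger-style relaxation oscillator.
# f ~ 1 / (k * R * C). The constant k (~1.2) depends on the chip's
# hysteresis thresholds and supply voltage, so treat results as ballpark.

def oscillator_freq_hz(resistance_ohms, capacitance_farads, k=1.2):
    """Approximate frequency of the square wave in hertz."""
    return 1.0 / (k * resistance_ohms * capacitance_farads)

C = 4.7e-6  # the 4.7 uF capacitor in the basic circuit

# Hypothetical LDR values: ~1 Mohm in darkness down to ~10 kohm under a torch.
for label, r in [("dark (~1 Mohm)", 1_000_000),
                 ("room light (~100 kohm)", 100_000),
                 ("torch (~10 kohm)", 10_000)]:
    print(f"{label}: {oscillator_freq_hz(r, C):.2f} Hz")
```

Less resistance means the capacitor charges and discharges faster, so the pitch rises – which is exactly what waving a torch over the LDR does.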
And here’s a video of me playing the synth like a theremin (the high pitched noise is someone else’s synth in the background, mine is the much lower pitched sound):
I decided I wanted to take this setup home for the week so I could record it and mess around with a few effects in my DAW. Here’s the same sound you just heard but with a few effects (autofilter, ring modulation, pitch shift, distortion, and a couple of others) to make it sound like a strange room of tweeting birds with a chainsaw in the background:
‘Tweety’ synth
Here’s the same settings but with the autofilter turned off – this gives a very horrifying distorted sound:
‘Chainsaw’ synth
Eventually I’d love to be able to make these effects as analog circuits in their own right!
For all of the previous recordings I was using a phone torch to illuminate the light-dependent resistor, as I felt the pitch was too low otherwise, so I decided to change things up a bit – in this next recording I switched out the 4.7μF capacitor I had been using for a 0.1μF capacitor. This affects the range of pitches that the light-dependent resistor will sweep through; in this case, as it is a smaller capacitor, it makes the signal higher-pitched. Coupled with a rotary speaker emulation, this can give quite a nice ‘robotic bleep’ sound:
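If we assume pitch scales inversely with capacitance (as in a simple RC oscillator), the swap should multiply every frequency by the same factor. The 30 Hz starting tone below is just an illustrative figure, not a measurement:

```python
# Assuming f ~ 1/C, swapping the capacitor scales every pitch by the
# ratio of the old capacitance to the new one.

old_c = 4.7e-6   # 4.7 uF
new_c = 0.1e-6   # 0.1 uF

ratio = old_c / new_c
print(f"Swap raises pitch by a factor of about {ratio:.0f}")  # ~47x

# e.g. a hypothetical 30 Hz drone would jump to roughly:
print(f"30 Hz -> ~{30 * ratio:.0f} Hz")
```

A 47× jump takes the whole sweep range well up into comfortably audible territory, which matches the brighter, bleepier sound of the recording.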
Robots attack!
I also experimented with a touch-sensitive resistor, which applies different amounts of resistance to the circuit depending on how hard you squeeze it. It was fun to experiment with, however I found it a bit less expressive than the light-dependent resistor:
‘Touch synth’ with some pitch shift, phaser and reverb
Picture of the touch sensitive synth in action
Whilst on a break from making crazy sounds, I took a trip downstairs to the charity shop below the studio I was working in. I found a couple of children’s toys that I thought could be potential candidates as housing for my final piece for this element. I know it’s quite a long way off still, but these toys inspired me quite a bit and I think I have a couple of ideas for what I may want to exhibit in the second element of the unit as a whole.
Where is art without a sense of humour and play, after all? And wouldn’t it be amazing to enter a devilish room of these two and their friends screaming at you?
Bibliography
Collins, N. (2006) Handmade electronic music: The Art of Hardware Hacking. New York, New York: Taylor & Francis Group.