
‘Art + Music + Technology’ Podcast Review

For my first foray into the world of audio papers and content, I will be writing about an episode of the ‘Art + Music + Technology’ podcast, hosted by the late Darwin Grosse, and reviewing it in terms of the ‘Manifesto for Audio Papers’ as written by Sanne Krogh Groth and Kristine Samson for Seismograf.

The episode I chose to listen to was an interview with Greg LoPiccolo, a games developer who was previously involved in the production of the Guitar Hero and Rock Band video game series. LoPiccolo’s new project is a platform called ToneStone, which expands on the concept he was trying to develop with his previous efforts – that is, “to use technology to make music accessible to a mass audience”. He says that whilst his previous projects were more “like karaoke”, ToneStone allows the user to be more creative by letting them create their own music out of loops, encouraging creativity similar to that seen in games of other genres, such as Roblox or Minecraft. LoPiccolo sees ToneStone as a games design approach to learning music: in a video game, the player won’t know how to play the game at all at first. They will start off with easy challenges, and as they learn, the game becomes more complex. The same is true of ToneStone – more possibilities are added as the user progresses. I think this is quite a good approach to helping people begin to produce and write their own work. I never had anything like this when I was beginning to make music, and software such as GarageBand or Logic Pro was a little daunting to me as a 10-year-old. The primary audience of this product is children and teenagers, and it’s commendable that LoPiccolo is attempting to demystify music making for a younger audience.

Since the episode follows quite a typical podcast format, it can be somewhat difficult to relate to the manifesto for audio papers. However, it does adhere to a few of the principles mentioned – for instance, it certainly follows statement 5 (“The audio paper is multifocal, it assembles diverse and often heterogeneous voices”). It is narrated from two perspectives, the interviewer’s and the interviewee’s, unlike the singular narrative of a traditional written paper. It also “evokes affects and sensations”, as written in statement 4 of the manifesto. Whilst it consists only of the sound of two people having a conversation, the listener can pick up a lot more from listening than from reading a transcript. You get the small pauses, the tone of the voices, even the breaths – it’s a more unique, human experience than reading. Other than that, I feel as though I am straining to find more statements to compare the podcast to. One could say it’s slightly idiosyncratic in that you’re listening to an unrehearsed conversation as opposed to a carefully thought-out, pre-planned written paper; however, I feel this is a bit of a stretch (maybe I’m wrong – I do find the language used in the manifesto slightly impenetrable at times).

Whilst I feel it’s difficult to compare this podcast to the manifesto for audio papers, it’s a good starting point to begin to analyse audio content that dissects the world of sound art. I’ve learnt how to critically listen to a spoken audio production, which I hadn’t done before, and I think this will help me when making my audio paper.

Bibliography

  1. “Podcast 379: Greg LoPiccolo” (2022) Art + Music + Technology. Available at: https://artmusictech.libsyn.com/podcast-379-greg-lopiccolo (Accessed: 2022).
  2. Groth, S.K. and Samson, K. (2016) Audio papers – A Manifesto, SEISMOGRAF.ORG. Available at: https://seismograf.org/fokus/fluid-sounds/audio_paper_manifesto (Accessed: December 6, 2022).

Narrowing down my options for hand-in

Recently, I have been considering which of the two options I would like to hand in for. Before starting my second year at LCC, I was convinced that I would mostly be focusing my attention on Sound for Screen, as for my final project last year I rescored five minutes of Ingmar Bergman’s ‘Persona’, and it was by far the most enjoyable practical work I made in my time at the university. I thought that I could work really hard on this subject throughout second year, and then potentially go on to do a Diploma in Professional Studies next year, focusing my work around creating sound for films.

However, now that I have been back for a few weeks, I think my plan has changed. I have really enjoyed getting stuck into electronics, building synthesisers and making crazy sounds. I have found the whole process really rewarding, and although in many ways it can feel as though knowledge of this field doesn’t come naturally to me, I can see that when I do make something, I am proud of the results and want to experiment more.

I also want to make this course more relevant to my practice outside of university. I am a multi-instrumentalist who is in six bands (a couple are passion projects, the others are session work), and I’m very interested in making technology that can complement my skill set as a musician. Although I know these things aren’t directly related to my studying Sound Arts, I think it’s important to shape what I learn on this course around what I am doing outside of it, and what I intend to keep doing after I (hopefully) graduate. Being able to craft my own creative electronic circuits is a hugely inspiring goal, and being able to play music on instruments I’ve made would be an impressive feat, especially as someone who has always felt scared to dip their toes into the world of electronics.

In addition to this, I believe I have a better view of how my project would translate to a gallery exhibition for the next element if I were to hand in a piece of electronics, rather than five minutes of a film I’d rescored. This, of course, isn’t to say that the latter couldn’t be done – it’s just that I feel more inspired to make my own instruments for people to play. I am still fixated on using the children’s toys in some form for the exhibition – maybe I could turn the cuddly toy into a touch-sensitive synth? Perhaps the more you gouge its eyes out, the more it screams? Something horrifying but stupid like that could be quite effective…

Use of sound in ‘We Need to Talk About Kevin’ (Feminist Sound Design?)

I recently watched the 2011 film ‘We Need to Talk About Kevin’ (directed by Lynne Ramsay). It’s based on the 2003 novel of the same name by Lionel Shriver, and follows the events before, and the aftermath of, a fictional school shooting by a teenager called Kevin Khatchadourian. It uses a non-linear plot that gradually unravels throughout the course of the film, and I found that the sound design by Paul Davies (also responsible for the sound on ‘You Were Never Really Here’) plays an essential part in this.

The story is mainly told from the perspective of Kevin’s mother, Eva. The very first shot is a slow zoom on a curtain next to an open door, being blown gently by the wind. We can hear the sound of sprinklers, and although the audience doesn’t know it yet, we will see this shot and hear this sound again later in the film, where it serves as a motif to highlight Kevin’s horrific actions. This shot is later shown to be part of a scene where Eva walks into the family home after witnessing the aftermath of Kevin’s attack, and finds her husband and daughter also dead in the garden, with the sprinkler left on.

The opening shot of ‘We Need to Talk About Kevin’

The sprinkler sound appears many times throughout the film. For instance, when Eva finds that Kevin has painted and drawn all over the walls and surfaces of her study, the sound comes back. It tends to occur whenever Kevin does something he knows he is not supposed to, and is definitely linked to his actions.

Another recurring sound is one of people screaming. It first happens towards the end of the slow zoom, before the film cuts to Eva (presumably before Kevin was born) at La Tomatina in Buñol, Valencia – a bizarre festival where large crowds of people gather together and throw tomatoes at each other. The imagery and sound work together really well in this scene, both foreshadowing later events in the film. There is the obvious visual symbolism of people drowning in blood; however, the sound design is really what tells the story in this shot. It transforms from the happy sounds of the crowd having a good time to a more sinister sound, washed in reverb, of people screaming. This is the sound that comes up later in the film when Eva arrives at the school to find teenagers being taken out on stretchers after Kevin has shot them with his bow and arrow.

La Tomatina shot

These are only two examples, from the first minute of the film alone, of sounds that recur and help move the plot along, and I think the sound design is essential to making the film what it is.

I recently read an article by Brett Ashleigh on the Screen Queens website that argues that the sound design tells the story through a feminine perspective (‘écoute féminine’, as Ashleigh describes it), which mirrors the plot as it is shown mostly through Eva’s eyes. Ashleigh argues that the sound design disengages from the traditional patriarchal linear structure which an audience would usually expect, and instead uses a feminist approach to complement the film. What she means by this is that it “has the ability to display a narrative that depends on emotional and affective techniques rather than those based in language”.

In a way, I can understand where she is coming from – the story is very non-linear and does rely on emotion-based storytelling rather than one straight narrative. It is also shown from the feminine perspective of Eva, rather than the masculine perspective of Kevin. I can certainly see that the film can be said to be made from a feminine perspective – however, does that mean the sound design is too? I find it slightly confusing to assign linear storytelling to the patriarchy and then assign emotion-based, non-linear storytelling to feminism. Why are these different types of narrative being assigned different genders? I never really feel like I get a solid answer from Ashleigh’s article – she makes many points as to how the film puts you in Eva’s perspective, and how the soundtrack helps emphasise this, but we’re never told exactly why a non-linear storyline is particularly feminine. I don’t like to speak on behalf of women, but I find it a bit patronising to put these two types of storytelling into different (gendered) boxes.

However, as I mentioned, Ashleigh does give a few good reasons as to why the sound design can be considered part of a larger feminist piece of art. For example, when Eva walks into her house, hears the sprinkler and goes outside to see her murdered husband and daughter, the sound of the sprinkler remains, to an extent, outside the real world of the film. As Ashleigh describes, “we are once again reminded that we are witnessing Eva’s subjective memory, portraying things not as they truly were, but as she has orchestrated in her mind”. This means that through the sprinkler sound the film is putting us in Eva’s shoes, making the plot “essentially female”. So, whilst I disagree with some parts of Ashleigh’s analysis, I can agree that the soundtrack is part of a film which shows the childhood of a school shooter through the feminine perspective of his mother. However, I still disagree that certain types of narrative can be inherently feminine or masculine. (It’s a film I don’t like, but Quentin Tarantino’s ‘Pulp Fiction’ uses a very non-linear narrative, and would we argue it is told from a particularly feminine perspective?)

Bibliography

  1. Ashleigh, B. (2016) A feminist approach to sound in We Need to Talk About Kevin, Screen Queens. Available at: https://screen-queens.com/2016/11/17/a-feminist-approach-to-sound-in-we-need-to-talk-about-kevin/ (Accessed: October 9, 2022).

Building a simple synthesiser on a breadboard, taking it home, and experimenting with effects

For my next step into DIY electronics, I’ve made a simple synthesiser using a few components connected together on a breadboard. A breadboard is a very simple piece of kit, consisting of a number of contact points connected together by metal strips. Below is an example of the breadboard I have been working with:

As you can see, there are two lines of contact points at both the top and bottom of the breadboard. These are connected by horizontal strips of metal. The contact points in the middle of the breadboard are connected by vertical strips of metal, with a break in the middle to separate the two halves of the board.

This circuit was very easy to build, and is borrowed from Nicolas Collins’ 2006 book ‘Handmade Electronic Music’. All it consists of is an integrated circuit (IC), a resistor, a capacitor, a few jumper cables, a 9-volt battery and an audio cable to connect it to an interface (via a couple of crocodile clips). Here’s the finished circuit:

In the circuit, the positive and negative (ground) terminals of the battery are connected to the top and bottom rows of the breadboard respectively. The positive terminal is then connected to pin 14 (top left) of the IC via a jumper lead, and the ground is connected to pin 7 (bottom right). The resistor is connected between pins 1 and 2 of the IC, whilst the capacitor is connected between pin 1 of the IC and the ground of the circuit. A jumper cable connected to the ground runs to a crocodile clip, which in turn is connected to the audio cable. Another jumper cable is connected to pin 2 of the IC, and from there to the audio cable via a second crocodile clip.
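
Out of curiosity, here is a minimal sketch of how one might estimate the pitch of this kind of circuit, assuming the IC is the CD40106 hex Schmitt trigger that Collins’ book builds its oscillators around (the component values below are hypothetical, not measured from my board). A commonly quoted rule of thumb for an RC Schmitt-trigger relaxation oscillator is f ≈ 1/(1.2RC):

```python
# A rough sketch (my own, not from Collins' book) of the rule of thumb
# f ~ 1 / (1.2 * R * C) for a CD40106-style Schmitt-trigger oscillator.
# The real frequency depends on the chip's hysteresis thresholds and the
# supply voltage, so treat these numbers as ballpark estimates only.

def osc_freq_hz(r_ohms: float, c_farads: float) -> float:
    """Approximate frequency of an RC Schmitt-trigger relaxation oscillator."""
    return 1.0 / (1.2 * r_ohms * c_farads)

# Hypothetical component values:
for r_ohms in (1_000, 10_000, 100_000):
    print(f"R = {r_ohms:>7} ohms, C = 4.7 uF -> {osc_freq_hz(r_ohms, 4.7e-6):7.1f} Hz")
```

Since the frequency is inversely proportional to both R and C, swapping either component shifts the pitch – which is exactly what the experiments below play with.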

Whilst all of the physical information about the synthesiser can be engaging, what I find most interesting about it is how it sounds! The circuit will usually just make a basic square wave; however, I’ve added a light-dependent resistor into the circuit so that the pitch of the wave can be altered. Its resistance varies depending on how much light reaches it – the more light comes through, the lower the resistance, which allows a larger current to flow through the circuit. More light therefore = higher pitch, and vice versa. This means the instrument can be played in a similar way to a theremin. This is how it sounds in its most basic form:

And here’s a video of me playing the synth like a theremin (the high-pitched noise is someone else’s synth in the background; mine is the much lower-pitched sound):

I decided I wanted to take this setup home for the week so I could record it and mess around with a few effects in my DAW. Here’s the same sound you just heard but with a few effects (autofilter, ring modulation, pitch shift, distortion, and a couple of others) to make it sound like a strange room of tweeting birds with a chainsaw in the background:

‘Tweety’ synth

Here are the same settings but with the autofilter turned off – this gives a very horrifying distorted sound:

‘Chainsaw’ synth

Eventually I’d love to be able to make these effects as analogue circuits in their own right!

For all of the previous recordings I was using a phone torch to illuminate the light-dependent resistor, as I felt the pitch was too low otherwise, so I decided to change things up a bit – in this next recording I switched out the 4.7 μF capacitor which I had been using for a 0.1 μF capacitor. This changes the range of pitches that the light-dependent resistor will sweep through; in this case, as it is a smaller capacitor, it makes the signal higher-pitched. Coupled with a rotary speaker emulation, this can give quite a nice ‘robotic bleep’ sound:

Robots attack!
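
As a quick back-of-envelope check on that capacitor swap, using the same rule-of-thumb approximation as the sketch earlier in this post: frequency is inversely proportional to capacitance, so the change should raise every pitch by roughly the same factor, wherever the light-dependent resistor happens to be sitting.

```python
# Rough estimate of the pitch shift from swapping the 4.7 uF capacitor
# for a 0.1 uF one; frequency scales with 1/C in the approximation above.
ratio = 4.7e-6 / 0.1e-6
print(f"Expected pitch increase: roughly {ratio:.0f}x")  # about 47x
```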

I also experimented with a touch-sensitive resistor, which applies different amounts of resistance to the circuit depending on how hard you squeeze it. It was fun to experiment with; however, I found it a bit less expressive than the light-dependent resistor:

‘Touch synth’ with some pitch shift, phaser and reverb
Picture of the touch sensitive synth in action

Whilst on a break from making crazy sounds, I took a trip downstairs to the charity shop below the studio I was working in. I found a couple of children’s toys that I thought could be potential candidates as housing for my final piece for this element. I know it’s quite a long way off still, but these toys inspired me quite a bit and I think I may have a couple of ideas for what I want to exhibit in the second element of the unit as a whole.

Where is art without a sense of humour and play, after all? And wouldn’t it be amazing to enter a devilish room of these two and their friends screaming at you?

Bibliography

  1. Collins, N. (2006) Handmade Electronic Music: The Art of Hardware Hacking. New York: Taylor & Francis Group.

First adventure into DIY Electronics – building a *very* primitive synthesiser

To kick off second year at LCC, I have chosen two out of the three options available to me for the first element of the Specialising and Exhibiting unit. These are ‘Sound for Screen’ and ‘Expanded Studio Practice for Twenty First Century Sound Artists’. I’m not sure which of these two I would like to hand in for yet (could be both!), so I’ll be documenting my process in each of these fields until I come to a conclusion on what I want to make.

As a first exercise in the Expanded Studio Practice Unit, I experimented with constructing an extremely basic synthesiser out of a small speaker driver, some crocodile clips, a couple of paperclips and a 9 volt battery, which looked and sounded like this:

As you can hear, it’s quite noisy, but its construction is very simple. It works by connecting the negative terminal of the speaker to the negative terminal of the battery using a crocodile clip. Another crocodile clip is hooked up to the positive terminal of the speaker driver, with the other end of the cable attached to a paper clip. A third cable is attached to the positive terminal of the battery, and its other end is also attached to a paper clip. When you touch the two paper clips together, a connection is made, causing the speaker to make a single oscillation. What’s interesting is that when you then put both paperclips on top of the speaker cone (like I’ve done in the video), they bounce when the speaker oscillates, breaking and remaking the connection and causing it to oscillate again and again. A higher frequency of oscillations means a higher pitch, hence the high-pitched noisy sound in the video.

This primitive synthesiser can be a bit temperamental – when experimenting with it, I found it was a bit of a game of luck as to whether the paperclips would actually bounce off each other for more than half a second at a time. However, it was still fun to make, and I’m excited to produce more complicated circuits in the future. I’ve never really experimented with creating instruments in this hands-on way before and it’s definitely something that appeals to me.

Final Thoughts on my Score

As previously mentioned, this project was my first attempt at composing for screen. It was a completely new way of working for me, and I really enjoyed it – I found it really helpful to have a visual reference to inspire me whilst making the piece, and I think I definitely chose the perfect film to score as there was so much imagery to bounce off of.

I think this is the mix I am proudest of this year, and listening back to my earlier pieces I can definitely hear improvements, both conceptually and technically. I’m also happy that I managed to bring the electric guitar (my primary musical instrument) into the piece, and I hope that if I keep doing this it may help me develop my own unique voice within my art. I think I’ve also developed my technique in terms of using distortion in my work – this is something I have shied away from in the past, as I never felt I could successfully implement it into a piece, however I feel as though I’ve been developing a good ear for when and how to use it.

However, I do believe there are points I could improve on. I found it hard to keep the master level of the track at an adequate volume – I appreciate that a varied dynamic range is part of what builds and releases suspense in a track; however, there were times when the master fader would clip at +0.4 dB, and if I turned the master volume down as a whole, this made the quieter parts too quiet, in my opinion. I did try to use compression, but I didn’t really feel like I was getting the result I wanted without the piece losing something.

I also think that I could’ve potentially introduced more new elements into the score for the section in the morgue. I really liked the creepy breathing sound that I used, however I feel like I could’ve had something extra come in halfway through, as in my score there is around a minute of just the drone and the breathing. When watching it I feel conscious that nothing new is coming in, although maybe this is more an insecurity on my part as the composer, and I could learn to embrace moments when there is less going on as a means to making the busier moments stand out more.

Overall, I am very pleased with how this piece turned out, and I really look forward to improving further in my second year at LCC (and doing more film scores!). Although there were definitely aspects I feel I could make better, I’m happy to have learnt lessons now that I can bring to future units.

Finishing my Score

Today, I completed the rest of my rescore of ‘Persona’. I started where I left off yesterday, which meant it was time to score my least favourite shot – the killing of the lamb. This shot is part of the sequence I would have taken out if I could; however, because it’s right in the middle of the clip, it would’ve been very difficult to do so. I find it more unpleasant than meaningful or interesting, and scoring it has definitely been my least favourite experience whilst working on this project.

I like the idea of foley being slightly mismatched with what’s on screen – it gives a sort of creepy, disjointed effect. For the sound of the lamb’s blood coming out of its throat I used the Zoom H5 recorder to record the sound of water coming out of my tap:

I also added a small convolution reverb, just to make it sound like it was coming from a slightly bigger room. I’m not sure why I preferred the sound of this, but it seemed to work better with the visuals.

Water reverb

It sort of reflects the image of the blood coming out of the lamb, but has a very calm sound to it which in a way makes the imagery seem more disturbing. I suppose it’s akin to the uncanny valley phenomenon, where robots that look sort of similar to humans but just slightly different can appear very unsettling. The idea of things mostly seeming normal but something just seeming a little off can often be more terrifying than pure gore.

The next shot of the scene is an image of someone’s hands being nailed to a crucifix. This was quite easy to score, as all I had to do was record a few hitting sounds and match them up to the video. I recorded myself hitting the side of my kitchen sink with a lighter. This sounded good while I was recording it; however, when I played it back, it sounded a lot thinner and less metallic than it had in the room:

As I couldn’t find any better sounds to use in my home, I decided to stick with this and just edit it in Logic. I added EQ, distortion, reverb and a noise gate – the result sounded like this:

Whilst this doesn’t sound exactly like a hammer going through a hand (I could’ve added a squelching sound from fruit), I think it has a similar effect to the water in the previous shot. It sort of looks like it works, and it’s lined up well, but something’s not quite right – I think this does give the shot a certain charm and creepiness, even if it’s not the most accurate piece of foley.

EQ for hitting sound
Distortion for hitting sound
Reverb for hitting sound
Noise gate for hitting sound

I decided after this to give the score some room to breathe as the film moves on to a shot of a brick wall. Because the piece is very sonically busy towards the start, I wanted to let it calm down for a couple of minutes. As the film takes us inside a building (possibly a hospital or a morgue), a slow, heavy breath recorded through a contact microphone on my neck comes up in volume, as if the camera is alive – maybe we are seeing the point of view of a real person, walking through the building, staring at strangers, breathing over them. Or perhaps this could be the breath of the unconscious people in shot, lying there as we watch them sleep:

This is actually a recycled recording that I used in my first piece for the course. There’s something about it that’s so strange – it sounds like a ghost breathing down your neck. Although I try not to reuse sounds, I found that this recording worked well with what I was trying to achieve, so I put it in the score. I did try to re-record it; however, I didn’t manage to get the same sound, and after a while I didn’t see the point in recreating a sound I had already recorded anyway.

When the scene reached the shot of the boy waking up and getting out of bed, I decided I wanted to add a new sound to make it seem as though the boy was being woken up by something. I recorded some layers of harmonics on my bass, which I then faded in to get rid of the plucking sound. This creates a sound similar to an EBow, which I would’ve used if I’d had access to one. Here is the end result of what I recorded:

This sound was achieved using the same plugins that I used for the guitars at the start of the piece – a stock distortion plugin and some reverb, with absolutely no amp simulations used.
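
For anyone curious, here is a minimal sketch of what the fade-in trick is doing (the sine wave is a stand-in for my actual bass recording, so all the values are placeholders): ramping the gain up from zero hides the pluck transient and leaves only the sustained tone, which is roughly the effect an EBow gives.

```python
import numpy as np

sr = 44_100                                     # sample rate in Hz
t = np.linspace(0.0, 2.0, 2 * sr, endpoint=False)
recording = np.sin(2 * np.pi * 220.0 * t)       # stand-in for the bass take

# Half-second linear fade-in: the plucked attack sits in this window,
# so ramping the gain up from zero makes it inaudible.
fade_len = sr // 2
envelope = np.ones_like(recording)
envelope[:fade_len] = np.linspace(0.0, 1.0, fade_len)

faded = recording * envelope
```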

Slightly after the harmonics come in, I began to slowly bring up the volume and filter cutoff of the LFO synth from the beginning of the piece, just to make the clip start to feel a little more busy and overwhelming. This was all to start building up towards the climax of this scene.

As the boy gets up and walks towards the camera, more sounds start to come in to increase the noise and busyness of the piece. I recorded a short clip of me sliding my finger up the neck of my bass, looped it, and then put it onto four different tracks starting at different times:

To match the reveal shot of the boy feeling the screen with his hand, I recorded some more guitars playing different chords all at the exact same time. I think this atonal noise works well when it is synced with the reveal – the audience doesn’t know the story of the film yet, and they don’t know who any of these characters are, but the unpleasant noises in the score let them know that something isn’t right.

I let this noise carry on until the end of the piece, and recorded some more nonsensical noodling on multiple guitar channels, as well as bringing back the banging sound from the crucifixion. I then cut the piece very abruptly, as this is where the title card comes in – and that was the end! I went back into the piece to edit a few levels and add little touches of automation here and there, but after that I decided it was time to finish working on it – sometimes it just feels like the natural time to stop.

Here is the unlisted YouTube link to the final result (I will put this at the top of my next blog as well):

https://www.youtube.com/watch?v=gzM0LkJWAM0

Starting my Score – Processes and Thoughts

When I started making my piece today, I didn’t really know where to start – I didn’t want to recreate the atmosphere of Werle’s original score, yet because I had already listened to it so much through analysing it I found it hard to imagine the scene with a new piece of music.

I decided to just push through this feeling of not knowing what to do: I found a sound I liked and went on from there. The sound that I chose was the ‘Metamorphosis’ patch made by Spitfire Audio, a company that makes software instruments and sample libraries, some of which are free. It’s very droney and quite breathy – it comes under their ‘Glass Piano’ category of sounds.

Metamorphosis plugin

Here’s how it sounded on its own:

I decided this sound was a good bedrock to build my piece from.

Having originally started my journey in sound as an electric guitar player, I’ve been really keen to involve this aspect of my musical vocabulary in the sound art I make for the course. Around 15 seconds into the clip, there is a big white flash of light that I thought could use an ‘explosion’ of discordant electric guitars. I layered lots of tracks together with the exact same plugins, but all panned in different directions to give a full stereo sound. To lead up to this explosion, I recorded one track of me playing a harmonic on my guitar whilst pushing down on the tremolo bar. I then used automation to fade it in and to pan it from right to left, giving it a sort of swelling effect:

The explosion of guitars comes in halfway through the ‘swell’ guitar track, and sounds like this:

Two of the tracks in the explosion are played on a conventional six-string guitar, whilst the other two are recordings of me sliding up and down the neck of my fretless bass. I wanted the guitars to sound very distorted, but not necessarily as though they were being played through an amp, so I took the dry signal of each track and just ran it through Logic Pro’s stock Distortion plugin.

Distortion Settings

I also added some convolution reverb to the tracks:

Reverb settings

I found that the heavy, distorted tones of the guitar were clashing a little with the softer tones of the Metamorphosis synth, so I added a very small amount of distortion to the synth and slowly automated it in to help them gel together a little better.

Metamorphosis distortion
Automation for the distortion

After the explosion, I recreated the sound of the reels of film running through the projector, much the same as in Werle’s piece. However, whereas the original uses an actual audio recording of a machine playing back the film, I decided I wanted to recreate a similar, but not identical, sound using a synthesiser. To do this I opened up the Retro Synth plugin in Logic and set the oscillator to generate only white noise. I then used an LFO with a reverse sawtooth wave to create the rhythmic sound of the machine. Here is what it sounded like:

Retro Synth settings
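
To illustrate the idea behind the patch – this is my own sketch of the technique, not Logic’s actual Retro Synth code, and the LFO rate is a guess – white noise amplitude-modulated by a reverse sawtooth produces exactly this kind of rhythmic, mechanical chuffing:

```python
import numpy as np

sr = 44_100
duration = 2.0
lfo_rate = 8.0                                  # guessed clatter speed, in Hz

t = np.linspace(0.0, duration, int(sr * duration), endpoint=False)
noise = np.random.uniform(-1.0, 1.0, t.shape)   # white-noise oscillator

# Reverse sawtooth LFO: jumps to full volume, then ramps down each cycle,
# giving the repeated bursts of noise that read as a machine running.
lfo = 1.0 - (t * lfo_rate) % 1.0

projector = noise * lfo
```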

I recorded another big ‘explosion’ of guitars to match the flashing black and whites of the film as the imagery gets more intense, and this time also recorded a few tracks of nonsensical noodling to build up a longer wall of sound to last until the children’s animation starts playing:

I particularly like the fretless bass, as it sounds almost like a low, rumbling siren.

I then let the Metamorphosis and white noise synths carry on uninterrupted whilst the animation plays out. I gradually lowered the cutoff of the white noise synth so it would slowly fade out to leave the drone by itself, which gives a creepy atmosphere as the stopmotion film and footage of the spider play.

This is as far as I got today, and I think I’ve made good progress. I may only be just over a minute in, however now that I have a more concrete idea of the concept and sound of my piece it should be much easier to get going on it tomorrow.

(WordPress won’t let me upload the video of my progress so far as the file size is too big, so have linked it below as an unlisted YouTube video.)

https://www.youtube.com/watch?v=2LiILSalXTw

Rescoring Films – Researching the Practice

As this is my first time rescoring a film, I wanted to look at a famous example of this practice – namely, Giorgio Moroder’s 1984 rescoring of the classic German film ‘Metropolis’, originally released in 1927:

https://www.youtube.com/watch?v=CD-2I2BoSEg

I watched Metropolis a few years ago when I was 15, and couldn’t quite remember the original soundtrack so I looked it up on YouTube and had a quick listen:

The original score is composed entirely for orchestral instruments (not at all surprising for the period), and I found it a little jarring, as I don’t feel that the timbres of those instruments necessarily match the dystopian aesthetic. Obviously there wouldn’t have been synthesisers available when the film was originally scored, and I’m sure at the time the music would have seemed less disjointed from the film; however, I conceptually agree with Moroder’s thinking that the aesthetic of the film suits electronic music over classical, orchestral music.

Moroder’s rescore also came with a new restoration and edit of the film, as the original had been cut considerably at the time of its release, against director Fritz Lang’s will. It features many famous singers of the time, including Freddie Mercury, Bonnie Tyler and Adam Ant. It’s vastly different from what I was expecting – I had anticipated a sort of slow, ambient soundscape, with a few bits of synthesised foley here and there. However, having heard Moroder’s other work plenty of times in the past, I should’ve known better. Of course, there are parts of it which do follow my original expectations, such as the sequence around the 2:00 mark, which has a good amount of metallic-sounding foley. I really like the bell-like sound that ticks with the clock around the 2:45 mark – it’s a nice addition that helps create a certain industrial aesthetic.

However, there are definitely parts of the rescore that I have a problem with. Moroder puts a lot of emphasis on synthesised pop songs that don’t seem to add anything useful to the film at all. I really didn’t enjoy hearing the vocals on these songs, as I feel having lyrics somewhat takes away from the actual film being shown, and it’s definitely these moments that make the rescore feel dated and disjointed from the film. I think perhaps the problem is that Moroder’s score is too intrusive. In my opinion, the score would have been much stronger if he had stuck to instrumental tracks, as the vocals make the film seem too separate from the score – the score should always serve the film, and not the other way round.

A lesson to be learnt from this is to keep my piece relevant to what is being shown on screen. This doesn’t necessarily mean adding sounds for everything that appears, but the score should flow nicely and make sense with what is being shown. It’s also important to make sure the general sound of the score matches the aesthetic of the film. It doesn’t have to be period-correct for when the film was made (in terms of instruments), however it shouldn’t sound so separate that it feels obvious that this is a rescore – a separate project from the original film.

Analysis of the Original Opening Score for ‘Persona’

Before rescoring the scene from ‘Persona’, I thought it’d be a good idea to watch the original a few times and look out for aspects of the score that make it work well:

https://www.youtube.com/watch?v=s8TJ2d7-1e8

The original score was composed by Lars Johan Werle, and I’d say it’s more of a collage of different soundscapes coming one after another than a particularly linear piece, which I find interesting, as I have never really composed anything that cuts so quickly between different ideas. There are elements of musique concrète interspersed with a haunting string section that comes in and out of the piece. I particularly like the build-up of strings right at the end – they are a good way of introducing tension. The audience doesn’t yet know who the faces on the screen are, but I feel the soundtrack does a good job of suggesting that something sinister is afoot, and, as we learn later on, the image of the faces merging reflects the plot of the film towards its end.

I also enjoyed the sound of the projector near the beginning being constantly interrupted by strange noises that sound almost like car horns – I’m not entirely sure why, but I found it very disorientating. The sudden cut to a completely different section when the stop-motion film comes in is also rather effective, and I think the percussion makes that clip seem rather gimmicky, in a way.

Overall, I find the score to be a really strong piece of composition, and when looking for elements that I didn’t particularly enjoy, I really couldn’t find anything to fault. I think it’s going to be a challenge to rescore this scene and produce something that I feel holds up against the original; however, I’m still excited to work with this clip and perhaps produce a score that’s more relevant to today’s sonic climate.

When I was trying to research Werle’s process, I couldn’t find much about this film, which is a shame, as I wanted to learn more about how he made it and what certain sounds were. However, I did come across the fact that this was one of only three films he ever scored, which I find really impressive considering just how good this score is – it’s also a shame he didn’t compose for more films. I had never heard of Werle before starting this project, and I think I will watch the other two films he scored (‘Hour of the Wolf’ and ‘The Island’) when I have some free time, as I would like to compare them to his work on ‘Persona’.