Monday, September 2, 2013
"The Human Element" --On Dave Grohl's Sound City and Capturing Music
But is it really the human element?
Sound City, Grohl says, started out as an attempt to tell the story of a sound board: the Neve console. This board lived in Sound City, a shitty-looking studio in California where a lot of great records were made (Neil Young, Nirvana, Fleetwood Mac, etc.). Despite this reference to "the human element," we see that it is actually the nonhuman elements that are the condition for the possibility of catching this "human element." The board, combined with the room (which "no one designed"), happened to produce an amazing drum sound. For anyone who has never recorded music, drum sound -- live, mic'ed drums, that is -- is probably one of the hardest things to capture on an album. In fact, when I listen to a local band's record compared to, say, Tool's Lateralus, one of the main tells that the band is semi-professional, or at the very least producing the album themselves, is the drum sound. Drums on an album need to sound full, round, and, on a rock album, BIG. Yet it is the technology -- the room and the board -- that the film posits as the reason for the good drum sound. The "human element" is continuously linked to the capturing capacities of the technology.
Grohl and co. are careful, however, to point out that the technology is not to be relied on -- one still needs good songs and good musicians who practice. Indeed, this point emerges through the latter half of the film, which takes up the debate between analog tape and digital tools. The way the musicians talk about analog, it was "no frills," straight onto tape. You "had" to practice and to do multiple takes -- you couldn't simply "fix" something. One of the musicians remarks that he once heard a younger musician say, "you don't really even have to practice anymore -- you can just put it into the machine and cut it up."
It's not that you cannot cut tape. You actually have to cut tape in order to splice different takes together. However, the musicians make a good point for lifetime musicians like me: you really can't just rely on the technology, whether it's a guitar, Pro Tools, or weird effects. A good song, a good cut, a good album is not just the technology, but the way in which the technology interacts with 'the human'. In some ways, digital tools can be used to master the music (pun intended) rather than to capture the music happening in the room. There is an element of chance, an 'event' feel, that happens when you record live -- on tape or digital.
Big Shoals' debut album was recorded entirely "digitally," but we played the underlying tracks "live" in the studio. Lance had previously tried to record without a full band, and it didn't sound "right." It was good, but something was missing. On this album, we've "captured" rather than mastered the music. It's a true collaboration between us, our instruments, Ryan our sound engineer, Pro Tools, and the rooms in which we recorded. I'm not going to lie -- we've had to "punch in" a few notes where we missed them, but the overall feel of live playing still lingers in the mix because we were playing the damn thing live. We also did multiple "takes" of certain solos and parts. Lance would play several takes of a lick, and there would be something in one take that set it apart from every other take -- an event captured.
Much of the music demonstrated and played in the documentary Sound City was recorded "live" in the studio. Rage Against the Machine tells how their debut album was recorded "like a concert" where they invited friends in to watch them play. They said they got over half the album done in one night. And if you've listened to this album -- there's something there, something captured.
As more and more artists -- particularly pop artists -- rely on technology in order to master their already-written and composed songs, we lose what Roland Barthes once called "the grain of the voice" (although it's not just the voice, but any note on any instrument -- perhaps its timbre). We also lose the "event" character of music. It's not that everything on an album has to be done all at once, but when the collaboration is distributed across people, time, and space, I imagine certain musicians divvying out their music like an assembly line. We call this music "mass produced" because it all "sounds the same." Obviously, in the western scale, there are only 12 'notes', so I am not saying that musicians are playing the same chord progressions. I mean that there is no sense of a "capturing." The voice that gets captured is probably weak, uninspired, and a little out of tune, and it gets doctored until we can no longer hear the vocal cords. Instead of working to get that note 'right', to capture a moment on tape or in bits and bytes, the note is played and then, after the fact, reintegrated into the song.
Am I merely being nostalgic? No. I do not long for the days of analog tape as if somehow that was always better. However, I am suggesting that there is a difference between the capturing of an event (even just one note) and being a "master and possessor" of notes and timbres. I'm suggesting that if we lose that element of chance produced through the collaboration of the human and nonhuman, we begin to colonize music -- to make it more human in the most Humanist of ways. To be a posthumanist musician actually means letting the nonhuman become actors (or actants) themselves rather than wielding them as 'tools'. This is why, even though someone like Brian Eno uses primarily digital tools to make music, one could see him as a "posthumanist musician": he introduces chance into his compositions -- a combination of skill and chance makes a music event.
"Pro Tools." It's in the name. It's a professional tool -- we wield it like a weapon or a diamond cutter, carving out the excess in the name of perfection.
In Sound City, the exception to the rule of analog vs. digital is NIN -- Trent Reznor. Reznor, according to Grohl, "uses technology as an instrument, not as a crutch. He doesn't need it." Technology as an instrument rather than a tool. "Instrument" not in the sense of "instrumental," but in the sense of a musical instrument. A musical instrument is not a tool that a musician uses. A musical instrument is a collaboration between the human and nonhuman. Things happen when you play a musical instrument that you might not have expected. I'm not simply talking about "jamming" here; I mean the way we play an instrument. In the moment of putting your fingers to strings or keys, even if it's a song you've played a million times before, maybe you hit a chord harder than usual or do a little run that comes out of nowhere. It's not "magic," but it's a collaboration between the environment, the instrument, and you. It's a subtle difference, but it's the difference that makes a difference between a musician and someone playing music. "Musicians" know that each performance is a unique event in which they become musicians by participating in every performance as one actant among many.
I'm far from the first academic to think theoretically about musical environments. Thomas Rickert's book Ambient Rhetoric shows how Brian Eno is a potent illustration of what he means by 'ambient rhetoric'. Rickert writes,
"In this process, not only do the boundaries between music and environment blur and blend, but the locus of creation is dispersed to include the environment, which thus grants an active role to the technological apparatus as an element within the whole material surroundings" (Rickert 110).
This is definitely in part what I am trying to get at with my reflections on Sound City. However, in Sound City, as opposed to the example of Eno, the other really determinate actants are not only nonhuman instruments, technologies, and spaces, but other musicians. When you play (and record) with other musicians, songs emerge in their performance/recording.
In another article by Thomas Rickert and Michael Salvo, "The Distributed Gesamtkunstwerk: Sound, Worlding, and New Media Culture," the authors discuss "Garageband" -- the Mac's pre-installed music software. I used Garageband myself when I owned a Mac, and I found it to be a powerful tool for making one's own music. Rickert and Salvo argue that Garageband helps enable what they call "worlding."
"Worlding, then, carries this double sense: It is the aesthetic realm that a visual musical work invites us to both enter and immerse ourselves, and it is the constellation of production pathways and inputs--people, communities, technologies, and networks--that are simultaneously evoked with each aesthetic world" (Rickert and Salvo 313).
The authors point out that, in addition to recording traditional instruments, the program comes with preloaded beats, sounds, etc. for people to (re)mix. Thus, Garageband makes everyone a (potential) composer. Garageband itself, much like the "digital tools" that Grohl refers to when speaking of Reznor, becomes an instrument: "software is no longer limited to combining or transforming pre-existing content; rather, it produces content itself no differently than a musical instrument" (Rickert and Salvo 315).
In the future, Rickert and Salvo speculate, the interfaces of these digital tools will become more affectively pleasing, more like a musical instrument. They argue that this will mean "sound" becomes more important in composing. "Sound" is in some ways different from 'music', but inseparable from music as well. We just spent quite a lot of time talking about "drum sound" and how important it is to capture that feel.
One question is whether or not these digital tools allow one to make new sounds, or simply remix premade, poorly composed 'stock' sounds. We already hear a kind of levelling of sound happening in the production of recent pop music. Perhaps Rickert and Salvo are right that it is through these DIY tools that new sounds will be produced -- new soundworlds for songs to exist within.
But we also need to ask whether the sound, the song, the soundworld, the environment is poorly or well composed. Rickert and Salvo, although they use the example of some of the greatest musicians of the second half of the 20th century (Hendrix, Yes, The Flaming Lips), are more interested in the potential for Garageband and other tools to allow nonmusicians to make music -- or at least to make sound. These sounds and songs will also enter into the digital network, where musicians can receive feedback (on sites such as ReverbNation or Bandcamp -- Byron Hawk has written about music networks in his article "Curating Ecologies, Circulating Musics: From the Public Sphere to Sphere Politics," in Dobrin's edited collection Ecology, Writing Theory, and New Media).
These points sit alongside a concern that underlies this entire post and my life as a musician: good music. Now, everyone says that music taste is "subjective," but I think that even within the recent theoretical milieu of academia, we have abandoned such separations of 'subject/object'. Of course I want people to make their own music (after all, it's what I'm doing), but I just hope that democratization and public "prosumerism" do not mean levelling.
And again, I don't think it does. While there's going to be a lot of shit produced, a lot more great music can now be accessed easily through Spotify, Pandora, Bandcamp, ReverbNation, etc.
The trick now is to figure out how to get people to realize they have access to great music. It's usually even free! Yet when I ask my students, for example, what they listen to, the majority of it is not local or semi-local, or stuff they found via Pandora, but whatever happens to play on the radio or at the club.
I'm starting to sound cranky -- and I am.
Maybe this whole post is simply an elaborate academic ruse to privilege a certain type of music making over others. Maybe this entire time my real target is all the heartless (*sigh* such a cliché, outdated metaphor) pop music and corporate rock that leaves nothing to chance and simply leaves a bad taste in my mouth. Maybe all that shit about Miley Cyrus's 'twerking' scandal, with no one saying anything about the fact that she didn't sing well (and Thicke was even worse), just got to me -- particularly after watching such a labor of love as Grohl's documentary, Sound City. Maybe I'm tired of people taking shitty songs and turning them into hits by spending an enormous amount of time on their production. I'm not trying to be a pretentious dick. I'm far from advocating that an older technology is superior and more true to authentic music making. Nor am I trying to say that all popular music is bad. Shit, who knows, maybe I am saying that despite myself. Regardless, there's some DIY music that's bad too.
See. This is what I'm talking about. I can't extricate my involvement in music from any academic reflection. This is not what I'd call a 'sober' analysis of the issue. But hey, it's just my blog.
I'll end with this:
"The human element" turns out to be the element of surprise at one's own collaboration and participation in a musical event composed of other musicians, technology, instruments, and dingy rooms that just happen to make drums sound fucking badass.