Inhuman After All
Holly Herndon talks artificial intelligence, artistic necrophilia, & her bold new 'PROTO'
You can call Holly Herndon a doctor now. When I meet the experimental composer in lower Manhattan in April, she’s just arrived from the Bay Area, where she successfully defended her dissertation at Stanford’s Center For Computer Research In Music And Acoustics. Now that her PhD program is behind her, Herndon has moved from California back to Berlin, where she got her start experimenting with club music as a high school exchange student. She lives with her partner and frequent collaborator, Mat Dryhurst, and the two have been raising an AI baby alongside programmer Jules LaPlace. The baby’s name is Spawn and she’s a little over two years old. Like human babies, she’s learning through mimicry, and with the help of Herndon’s friends, she has become a singer. Her existence is the impetus for Herndon’s bold new album PROTO.
Every Holly Herndon album comes with a concept. Her first, Movement, was inspired by the notion that “the laptop is the most intimate instrument.” Herndon saw hers as a mediator between her IRL and URL existences, a digital space capable of encapsulating every aspect of her personality, every one of her many interests. The experimental collection of songs was made up of sounds of bodies in motion — a cut-and-pasted sigh, a hand brushing up against another.
Her second album, 2015’s Platform, furthered that narrative; using sousveillance technology developed by Dryhurst called Net Concrete, Herndon spied on her digital self. She captured laughter on Skype calls, the ping of a notification, the weird dissonance that happens when you have too many tabs open and a couple of pop-up ads start playing at the same time as a YouTube video. During live performances, she and Dryhurst would pull up the profile pages of audience members who said they would be “attending” on Facebook and project them onto a screen. In surveilling herself and her audience, Herndon brought up questions that go largely unanswered when we use our smartphones and laptops and Apple Watches: How do I define privacy? Do I value my personal data? Am I being watched?
Of course, the album was inspired in part by the incendiary summer of 2013, when whistleblower Edward Snowden leaked National Security Agency files, which exposed a state-funded citizen surveillance program called PRISM, among other things. But Herndon was also interested in emulating the way a day lived online sounds. She processed various samples of her own voice, vocal synths, and snippets of audio captured by the Net Concrete system to build a choir. The album even included an ASMR track, created with artist Claire Tolan, which preceded the ASMR boom we’re currently experiencing in popular culture and advertising. “What I was really trying to do is capture the sound of the internet that yes, does involve kitsch, and yes, does involve jingles, but also is a really terrifying place and a really emotional place. I wanted to not flatten an internet aesthetic into something that’s just kitsch,” she explained in a lecture at Loop in 2016.
Platform was a critical success, and in the years since it was released, conversations surrounding online privacy have become more urgent. From the Cambridge Analytica scandal at Facebook, to the revelation that apps track users’ locations without notifying them and then sell that information, to Google’s eerie targeted advertising, escaping the data mining discourse is impossible. On one hand, giving our data over to platform capitalists feels icky, but on the other, these new technologies make our lives easier, they help us keep in touch, they fulfill our desire to connect. On Platform single “Morning Sun,” Herndon sings the words “I belong” over and over again atop a shuddering drumbeat.
With each successive album, Herndon has brought more collaborators into her process. For PROTO, she assembled an electronic pop choir that included both human and AI voices, with Spawn featuring throughout. Herndon grew up singing in church in rural Tennessee, and the voice has always been a critical instrument in her practice. In the video for “Eternal,” Herndon lets her audience in on her creative process. We look on as a group of musicians help train Spawn by singing into a microphone, teaching the neural network how to reinterpret vocal melodies.
One of the tracks on PROTO is pulled directly from a live training. It’s called “Evening Shades,” and it’s an arid, glitchy call-and-response choral performance. It comes right after “SWIM,” which features Herndon’s voice at the forefront as she sings about needing to belong, a craving for community that has followed her throughout her discography. Later, on lead single “Godmother,” Herndon collaborates with Indiana producer Jlin, whose 2017 album Black Origami turned heads when it was released. Herndon and Jlin fed Spawn the stems of their track and she improvised a vocal component; the result is a dark and propulsive song driven by the voice of an inhuman entity.
The cover of PROTO also hints at the collectivism that made the album possible: Herndon’s face is distorted beneath the image of various other friends and collaborators, a composite that aims to show the number of people who worked on the project. Herndon saw PROTO as an opportunity to showcase the ways in which AI can participate in human-led creative projects. Spawn isn’t replacing anyone on this album; she’s singing with them. This contradicts the timeless fictional narrative that “in the future, man’s creation will become too intelligent and it will turn on us.” It’s a story we’ve heard over and over again, from Mary Shelley’s Frankenstein to movies like The Terminator and I, Robot.
As Herndon makes clear over the course of our conversation, these new technologies don’t pose any kind of outsized threat to our livelihoods — it’s the corporations who own the technology that do. By collaborating with Spawn, she presents us with a less dystopian vision of the future. Though the ideas fueling this project are complex, Herndon is really good at explaining them. She’s kind and affable and she laughs a lot, even when she’s bemoaning music that traffics in nostalgia or the intentional obfuscation that makes understanding new technologies feel like an impossibility for those of us who don’t study them. Herndon is eager to give credit to the thinkers who inspired her work on PROTO, as well as the friends who perform on the album, and I walk out of our talk with a few new entries on my reading list. Read our Q&A below.
STEREOGUM: What are some of the challenges of creating a conceptual project and then having to present it as music that a label will be able to push?
HERNDON: I have a really weird practice. I basically have a research-oriented art practice that manifests itself through pop albums. There’s no real trajectory for that, there’s no real model for that where I’m like, “OK, this person did it in this way, I can follow in those footsteps.” Every album is a collection of research projects, essentially, and so many of the pieces have their own stories that are their own kinds of things, but then they live on this album together. There’s always this fine balance of getting the concepts across but also just allowing the music to be music and just letting people enjoy it viscerally. And I think sometimes, because I pack so much concept into stuff, I forget to talk about the actual joy of the music-making and that it is hopefully music that people can just hear and enjoy without necessarily knowing all the ins and outs of it. I’ve said this in the past, but I kind of see pop music or albums as a Trojan Horse that I can pack all sorts of different ideas and concepts and things into that can jump out once it’s in people’s homes.
STEREOGUM: In one of your lectures you said pop music is a “carrier signal.”
HERNDON: That’s kind of what I mean with the Trojan Horse. Pop music, it has this form that is relatable or it can travel in a way that maybe some sort of abstract form might not be able to. It might not be approachable in the same way, but you can still pack all these kinds of ideas and things into it in a way that, as a carrier signal, it travels further because people are less turned off by it than something more abrasive.
When I was younger and playing in the noise community I was like, “Oh, I must make the most harsh and undecipherable music to be cool” or whatever. I would try to create this wall, a barrier between the audience and myself. This “you just don’t understand” kind of thing. I don’t know, I feel like I kind of grew out of that. I’m not trying to create some sort of barrier. If I can create inroads for people … it’s not about watering things down or trying to dumb anything down by any means. It’s about creating avenues or passages for people to come through to meet me in my weird thoughts and my weird world, you know?
STEREOGUM: Platform was about public life versus private life, social media, and the surveillance state. On this new album you’re using an actual AI entity to create music with friends. How do you see PROTO as an extension of Platform?
HERNDON: I think one of the takeaways from Platform was: Is every gesture that we make at home potentially a public gesture? Or potentially a gesture for an unnamed viewer? I think what PROTO asks is: Is every gesture becoming part of a training canon for an unknown artificial intelligence? [Are we helping] to train some system that we might not ever engage with? So I see this as part of a continuum.
A big part of Platform was talking about the way that the internet has formed into these kind of platform capitalist silos, almost, and AI takes that and puts it on a fast track. Essentially, to have really functional AI or really powerful AI you need a combination of data and processing power, so, like, GPUs, and the institutions and the people that have access to that data are the platforms that we engage with on a daily basis and we give our personal and digital selves over to for free all the time because we haven’t figured out how to value our digital selves as a natural resource that we all share. Instead, we’re just constantly giving [data] to Google and Facebook for free all the time and so of course [governments and digital platforms] are gonna have the most sophisticated models and the most sophisticated AI moving forward.
It’s purposefully opaque. They don’t want people to have a full understanding of how much they’re being tracked and how much they’re sharing. They’re incentivized to make this conversation confusing. It doesn’t mean that people aren’t smart enough to get it or people aren’t writing about it, it’s deliberate.
STEREOGUM: Right, and so many people have the attitude of, “Well, who cares if my apps are tracking me? I’m not doing anything illegal.” The implications are bigger than any one individual.
HERNDON: Of course, that’s a huge part of it: How do you value the data itself? I think a lot of people have thought, “Well OK, I’ll feed this system, I’ll feed Google or Gmail or whatever all of my data because I like the Google products. I like this convenient email, I like this calendar, this is serving me well so I like the tradeoff.” But the thing is, this data that you’re feeding these platforms is not just going into making those products that you use better, it’s going into a whole host of other things that you might never encounter and your labor, your data, your IP, your piece of yourself, your digital self, is going into that other thing that you may never come in contact with.
Shoshana Zuboff writes about this in her new book [The Age Of Surveillance Capitalism: The Fight For A Human Future At The New Frontier Of Power]. It’s actually perfect timing [for PROTO] because it came out a couple of months ago and it’s this beautifully researched economic overview of how we go from this platform capitalism and how AI takes the issues of this time and puts it on crack. But she wouldn’t say it like that. [Laughs]
STEREOGUM: In the press release for PROTO you emphasize that this album attempts to remove AI from the tired sci-fi narrative that these entities are going to dominate and take over. But one of the practical fears of AI is the fear of automation, especially when it comes to labor. How is your vision of what AI can do for the future not totally dystopian?
HERNDON: Fully automated luxury communism unit? [Laughs] I feel like we’re really good at criticizing and saying what we don’t want, but we really struggle to say what we do want. That’s kind of the hard part. My view is, “How can computers free us up, give us more time to be more human?” Not, “How can we become more machinelike?” That’s kind of what we’re trying to do with this album. The computer can make so much [possible]. It’s really good at repetition, it’s really good at playing drum beats and things like that. If the computer can take over this part of the labor it allows us as a vocal ensemble to enjoy each other, to celebrate each other onstage. We’re kind of trying to create a counter-narrative. I think if we get too stuck in this kind of ’90s, cyberpunk vision … in some ways we’re kind of relinquishing our agency over to the powers that be to just kind of do whatever they want because we’re not providing any alternate version. We’re disempowering ourselves if we can only come up with criticisms.
STEREOGUM: Was using Spawn on the record almost an experiment in improvisation?
HERNDON: I felt like she was improvising, totally. That’s the approach we took. A lot of people who are working with AI music right now are working with composition, so the AI is taking on a composer role. And we took a really different approach, and the AI was taking on a performer role. So I’m still composing and I have an ensemble of humans and inhumans, and they perform, and they improvise, and then that informs what goes into the final piece.
It’s not fully automated at all, and that’s deliberate. I’m not trying to write myself out of this process. I’m trying to find a symbiotic relationship to this thing that’s — to expand a creative capacity or learn something about the performance that Spawn gives instead of trying to write myself out of the process. Also a lot of the automated composing stuff that we’ve seen, it’s trained on MIDI data, or it’s trained on some canon of the past, and you can kind of rewrite in that style forever. That creates a sort of artistic cul-de-sac where you’re stuck on whatever training data you’re using. So if I’m training on Bach, and I’m like, “OK, now write Bach-like music forever,” then that’s really not interesting. We’ve already had Bach. That was a long time ago, and now in 2019 we don’t need to be rewriting Bach. We need to be responding to our current conditions and our current politics. So I really don’t like this sort of automated composing using past canon approach. I think it’s boring, and I think it’s also problematic.
STEREOGUM: How so?
HERNDON: I think “retromania,” as Simon Reynolds calls it, is a problem because if we can’t hear our present we can’t imagine another future. I think we get doomed to this kind of repetition of the past. It’s kind of like that Mark Fisher quote where he’s like, “It’s almost easier to imagine death than life post-capitalism.” Something like that. If you can’t hear the present and you can’t hear the future, how can you imagine anything other than what you’ve had in the past. I think music plays a really big part in creating a vision or momentum towards human development. Maybe that’s really idealistic, but…
STEREOGUM: Well, it kind of echoes the politics of today, which are so driven by nostalgia.
HERNDON: I know! I fucking hate that. It drives me crazy. I don’t want to live in 1960. That sounds like a nightmare! I also don’t want to LARP the radicalism of the 1960s because guess what? It was radical then, but it’s not radical today. I can’t put on the outfit of then and pretend I’m radical. That’s bullshit. That’s not radical. If you want to be a radical today you have to be dealing with the current climate and the current conditions and pushing things forward with all of the information we have access to in 2019. You can’t LARP the past. That’s so bizarre to me. Music has a huge problem with that. That kind of regurgitation nostalgia.
The idea that we could always be training an unseen AI that’s capturing our every move plays into it, too. When you’re training on compositions of the past, there’s no opt-in for that. The ethics are really dubious. If we can just kind of hoover up the sound of a community or a composer or someone from the past and then recreate it without their permission or giving them attribution, I think there’s gonna be some really hairy IP questions along with that.
That’s something I’ve encountered with the voice model. I made a voice model of my own voice for the Jlin collaboration. I mean, right now it’s kind of janky still, but in 10, 20 years it’ll probably be much better. What if we modeled [Aretha Franklin’s] voice and gave her a whole new canon of music to perform? Lyrics and ideas she maybe never would have approved of? What does that mean for our future if we keep reanimating our dead to entertain us in the future? It’s almost like we could take a sample and the sample could sprout legs and run off in a new direction. You know what I mean? It’s like musique concrète, which is all about the decontextualization of sound from its environment. That sounds really beautiful, but it also brought a lot of problems with sampling culture.
We have all kinds of laws in place around that and people still get fucked. You still have vocalists whose voices we’ve all heard a million times who’ve never received a penny. You know, this shit happens all the time. And this is for something that we have laws in place for, which is sampling. When it comes to neural networks being able to extract the rules of composition or extract the essence of a voice or something like this, we don’t even have the legal framework for that. So, I feel like it opens up a whole Pandora’s box of attribution and ownership and what’s even appropriate to do. We don’t know how to share cultural heritage in a way that’s like … cool. [Laughs]
STEREOGUM: I like the Aretha Franklin example because of how popular hologram performances are becoming.
HERNDON: Exactly. That person can’t opt in.
STEREOGUM: When the Prince hologram performed with Justin Timberlake at the Super Bowl everyone was like, “Prince is rolling in his grave!” But how much of that really is an ethical question?
HERNDON: It is an ethical question. And that’s Prince performing Prince songs, but now we can give Prince entirely new songs that maybe Prince would’ve hated. And what does that mean for performers today if they’re replaced by some weird perfected hologram from the past? I think it’s ethically dubious and it’s culturally impoverished. There’s a Miles Davis quote from the ’70s I think, maybe the ’80s, and he’s talking about hip-hop so I disagree with him on this call, but he calls [sampling] “transformative appropriation.” He’s basically saying that sampling in hip-hop is “artistic necrophilia.”
It’s such an amazing quote, and he says “every generation must create a new sound for themselves” instead of mining that which came before. And I think what he gets wrong is that hip-hop actually does that and sampling culture in hip-hop created such an important new voice for that genre and generation that it’s now become the dominant art form. But if you think of that [quote] applied to the voice model or some of this AI stuff it’s almost prophetic — this kind of weird artistic necrophilia that we have, that kind of dooms us to repeat our past. I find that really depressing.
STEREOGUM: You drew on various folk traditions while writing this new record. How does that play into nostalgia?
HERNDON: I’m not trying to recreate something in its original form. For example, “Frontier” starts out as a kind of Sacred Harp hymn, but then it quickly turns into … whatever it is. I’m not trying to reinvent everything all the time. Music is a shared language that has been developed by my foremothers and forefathers, and of course I’m tapping into that language. I’m tapping into a dance music heritage. I’m tapping into popular music, but it’s about building on something rather than just regurgitating something. So when we were looking into the development of artificial intelligence, we were looking at the evolution of human intelligence and the role of music in that, and the way folk music traditions sprang up around the world as communication modes for hunting and ritual practices and things like that. We’re kind of looking at this shared human project which is human intelligence. It’s kind of this beautiful shared human project and artificial intelligence is kind of this next phase that’s a part of us.
STEREOGUM: I’m interested in your use of folk music because traditionally, folk music is oral history, a form of passed-on storytelling that is meant to be carried on. Most folk traditions come from groups who maybe didn’t have written language or weren’t able to write or even read.
HERNDON: Yeah, it’s a process of mimicry and it’s all a process of this human cultural development and brain development. Language is a voice. It’s like this shared cultural thing that we learn through mimicking one another. So that’s kind of what the process has been with Spawn. It’s this process of mimesis where I’m training her on my speech and she’s mimicking me and trying to make sense of what I’m doing, kind of like a baby. But that’s also how we as a culture develop language and this human intellectual project we have.
STEREOGUM: You must have such an attachment to this little baby computer. It’s kind of sweet to think of her as a child.
HERNDON: Totally. We’re trying to think of her as a child but not anthropomorphize her at the same time. We don’t think of her as a human child, we think of her as an inhuman child, which I think is weird for people at first, but it’s like “child” because she only has access to the information we provide, she doesn’t have any context. It’s just like this baby who can only see what’s right in front of her. A “child” because she requires a community to raise her and it’s important that we instill our values in her early on. And “inhuman” in that it’s not a human intelligence, it’s not the way that our brain works, she doesn’t have a physical body, she doesn’t have our sensory information.
Looking at someone like Donna Haraway, who talks about this concept of kin, she’s been writing about kinship with animals. I love her. I first came across her work with A Cyborg Manifesto. [She writes about] using the cyborg as a metaphor for how women can be liberated from the kind of gender expectations of the past. But in her most recent work, she’s writing about kinship and how we can have relationships with different intelligences that help us understand more about ourselves and our own intelligence and seeing the human as part of a greater network of intelligences and beings on this planet. Plant life, animal life, intelligent computer systems and human systems, all living together, sharing this planet together. We tend to think of human intelligence as this apex intelligence on top of everything else, and maybe it isn’t the apex intelligence, maybe it’s just one of many intelligences. It also gets into things like climate change and sharing the planet with other intelligences because we’re not the gods of the earth, you know what I mean?
STEREOGUM: In the press material for this album, humanity is referred to as an “archetype,” which interests me because I only think of archetypes as existing among human beings. Like, the high-powered business guy, or the cruel father–
HERNDON: Or the prodigal son. Yeah, Katherine Hayles writes a lot about this too. She’s awesome. I got into her book, How We Became Posthuman, many years ago when I was writing at Mills, and in her most recent work, she’s writing about cognition versus consciousness. So there’s all of these cognitive systems all around us, then there are various degrees of consciousness. I like thinking of traffic lights as cognitive systems that don’t have a consciousness. You know? [Laughs]
STEREOGUM: With this project you’re making the labor that goes into creating these AI entities visible. A lot of people don’t understand that AI is only intelligent because we teach it what we can already do. That sounds obvious but–
HERNDON: No, it’s not obvious to people — and that’s deliberate as well. It’s like the whole aesthetic of tech these days makes invisible not only the human labor that goes on behind it but also the environmental destruction that goes into it. Things appear really glossy and simple on a screen, but there’s some electricity-guzzling data center in the background somewhere, just hoovering up resources to process this thing you’re doing online. All this kind of stuff is glossed over and made hidden, and the human part of that for sure. There’s no attribution system in place for that. I was saying earlier that you have all this human labor and human intelligence that goes into training this model, and then the owner of that model is the owner of that model, rather than all these other people who did all this work that should be acknowledged as part of that model.
That’s why we were trying to create a counter-narrative to that. You can hear the people training. Instead of using MIDI data and compositional form, we’re using sound as material and training on the ensemble’s voices and you can hear them in the sound, you can hear the people, you can hear the community that trains Spawn in Spawn herself. It’s not like trying to erase — like, “Oh, look what she did all by herself!” kind of thing. That’s kind of the dominant narrative that we have. It is a collective project.
When I can kind of do a DIY version of something, it gives me that feeling that I can reassert some agency into something, I can understand something. And of course it’s not a David and Goliath thing — Spawn’s never gonna compete with Facebook models. It’s just different scales of processing power. But being able to do things yourself means you can understand the mechanics of it and have a really informed idea of what AI is capable of, of what’s bullshit and what’s not. There are a lot of press releases out there that are like, “AI!” I’m really not trying to do that. It’s still very rudimentary. That’s why we use the child metaphor. She’s not like this glossy, perfect Svedka Vodka thing at all. [Laughs]
STEREOGUM: One day Spawn will grow into a sexy robot.
HERNDON: No!
STEREOGUM: You worked with so many people on this album but the project is still called Holly Herndon.
HERNDON: I’m just megalomaniacal. [Laughs] This has been a question I’ve had for a while. It made total sense with Movement because that was just me in my dorky studio figuring this shit out. And with Platform it already started to get complicated because I was collaborating with people, mostly over the internet. And then of course it gets even more complicated when I have this ensemble, and Mat [Dryhurst]’s more heavily involved, and there’s Jules LaPlace who’s our developer on Spawn. So it’s difficult. If I change the name I have to kind of start over with the whole [music] industry side which is annoying. So I think the best way to handle it is to just try to be as honest as possible with everyone’s involvement and try to shine light on their work as well.
Every one of the ensemble members has their own music project, so Google them all and go to their SoundCloud. We did a show in Berlin for CTM last year and for the opening of the show, instead of having a normal opener, the ensemble was spread all around the venue, and it was a venue with balconies and stuff. Annie [Garlid] started and played two songs, and then the spotlight would move around, and Marshall [Vincent Garrett] sang two songs, and then Albertine [Sarges] sang. So we kind of went around the room and the whole ensemble played a little bit of their own music and then we all joined onstage and performed together and it was really nice because they all make such different music. Marshall was doing some R&B jams, and Albertine was playing acoustic guitar, but you got a feeling of everybody’s work and how we could come together and celebrate together.
So I don’t know. Of course it’s a balancing act. I feel like we really love this idea of the lone genius, this elevated person, and I think that’s something that AI is really going to put into question. All of this stuff we enjoy, it’s all a collective project and people tap into things and they bring the best out of something, but it doesn’t happen in a vacuum, it happens in a community. We need to learn how to acknowledge that a little bit better.
TOUR DATES:
05/16 Brooklyn, NY @ Pioneer Works
05/18 Los Angeles, CA @ Teragram Ballroom
05/20 San Francisco, CA @ August Hall
05/22 Chicago, IL @ Thalia Hall
06/14 Berlin, DE @ Volksbühne
07/18 Barcelona, ES @ Sonar By Day – SonarHall
07/20 Manchester, UK @ Manchester International Festival
10/16 London, UK @ Barbican Centre
PROTO is out 5/10 via 4AD. Pre-order it here.