Ten days ago, Universal Music Group (UMG) pulled the AI-generated DeepDrake x Weeknd track ‘Heart on My Sleeve‘ from streaming services, condemning it as “infringing content created with generative AI“. The best article I’ve read so far on the situation is this one in the NYT, quoting Holly Herndon:
An AI Hit of Fake ‘Drake’ and ‘The Weeknd’ Rattles the Music World
It is in this context that I post this track. Listen to it/click through right now before it gets taken down.
We live in strange times; the topic of AI-produced art (in all its forms: music, image, text) is the *current thing*. Nevertheless, it is a reality that we must contend with. I said the following about the culture war around it all last summer, and don’t have much to add:
But this future speculative paradigm will emerge out of the conversations and positions we take today.
The current media environment encourages us to take sides, to develop coarse arguments based on reactionary headlines. The biggest cultural contribution anyone can make today, to benefit the artists of the future, is to develop a nuanced position.
Developing a nuanced position means taking longer and wider views. In matters of politics, law and infrastructural progress I take the lead from Holly + Mat. I trust them, the direction of spawning.ai and deem their deep knowledge of this topic to be in the best interests of artists.
As for the aesthetic and cultural merits of AI generated art … I practice discernment.
Discernment allows people to shape their own tastes and preferences, rather than simply accepting or rejecting culture based on their social background. Practising discernment cultivates a more nuanced understanding of one’s cultural world.
You’ll notice that I keep saying *practising* discernment. Like a meditation practice, it’s an ongoing process.
So the question I ask myself about the Blinding Lights – Arianna Grande (AI Cover) track is…
Is it any good?
I haven’t stopped listening to it. The song perfectly captures a zeitgeist – the moment in both culture and technological development we are experiencing – not right now, but this week. It was possibly made with the same Google Colab notebook and/or online service as the Drake track mentioned above. It was posted to YouTube at the same time, 13 days ago, too.
The glitches and digital artefacts present in the AI-anna Grande vocal line are pitch perfect. They are the most interesting thing about it aesthetically. I want to talk about the spawned performance and production choices, but before I do, I want to talk about the source material.
The Weeknd’s Blinding Lights is in itself a track that defined a moment.
Released in late November 2019, it went on to soundtrack the global pandemic.
Drenched as it is in cynical 80s nostalgia and synths, it embodies the cultural fracking of Ghostbusters, Top Gun, Dune. It is the cultural phenomenon Stranger Things made sonic object. So many of the reviews at the time (and still) refer to the song as sounding like the 1980s. But to me it sounds nothing like the 1980s; it sounds far more contemporary. It sounds like synthwave.
Compare Blinding Lights to something vaguely comparable: Ultravox’s Dancing With Tears In My Eyes, for example.
Ultravox use synth and drum sounds (like most songs of the decade) that are crisp and clear. There is no distortion or thick layered compression across the whole track. Blinding Lights, on the other hand, is downstream of the loudness war of the 90s and mastering for lossy MP3 encoding.
Blinding Lights evokes how the 1980s sound on worn-out C90 magnetic tape in 2023. It’s how pop songs from the 80s still sound on the radio, over the engine and road noise of the motorway. The nostalgia it’s drenched in arises from the sound of the decaying technical medium its inspirational source material was mechanically reproduced on. It evokes an intangible ‘material aesthetic’.
On Blinding Lights – Arianna Grande (AI Cover) we have a track built around pure magnetic tape nostalgia, looking backwards to an era when synths still sounded like the future. Over the top we have the spawned voice of Arianna Grande: a vocal line/rendition that sounds like the *actual* future.
The last 3 decades of culture have been preparing us for what an AI Popstar should sound like, and AI-anna Grande sounds just like I imagined it would.
This AI version of Blinding Lights, alongside the vocal performance of Holly+, Holly Herndon’s *realtime* spawning transfer tool, draws a thick line under all the virtual/AI pop stars of 1990s cyberpunk – Sharon Apple from Macross Plus, or Rei Toei from Gibson’s Idoru series, for example.
There was a before and now we are in the after.
The Question Is: What’s Next?
I haven’t been listening to Blinding Lights – Arianna Grande (AI Cover) in appreciation of its murky moral and copyright qualities. I’ve been listening to it because it’s a total banger.
The last time I played an AI-generated song to death was (3!) years ago – DADABOTS’ technical masterpiece Frank Sinatra bot sings Toxic by Britney Spears. From the moment the first jangly ‘piano-like’ notes begin, a sonic weirdness drops; in between all the digital artefacting you can feel the future leaking out.
Blinding Lights with AI-anna Grande is the first AI mash-up I’ve heard produced with the current tranche of timbre transfer tools that actively embraces the digital artefacting these imperfect tools produce. Huge credit to whoever made it for leaning into the imperfections, especially the section from 2m10s onwards.
The spawned AI-anna Grande ‘performance’ on the Blinding Lights cover, with all its glitches and imperfections, fully expresses the technical and material aesthetic of the current moment.
We are now well into the second decade of Auto-tune as a stylistic production choice made by artists of all kinds. My current favourite example is jazz saxophonist Cole Pulice’s use of it on the opening track of 2022’s Scry EP.
11 years ago I described Auto-tune as snap-to-grid:
the re-expressed melody has a ‘pixelated’ quality. to the listener, superficially at least, autotune feels like a ‘snap to grid’ was applied. and even more so when applied to human speech – it reveals the natural melody of speech.
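Taken literally, ‘snap to grid’ is pitch quantisation: rounding a sung frequency to the nearest note of the equal-tempered scale. A toy sketch of the idea – not how any actual pitch-correction plugin works (real tools track pitch over time and glide between notes):

```python
import math

A4 = 440.0  # concert pitch reference, in Hz

def snap_to_grid(freq_hz: float) -> float:
    """Snap a frequency to the nearest equal-tempered semitone.

    Measures how many semitones the input sits above or below A4,
    rounds to the nearest whole semitone, then converts back to Hz.
    """
    semitones_from_a4 = 12 * math.log2(freq_hz / A4)
    return A4 * 2 ** (round(semitones_from_a4) / 12)

# A slightly flat A4 gets pulled back onto the grid
print(round(snap_to_grid(435.0), 2))  # -> 440.0
# A wobbly note near B-flat 4 snaps to its grid line
print(round(snap_to_grid(470.0), 2))  # -> 466.16
```

Everything expressive between the grid lines – the slides, the wobble, the natural melody of speech – gets rounded away.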
Tell me that this isn’t how the last decade of being online has felt?
The online world of Web2 recast us all as lossy data-shadows, only ever as legible to the advertising engines of algorithmic platform capitalism as the database schema will allow. The navigation of these vast techno-social systems is dictated by ‘the algorithm’: followers, likes, retweets and replies, a realtime chat room with added scoreboard. Culture has changed in response; some of us now even think and behave by network logic.
It is absolutely no surprise that musicians have chosen to render the most pure instrument of human expression – the voice – as pixelated and low resolution. Auto-tune is everywhere because it sounds how culture feels.
The Weeknd’s Blinding Lights feels like the 1980s through the distance of a lovingly overplayed C90 found at the bottom of a drawer.
If you listen closely to culture, you can detect elements of the technical media it is created with and inspired by – intentionally or not.
The glitching, skittering drum programming on edIT’s 2004 album Crying Over Pros for No Reason (one of the greatest albums of all time) feels like the sound of a skipping CD walkman on the way to school. Upon release it sounded like the acceleration of computers, bit-rate reduction, the sound of hard disks spinning and clicking. edIT makes music that sounds like software bugs, computer crashes, and what it felt like to be alive during the coming age of late-90s cyberspace.
Poor-quality MP3s found on Limewire, ripped and saved at the wrong bitrate (that underwater intensity), were a big aesthetic influence on the breakcore/noise scene I used to make computer music in, around 2007–09.
In the current moment, the digital artefacting found on Zoom calls, livestream dropouts, and the edges of YouTube quality drops are all ambient intrusions of cyberspace into our daily life.
I’ve written about other current sonic trends too:
The sonic and lo-fi cacophony you find on Roblox and in Zoomer music:
Over in the metaverse, millions of kids playing Roblox listen to pop songs through layers and layers of distortion – an aesthetic designed to defeat automated copyright enforcement that has bled over into TikTok.
In late 2021 Roblox introduced new rules: “Uploading distorted audio files are not permitted. Please make sure your files can be easily and clearly heard.” Now users have to navigate both automatic Content ID scanning and aesthetic choices imposed by the platform.
The curious ASMR adjacent world of frequency manipulation and sonic world building on Youtube:
I recently stumbled into a niche archipelago of YouTube. On a map of the internet, I think it’s located just offshore of the ASMR peninsula.
The genre is called ‘In the other room’, ‘In a bathroom at the club’, ‘the other room at a party’, or ‘outside the club’ – whatever.
The artefacts in the vocal line of Blinding Lights – Arianna Grande (AI Cover) *sound* like this image I made of a ‘Cybernetic Meadow’ in VQGAN back in 2021 *looks*.
All the weirdness and flawed algorithmic space of the model is on show. I am fixated on the strange frog-like noises, the vocal static, the edges of the AI-anna Grande model unspooling into pure waveform material.
There is no snap-to-grid here, just fluid best guesses and rough approximations expressed by the neural network.
A human chose to keep the computer’s flawed work on the track. It captures the moment – you can hear the malleability in the vocal line. It sounds like ‘right now’ – the blistering pace of technical development, the culture war. It sounds like the future.
Also from back in 2011,
a waveform is made up of constituent frequencies in bits and bytes, and an image is made up of pixels, bits and bytes. (..) it’s just all the same ‘malleable digital stuff’.
AI models don’t work like any sort of digital system we have ever encountered before.
They aren’t databases full of tables and spreadsheets of data, making data-shadows legible to algorithms doing Bayesian inference. Nothing is ever moved from one box to another. They have been trained on all that stuff, sure. But AI models are just different things. Aliens.
They are new cultural technologies made up of latent space. Pure malleable digital stuff, pure possibility.
Perhaps in this post-Web2 social media world, artists will begin to lay off the Auto-tune and shape and sculpt their voices like source material. I hope the snap-to-grid mentality of the last 15 years is over. Perhaps if the culture changes, the platforms will have to change too.
I like the Blinding Lights – Arianna Grande (AI Cover) track simply because it suits my tastes. I think it sounds REALLY GOOD. I want more of it, not less.
It makes me excited for future music.
For me now, at this point in my life, writing and thinking are co-continuous. I type at between 55 and 60 words per minute – the speed of thought.
Full Show Notes: https://www.thejaymo.net/2023/04/22/301-2314-speed-of-thought/
The Ministry Of My Own Labour
- Calls, so many calls.
- Updated worldrunning.guide to Version 0.0.8 – 15 essays now, 14.4k words.
- Two trips into central London for meetings and workshops.
- Dre was kind enough to mention my world runner job description over on design fiction daily
Friends over at Furtherfield gallery have just launched their summer events program, The Treaty of Finsbury Park. The project looks absolutely wonderful:
The Treaty of Finsbury Park is an immersive fiction that looks at what it would be like if other species were to rise up and demand equal rights with humans. What that means is you, as a human, can come and take part in the fiction only by playing for and as another species (so, like, NOT as a human ok?!). The project started in 2020 and it will run until 2025 when the Treaty itself will have been created and signed by all the species communities of Finsbury Park.
A massive immersive role-play event for park lovers. Explore the park as another species, with new eyes, ears and totally new priorities. The Interspecies Festival welcomes ALL human people over the age of 9
I posted more about it on solarpunks.net, or you can check out the website here: treaty.finsburypark.live. Long-time readers may remember I attended a prototype multi-species LARP run by Furtherfield back in 2019.
some of us want to write complete statements in places we own and control, and garden our own thoughts and experiences in an interlinked, searchable way.
Players are not limited to having specific body parts on their characters, with options to attach monster parts onto their characters, as well as putting human parts on monster characters
Homemade remixes, often sped-up or slowed-down, have been a hallmark of the TikTok era. In recent months, they’ve helped rejuvenate years-old songs from Lady Gaga and Miguel and driven swarms of listeners to newer releases from Lizzy McAlpine and Raye.
Around the cusp of this year, I started thinking a lot more seriously about positioning myself as a “hypermedia-first” practitioner, to the extent that I craft my work product principally on the Web, and then project that into more conventional documents.
“I don’t want to get into details but … you’ve never seen Timbaland with an AI artist,” he told Axios’ Hope King. “Who I’m enjoying now is watching Lil Miquela. She’s an AI robot, and I’m like, man, imagine if I stood beside her with a dope-ass song.”
I’m still reading all the same things I was last week, but I’ve added two more books to the list.
I have a problem… I start reading books and don’t finish them for months. This is *everything* that’s on deck right now 😬
- Monolithic Undertow: In Search of Sonic Oblivion by Harry Sword
- Storytelling in the Modern Board Game: Narrative Trends from the Late 1960s to Today by Marco Arnaudo
- Cybermapping and the Writing of Myth by Paul Jahshan
- Nine Gates: Entering the Mind of Poetry by Jane Hirshfield
- The Metaverse: And How It Will Revolutionize Everything by Matthew L. Ball
- The Elusive Shift: How Role-Playing Games Forged Their Identity by Jon Peterson (Second Reading)
- Post-Truth: Why We Have Reached Peak Bullshit and What We Can Do About It by Evan Davis (This book is centrist drivel, can’t bring myself to finish it)
- Building Imaginary Worlds: The Theory and History of Subcreation by Mark J.P. Wolf (Second Reading)
- Reimagining Our Tomorrows: Making Sure Your Future Doesn’t SUCK by Joe Tankersley
- Lucifer: Princeps by Peter Grey (Second Reading)
- What Is a Dog? by Raymond & Lorna Coppinger
thejaymo.net Spotify Playlist
Lankum – False Lankum
The opening track is a modern interpretation of the Trad suicide ballad ‘Go Dig My Grave’.
The song is slow, doomy, droney and quite frankly HEAVY AS FUCK. All achieved without a single moment of distortion. The hypnotic repetition of the riff is what makes it so dark. Want to do this more with my band. Lankum are showing the way.