NPC DJs + Co-Creating Virtual Worlds


This post is part of an ongoing scrapbook series exploring the emerging Dweb (Web3) ecosystem, the #supportnet, and the coming ‘Metaverse’.

This post is a breakneck overview: from NPC ‘Muzak’ DJs, to Russian billionaires funding virtual platforms, to cutting-edge hybrid entertainment venues, concluding with my hopes for the future of architectural virtual hybrid environments.

JAI:N

Last week @arb from trust.support sent news of JAI:N, apparently the first ever AI-driven virtual DJ.

Performing alongside such luminaries as David Guetta, Armin van Buuren, and Carl Cox.
Coming to a Metaverse near you in 2021.

If you’d told teenage me at the turn of the millennium that those three names would be the first major names to jump into IRL Snow Crash, I wouldn’t have been surprised.

I do personally think that there’s more potential in the Metaverse than strapping a VR headset to my face to watch some ageing DJs perform. But what do I know.

I would however, strap a headset to my face and attend a Jean Michel Jarre concert in the Metaverse in a heartbeat.

Here’s a quote from the press release about JAI:N, and the ‘making of’ video.

Alexey Kochetkov, Founder and CEO of Mubert: “The personification of AI-composed music is the most natural and groundbreaking way of introducing people to generative music. Together with Sensorium Galaxy, we’re developing artists that can create original, high-quality music in real-time based on the reactions of the crowds.”

JAI:N is an output from the wider Sensorium ecosystem, which I’ll discuss shortly. It was created in conjunction with Mubert.

Mubert

Mubert is a generative music app that creates endless beats based on genre/category. You can select things like music for driving, studying, working, etc. I haven’t played with Mubert myself, but I was a big fan of RjDj back in the day. An app that produced endless beats based on the background noise from your local environment – more imaginative than Mubert. I used to just put it in my ears when I was working in an open-plan office, typing into a spreadsheet all day.

This review of the Mubert app by a YouTuber says something quite revealing about the state of play when it comes to music in 2020.

I don’t like music that has lyrics for working, for background noise as we say.

I will want to skip or repeat [Tracks]. That takes me away from what I’m doing. With this app there is none of that cognitive load.

Mubert takes the concept of ‘lean-back listening’ to the next level. Rhythm, tone, tenor, melody etc. are all disposable. Mubert’s CEO has some wild things to say about their technology and the future of music.
As a musician, they make me cringe. But as someone who studied Philosophy of the Arts, I can’t get enough of this shit!

More importantly, people want to listen to artists, not only to machine-generated songs. People mainly use machine-generated songs for background music or music for audio books.

In my opinion, we can divide music into two types: background music and artist’s music. Background music is for activities like jogging or working. With that said, technology around virtual artists has some future. If we create an AI virtual artist, I think it would be interesting to see how it evolves.

We’re developing an algorithm that uses artificial intelligence to generate original non-stop music for commercial or personal use which can easily be customized and streamed worldwide. In the most basic level of the app, users tap one button to choose music for a specific activity or genre. Next, Mubert will generate music for that category at a random tempo and random scale. Then, users can adjust the music by pushing the like or dislike button in the application. This is how the users train the AI to generate music closer to their tastes.

Mubert helps you focus better. Unlike Spotify or other apps, Mubert doesn’t have any stops or any pauses. Our music streams are on a continuous, infinite loop. The tempo of the songs are the same. There are no changes in the mood of the music for the whole track. We have tested our prototypes with joggers and pretty much all of them said the same thing: 

“Mubert helps you to become focused on the specific task the track is tailored to. It’s almost like meditation.”

The most common use cases today are streaming services and voice assistants. Another large target market is music for public spaces. These use cases encounter similar problems regarding copyrighted content. For public spaces, any music played must be licensed or you risk getting fined. One big case was Peloton, a company that streams music for sports. This year they were fined 150 million dollars for using copyrighted music.

Mubert is a platform that makes it easy to get music for these use cases. With our services, you can stream royalty-free music, removing the risk of fines from using copyrighted songs. 
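
None of Mubert’s internals are public, but the one-button loop Kochetkov describes above – pick a category, generate at a random tempo and scale, steer with like/dislike – maps onto a very simple preference-weighting sketch. Everything below is my own hypothetical illustration, not Mubert’s actual code or API:

```python
import random

# Hypothetical sketch of the loop described above: pick a category,
# generate at a random tempo and scale, then let like/dislike feedback
# steer future generations. Not Mubert's actual code or API.

CATEGORIES = {
    # category: (tempo range in BPM, candidate scales)
    "working": ((70, 95), ["C major", "G major", "A minor"]),
    "jogging": ((120, 150), ["E minor", "D major"]),
}

class GenerativeChannel:
    def __init__(self, category: str):
        self.tempo_range, self.scales = CATEGORIES[category]
        # Every scale starts with an equal preference weight.
        self.weights = {scale: 1.0 for scale in self.scales}

    def next_segment(self) -> dict:
        """Pick a random tempo and a weighted-random scale for the next loop."""
        tempo = random.randint(*self.tempo_range)
        scale = random.choices(list(self.weights), weights=list(self.weights.values()))[0]
        return {"tempo": tempo, "scale": scale}

    def feedback(self, segment: dict, liked: bool) -> None:
        """The like/dislike button nudges future picks toward the user's taste."""
        self.weights[segment["scale"]] *= 1.25 if liked else 0.8

channel = GenerativeChannel("working")
segment = channel.next_segment()       # e.g. {"tempo": 84, "scale": "A minor"}
channel.feedback(segment, liked=True)  # "A minor" now slightly more likely
```

The point is how little ‘authorship’ survives in this loop: the listener isn’t composing, they’re just pruning a random walk.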

I mean, finally, the thing we’ve all been waiting for! Muzak with a virtual human face. 🤣
I, for one, can’t wait to go see JAI:N perform in the metaverse.

We’re going to have to embrace all this cringe for a moment and take a step back to look at what Sensorium are doing as an organisation, beyond making an NPC DJ.

Mikhail Prokhorov: “The combination of VR and AI creates unique opportunities for developing new platforms where users can seamlessly interact with the virtual environments and amongst themselves.  We are confident that the development of a new way of communication in the virtual world, along with the limitless possibilities for self-realization in those digital environments, present an unprecedented business opportunity. Over the next 10 years, we’re expecting explosive market growth around the integration of VR and AI technologies.”

Sensorium Corporation

One of the reasons I’m writing this Dimensino series is that the Metaverse is coming.

Sensorium’s major investor is billionaire Mikhail Prokhorov, the 2012 Russian presidential challenger to Putin. Sensorium have so far raised $100 million in investment, and that’s not nothing. AND they are a crypto company! (Here’s the white paper). Sensorium have their own token, ‘SENSO‘. SENSO is designed to be the ‘in-Metaverse’ currency for Sensorium Galaxy. Seeing as it’s an ERC20 token, I expect we’ll see all sorts of NFT and programmable tokens inside the SENSO network eventually.
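
Because SENSO is a standard ERC20 token, any generic Ethereum tooling can talk to it without a special SDK, which is what makes the programmable-token speculation plausible. Here’s a minimal sketch using web3.py; the contract and wallet addresses are placeholders, not SENSO’s real ones:

```python
from web3 import Web3

# Minimal ERC20 ABI: every ERC20 token exposes at least these functions.
ERC20_ABI = [
    {"name": "balanceOf", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "owner", "type": "address"}],
     "outputs": [{"name": "", "type": "uint256"}]},
    {"name": "decimals", "type": "function", "stateMutability": "view",
     "inputs": [], "outputs": [{"name": "", "type": "uint8"}]},
]

# Any Ethereum node endpoint works here.
w3 = Web3(Web3.HTTPProvider("https://mainnet.infura.io/v3/<your-key>"))

# Placeholder address -- substitute the real SENSO contract address.
token = w3.eth.contract(
    address="0x0000000000000000000000000000000000000000", abi=ERC20_ABI
)

holder = "0x0000000000000000000000000000000000000001"  # placeholder wallet
raw_balance = token.functions.balanceOf(holder).call()
decimals = token.functions.decimals().call()
print(raw_balance / 10 ** decimals)
```

That interchangeability is the whole appeal: the same twenty lines work for SENSO or any other ERC20 token Sensorium’s partners might mint.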

TIDAL invested $7m in tokens over the summer. With TIDAL’s token acquisition, the platform’s artist co-owners – including Jay-Z, Lil Wayne, Rihanna, Calvin Harris, Daft Punk and Coldplay’s Chris Martin – have acquired access to broadcast their content within the Sensorium Galaxy. It also gives them access to develop natively on the Sensorium platform.

Sensorium Galaxy

Erm, if you think the IDEA of a generative Muzak avatar DJ playing a gig alongside greybeard house DJs in a virtual world is cringe … check out the video for the virtual arena they will all be performing in, whilst you sit at home alone with a VR rig strapped to your face.

Also … whatever this gameplay video/demo is supposed to be about.

I do really recommend you head over to the Sensorium Galaxy website to see how else they describe the project. lol.

SOCIAL METASPACE
Sensorium Galaxy is a social metaspace that unites various conceptual worlds in a single virtual environment. Each of the worlds is created in collaboration with renowned artists, visionaries, prominent figures of culture and art.

HUMANISTIC VALUES
Sensorium Galaxy is an alternative reality for open and safe communication, for creativity and limitless self-expression. It is a new world, free from the conventions and prejudgements of the real world.

Also take a look at the concept art for their ‘Underwater World of Perpetual Motion and Dance‘ whilst you’re over there. Essentially, the Galaxy is Sensorium’s metaverse/virtual world, built around entertainment experiences, like the Roblox, Fortnite, and Minecraft ‘verses’.

Sensorium Ecosystem

All of the above should really be seen as glorified tech demos.

The meat and potatoes of the Sensorium Project and why they have taken so much investment is explained by their partner diagram below.

They are not fucking about. The list is very impressive.

It’s important to note that this tech is going to be driving experiences in the real world as much as it’s going to be providing virtual environments.

The presence of Epic / Unreal Engine in the technology stack is revealing. What Sensorium are hoping to deliver to their entertainment partners is a platform for hybrid Metaverse environments, retrofitted into existing entertainment venues.

As Matthew Ball notes in the Epic Primer:

And as Unreal expands in film and TV, it’s also expected to become a leader in live music/events, too. Unreal also runs much of Disney’s Star Wars: Galaxy’s Edge theme park attraction. (Notably, Epic’s CTO, Kim Libreri, was previously SVP Technology at Lucasfilm.) Sony Music, for example, has already announced it’s building a technology team for Unreal-based concerts. This move is particularly telling as Sony Music’s sister company Sony Interactive Entertainment has its own gaming platform (PlayStation) and numerous proprietary engines.

Unreal Engine also powers Disney’s ‘Volume’ soundstage, where The Mandalorian is produced.

Unreal and gaming engines are breaking out of video games and becoming important architectural elements in built environments. They will soon be powering advertising, TV shows, theme parks, theatres and more. Which is why these entertainment partners lined up to work with Sensorium are worth remembering. The future is hybrid virtual spaces/places.

Augmented Entertainment.

MSG Sphere

If Sensorium are looking to retrofit hybrid virtual environments into existing venues, then The Madison Square Garden Company’s MSG Sphere project is the first glimpse of what can be achieved when building from scratch. A grand entertainment product/project with an ‘architectural tech stack’ integrated into the fabric of the building. It’s not just theme parks and sound stages that are going hybrid/virtual (I need to coin a good term for this, tbh).

Boasting a futuristic look halfway between the 1964 New York World’s Fair Unisphere and Buckminster Fuller’s fabled geodesic dome, 18,000-seat MSG Spheres are currently planned for both Las Vegas and London, where it will presumably present an alternative to rival AEG’s The O2 Arena.
A 350-foot round dome, completely covered on the outside and inside by programmable, wraparound LED screens, with a 170,000 square foot display plane within the arena itself. It’s “the largest and highest resolution on earth,” capable of displaying 250 million pixels, more than 100 times clearer than today’s available HD TV technology. The outside wrap of the Sphere can be turned into any number of visuals, from a giant tennis ball to a world globe, from displaying a performance going on inside to even making the Sphere blend in with the surrounding skyline.

Londoners may remember the MSG Sphere proposals for London, a project that has been widely derided in the media and nicknamed ‘The Golf Ball’; a project that will apparently be a plague on London’s ever-changing skyline. This video of Maya Jama shilling the project (I think) is very cool.

Incredible giant ‘Golf Ball’ could transform east London’s skyline

Ground has already been broken on the Las Vegas sphere. It’s going to be completed in 2021, Covid depending. It’s important to remember that *this is happening*. It’s not speculative.

Not Big, But Small

What interests me more about the possibility of these spaces is not the grand architectural gestures like the MSG Sphere, but the potential in small-scale immersive spaces. When this technology meets the street.

I recommend Serpentine Galleries’ Future Art Ecosystems (FAE) book series. It offers strategic insights to practitioners and organisations across art, science, technology and policy. It’s where small-scale experiments in this tech are going to occur.

What I’d like to see are:

Hybrid worlds running in old cinemas, in the basements of pubs, upstairs at restaurants. DIY 30-person venues built with second-hand LCD panels and gaming PCs behind the bar. DIY theatre performed onstage in jury-rigged, 180° Volume-like spaces.

Physical and virtual performers interacting with one another seamlessly onstage, whilst being live-streamed to the world.

One of the biggest opportunities and areas of excitement for me is not generative Muzak DJs performing in VR spaces, but the idea of generative worlds: performers operating in worlds co-created and influenced by the audience.

Recently 1030JH streamed ‘Twitch plays the Amen Break’. Whilst generative music performed by a virtual DJ like JAI:N is mildly interesting, this stuff is amazing.

In my opinion, the best generative soundtrack made to date is by 65DOS (65daysofstatic):

What we’re trying very hard to [do is] work with the logic of the game and within the need for it to be able to react quickly to whatever a player does. Coming on from that is finding a spot where we can also put those themes in. Even just the melody, like a big hummable melody, that could be five or ten seconds of time and with the player you have to find a way where you can allow the game to do that but for it not to be incongruous with what the player then ends up actually doing.

If you decide to dock at a space station you want some big theme to play – but then the player changes their mind and flies away, or gets into a fight, but the theme has already been triggered and is playing. It’s not going to be as effective. So it’s not really a conflict, but we’re pulling things apart to make sure that we’re always pulling towards songwriting as opposed to – it’s not random, it’s cleverer than that. But the more jazz-like, improvised nature of how it creates itself. It’s a balancing act, I guess.
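
The docking-theme problem 65DOS describe is, at its core, an interruptible cue system: the game requests a big theme, but the player can invalidate it at any moment. Here’s a crude sketch of that logic; my own illustration, not their actual implementation:

```python
import time

class AdaptiveScore:
    """Crude sketch of an interruptible cue system. Not 65DOS's real system."""

    def __init__(self):
        self.current_theme = "ambient_bed"  # the always-running generative layer

    def on_game_event(self, event: str) -> None:
        # Game logic requests themes, but any theme can be overridden mid-play.
        if event == "dock_at_station":
            self.crossfade_to("station_theme", seconds=4)
        elif event in ("flew_away", "combat_started"):
            # The player changed their mind: duck the big theme quickly
            # rather than letting it play on incongruously.
            self.crossfade_to("ambient_bed", seconds=1)

    def crossfade_to(self, theme: str, seconds: float) -> None:
        print(f"fading {self.current_theme} -> {theme} over {seconds}s")
        self.current_theme = theme

score = AdaptiveScore()
score.on_game_event("dock_at_station")  # the big hummable theme starts
time.sleep(2)                           # ... player flies off mid-theme
score.on_game_event("flew_away")        # theme ducks back into the bed
```

The hard part, as they say, is doing this without it feeling random: the crossfades always have to keep pulling towards songwriting.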

Imagine generative soundtracks accompanying live performances, with cues not only pulled in from the virtual world, but also from the audience via an app – or dedicated hardware like the light sticks from the K-pop world.

Imagine this technology being used inside architectural virtual spaces of 28,000 people, or just 30. Two-way feedback.

The audience could press a button and influence the world on the enormous screens around them. Positively or negatively: speeding up the tempo, maybe a vote to change the key, the mood or tenor of the soundtrack.
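
Server-side, that two-way feedback could be as simple as tallying votes from the audience’s phones or light sticks over a short window, then folding them into the performance once per window. All of the names and thresholds here are hypothetical:

```python
from collections import Counter

def sign(x: int) -> int:
    return (x > 0) - (x < 0)

class CrowdController:
    """Hypothetical sketch: fold audience votes into soundtrack and world cues."""

    def __init__(self):
        self.votes = Counter()
        self.tempo_bpm = 120
        self.key = "A minor"
        self.world_mood = 0.0  # -1.0 (dark) to +1.0 (euphoric)

    def cast_vote(self, vote: str) -> None:
        # One call per button press from the app or light stick.
        self.votes[vote] += 1

    def apply_window(self) -> None:
        # Every few bars, apply this window's votes, then reset the tally.
        self.tempo_bpm += 2 * sign(self.votes["faster"] - self.votes["slower"])
        self.tempo_bpm = max(70, min(170, self.tempo_bpm))

        # A majority vote within the window flips the key.
        if self.votes["change_key"] > sum(self.votes.values()) / 2:
            self.key = "C major" if self.key == "A minor" else "A minor"

        self.world_mood += 0.1 * sign(self.votes["hype"] - self.votes["chill"])
        self.world_mood = max(-1.0, min(1.0, self.world_mood))
        self.votes.clear()

crowd = CrowdController()
for _ in range(40):
    crowd.cast_vote("faster")
crowd.cast_vote("hype")
crowd.apply_window()
print(crowd.tempo_bpm, crowd.key, crowd.world_mood)  # 122 A minor 0.1
```

Note that `world_mood` is deliberately separate from the music parameters; it’s exactly the kind of signal you could feed into the world itself.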

But why not go one step further? Why not have the topology of the virtual world change, deform and flex based on the mood of the audience? Immersive Metaverse worlds will undoubtedly do this dynamically, but hybrid environments inside spaces like the ones mentioned above will get there first.

If anyone is interested in Designing Adaptive Virtual Worlds, I recommend the work of Dr Ning Gu and Mary Lou Maher.

I am really excited by the idea of real-time audience co-creation of virtual environments, both in theatre and hybrid environments. If you’re building this stuff, hit me up – let’s chat!


Thanks to everyone who has subscribed to or supported the blog so far this year. More to come in 2021.


Dimensino is an ongoing series; subscribe if you want to follow along.
You’ll also get posts about my weekly 5-minute podcast, my webshow, and my weeknotes.

Hire Me

If you would like to hire me to puzzle this sort of thing through for you in a professional capacity – Get In Touch!

