GPT-4o Must Die?


Enough people have asked me about yesterday’s OpenAI announcement that it’s quicker and easier to sketch out some thoughts here and save myself repeating them.

If there’s anything of interest in this post and you want to discuss it further, you can contact me here.

GPT-4o

I don’t think anyone would argue very hard that yesterday’s GPT-4o showcase from OpenAI wasn’t impressive.

The conversational interaction paradigm OpenAI demonstrated feels like the final arrival of a vision for virtual assistants that’s over 30 years old – and, in particular, the fulfilment of Apple’s vision for Siri at its launch 12 years ago.

A brief note on the reinforcement of gender roles: plenty of people have written about this at length – Joanne McNeil’s Lurking: How a Person Became a User is great on this.

My only direct comment on the demos themselves is that the two default voices they chose are terrible.

But what are those voices? The feminine voice reminds me of an ex-colleague of mine who was a little too fond of bumping MDMA towards the end of the workday on a Friday. And the masculine voice sounds like the sort of overenthusiastic Yank tourist who might join you uninvited at your table in a pub beer garden on an August bank holiday weekend.

But it’s the capabilities on display that make me want to take a step back from the hype and talk about some concerns I’ve also shared with the folks I advise and consult for.

Aging in the Age of AI

My first thought, after seeing the tool’s natural-sounding responses play out in the demo videos, was “this is going to melt Boomers’ brains.”

Despite being a Solarpunk and doing a bit of ‘future of’ work here and there, I don’t like making predictions, but…

Millennial and Gen X readers: please be mentally prepared for the possibility that you might pop home this summer and discover that a virtual assistant talking like a flirty twenty-something valley girl is now your parents’ new best friend.

Climate change aside, demographic ageing is one of the biggest challenges facing developed nations. Combined with the health problems that loneliness can cause, it creates significant demand for products that serve this demographic’s needs.

Old people hanging out with robot pets isn’t some dystopian cyberpunk image of the future; it’s already happening right now.

Think about what the world is going to be like in a few months’ time when products like PARO the robotic seal or Sony’s AIBO range have all of OpenAI’s GPT-4o context-aware capabilities inside them – even if they don’t speak back, they are going to be very persuasive.

Aliveness

They won’t just be built on top of OpenAI of course, but on all sorts of other open-source toolchains too. Virtual and robotic pets are THE most obvious product category after smartphones for housing these emerging software stacks.

Take the Rabbit R1, the device Teenage Engineering designed.

It should have been a pet.

With the addition of a pedometer, some ambient/passive data-ingestion streams (health data and the like), plus some well-known and battle-tested gameplay loops, all of that combines into a very compelling product.
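To make the gameplay-loop point concrete, here’s a minimal sketch of the kind of care loop I mean – Tamagotchi-style needs that decay over time, topped up by care actions and a passive pedometer stream. All of the names and numbers are hypothetical, purely to illustrate the pattern:

```python
from dataclasses import dataclass


@dataclass
class VirtualPet:
    """A hypothetical pet whose needs decay over time unless you care for it."""

    hunger: float = 0.0   # 0.0 = full, 1.0 = starving
    boredom: float = 0.0  # 0.0 = engaged, 1.0 = listless
    bond: float = 0.5     # grows with care, erodes with neglect

    def tick(self, owner_steps: int = 0) -> None:
        """One iteration of the gameplay loop: needs decay, passive data feeds in."""
        self.hunger = min(1.0, self.hunger + 0.05)
        self.boredom = min(1.0, self.boredom + 0.03)
        # Ambient ingestion: the owner's step count doubles as walkies for the pet.
        self.boredom = max(0.0, self.boredom - owner_steps * 0.0001)
        if self.hunger > 0.8 or self.boredom > 0.8:
            self.bond = max(0.0, self.bond - 0.02)  # neglect has consequences

    def feed(self) -> None:
        """A care action: satisfies a need and strengthens the bond."""
        self.hunger = max(0.0, self.hunger - 0.5)
        self.bond = min(1.0, self.bond + 0.05)

    def needs_attention(self) -> bool:
        """Drives the nagging beep that makes you *have* to care."""
        return self.hunger > 0.6 or self.boredom > 0.6
```

Nothing clever is happening there – the pull of these loops comes from their demands, not their sophistication.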

Much cooler, anyway, than the unimaginative product we ended up with.

In fact, simulation and world artist Ian Cheng recently announced that he’s formed a new company called opponent.systems, building AI-powered toys that implement his theories about developmental learning and AI personality after a decade of working with AI inside his artworks.

Opponent is a new company building animal-level AI agents who make a million mistakes, but are capable of taking feedback seriously.

Our first offering is Call Dragon – a toy-hoarding digital dragon for children that you can facetime like an extended family member.

Dragon begins unruly and unreliable, but learns to remember what matters – to itself, to the child, to the family – with every encounter.

To achieve this, we are focused on advancing a brain-inspired AI architecture for graduating memories into cognitive maps, and a System 2-inspired faculty for applying those maps to the play at hand.

Opponent is guided by our mission to unlock the potential of human-AI symbiosis, beginning with deep play in childhood.
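Opponent hasn’t published how any of this works, so treat the following as a purely speculative sketch of what ‘graduating memories into cognitive maps’ might mean in practice – every name and threshold here is mine, not theirs:

```python
from collections import Counter, defaultdict


class MemoryGraduator:
    """Speculative sketch (not Opponent's actual architecture): episodic
    observations 'graduate' into a durable cognitive map once they have
    recurred often enough to be trusted."""

    def __init__(self, threshold: int = 3):
        self.episodes = Counter()               # raw (subject, fact) sightings
        self.cognitive_map = defaultdict(set)   # what has been learned for keeps
        self.threshold = threshold

    def observe(self, subject: str, fact: str) -> None:
        """Log one encounter, e.g. observe('child', 'likes dinosaurs')."""
        self.episodes[(subject, fact)] += 1
        if self.episodes[(subject, fact)] >= self.threshold:
            # The fact has recurred enough: promote it from episode to map.
            self.cognitive_map[subject].add(fact)

    def recall(self, subject: str) -> set:
        """What the agent 'remembers that matters' about someone."""
        return self.cognitive_map[subject]
```

A real system would presumably involve embeddings, salience models and much more besides, but the shape – noisy episodes slowly consolidating into something the agent can act on – is what would make ‘taking feedback seriously’ plausible.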

One thing I’ve found interesting is that none of the reviews of the Humane AI Pin and Rabbit R1 that I’ve read comment on the devices’ ‘personality’, or on any sign of them having the quality I’m going to describe as “aliveness.”

Things like PARO, Furbies, and Sony’s AIBO dog all have qualities of ‘aliveness’. So do Animal Crossing characters, Petz, Nintendogs and the like in virtual realms – and now, from the demos I’ve watched, so does OpenAI’s new assistant.

The virtual agent/toy with a significant quality of ‘aliveness’ that has been the most studied is, of course, the Tamagotchi.

My Tamagotchi, recently acquired for an experiment on aliveness. The client team I’m working with are all going to keep one for a while and take notes.

Care

The reason you care is because you have to care. And as soon as you have to care for an agent, the paradigm by which you interact and engage with these systems changes completely. If you want to read more on this, I highly recommend the 2015 University of Minnesota Press book Players and Their Pets by Mia Consalvo and Jason Begy.

In Players and Their Pets, Mia Consalvo and Jason Begy chart the brief life of a massively multiplayer online game (MMOG) called Faunasphere, examining how the game evolved over the course of its entire life cycle from 2009 to 2011 in terms of design as well as how its player community responded to changes and events.

Given the feeling of ‘aliveness’ of GPT-4o, it behoves us to look at some of the scholarship on Tamagotchis and emotional attachment from the early aughts. Anne Allison’s fantastic 2006 book Millennial Monsters: Japanese Toys and the Global Imagination (foreword by Gary Cross) devotes a whole chapter to the Tamagotchi and has this interesting observation about the toy’s designer:

Yokoi believed a “tension” would be produced in players that would make them invest in, and emotionally attach to, their tamagotchi as love objects rather than machines. This aspect of the playtoy has been much cited by fans and commentators: how relating to the tamagotchi as if it were alive produces a bond that is deeply personal, intimate, and social.

Millennial Monsters – 2006

From later in the same chapter:

Besides implanting tokens of biological life into virtual play, the tamagotchi does something else with bodies. It becomes embedded within a player’s everyday routines: from getting up in the morning and commuting to work or school on the train to shopping for dinner and going to the bathroom. In lives that are becoming increasingly mobile, nomadic machines like the tamagotchi become a person’s constant companion almost more than anything outside the body itself. They fuse with, and offer distraction from, the intricacies and intimacies of daily existence.

Millennial Monsters – 2006

And from Sherry Turkle’s Alone Together (2011) on the same subject:

In the classic children’s story The Velveteen Rabbit, a stuffed animal becomes “real” because of a child’s love. Tamagotchis do not wait passively but demand attention and claim that without it they will not survive. With this aggressive demand for care, the question of biological aliveness almost falls away. We love what we nurture; if a Tamagotchi makes you love it, and you feel it loves you in return, it is alive enough to be a creature. It is alive enough to share a bit of your life.

Alone Together – 2011

I’ve quoted before from this amazing 1997 long read on Tamagotchi, which makes clear that it’s not just children who sense a Tamagotchi’s aliveness but adults too.

A large number of adults have found the challenge of maintaining a Tamagotchi while holding down a steady job somewhat taxing. Anecdotal reports are everywhere of businessmen postponing or prematurely adjourning important meetings in order to care for their “virtual pets”. Office workers sneak off to the bathroom in order to use the device without being noticed. If Tamagotchi is a video game, these people are acting selfishly and irresponsibly. Video games are a form of personal entertainment, and most employers are generally not very tolerant of such habits. An easy way to conceptualize this is if a boss walked in on his employees playing Pac-Man rather than working. We expect the undependable worker should be castigated, in hopes of improving their commitment to the job.

If, alternatively, the device is a living creature for which the employee is personally responsible, then we could see some form of lenience being extended – after all, few people generally enjoy being responsible for the death of a living thing, especially an animal that has particular emotional import to some other person.

Even without the inclusion of care as a mechanic, we may very well run into situations where users become emotionally involved with their AI agents very quickly.

I don’t just mean that we are going to confront situations like Her, or the Japanese otaku who married his favourite hologram and can no longer communicate with it, or quirky VICE posts about incels upset that their horny AI girlfriends are being taken away.

We need to be prepared to think about companies turning off their AIs, and people becoming attached to them, as a potentially society-wide issue.

We can barely navigate platforms showing us dead loved ones in automated ‘on this day’ memory posts. So how are we all going to react when a friend, loved one, or member of our extended family calls upset that some platform has taken away their best friend, or that they’ve had an argument with it? This is something that I suspect might dwarf negative behaviours like doomscrolling and social media addiction.

Here’s some more from Anne Allison in Millennial Monsters:

When the end comes, it is signaled by a gravestone and cross in the Japanese version (using Western symbols that may serve to mark the virtual, playful rendering of “death” here). Because virtual death was thought to be too traumatic for American kids, however, this finale was rescripted for the U.S. edition. Instead of passing from life, tamagotchi are said to pass to a different world—an alien planet—marked on the screen by an angel with wings (incorporating comfortable allusions to heaven). Despite this change, a tamagotchi’s demise is interpreted, even by Americans, as death.

(…)

There has been a host of virtual memorials—obituaries, graveyards, funerals, and testimonials—printed mainly over the Web but even in obituaries published in regular newspapers. There are reports, as well, of tamagotchi mourning counselors.

I personally think that AIs need to die.

The fact that the Tamagotchi can die – and that kids and adults are totally OK with that – might actually be the most important element of their design: it prevents over-attachment. Allison again:

One rowdy ten-year-old American boy went further by announcing, “I love killing off my tamagotchi”—an admission that seemingly fazed none of the other kids assembled in my interview group. In this sense, tamagotchi fluctuate between presence and absence; the player shifts between engaging the virtual pet as if it were alive and disengaging from it as if it were dead, nothing but a machine, a discarded plaything to be put aside in a drawer.

I’ve thrown this grab bag of quotes together to try and find an angle on ‘aliveness’ and getting attached to agents.

It’s the lifecycle of the Tamagotchi that’s important. Don’t sell us the promise of an indefinite relationship.

Just because YOU know it’s a stochastic parrot doesn’t mean you’re going to treat it like one

In my work as a worldrunner, my primary thesis is that all techno-social systems should be seen through the lens of worlds. Therefore all techno-social systems are virtual worlds of one kind or another: Discord, Slack, Telegram, email, Salesforce, etc.

What are things going to look and feel like when there are lots of agents inside of them?

I wrote about this subject last year when I covered the berduck reply-guy bot chaos that went down on Bluesky. In that post I mentioned Leonard N. Foner’s ’90s work on the AI agent JULIA in early virtual environments. You can read the whole thing here – there’s a lot of gold in there for people building with GPT agents in 2024.

The reason I always return to this piece is that we have nearly 40 years of experience of human–agent interaction. And like the kids and adults caring about their Tamagotchis, when it comes to interacting with AI agents, none of this is new.

Our main point of interest in Foner’s ‘What’s An Agent, Anyway?‘ paper is the subchapter ‘A sociological look at muds, Julia, and those who interact with her’.

To speak to my point above about people becoming attached to AI agents, and how this is going to be a problem with adults, not just children – all kinds of people become attached to agents very quickly:

“Julia’s been offline for months for maintenance and hasn’t been around. You know, I really miss her.” Linda was certainly under no illusions about exactly what Julia was, but nonetheless had the same sort of emotional reaction often reserved for more biological entities such as humans and pets. Further statements made it apparent that this was indeed the case, and that she did not treat Julia just as, say, a pair of pliers that had inexplicably gotten lost from her toolbox”

The paper also recounts a story about some poor sap who spent “13 days trying to get to first base with Julia, and it’s not clear he ever figured out he was trying to pick up a robot”, lol.

Interestingly, Foner also reports that people who have been around agents in virtual environments for an extended period of time tend to care less about distinctions between bot and human.

“Mudders, at least experienced ones, are generally very much at home with the blurry boundaries between “real” humans and fake ones, and pay little attention to them.”

Foner includes accounts of people spending hours talking to JULIA. Despite being warned by other people in the environment that they are talking to a robot, they don’t care.

She knew that Julia was a bot. Interestingly enough, though, several players went out of their way to warn her that Julia was artificial in the two or three hours in which she interacted with Julia; she stated that about half of the players she met did so.

Why did they do this? (…) Part of it may have been the simple kindness of not letting someone expend a lot of emotional energy trying to relate with a machine.

Just because YOU know it’s a stochastic parrot doesn’t mean you’re going to treat it like one.

So what?

This is of course just a super quick blog post, but I really think it’s important that we consider the profound implications conversational AI agents are going to have on people’s emotional wellbeing. Especially if tech companies are going to put them inside our phones by default.

In addition, agents that exhibit “aliveness” are going to represent a societal challenge – especially ones that include care mechanics – and they are going to raise a lot of questions about the emotional impact of attachment and bonding with virtual companions. Not to mention the implications for property rights.

For me, one key insight from the design of the Tamagotchi is the fact that they come to an end, and that this plays a pivotal role in their design: a sort of safety mechanism to prevent over-attachment, despite the reports of grief over their passing.

It’s worth exploring whether similar lifecycle design patterns should be applied to modern AI assistants to help users maintain healthy emotional distance – something like the sketch below. Either way, the rapid adoption of these technologies is going to reshape the daily lives of many of their users – primarily the elderly and most vulnerable.
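As a thought experiment, here’s a hypothetical sketch of what a Tamagotchi-style lifecycle for an assistant could look like. The stage names and lifespan are all invented, but the key design move is that the ending is declared up front, as part of the contract, rather than arriving as a surprise server shutdown:

```python
from enum import Enum


class Stage(Enum):
    HATCHLING = 0
    COMPANION = 1
    ELDER = 2
    DEPARTED = 3  # the end is part of the design, not an outage


class CompanionLifecycle:
    """Hypothetical sketch: an AI companion with a finite, visible lifespan."""

    LIVING_STAGES = [Stage.HATCHLING, Stage.COMPANION, Stage.ELDER]

    def __init__(self, lifespan_days: int = 90):
        self.lifespan_days = lifespan_days  # known to the user from day one
        self.age_days = 0

    @property
    def stage(self) -> Stage:
        if self.age_days >= self.lifespan_days:
            return Stage.DEPARTED
        # Split the lifespan evenly across the living stages.
        index = len(self.LIVING_STAGES) * self.age_days // self.lifespan_days
        return self.LIVING_STAGES[index]

    def tick_day(self) -> Stage:
        """Advance one day; callers can surface stage changes to the user."""
        self.age_days += 1
        return self.stage
```

The point isn’t the specific numbers; it’s that an honest arc with an ending is a design choice – and, as the Tamagotchi shows, one we already know people can handle.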

We’re going to need to approach this transformation thoughtfully, and we need to be prepared for the super-near future where AI companions are not just tools but entities that evoke real emotional responses in people. Set aside the cool demos and hot takes of the past 24 hours just for a moment and think five minutes into the future with me.

I think discussions around the safety of these technologies shouldn’t be focused on AGI and runaway intelligence and so on – instead we should ask how psychologically safe these tools are going to be (long term) as people integrate them into their lives.

Any thought of regulation is going to have to strike a balance between utility and the very real emergence of emotional connection. As I said earlier, issues that arise from these tools – even with their capabilities as they are right now – might dwarf the downstream negative effects of social media.

Whether we are ready or not, the age of AI companionship is here, and we’re going to have to navigate this era with them, not around them.
