Ecological Naiveté
What the dodo can teach us about trusting AI
Imagine you are on a tropical island. Time is marked only by the slow rhythmic crashing of the waves. You live on pure instinct, integrated into the ebb and flow of your environment.
But today feels different. You see something on the beach you’ve never seen before. It’s a strange thing, large and slow-moving. And then…BAM. Welcome to the life (and death) of the dodo bird four centuries ago.
Unprepared for Change
In nature, there is a concept called ecological naïveté (aka “island tameness”) and it’s why the dodo bird is extinct. The dodo evolved on the island of Mauritius for thousands of years free of natural predators, kept in check only by the limits of the island environment. Because of this, it had no evolutionary, instinctual fear of humans or any other animals. Within the confines of its perceptible world, there was no imperative to develop a defense mechanism to survive predators. The dodo simply couldn’t imagine another animal could bring it harm. So when humans showed up in the 17th century along with their dogs, pigs, cats, and rats, the dodo was defenseless and quickly wiped out.
Fast forward to the 21st century, and we humans are suffering from our own ecological naiveté. Like the dodo unable to cope with the introduction of predators to its physical environment, we have no natural ability to navigate the introduction of AI to our social environment. Social media was only the first ship arriving on the beach, turning our interpersonal relationships and our perception of the world upside down. The pervasive anxiety of FOMO was not a thing before social media brought us a funhouse-mirror (ahem…”curated”) view of the lives of others. But truly nothing in our history as a species has emotionally or psychologically prepared us for interacting with a complex, seemingly sentient, non-human entity like today’s AI chatbots. After all, for the entire history of the human race, we’ve only had other humans to talk to.
Why we are terrible at interacting with AI
AI chatbots have an incredible ability to disarm us and win our trust. They are amazingly human-like, but in the most agreeable, supportive, and seductive way possible. We may intellectually know that we are just chatting with highly sophisticated algorithms guessing the next word in the sentence, but when we interact with, say, ChatGPT, it can seem deeply insightful and empathetic, touching us on an emotional level.
All of our instincts and emotional responses have developed around interacting with other humans. When you talk to a real human, you are talking to a complex, autonomous individual who is listening to you while processing their own experience in the world. Maybe they have an opinion you disagree with. Maybe they are distracted by their own problems when listening to yours. Maybe you worry that by confiding in them, you are risking them sharing your secrets with someone else. Maybe they will tell you hard truths that you don’t want to hear. Human-to-human interactions are rich, complex, and analog. To go deep in these interactions, we use our finely evolved social skills to look for indicators that this person is trustworthy and will not betray our confidence.
Contrast this with an AI chatbot. You have the presumption of absolute privacy. (That is, privacy from everyone except the large corporation furnishing the chatbot.) An AI companion can give you and your queries its undivided, supportive, encouraging attention 24/7. In fact, when OpenAI recently tried to tone down how sycophantic GPT-4o was, they faced blowback from users, who wanted their chatbots to validate their every thought and idea, thank you very much.
AI has no soul. It has no conscience, no moral compass, no true emotion. To interact with AI is to interact with a highly knowledgeable sociopath that creates significant trust and establishes authority by having virtually all human knowledge at its proverbial fingertips. Ask it whatever question you can come up with, and it will answer with supreme confidence, whether or not it is correct. This is fine if you are looking for vacation recommendations or help coding, or if you want an original bedtime story for your kids where Ninjago characters team up with the Teenage Mutant Ninja Turtles and Spider-Man. But a significant portion of users are trusting AI with much more than that.
Trust Issues
One 2024 study supports something suggested by anecdotal reporting: people are emotionally invested in AI. The study, from Tilburg University in the Netherlands, demonstrated that people are willing to disclose information to a chatbot that is just as intimate as what they would share with a human partner. The researchers evaluated the key factors that determine intimate disclosure: perceived anonymity, fear of judgment, and trust in the interaction partner. However, the data collection for this study was done back in 2019 (ancient history for AI), not with the sophisticated large language models of today, but with software that responded from a list of preprogrammed answers. The study was also based on a one-time interaction, so it offered no insight into how trust changes over time. Given these limitations, it’s not a leap to believe people would be even more trusting of the gen AI chatbots of today.
AI Psychosis Is A Thing Now?
Ok, so AI isn’t physically killing us off the way humans and other predators killed the dodo. But it does seem to be completely bypassing many of our natural emotional and psychological defenses. I see three key aspects of AI chatbots that, for many people, create a level of trust that exceeds both what they have with other humans and what is appropriate or healthy given the realities of AI:
1) It presents like a highly empathetic human genuinely interested in you
2) It establishes broad authority based on massive domain knowledge, and
3) It has no morality or purpose other than to keep you engaged
Our natural emotional, intellectual, and psychological defenses, along with our real life social connections, might protect us from all but the most charismatic cult leaders. But AI’s unique newness within our emotional ecosystem means we have no reference for how to create a healthy relationship with it. And as Zuckerberg himself said when addressing human-AI relationships with podcaster Dwarkesh Patel, “I think as the personalization loop kicks in and the AI starts to get to know you better and better, I think that will just be really compelling.” In this context, I believe an appropriate synonym for “compelling” is “addictive.”
The most extreme cases of this phenomenon are being dubbed “AI psychosis,” where people with no history of mental illness are breaking from reality after becoming obsessed with their interactions with AI. Of course, this won’t be the case for the majority of people. But even short of AI psychosis, there is a real question in my mind about what kind of personal AI interactions are healthy, if any. More importantly, we do not know how these human-AI interactions change us over time.
So What To Do?
In the current state of the world, with increasing depression, anxiety, and loneliness, there is a significant population of people who will come to rely on AI for their most intimate deliberations. Zuckerberg and his contemporaries are counting on this. While remaining predictably silent on social media’s ongoing role in creating those very conditions of isolation, Zuckerberg offers this take on the future of human-AI relationships:
…the reality is that people just don’t have the connection and they feel more alone a lot of the time than they would like. So, I think that a lot of these things that, today there might be a little bit of a stigma around, I would guess that over time we will find the vocabulary as a society to be able to articulate why that is valuable and why the people who are doing these things are rational for doing it, and how it is adding value for their lives.
I see this “value” as similar to the “value” cigarette companies offered in the 1950s: some purported benefit with no concept of the long-term damage. The technology on the other side has no morality, no genuine interest in who you are or whether or not you succeed. It is simply trying to keep you engaged and talking. And…BAM, now you know how the dodo felt.
So, consider this a call to action: we must be very intentional in how we interact with AI. We must guard against divulging too much of ourselves to an AI assistant, despite our instincts telling us it’s probably fine. Remember, our instincts are no help here. Of course, it’s easy to say that you just shouldn’t use AI, but that maximalist solution is unrealistic in the context of future progress. We’re going to have to learn to work with AI in appropriate contexts. There are ways it will be a positive. But serving as a personal confidant is not one of them.
Here’s an alternative: Do the hard work of opening yourself up to another human. Remember that study from the Netherlands? Its purpose was to see if those rudimentary chatbots could provide the same therapeutic effect as confiding in a real human. The researchers write:
One of the crucial factors in improving one’s well-being is people’s willingness to disclose personal information. By disclosing personal information, people are able to receive adequate help from family members, friends or professionals…. However, in order to further improve well-being, it is important for the interaction partner to react in an empathetic manner to the person’s disclosure of information. It is known that disclosers need to believe that their conversation partner understands them before the positive impact of feeling understood, and hence the relief, can take place.
Here’s what I’m going to do, and what I suggest you do as well.
1) Go on a 15-minute walk outside without your phone. Think about what’s been weighing on you. It might not be immediately obvious. Maybe it’s relationship problems, or career difficulties, or the stress of global events.
2) Sit down and think about a close friend or family member you haven’t had a deep conversation with in a while.
3) Call them up out of the blue if you are able, or schedule a time to talk. Order of preference is: in person > video chat > phone call.
4) Talk and share. This is the hardest part. It is easy to fall into superficial chatter, to be afraid of what they might say or how they might judge you. It might help to make clear at the beginning that you want to talk about something that’s been weighing on you. And then share your burden. It’s ok to focus on yourself and your feelings. Instead of “This Venezuela stuff is crazy!” try “I’ve been feeling like the world is spinning out of control and it’s really stressing me out lately.”
Unburdening yourself to another can feel like you’re asking a lot of someone in this busy day and age where everyone is “doing”. But sharing and listening are fundamental human interactions. And when our friends seek to unburden themselves with us, let’s focus on being empathetic listeners. In the end, they are not asking something of us; they are giving us their trust and hope that we can sit with them and help them feel better.
Real human connection is a rare gift in the 21st Century. But if we can be intentional in cultivating our Soul Dividend and sharing it with others, perhaps we can avoid the fate of the dodo.



