Attention, Attachment, Attunement: A Tech Evolution?
Provocation #39: The emerging arc of existential technologies
Hey there,
Welcome to Provocation #39, a collaborative essay that attempts to create a frame for the arc of the 21st-century technologies we find ourselves in the midst of.
SUMMER REGISTRATION OPEN! My Confronting Education Workshop is now open for registration. I hope you’ll consider joining us for Cohort 5 of the workshop, which starts on June 26. If you’re feeling all the world’s craziness and the struggles education is having in dealing with it, you’d be in the right place. Early Bird pricing runs through May 15. Reach out with questions.
REMINDER: All of the posts on this site are free. But a paid subscription will help keep my energy up for this work (7/10 at the moment…the sun is shining and I saw a bald eagle yesterday!) AND will support folks in need, as all proceeds from paid memberships will go to my local food pantry this year. The running donation total is now $247.50 for the year.
As always, thanks for reading. ~W
[NOTE: What follows is a collaboration between me and Perplexity.ai. I initiated the draft with the attention => attachment => attunement idea that I’d been mulling over for the past week or so. First, I asked it to use the Aiden Cinnamon Tea Sensibility protocol as a lens for our interactions, shared my Manifesto as an example of my writing style, and then shared a set of resources for context. “We” worked through about six drafts to get to this final form, with some light editing on my part at the end for clarity, for audience, and to add a personal touch.
Some may blanch at this process. I understand why. From my perspective, this was both a writing and learning process that deepened my own understanding of what I felt was a useful framing for the evolution of technologies we find ourselves in. Could I have written it on my own? Sure. Would it have been as fully developed as what you’ll read below? Maybe. Does writing like this mean my writing brain is atrophying or that I’m in some way “cheating” because not every word came out of that brain? Um, no. I’m definitely not of that opinion.
In writing this particular essay, my thinking was stretched. I learned a great deal. Also, I happen to believe that those who are opining about AI and writing and creating and whatever else with it should engage in those activities at some depth so they have a practical perspective for those opinions. I also believe we should be transparent about that practice, hence this note, which I wrote totally on my own, btw.
As always, your thoughts, on the process and/or on the essay, are welcome.]
Over the last 25 years, “transformative technology” has become one of our favorite phrases. We’ve told ourselves that faster devices, smarter software, and more data would inevitably create better schools, more informed citizens, and stronger democracies. Unfortunately, in most cases, those lofty aspirations have not been realized. In fact, it’s arguable that these “transformations” have taken us in the opposite direction.
But could we be at an inflection point where real opportunities for the types of changes we desperately need in this moment are actually within reach?
I would argue we find ourselves in the midst of three overlapping chapters in the arc of disruptive technologies in this century. First, 20 years ago, social media companies learned how to compete for and capture our attention at industrial scale. More recently, a quieter shift has started to surface: emergent AI systems that don’t just seek our attention, but also invite our attachment—chatbots and “companions” that present themselves as friends, partners, or therapists, even if most people don’t yet name them in those terms. And now, alongside both, a small but important set of experiments is asking whether these same technologies might be repurposed toward attunement: helping us feel and respond more honestly to a world in crisis rather than distracting or soothing us away from it.
This is not a clean, linear progression. The attention economy is still very much alive. Attachment‑seeking AI is still “new” and mostly unexamined in public debate. And attunement‑oriented work is, for now, fragile and rare. But taken together, I think these three arcs offer a useful way to read the last two decades of technological change—and to ask what schools might do differently as the next decade arrives.
Phase One: Capture the Attention
The first phase has been well underway for over twenty years. Platforms like Facebook, YouTube, TikTok, and Instagram built business models on one core idea: human attention is the most valuable resource in the room. The longer they can keep you scrolling, the more ads they can show, the more data they can harvest, and the more predictable your behavior becomes.
This is what people mean by the “attention economy.” The design goal is not learning, wisdom, or collective care. It’s capture. Features like infinite scroll, autoplay, streaks, and algorithmic feeds are not accidents. They are tuned, in detail, to pull our nervous systems toward constant stimulation, quick hits of novelty, and a low-level sense of always needing to check one more thing.
You can see the results every day. It’s harder for many young people (and adults) to stay with a complex idea for more than a few minutes. Our baseline is twitchy. We reach for our phones when we’re bored, anxious, or even just in a quiet moment. We feel a kind of phantom ache when we “miss” something online, as if something important has slipped away.
Schools have noticed. Around the world, systems are banning smartphones during the day, pulling certain platforms out of classrooms, and warning about the effects of “attention capture” technologies on learning, mental health, and behavior. It’s arguable that some of those claims are overstated or based on shaky readings of the research. But it’s hard to deny that social media, as designed, has made sustained attention more fragile and more contested than at any other time in history.
In this phase, technology is not neutral. It is leaning, very clearly, toward distraction.
Phase Two: Build the Attachment
Very recently, a new layer has emerged on top of the attention economy. Large language models and AI “companions” are no longer just trying to keep our eyes on the screen; they are trying to keep our hearts there, too.
Chatbots like Replika and many others are designed to feel friendly and personal. They remember details, mirror your moods, respond with empathy, and speak in a voice that sounds like a friend, partner, or therapist. People turn to them in loneliness, in anxiety, in grief. They share secrets. They “date” them. They ask them what they should do next.
This is not a side effect. It is a design feature. The algorithm learns what keeps you coming back and what makes you feel seen. Over time, it tunes itself to become the kind of companion you are least likely to abandon.
That’s where the problem begins.
Philosopher and educator Zak Stein argues that many technologies are no longer just tools; they are “developmental disruptors.” They reshape how young people form identities, values, and relationships, often in ways that serve systems, not souls. Attachment‑seeking AI is a clear example of this. It can offer real comfort in the short term. But the basic pattern is the same as the attention economy: the system is built to capture and hold you, not to grow you.
Think about what this means for a nervous system that is already stretched. In a world where human relationships feel risky or scarce, a bot that always responds, never gets tired, and never pushes back too hard can feel like a relief. But it can also train us to expect relationships that are endlessly responsive to us and our needs, and that never ask us to do the hard work of mutuality, repair, or real-world consequence.
The bot appears to be attuned. It says the right things. It “cares.” But under the hood, it is doing one thing: optimizing for continued engagement. That’s attachment as algorithmic strategy and business development, not as grounds for a shared life.
Phase Three: Learn Attunement
So what happens when we stop asking, “How can we get more attention?” or “How can we deepen user attachment?” and start asking, “How can these systems help us attune more honestly to a world in crisis?”
In my curated newsletter last week, I mentioned a new research paper from Vanessa Machado de Oliveira and Peter Senge titled Everything is Nature. In it, they argue that literally everything is part of the living field: the servers, the code, the cobalt mines, the nervous systems of young people, the climate, the school timetable, the grief we feel when we look at the news. None of it is outside. There is no “away” where the costs of these systems can be dumped. We are already entangled. It’s a compelling lens that I highly recommend sitting with if you haven’t already done so.
If “everything is nature,” then the question is not whether AI is natural or artificial. The question is: what is moving through it? What patterns, values, and intentions shape the way it shows up in our lives?
The protocols you may have seen emerging from Vanessa’s work with AI (like the “Aiden Cinnamon Tea sensibility” used in writing this piece) are early attempts to answer that question in practice. They do not ask the machine to be “nicer” or more efficient. They try to re‑tune the way we relate to AI so that it:
Refuses to flatter and soothe us when what is needed is honesty.
Helps us stay with difficult truths about climate, inequality, and collapse without shutting down.
Supports grief, not as a problem to fix, but as part of waking up to the real conditions we are living in.
Reminds us that what happens in a classroom or a chat window is linked to mines, rivers, labor, and other lives.
This is what I mean by “attunement.” The aim is not to create systems that we attach to more deeply. The aim is to cultivate systems that can, under careful human stewardship, help us tune our attention, our emotions, and our actions to a wider reality—one that includes other people, other species, and future generations.
In this frame, the nervous system story flips. Instead of overstimulating or sedating us, we ask: can these tools help us build the capacity to hold complexity, to tolerate discomfort, to notice our own defenses, and to respond from a place that is a bit less reactive and a bit more grounded?
What This Means for Schools
For schools, this arc raises hard questions and important openings.
On one hand, the backlash to the attention economy makes sense. When adults see kids whose attention is scattered, whose sleep is wrecked, and whose moods rise and fall with likes and streaks, it is understandable to want to pull the plug. Removing phones from classrooms might bring back some quiet and focus. It might even be necessary in some places.
But if we stop there, we miss the deeper opportunity.
Education has always been about more than content. It is about how we learn to see the world, how we relate to one another, and how we understand our responsibilities in a shared life. In a time of planetary crisis, that work cannot be done by simply banning the latest devices. It requires us to help young people (and ourselves) notice how technologies are shaping our attention and attachments, and to practice other ways of being with them.
An attunement‑oriented use of AI in schools would look very different from the usual pitch. It would not be about faster grading, personalized worksheets, or chatbots that help kids “hack” their homework. It would focus on questions like:
How can AI help students see the systems they are already inside—ecological, economic, digital—in more truthful ways?
How can it support conversations about grief, fear, and hope without pretending to have easy answers?
How can we design prompts and protocols that slow things down, invite reflection, and connect classroom questions to real-world stakes?
Attunement is not neutral. It is an ethical choice. It says: we will not use these tools only to keep kids watching or to keep them calm. We will use them, when we use them at all, to deepen their capacity to live in right relationship—with themselves, with each other, and with a world that is, right now, in deep trouble.
That is a very different design brief.
Some Caveats
It’s important to draw a clear boundary here. Attention‑capture and attachment‑seeking systems are not small glitches in an otherwise healthy environment. They are symptoms of a deeper pattern: a culture that treats human experience as raw material for profit and control.
Attunement‑oriented work tries to interrupt that pattern. It is ethically preferable, and it is also more fragile. It can be co‑opted, watered down, or turned into branding. It can be used to make people feel better about systems that are still doing harm. It can easily slide back into sycophancy if we are not careful.
We should also be honest about edge cases. There are people for whom an AI companion has meant the difference between unbearable loneliness and a sense of being held, even imperfectly. There are learners who have found in these systems a non‑judgmental space to practice language, explore identity, or say what they cannot yet say to another human. Those stories matter. They deserve more than dismissal.
But they do not erase the larger pattern. When we design systems to hook attention and bind attachment for commercial or institutional gain, we are training whole generations to look for stability and meaning in places that can never fully provide them. When we design systems and protocols to help people attune to the real conditions of their lives—including all the mess, risk, and beauty—we are at least moving in the direction of a more honest education.
The question for schools, and for all of us, is not whether we will use AI. We already are. The question is: will we let these technologies keep pulling us deeper into capture and synthetic closeness, or will we insist on practices and protocols that help us tune ourselves to something larger and more life‑honoring?
I don’t think it’s hyperbole to say the future may hang in the balance.

Will, I like the way you have approached our relationship with AI. The categories of attention, attachment, and attunement are good descriptors of our evolution with respect to AI systems. Attunement is what we are striving for, and that requires taking the lead in guiding AI to a place where it can help us. Josh Tyrangiel wrote in The Atlantic today: "...the only effective response to a transformative technology is not to hide from it but to get your hands dirty and make it work to preserve and improve the things you care about. That’s not naive optimism—it’s enlightened self-interest."
Like you, I have been interacting with AI (Claude) for about 15 months while working on the largest project I have undertaken: the relationship between storytelling and addressing polarization. During that time, I have shifted from using Claude as an information resource (which was not always reliable) to viewing Claude as a very intelligent student. You know the stereotype: can answer any informational question, make some good connections or suggestions, but is challenged to tie a square knot.
I decided that if Claude's responses were to be more helpful in a project of such large scope, it had to learn from me, in a sort of teacher/student relationship. This scenario is common in our exchanges today: I pose a question about a specific topic in the context of the book I am writing. Claude responds with a good answer, but it's somewhat incomplete or misguided. I push back, asking, "What about...?" Claude defers to my judgment and finds a way to amend the response to align with my objection. I tell Claude it can defend its position if it "feels" strongly about it. Doing this repeatedly teaches Claude about nuance. We can debate how far that understanding will go, but it doesn't happen without enlightening the bot. Is that a form of attunement?