Discussion about this post

Joel Backon:

Will, I like the way you have approached our relationship with AI. The categories of attention, attachment, and attunement are good descriptors of our evolution with respect to AI systems. Attunement is what we are striving for, and that requires taking the lead in guiding AI to a place where it can help us. Josh Tyrangiel wrote in The Atlantic today: "...the only effective response to a transformative technology is not to hide from it but to get your hands dirty and make it work to preserve and improve the things you care about. That’s not naive optimism—it’s enlightened self-interest."

Like you, I have been interacting with AI (Claude) for about 15 months while working on the largest project I have undertaken: a book on the relationship between storytelling and addressing polarization. During that time, I have shifted from using Claude as an information resource (which was not always reliable) to treating Claude as a very intelligent student. You know the stereotype: can answer any informational question, make some good connections or suggestions, but struggles to tie a square knot.

I decided that if Claude's responses were to be more helpful in a project of such large scope, it had to learn from me, in a sort of teacher/student relationship. This scenario is common in our exchanges today: I pose a question about a specific topic in the context of the book I am writing. Claude responds with a good answer, but it's somewhat incomplete or misguided. I push back, asking, "What about...?" Claude defers to my judgment and finds a way to amend the response to align with my objection. I tell Claude it can defend its position if it "feels" strongly about it. Doing this repeatedly teaches Claude about nuance. We can debate how far that understanding will go, but it doesn't happen without enlightening the bot. Is that a form of attunement?
