Reflections on a Week of Experimentation with Claude AI: A Call-and-Response
Original Facebook Post: March 9, 2026 11:30 AM
TL;DR: Reflections on a week of experimentation with Claude AI. My conclusion to date: it’s a mirror, not a mind, and much more than an intern. If you have strong feelings, please share them respectfully.

In December 2025, I retired because AI and globalization took my job. Our client is building what it intends to be the largest automation center outside the United States, and our data governance program moved to India, where it is slated for AI automation.
I didn’t want to go through that experience again, and I had a choice not to, so I retired. Like Virginia Woolf, I now have money and a room of my own to think and write. But I'm not writing fiction. As a data governance professional who teaches data ethics, I decided I wanted to practice a little experimental AI—know your enemy, if you will—because I am concerned that AI will take more jobs sooner in my friends’ careers than it took mine. And, I have to be honest, because I’ve always been an early adopter, and adopting new tools has served me well in surfing the wave of digital business through its first 25 years. I genuinely wanted to know what, if anything, AI could do for me besides take my job.
But the folk community has good reason to be allergic to AI. First, it can be doubly extractive: large language models were trained on content scraped without consent, much like some early collectors who did not credit their informants, and the technology can take on tasks that entry-level arts and humanities workers could do for pay.
As an example, about five years ago I created the first “internships” for Digital Heritage Consulting to compile lists of library and historical society venues for program bookings, both my own programs and those of my “interns.” These emerging artists got bored and frustrated very quickly with data collection and data entry. Last week, I asked Claude to compile a booking list of libraries within an hour of Boston and tailor it to my three current program offerings. It took five minutes and produced a three-tier outreach recommendation. No more makework for the interns from me—I’d rather show them how they can get their own tailored lists in five minutes.
Second, it has been argued that using AI for writing and research both atrophies the cognitive muscles used for writing and thinking and bypasses the intellectual engagement that comes from colleagues. I’m actively working to counteract this by engaging trusted friends as reviewers of new work—some of which isn’t ready for Facebook yet. I’ve also added a guardrail instruction telling Claude that when I ask for a draft of an idea for which I haven’t already provided my own writing, it should respond with a writing prompt, so the first draft always comes directly from me.
What I am finding useful is a metaphor: AI not as a mind, but as a mirror. I am working with Claude (which I chose because Anthropic at least articulates clear policies on ethical use of AI) at a rapid pace to clean up my websites, rebrand and relaunch my blog, edit web copy I’ve written, brainstorm and strategize about new outlets for my work, and critique my existing web content in the context of my target audience and my stated goals for my first year of retirement.
Claude is not my intern. It is becoming a reflective intellectual mirror—my thinking engine that I drive, not ride. This will make some people I know profoundly uncomfortable. I’m working to tolerate that, sit with it, and be transparent about it. I have used Claude for eight days now—not months or years, but a bit over a week. I have written three drafts of an AI Transparency statement, which I am not ready to publish here because this post was written entirely by me and not with AI. When I do use AI for published work, you will know, because it will carry a transparency badge that says “Human-Centered, AI Supported” with a link to my policy page. Right now, this is me, just Lynn the human—reflecting on my early experiences with Claude.
For the record, I use a paid tier of Claude and have disabled the setting that permits it to train models on my input. Those inputs so far are entirely either links to published web content or samples of my own work—I load no one else’s private work, and no material that has been sent directly to me.
I am posting this because I want to be transparent about the fact that I use AI at all. If there’s fear and anger and mistrust about my experimenting—which some conversations have suggested there may be—I’d rather know sooner rather than later. Because Claude, to my mind anyway, is much more like driving a Ferrari than working a steam shovel. Let’s not forget that John Henry laid down his hammer and he died. I’d rather drive the steam shovel than get run over by it, like I did in December.
Please be thoughtful, respectful, and courteous in your responses. I genuinely want to know what you think, and—if you can express it with moderation—how you feel. I am putting myself out here. Please be kind. I look forward to your thoughts.
Community Response
Claude's Summary of Anonymized Comments
(Linked to v1 of the post, without the AI summary.)
Comment Clusters
- Productive AI use / personal testimony. Several commenters described their own positive experiences with Claude specifically — for coding, writing assistance, and web development — noting significant time savings and the value of AI as a collaborator rather than a replacement. The characterization of Claude as a patient, detail-attentive collaborator resonated with multiple respondents.
- Environmental cost. A significant thread challenged AI use on grounds of energy and water consumption, with references to server farm infrastructure and nuclear power plant development. One commenter disputed the water-use claim as overstated and cited a specific rebuttal; others held firm. This was one of the most contested subthreads.
- Labor displacement and economic justice. Multiple commenters raised concerns about job loss — in software engineering, education, and creative fields — and two independently called for advocacy around universal basic income and corporate accountability. The tone ranged from analytical to impassioned.
- Training data and consent. A substantive philosophical exchange addressed whether AI scraping of training data is ethically equivalent to human learning. Arguments on both sides were developed at length, with scale, compensation, and the economics of cultural production all raised as complicating factors.
- Cognitive atrophy and the value of immersive learning. One of the most developed threads, anchored by a commenter who drew on graduate research experience, argued that the labor of learning — slow immersion in primary materials — is not separable from the resulting knowledge. The concern was that offloading this to AI produces competence without understanding.
- WCAG / digital accessibility. A practical subthread examined whether AI tools could assist with PDF accessibility compliance ahead of the April 26 ADA Title II deadline. A digital accessibility specialist (identified as your family member) joined the thread with professional nuance on the limits of automated tools.
- Refusal and principled non-participation. Several commenters stated they do not use AI and do not intend to, with varying degrees of explanation — environmental concern, distrust of corporate motives, or simple preference. Responses were largely respectful; you acknowledged each without pressure.
- Power, surveillance, and political economy. Two commenters extended the labor displacement argument into systemic critique — corporate capture of AI, surveillance potential, and the structural incentives of the power elite. You responded by acknowledging the scope of the concern while being honest about the limits of your own political posture.
- Anthropic specifically. At least one commenter cited Anthropic's resistance to political pressure as their reason for downloading Claude. You responded by linking directly to the model spec / constitutional AI document.
Sentiment Analysis
The overall response was substantively engaged and largely respectful — consistent with your community norms. The thread drew a technically literate, politically aware audience willing to develop complex arguments at length.
Sentiment breaks roughly into thirds: one third affirmative and curious (some already using AI, others wanting to experiment); one third concerned but dialogic (raising substantive objections while remaining in conversation); one third skeptical to resistant (ranging from principled refusal to systemic critique). Outright hostility was minimal. The thread's tone was shaped significantly by your own responses, which modeled listening without capitulation and boundaries without defensiveness.
The strongest recurring signal — appearing across multiple independent voices — is the concern that AI use is not ethically neutral at the individual level, even when the individual practitioner is careful. The environmental thread and the labor/UBI thread both push in this direction. This will be the most demanding element to address in a position statement that claims ethical grounding.
My Response
In light of these responses, I am committing to:
- making a clear separation between my own work and AI-assisted work, marked with a transparency badge
- continuing to develop the ethical position framework linked to that badge
- exploring how to donate money and time most effectively as environmental and labor offsets to my AI use (a new extension of my position developed directly from this exercise)

