
Over the past several months, I’ve been using AI more intentionally in my work. Not as a novelty, and not as a shortcut, but as a partner. The experience has been productive, occasionally surprising, and increasingly thought-provoking, especially when it comes to questions of ownership, learning, and ethics.
One recent project involved building documentation for a large internal knowledge base. The audience was broad: clients, release managers, developers, and technical teams. Some content needed to be highly technical and specific; other sections were more general and explanatory. I started with a clear content map: what belonged where, at what level of depth, and for whom. With that structure in place, I used ChatGPT as a writing partner to draft and iterate quickly.
What stood out wasn’t just the speed, but how it changed where I spent my energy. Instead of writing every paragraph from scratch, I stayed at the level of content strategy and design: deciding where a paragraph was enough, where a page was needed, and how different audiences would move through the material. Using AI allowed me to get a working site built and ready for user review in a month, something that would previously have taken much longer.
I’ve had a similar experience while building a facilitated coaching process for leadership teams. The program includes slides, Mural exercises, and short guidebooks, and moves through environmental scanning, persona development, theory of change, secondary research, interview design and analysis, and action planning. Here too, AI helped me stay at the design level. I used it to draft examples, exercises, and explanatory text, which I then refined based on my context and experience.
In this work, we’ve been explicit about when AI can help and when people need to do the thinking themselves. We talk openly about using AI for gathering, summarizing, and theming material and about retaining ownership over learning, sense-making, and judgment. These distinctions matter.
I’ve also been using an AI co-pilot developed by our partner LitheSpeed, trained on value management principles and their book From PMO to VMO. I’ve found it invaluable as a learning accelerator, a way to workshop ideas, deepen my understanding, and think through how to implement practices at a granular level during a transformation initiative. It doesn’t replace experience or context, but it can dramatically shorten the path to insight.
Across all of these examples, one pattern stands out: using AI as a partner allows me to operate at a higher level of thinking, design, and strategy. At the same time, it raises a concern I’m still sitting with, one that goes beyond my own practice.
I gained my 25 years of experience the long way: writing and rewriting, getting feedback from more experienced practitioners, doing the analysis myself, making mistakes, tracing my work back to understand where I went wrong, designing and delivering training, facilitating meetings that didn’t land, and learning—sometimes painfully—when my instincts were off. Through that lived experience, I developed my “spidey sense”: the ability to feel when something in the analysis, the content, or the room just isn’t quite right.
As a senior consultant, I now use AI as a writing and thinking partner. Before, a senior would have worked with a junior consultant, someone gradually building the skills and judgment to eventually step into my shoes. If AI replaces too much of that work, I worry about what happens to succession. How does the next “me” learn these professional skills if we remove the very experiences that built them?
This concern shows up elsewhere too. It’s easy to generate large volumes of content without engaging deeply with it. I’ve seen meeting agendas that look elegant on paper but are impossible to run, and meetings where pages of AI-generated text are dropped into the chat with a casual “let’s see what AI says.” That kind of use is often overwhelming rather than helpful. Humans still have limited attention, energy, and time.
I’m also increasingly aware of the environmental cost of AI: the electricity, water, and rare materials required to power it. Trained in physical and human geography, I feel the tension across our physical and cultural landscapes.
Here are the principles guiding my approach to AI:
- Elevate thinking, don’t avoid it: Use AI to speed drafting, explore ideas, or summarize content, but retain ownership over the analysis, learning, and sense-making.
- Stay intentional about ownership and learning: Ensure people still experience the work that builds judgment and professional skill, especially for succession and mentoring.
- Design for humans, not just systems: AI can produce content fast, but humans remain the ultimate arbiter of practicality, engagement, and nuance.
- Be mindful of what AI might quietly erode: Skills, experience, and judgment can atrophy if AI does too much for us; think of it as a tool to amplify, not replace.
- Consider the environmental and ethical trade-offs: AI consumes significant resources. Reflect on when and how its benefits outweigh the costs.
If you’re wrestling with similar questions, or are curious about how we’re integrating AI thoughtfully into strategy, facilitation, and transformation work, we’d love to talk. Please get in touch at info@alluvial-consulting.com.