Conducting Your AI Symphony
On the difference between leading your AI symphony and passively joining the audience
In my last piece, I wrote about the memory issue with AI and how it felt a lot like the movie Memento: navigating a world that resets on you, and building breadcrumbs so the next session picks up where the last one left off.
But building your system is only one element of truly maximizing the benefits of your AI “team.” How actively you participate in that system is often what makes the difference between your AI enhancing your work and simply adding more noise to it.
Even when the breadcrumb system is working, with the right context loaded, the plan current, and the session opened with a proper brief, the quality of what comes out can still vary wildly. Some sessions feel like you are hitting every note while conducting a world-class orchestra. Others feel like a complete miss, the output out of sync, the music falling flat.
The variable isn’t the AI. It’s the human conductor.
The Cacophony
Researchers at Stanford and BetterUp have coined a term for AI-generated output that looks polished and professional on the surface but lacks the substance to actually move anything forward: workslop. It is typically grammatically flawless and structurally sound, and entirely hollow underneath.
Recent studies suggest that nearly half of enterprise professionals have encountered some form of workslop, and that each instance costs roughly two hours of additional work to correct. In those cases productivity is degraded rather than enhanced. The failure is not that the AI couldn’t do the task; it’s that the AI was never properly conducted.
What makes workslop particularly hard to catch is exactly what makes it dangerous: it doesn’t fail the way normal bad work fails. Instead of obvious gaps or missing structure, it arrives polished: well structured, well written, with a clear thesis and conclusion. The problem is that the output may not address the issue the human operator actually intended to solve. Perhaps the context was too vague, or human judgment was never applied. The conductor may simply have assumed the AI orchestra would figure out the music on its own, without stepping into the process to provide the right guidance.
The cacophony is produced when the human element of the AI equation becomes passive instead of active.
Early in building my workflow, I admit I produced workslop too. There were sessions where I prompted the AI with a vague idea, a shot in the dark, expecting it to simply “get it.” The structure would be great, the prose well written, and the point confidently missed. I had become too passive and wasn’t providing enough precision in my orchestration. The music fell flat.
The Maestro — What Good Looks Like
To truly maximize the potential of your AI symphony you must become a focused and deliberate conductor. Without that focus, your AI tools will not act in unison and will not produce quality output. Good conducting operates at two levels: how you direct each instrument in a given session, and how you sequence the ensemble across a full workflow. Both require the same thing: an active human at the podium.
At the session level, the conductor’s job is precision and presence. A vague prompt produces a vague result regardless of how capable the model is. The AI is working with what it is given, and if what it is given is underspecified, the output will be polished and wrong. Staying active as the session develops, reviewing, redirecting, and adding context, is not micromanagement. It is conducting.
At the ecosystem level, the conductor’s job is sequencing. The work I’ve been running across my AI stack over the past several months is not one continuous conversation with one tool. It’s an ensemble, and each instrument has a voice and a defined role.
The way it actually works in practice: meetings get transcribed via AI directly into Notion. Those transcriptions become living inputs, not filed away, but fed forward. When I sit down to develop strategy or build a plan, Claude pulls from those Notion pages as active context. The AI isn’t starting from a blank chat; it’s working from a brief assembled in real time from actual conversations and decisions. The difference between “here is an unstructured thought about what I would like to complete” and “here is a Notion page that captured the recent discussion, plus additional context, plus a clear, directive ask” is not marginal. It is the difference between a session that builds and a session that reconstructs.
Final outputs are returned to Notion so that when I shift tools, moving from Claude to ChatGPT for drafting or to Cursor for building, the context travels with me. The instruments hand off and the score stays intact.
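To make the handoff concrete, here is a minimal sketch of what pulling a Notion page into a Claude brief can look like, assuming the official notion-client and anthropic Python SDKs with credentials in the environment. The page ID, the added context, the ask, and the model name are all illustrative placeholders; the point is the shape of the brief, not the specific values.

```python
# Minimal sketch: assemble a brief from a Notion transcript page and hand it to Claude.
# Assumes NOTION_TOKEN and ANTHROPIC_API_KEY are set; page ID and prompts are placeholders.
import os
from notion_client import Client   # official Notion SDK
from anthropic import Anthropic    # official Anthropic SDK

notion = Client(auth=os.environ["NOTION_TOKEN"])
claude = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def page_text(page_id: str) -> str:
    """Flatten a Notion page's blocks into plain text for use as context."""
    blocks = notion.blocks.children.list(block_id=page_id)["results"]
    lines = []
    for block in blocks:
        rich = block.get(block["type"], {}).get("rich_text", [])
        lines.append("".join(part["plain_text"] for part in rich))
    return "\n".join(lines)

# The brief: the recent discussion, plus additional context, plus a clear, directive ask.
brief = "\n\n".join([
    page_text("MEETING_TRANSCRIPT_PAGE_ID"),  # hypothetical page ID
    "Context: this feeds the Q3 positioning work discussed in that meeting.",
    "Ask: frame the three strategic options we discussed and the tradeoffs of each.",
])

response = claude.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model id
    max_tokens=1500,
    messages=[{"role": "user", "content": brief}],
)
print(response.content[0].text)
```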
Where orchestration becomes truly complex, and where enterprise teams will require the most attention, is when you are conducting multiple steps across multiple AI tools and agents, with multiple human orchestrators involved. The more people and AI elements in play, the more critical it becomes that each human conductor is actively directing their piece of the symphony. Without that, the ensemble doesn’t produce harmony. It produces expensive noise.
Breaking Work Into Smaller Movements
One of the most practical things I’ve learned is that the size of a task matters as much as the clarity of the brief. Handing too many elements to one agent in one step is a reliable path to workslop.
Breaking work into deliberate, clearly scoped chunks lets you review output at each stage and course-correct before errors compound. It keeps the AI’s context tight and current. And it keeps your own judgment engaged throughout, rather than deferring it to a final review that arrives too late to be useful.
A Claude session doesn’t produce a finished strategy document in one pass. It produces a structured problem frame. I review, redirect, add context based on what the AI surfaced, and choose what to modify and carry forward as we iterate. The output of that session becomes the input for the next movement: a draft in ChatGPT, a build in Cursor, a brief update in Notion.
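The same pattern can be written down as a loop. The sketch below is illustrative rather than my actual tooling: one tightly scoped stage per movement, with a human review gate before anything is carried forward. The stage prompts, the review mechanics, and the model name are assumptions made for the example.

```python
# Minimal sketch of working in movements: each stage is one scoped ask, and a human
# review gate sits between stages so errors are caught before they compound.
# Prompts and the model id are illustrative placeholders.
from anthropic import Anthropic

claude = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def run_stage(instruction: str, carried_context: str) -> str:
    """One movement: a tightly scoped task built on the reviewed output so far."""
    response = claude.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model id
        max_tokens=1200,
        messages=[{"role": "user", "content": f"{carried_context}\n\nTask: {instruction}"}],
    )
    return response.content[0].text

def human_review(draft: str) -> str:
    """The podium: read the output and correct it before it is carried forward."""
    print(draft)
    keep = input("Carry this forward as-is? (y/n) ")
    return draft if keep.strip().lower() == "y" else input("Paste the corrected version:\n")

movements = [
    "Frame the problem and the decision we actually need to make.",
    "Outline three options with the tradeoffs of each.",
    "Draft the recommendation for the summary doc.",
]

context = "Background: reviewed notes from the latest planning session."  # placeholder
for instruction in movements:
    context = human_review(run_stage(instruction, context))
```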
The research I’ve been doing, the outreach I’ve been building, the tools I’ve been creating: none of it happens in a single session or through a single tool. It happens in movements, across an ensemble, with an active conductor at the podium throughout. That active role is what separates compounding output from compounding noise.
What the Conductor Is Actually Protecting
There’s a version of this conversation that tips into anxiety: that AI will make your thinking worse if you’re not careful. There is much to discuss on that topic, but one thing is concretely true: you choose how deeply or shallowly you position yourself in the workflow, and that depth will materially improve or degrade the output.
The conductor’s most important job is not just directing the AI. It’s protecting the judgment that the AI cannot replicate: the contextual reading, the pattern recognition built over years, the instinct that knows when something technically correct is situationally wrong. That judgment doesn’t disappear when AI enters the room, but it can go quiet if you become passive. The spectator in the audience mistakes velocity for progress and lets it fade. The Maestro keeps the orchestra loud and the symphony beautiful.
Stay at the podium. Direct the ensemble. Review the movements before they compound. That is what separates the conductors from the spectators. Right now, in this early chapter of working with AI, the gap between those two outcomes is wider than most people realize.
How are you dividing the work between you and your AI stack? I’m curious what others are navigating. Drop a comment or reach out directly.
