AI writing creates several recurring problems.
Drafts that sound more confident than the thinking behind them. Teams producing more versions than they can properly review. Models being asked to guess standards that have never been made explicit. Prompts being used to compensate for weak source material.
Those problems point in the same direction: reliable AI writing depends on the system around the model, not just the model itself.
HBR calls this 'workslop': AI output that creates more work than it saves, because the cost of checking gets pushed onto colleagues downstream.
A weak AI writing system can feel like it is saving you time when it is really just shunting the work of fact-checking, tone correction and source verification onto someone else.
AI writing systems that work have five things in place:
Define the outcome before starting. Don't ask the AI for 'a staff email'. Instead, write down who will read it, what they already know, and the one thing the piece needs to achieve. Without this, the model picks its own brief.
Write down what good looks like. Most teams have a house standard, but it lives in the heads of senior writers, and the model cannot read your mind. A short reference document (lead with what changed, use plain language, end with one action, draw only on this information set) turns that standard into usable instructions.
Curate a source pack. Left to itself, AI will happily draft from its training data or whatever it finds on the internet. Give it the documents it should draw on instead: what goes in is as important as what comes out.
Separate drafting from review, and name who owns the review. A polished first draft looks closer to final than it is, and that is a trap. If the same person who prompted the draft also signs it off, the review is weaker than it needs to be. Decide in advance who checks the draft, against what standard, and ahead of which publications or releases. That person needs the authority and the time to push back.
Revise and repeat. When results are poor, it is tempting to blame the prompt. It is usually more productive to ask which parts of the surrounding system failed: which context was missing, too vague, or ignored by the model. Fix the system around the AI before you worry about the model itself.
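For teams that template their AI requests, the first three steps can be sketched as a small, repeatable setup. This is an illustrative sketch only, not a prescribed tool; every name, field and rule below is hypothetical:

```python
# Illustrative sketch: one way to combine a brief, a house standard and a
# source pack into a single repeatable drafting request, so the model
# never has to pick its own brief. All names here are hypothetical.

def build_request(brief: dict, standards: list[str], sources: list[str]) -> str:
    """Assemble a drafting request from the brief, standards and source pack."""
    lines = [
        f"Audience: {brief['audience']}",
        f"They already know: {brief['known']}",
        f"This piece must achieve: {brief['goal']}",
        "House standard:",
        *[f"- {rule}" for rule in standards],
        "Draft only from these sources:",
        *[f"- {src}" for src in sources],
    ]
    return "\n".join(lines)

request = build_request(
    brief={
        "audience": "all staff",
        "known": "the old expenses process",
        "goal": "get everyone using the new expenses form by Friday",
    },
    standards=[
        "lead with what changed",
        "use plain language",
        "end with one action",
    ],
    sources=["expenses-policy-v2.pdf", "finance-team-FAQ.md"],
)
print(request)
```

The point of the structure is that the brief, the standard and the source pack are written down once and reused, rather than re-improvised in each prompt.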
These are problems we help communications, content and marketing teams solve.
Teams are under pressure to use AI, but many are dissatisfied with output that feels generic or hard to trust.
The key is building a repeatable setup: how briefs are framed, how source material is gathered, how standards are expressed, how review is structured, and where AI genuinely helps without weakening quality or control.