
Lionga | 12 days ago

You’re describing a real coordination problem: over-polished, abstraction-heavy “AI voice” increases cognitive load and reduces signal. Since you don’t have positional authority—and leadership models the behavior—you need norm-shaping, not enforcement. Here are practical levers that work without calling anyone out:

1. Introduce a “Clarity Standard” (Not an Anti-AI Rule)

Don’t frame it as anti-AI. Frame it as decision hygiene. Propose lightweight norms in a team doc or retro:

TL;DR (≤3 lines) required

One clear recommendation

Max 5 bullets

State assumptions explicitly

If AI-assisted, edit to your voice

This shifts evaluation from how it was written to how usable it is. Typical next step: Draft a 1-page “Decision Writing Guidelines” and float it as “Can we try this for a sprint?”

2. Seed a Meme That Rewards Brevity

Social proof beats argument. Examples you can casually share in Slack:

“If it can’t fit in a screenshot, it’s not a Slack message.”

“Clarity > Fluency.”

“Strong opinions, lightly held. Weak opinions, heavily padded.”

Side-by-side: AI paragraph → Edited human version (cut by 60%)

You’re normalizing editing down, not calling out AI. Typical next step: Post a before/after edit of your own message and say: “Cut this from 300 → 90 words. Feels better.”

3. Cite Credible Writing Culture References

Frame it as aligning with high-signal orgs:

High Output Management – Emphasizes crisp managerial communication.

The Pyramid Principle – Lead with the answer.

Amazon – Narrative memos, but tightly structured and decision-oriented.

Stripe – Known for clear internal writing culture.

Shopify – Publicly discussed AI use, but with expectations of accountability and ownership.

You’re not arguing against AI; you’re arguing for ownership and clarity. Typical next step: Share one short excerpt on “lead with the answer” and say: “Can we adopt this?”

4. Shift the Evaluation Criteria in Meetings

When someone posts AI-washed text, respond with:

“What’s your recommendation?”

“If you had to bet your reputation, which option?”

“What decision are we making?”

This conditions brevity and personal ownership. Typical next step: Start consistently asking “What do you recommend?” in threads.

5. Propose an “AI Transparency Norm” (Soft)

Not mandatory, just a norm:

“If you used AI, cool. But please edit for voice and add your take.”

This reframes AI as a drafting tool, not an authority. Typical next step: Add a line in your team doc: “AI is fine for drafting; final output should reflect your judgment.”

6. Run a Micro-Experiment

Offer:

“For one sprint, can we try 5-bullet max updates?”

If productivity improves, the behavior self-reinforces.

Strategic Reality

If the CEO models AI-washing, direct confrontation won’t work. Culture shifts via:

Incentives (brevity rewarded)

Norms (recommendations expected)

Modeling (you demonstrate signal-dense writing)

You don’t fight AI. You make verbosity socially expensive.

If helpful, I can draft:

A 1-page clarity guideline

A Slack post to introduce it

A short internal “writing quality” rubric

A meme template you can reuse

Which lever feels safest in your org right now?


causal | 12 days ago

Very funny