
Editorial scheduler

Manual rewrites work for a dozen articles. For 100+ articles, you need a queue and automation. The scheduler provides exactly that, but with hard rate limits so your site looks like it has a human editor, not a bot farm.

How it works

  1. Select articles in the Audit tab → ⚡ Bulk action → 📅 Schedule rewrite + publish
  2. The plugin queues them with a configurable cadence (default: 2 articles per day)
  3. A WordPress cron job runs every hour and processes the queue:
    • Picks the next queued article
    • Runs the AI rewrite
    • Creates a draft (or replaces in-place, configurable)
    • Optionally auto-publishes after a review delay (default: 7 days for drafts, 0 for in-place replace)
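The actual cron handler lives in PHP inside WordPress; this Python sketch (with assumed names like `process_queue` and `QueuedArticle`) illustrates the logic of one hourly tick, the status progression, and the cadence cap:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class QueuedArticle:
    article_id: int
    status: str = "queued"              # queued -> rewriting -> draft created / published
    publish_after: Optional[datetime] = None

def process_queue(queue, published_today, cadence_per_day=2,
                  mode="draft", review_delay_days=7):
    """One hourly cron tick: rewrite at most one queued article,
    respecting the daily cadence cap."""
    if published_today >= cadence_per_day:
        return None                      # cap reached; wait for the next day
    nxt = next((a for a in queue if a.status == "queued"), None)
    if nxt is None:
        return None                      # queue drained
    nxt.status = "rewriting"
    # ... AI rewrite happens here ...
    if mode == "draft":
        nxt.status = "draft created"     # held for review before publishing
        nxt.publish_after = datetime.utcnow() + timedelta(days=review_delay_days)
    else:
        nxt.status = "published"         # in-place replace: default delay is 0
    return nxt
```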

Why max 2 publications/day

Google’s algorithm and Search Quality Raters flag sites that suddenly publish/republish 50 articles in a day as suspicious (signal of AI spam farm or hacked site). The 2/day cap ensures:

  • Spread over time — Google sees gradual content refresh, not a tsunami
  • Editorial credibility — even small blogs with one editor can sustain 2/day
  • Crawl-friendly — Googlebot crawls + indexes naturally
  • Cost control — at $0.005/rewrite + $0.04/image = $0.045 × 2/day = $0.09/day max in AI costs

You can lower this further (1/day, 1/week) but not raise it above 2/day — this is a safety hard-cap.
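A minimal sketch of how the hard cap and the per-day cost estimate fit together (function names are hypothetical; the per-unit prices are the ones quoted above):

```python
MAX_CADENCE_PER_DAY = 2  # safety hard-cap; can be lowered but never raised

def validate_cadence(requested: float) -> float:
    """Allow lowering the cadence (e.g. 1/day) but never exceed the 2/day cap."""
    if requested <= 0:
        raise ValueError("cadence must be positive")
    return min(requested, MAX_CADENCE_PER_DAY)

def daily_ai_cost(cadence_per_day: float,
                  rewrite_cost: float = 0.005,
                  image_cost: float = 0.04) -> float:
    """Worst-case AI spend per day: $0.045 per article at the given cadence."""
    return cadence_per_day * (rewrite_cost + image_cost)
```

At the capped cadence of 2/day this gives the $0.09/day maximum mentioned above.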

Scheduling options

When you click 📅 Schedule rewrite + publish, a modal asks:

  • Mode: Draft (review before publish) or In-place replace
  • Cadence: rewrites per day (1 or 2)
  • Auto-publish delay: 0 = immediate, 7 days = grace period for drafts (default)
  • Window: which days of the week to publish (default: all 7 days)
  • Time of day: when the cron picks up (default: random within working hours UTC)
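The modal's options map naturally onto a small config object. This is an illustrative sketch (field names are assumptions, not the plugin's actual option keys), with the documented defaults:

```python
from dataclasses import dataclass, field

@dataclass
class ScheduleConfig:
    mode: str = "draft"                     # "draft" or "in-place"
    cadence_per_day: int = 2                # 1 or 2
    auto_publish_delay_days: int = 7        # 0 = immediate; 7 = grace period for drafts
    publish_days: tuple = (0, 1, 2, 3, 4, 5, 6)  # Mon=0 .. Sun=6; default: all 7 days
    # time of day is picked by the cron at random within working hours UTC

    def __post_init__(self):
        if self.cadence_per_day not in (1, 2):
            raise ValueError("cadence must be 1 or 2 (2/day is a hard cap)")
        if self.mode not in ("draft", "in-place"):
            raise ValueError("mode must be 'draft' or 'in-place'")
```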

Watching the queue

In the ⏰ Planificateur (Scheduler) tab:

  • Queue tab: pending articles with ETA, status (queued, rewriting, draft created, published, failed), and per-row actions (cancel, prioritize, re-run)
  • History tab: every rewrite (auto + manual) with comparison before/after, original URL, draft URL, redirect status, AI model used, cost

Failure handling

If a rewrite fails (OpenRouter timeout, content too long, etc.), the article moves to “Failed” status with the error logged. You can:

  • 🔄 Retry — re-run with the same model
  • 🪄 Run manually — open the rewrite modal interactively to debug
  • ❌ Cancel — remove from queue

The cron retries failed items 3 times automatically before marking them as terminal failures.
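The retry rule above can be sketched as follows (a simplified Python illustration with assumed field names; the real logic runs inside the WordPress cron):

```python
MAX_RETRIES = 3  # automatic retries before a failure becomes terminal

def handle_failure(item: dict) -> dict:
    """Called when a rewrite attempt errors out (timeout, content too long, ...).
    Re-queues the item until the retry budget is exhausted."""
    item["retries"] = item.get("retries", 0) + 1
    if item["retries"] >= MAX_RETRIES:
        item["status"] = "failed"   # terminal: only manual Retry / Run / Cancel remain
    else:
        item["status"] = "queued"   # the next cron tick will pick it up again
    return item
```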

Reproducible flow

The History tab is your audit trail. For each rewrite, you can see exactly:

  • Original title + content (cached at rewrite time)
  • New title + content
  • AI model used + token consumption + cost
  • Verdict before/after
  • Risk level before/after
  • Whether a 301 redirect was created (slug change)

This is useful for compliance (showing what changed when), debugging quality issues, and rolling back specific rewrites.
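The shape of one History entry, sketched as a plain dictionary (field names are hypothetical, chosen to mirror the bullet list above, not the plugin's actual schema):

```python
history_entry = {
    "original_title": "...",      # cached at rewrite time
    "original_content": "...",
    "new_title": "...",
    "new_content": "...",
    "model": "...",               # AI model used
    "tokens": 0,                  # token consumption
    "cost_usd": 0.0,
    "verdict_before": "...",
    "verdict_after": "...",
    "risk_before": "...",
    "risk_after": "...",
    "redirect_301": False,        # True when the slug changed
}
```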

Combining with the Audit filter “🤖 RETRAVAILLER”

The most efficient workflow:

  1. Run a full audit (📊 Analyse complète)
  2. Filter to 🤖 RETRAVAILLER (AI verdict)
  3. Sort by traffic descending (touch the highest-traffic candidates first)
  4. Select all → 📅 Schedule with 2/day cadence
  5. Review drafts daily, publish good ones

For a 200-article RETRAVAILLER backlog, you get through it in ~100 days while maintaining editorial quality.
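The backlog estimate is simple arithmetic; a small sketch (hypothetical helper) that also accounts for a restricted publishing window:

```python
import math

def backlog_days(articles: int, cadence_per_day: int = 2,
                 publish_days_per_week: int = 7) -> int:
    """Calendar days to drain a rewrite backlog at the capped cadence."""
    slots_per_week = cadence_per_day * publish_days_per_week
    return math.ceil(articles / slots_per_week * 7)
```

At the default settings (2/day, all 7 days), 200 articles take 100 calendar days; publishing only on weekdays stretches the same backlog to 140 days.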

What’s next?