Files First, Always

A Safe Way to Edit Database Content With AI Coding Tools

How a pull/diff/push CLI workflow bridges the gap between database content and file-based AI editing tools like Claude Code.


The "Just This Once" Trap

You're fixing a typo in a blog post. The word "recieve" is staring at you from the database. You know the fix. One UPDATE statement and it's done.

UPDATE blog_posts SET content = REPLACE(content, 'recieve', 'receive') WHERE id = 42;

Clean. Fast. And quietly dangerous. That query skipped cache invalidation, left no audit trail, and if someone else was editing the same post, their changes just got overwritten with no warning.

How I Learned This the Hard Way

The incident that changed my mind. I was editing a blog post's content through a quick MySQL dump-and-reload workflow. Pushed my edit back to the database. Everything looked fine. Three days later I noticed the meta description was wrong. I'd updated it through the admin panel earlier that same day, forgotten about it, and then my CLI push silently overwrote it with the stale version I'd dumped hours before. Classic self-inflicted wound, completely invisible until the damage was done.

That's when I realized content fields aren't like other columns. A status field is a single value. A published_at date is simple to reason about. But a content field holding 2,000 words of markdown sits at the center of everything that can go wrong with a blind UPDATE:

Your cache doesn't know. Content fields have rendered HTML cached with a 24-hour TTL. Update the source markdown without clearing the cache, and the old version keeps serving. No error, no warning, just stale content.

Your other edits don't know. That admin panel update at 2:15pm? Gone. Your CLI push at 2:30pm replaced the whole record. You won't notice until days later when the wrong meta description is staring at you from Google's search results. Like mine was.

You don't know. You're overwriting the entire content of a published article without seeing what changed. Hope you didn't accidentally delete a paragraph.

It turns out that designing for AI-assisted editing and designing for safe editing are the same problem. Both want a local file to work with, a diff before committing, and protection against stale overwrites. Building for one gave me the other for free. That insight shaped everything that follows.

The Pattern Claude Code Taught Me

I use Claude Code for most of the development on this site. And early on, I ran into an obvious friction point: Claude Code is brilliant at editing files. It can read them, do surgical str_replace operations on specific paragraphs, rewrite entire sections. But a 2,000-word blog post sitting in a MySQL TEXT column? It can't "see" that the way it sees a file.

I asked Claude Code about this, and it gave me the simplest possible advice: extract, edit as file, write back.

The quick-and-dirty version looked like this:

# Dump content to a file
mysql -u root -p mydb -e "SELECT content FROM blog_posts WHERE id=42" \
  --raw --skip-column-names > /tmp/post-42.md

# ... let Claude Code edit the file ...

# Push it back
mysql -u root -p mydb -e "UPDATE blog_posts \
  SET content=LOAD_FILE('/tmp/post-42.md') WHERE id=42"

And honestly? It worked. Claude Code could finally read the post, suggest edits, fix formatting, rewrite sections, all using its native file-based workflow. The content was just a .md file now, and that's exactly the format it's built for.

Don't stop here. I'm showing you this because it illustrates the idea, not because it's a solution. This approach has real problems:

  • No cache invalidation. Your app is still serving the old rendered HTML.
  • No stale detection. If someone edited the post while you were working on the file, LOAD_FILE just overwrites their changes silently.
  • LOAD_FILE needs special MySQL permissions (FILE privilege and secure_file_priv configuration). On most setups, it won't even work.
  • No diff preview. You're writing the entire file back blind.
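Worth noting: the FILE-privilege problem in particular has a simple fix even at this quick-and-dirty stage. Read the file on the client and bind its contents as a query parameter instead of asking the server to LOAD_FILE it. A minimal sketch in Python, using the stdlib sqlite3 module as a stand-in for a MySQL driver (with MySQL you'd use something like PyMySQL, but the parameterized UPDATE has the same shape; table and column names mirror the example above):

```python
import sqlite3

# Stand-in table so the sketch is self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE blog_posts (id INTEGER PRIMARY KEY, content TEXT)")
conn.execute("INSERT INTO blog_posts VALUES (42, 'old body')")

def push_file(conn, file_text, post_id):
    """Send the edited file's contents as a bound parameter.

    The client does the reading, so no FILE privilege or
    secure_file_priv configuration is involved on the server.
    """
    conn.execute("UPDATE blog_posts SET content = ? WHERE id = ?",
                 (file_text, post_id))
    conn.commit()

push_file(conn, "## Edited markdown\n", 42)
```

This fixes exactly one of the four bullets. The cache, stale-detection, and diff problems remain, which is why the proper tool exists.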

This is the version that cost me that meta description. So I built a proper one.

The Pull/Diff/Push Workflow

The concept stayed the same: extract to file, edit, write back. But I wrapped it in a CLI tool (cli/content.php) that handles all the things raw SQL can't.

Pull: Database to File

php cli/content.php pull blog 42

This creates storage/content/blog-42.md with YAML frontmatter containing metadata and the raw markdown below it:

---
id: 42
type: blog
title: "Your Article Title"
slug: "your-article-slug"
status: "published"
meta_title: "SEO Title Here"
meta_description: "SEO description here."
---

## Your Markdown Content

Starts here, exactly as stored in the database...

The tool records a timestamp in .pulled.json, tracking when you grabbed this snapshot. That timestamp becomes important later.
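The pull step is mostly serialization. The actual tool is PHP, but the shape is easy to sketch in Python (the `row` dict stands in for a database record, the field list mirrors the frontmatter above, and the in-memory `stamps` dict stands in for `.pulled.json`):

```python
from datetime import datetime, timezone

def render_pull(row, metadata_fields):
    """Serialize a record as YAML frontmatter plus the raw markdown body."""
    lines = ["---"]
    for field in metadata_fields:
        value = row[field]
        # Quote strings the way the example file does; leave numbers bare.
        lines.append(f'{field}: "{value}"' if isinstance(value, str)
                     else f"{field}: {value}")
    lines.append("---")
    lines.append("")  # blank line between frontmatter and body
    return "\n".join(lines) + "\n" + row["content"]

def record_pull(stamps, filename):
    """Record when this snapshot was taken, for stale detection on push."""
    stamps[filename] = datetime.now(timezone.utc).isoformat()
    return stamps

row = {"id": 42, "type": "blog", "title": "Your Article Title",
       "content": "## Your Markdown Content\n"}
text = render_pull(row, ["id", "type", "title"])
stamps = record_pull({}, "blog-42.md")
```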

Edit: Your Tool, Your Way

The .md file is just a file. Edit it in VS Code, Vim, or hand it to Claude Code. The frontmatter fields are editable too. Change the meta_description, update the title, fix the content, all in one place.

One design choice worth mentioning: the CLI normalizes \r\n line endings to \n on both pull and push. That might seem like a minor detail, but when your editing tool runs on WSL and produces Windows-style line endings, every single line shows up as "changed" in the diff without that normalization. Small thing, big quality-of-life improvement.
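The normalization itself is one line, but it's the difference between a readable diff and pure noise. A sketch of the behavior described:

```python
def normalize_newlines(text: str) -> str:
    # Collapse Windows-style \r\n endings to plain \n so a file edited
    # on WSL doesn't show every line as "changed" in the diff.
    return text.replace("\r\n", "\n")
```

Applying it on both sides of the comparison, the database value and the file, is what keeps the diff honest.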

Diff: See Before You Commit

Before pushing anything, preview exactly what changed:

php cli/content.php diff blog-42.md

The output shows metadata changes and a line-by-line content diff, color-coded in the terminal:

[meta_description]
  - DB:   Old description that was too long.
  + File: Concise new description under 160 chars.

[content]
  Summary: 3 changed, 1 added, 0 removed line(s)
  - The old paragraph with the typo recieve
  + The fixed paragraph with receive

No surprises. You see exactly what you're about to change before it touches the database.
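Nothing exotic is needed to produce that output. A minimal Python sketch of both halves, metadata comparison and line-by-line content diff (the real tool is PHP; stdlib `difflib` stands in here for whatever diff routine it uses):

```python
import difflib

def diff_metadata(db_meta: dict, file_meta: dict) -> dict:
    """Frontmatter fields whose file value differs from the DB value."""
    return {key: (db_meta.get(key), value)
            for key, value in file_meta.items()
            if db_meta.get(key) != value}

def diff_content(db_text: str, file_text: str) -> list:
    """Line-by-line content diff in unified-diff style."""
    return list(difflib.unified_diff(
        db_text.splitlines(), file_text.splitlines(),
        fromfile="DB", tofile="File", lineterm=""))

changes = diff_metadata(
    {"meta_description": "Old description that was too long."},
    {"meta_description": "Concise new description under 160 chars."})
lines = diff_content("The old paragraph with the typo recieve\n",
                     "The fixed paragraph with receive\n")
```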

Push: File to Database (With Safety Nets)

php cli/content.php push blog-42.md --confirm

The --confirm flag is intentional friction. Without it, the tool just shows the diff and stops. You have to explicitly opt in to the write.

But the real safety feature is stale edit detection. Remember that timestamp from the pull step? Before pushing, the CLI compares it against the current updated_at in the database. If someone (or something) modified that record since you pulled it, the push fails:

[ERROR] Stale edit detected!
  Pulled at:    2026-03-15 14:00:00
  DB updated:   2026-03-15 14:15:00
  Content was modified since you pulled it.
  Pull again to get the latest, or use --force to overwrite.

This catches the exact scenario that bit me. The --force flag exists as an escape hatch, but you have to consciously choose to use it.

After a successful push, the tool clears the relevant caches automatically. Blog posts clear blog_* and home_* caches. Tools clear tools_* and home_*. No manual cache busting required.
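Put together, the push-side safety logic fits in a few lines. A hedged sketch of the decision flow (timestamps as ISO strings for simplicity; the cache-pattern map mirrors the blog_*/home_* and tools_*/home_* pairs just described, and the actual write is elided):

```python
# Which cache key patterns each content type invalidates after a push.
CACHE_PATTERNS = {
    "blog":  ["blog_*", "home_*"],
    "tools": ["tools_*", "home_*"],
}

class StaleEditError(RuntimeError):
    pass

def push(content_type: str, pulled_at: str, db_updated_at: str,
         force: bool = False) -> list:
    """Refuse to overwrite a record that changed after it was pulled.

    Returns the cache patterns to clear once the write succeeds.
    """
    if db_updated_at > pulled_at and not force:
        raise StaleEditError(
            f"Stale edit detected! Pulled at {pulled_at}, "
            f"DB updated {db_updated_at}. Pull again or use --force.")
    # ... perform the UPDATE here ...
    return CACHE_PATTERNS[content_type]
```

Same-format ISO strings compare correctly as plain strings, which keeps the stale check a single comparison.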

Simple metadata fields like status, title, and published_at are still fine for direct SQL. The workflow is for large text fields with cached renders and downstream dependencies.

The Architecture Under the Hood

The tool is a single PHP file, about 600 lines, that bypasses the normal Service layer. That's a deliberate choice. Services in this project require CSRF validation and HTTP context that don't exist in a CLI environment. Instead, the CLI calls Model::update() directly and handles cache invalidation manually.

The content type configuration is declarative. Adding a new content type means adding one entry:

const TYPE_CONFIG = [
    'blog' => [
        'model' => BlogPost::class,
        'content_field' => 'content',
        'metadata_fields' => ['title', 'slug', 'excerpt', ...],
        'title_field' => 'title',
    ],
    // Add new types here
];

Cache patterns follow the same approach, mapped per content type so the tool knows which caches to invalidate after a push.

The whole tool took an afternoon to build. The raw MySQL version worked in ten minutes. The difference is stale detection, cache invalidation, and diff preview: three features that prevent every "oh no" moment the hack would eventually cause. The number of "I just need to run a quick UPDATE" moments it's prevented since? Lost count.

Making Claude Code Enforce It

A CLI tool is only useful if you remember to use it. I kept catching myself reaching for a quick UPDATE on a content field. Old habits.

The fix was making Claude Code itself enforce the workflow. In my project, I have a skill (a reusable instruction file) that Claude Code loads whenever content editing comes up. The skill says: "When editing large text fields (blog content, tool description, guide description), ALWAYS use cli/content.php pull/edit/push workflow. Never use raw SQL UPDATE on content fields."

That same rule lives in the project's CLAUDE.md file, the instruction document Claude Code reads at the start of every session. Between the skill and the project rules, Claude Code won't let me take shortcuts even when I ask nicely. It pulls the content to a file, makes the edits there, shows me the diff, and pushes with the safety nets intact.

This is the part that surprised me most. The workflow isn't just a tool I built. It's a constraint I taught my AI assistant to enforce on me. The skill handles the "how" (which CLI commands to run, in what order), and the project rules handle the "when" (any time a content field is involved, no exceptions).

If you're setting up something similar, the key is making the rule absolute. Not "prefer the CLI tool" or "consider using the workflow." Just: these fields are off limits for raw SQL. Period. Claude Code respects hard rules better than suggestions.

Get Personalized Help

Copy this prompt to ChatGPT, Claude, or your favorite AI assistant. Fill in your details and get guidance tailored to your specific situation.

I read the pull/diff/push workflow for database content at https://ivanmisic.net/blog/tech/safe-way-edit-database-content-ai-coding-tools and want to build the same thing for my project. The goal is safer edits but also faster and cheaper AI-assisted editing since working with local files means less back-and-forth and fewer tokens burned.

My setup:
- Tech stack: [YOUR STACK - e.g., Node.js/PostgreSQL, Python/Django, Ruby on Rails, Laravel]
- Where my content lives: [DESCRIBE - e.g., MySQL TEXT columns, MongoDB documents, headless CMS API, WordPress database]
- AI tool I use for coding: [TOOL - e.g., Claude Code, Cursor, GitHub Copilot, Windsurf]
- Content I need to edit safely: [WHAT - e.g., blog posts in markdown, product descriptions, email templates, documentation pages]
- Do I have any caching layer?: [DESCRIBE - e.g., Redis, file cache, CDN, none that I know of]

Build me a CLI tool (or script) that does exactly what the article describes:

1. A **pull** command that extracts a content record from my database into a local file with metadata in YAML frontmatter and the content body below it. It should record a pull timestamp for stale detection.

2. A **diff** command that compares my local file against what's currently in the database, showing metadata changes and a line-by-line content diff.

3. A **push** command that writes the file back to the database, but ONLY after: checking if the record was modified since I pulled it (stale detection), showing me the diff for confirmation, and clearing any relevant caches after the write.

4. A rule or instruction I can add to my AI coding tool so it always uses this workflow instead of raw SQL when editing content fields.

Start with the pull command. Use my tech stack. Keep it simple enough that I can extend it later.