A Deep Dive into AI Bots, Database Bloat & High CPU
The Incident: When an Upgrade Brings a Server to Its Knees
Updates are supposed to bring improvements, not instability. However, immediately following a routine upgrade to WordPress 6.9, our engineering team faced a critical stability issue on a client’s high-traffic infrastructure (CapitolCorridor.org).
Despite operating on a robust Plesk-based VPS with 8 GB of RAM, the server began exhibiting alarming symptoms:
- CPU Usage: Spiked to 90–100% and stayed there.
- Database Load: MariaDB processes were consuming nearly all available resources.
- Downtime: Intermittent 503 errors and severe sluggishness in the WP Admin.
This post documents our 5-day investigation, the “False Positives” we encountered with AI traffic, and the eventual “smoking gun” we discovered deep in the database.
Phase 1: The Initial Suspect – The AI Traffic Spike
As soon as the alerts triggered, our primary investigation pointed to a massive surge in external traffic.
The Observation:
Our logs showed a sharp increase in request volume. Upon closer inspection of the access logs, we identified a swarm of AI-driven bots and LLM crawlers hitting the site aggressively. Given the timing, it looked like a classic resource-exhaustion attack.
The First Defense (Firewall & WAF):
We immediately moved to mitigate the external pressure:
- WAF Hardening: We updated firewall rules to block known AI user agents (e.g., GPTBot, CCBot).
- Rate Limiting: We applied strict rate limits on the server to cap the number of requests per second (both measures are sketched below).
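For illustration, a minimal version of those edge rules, assuming an nginx front end (common on Plesk servers; the exact directives in our WAF layer differ), looks like this:

Nginx
# Illustrative only; the production rules live in the Plesk firewall/WAF layer and differ in detail.
# In the http {} context: classify known AI crawler user agents.
map $http_user_agent $is_ai_bot {
    default   0;
    ~*GPTBot  1;
    ~*CCBot   1;
}

# In the http {} context: allow roughly 10 requests per second per client IP.
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    # ... existing vhost configuration ...

    # Refuse classified bots outright.
    if ($is_ai_bot) {
        return 403;
    }

    location / {
        # Apply the per-IP limit with a small burst allowance.
        limit_req zone=perip burst=20 nodelay;
        # ... existing PHP-FPM / proxy handling ...
    }
}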
The Result: While the malicious traffic was successfully blocked at the edge, website performance did not recover. The CPU remained saturated, and the site continued to throw 503 errors.
Phase 2: Server-Level Triage
With external traffic neutralized, we assumed the server was simply “hung over” from the attack or misconfigured for the new WordPress version. We initiated a round of aggressive server-level optimizations.
The Actions:
- PHP-FPM Tuning: We switched process management to ondemand and capped the number of worker processes to prevent RAM exhaustion.
- System Cron: We disabled the internal wp-cron.php (which otherwise triggers on every page load) and offloaded scheduled tasks to a system-level cron job (both changes are sketched after this list).
- Resource Allocation: We adjusted buffer sizes and swap memory settings.
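For reference, the relevant settings look roughly like the following. The worker cap depends on available RAM, and the domain in the cron entry is a placeholder.

PHP-FPM pool configuration
; Illustrative values; tune pm.max_children to the RAM each PHP worker actually uses.
pm = ondemand
pm.max_children = 20
pm.process_idle_timeout = 10s
pm.max_requests = 500

wp-config.php and system crontab
// wp-config.php: stop WordPress from firing its pseudo-cron on every page load.
define( 'DISABLE_WP_CRON', true );

# System crontab: run WordPress scheduled tasks every 5 minutes instead.
# (example.com is a placeholder; WP-CLI's "wp cron event run --due-now" works just as well.)
*/5 * * * * curl -s "https://example.com/wp-cron.php?doing_wp_cron" > /dev/null 2>&1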
The Result: No positive impact. Despite a fortified firewall and an optimized server environment, CPU usage still hovered at 90–100%. This pushed us toward a new theory: the problem wasn’t the traffic coming in; it was how the database was processing it.
Phase 3: The Deep Dive (Finding the Root Cause)
We turned our attention to MariaDB. We enabled the Slow Query Log to see exactly what WordPress was struggling to read.
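The log can be switched on at runtime without a restart; the threshold below is illustrative (we cared about anything slower than a few seconds):

SQL
-- Enable the slow query log on the fly (can also be made persistent in my.cnf).
SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 5;                  -- log anything running longer than 5 seconds
SET GLOBAL log_queries_not_using_indexes = 1;    -- also catch full table scans
-- SHOW VARIABLES LIKE 'slow_query_log_file';    -- shows where the log is written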
The Smoking Gun
The logs revealed a recurring disaster. Queries were taking 20 to 40 seconds to execute, scanning millions of rows. The culprit was the wp_postmeta table.
We found massive amounts of data stored under keys like the following (a sample audit query is sketched after the list):
- _wpa_event
- _wpa_old_event
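An audit query along these lines (a sketch, not the exact query from our session) makes the imbalance obvious: a handful of keys accounted for the overwhelming majority of rows.

SQL
-- Which meta keys dominate wp_postmeta, by row count and approximate payload size?
SELECT meta_key,
       COUNT(*) AS row_count,
       ROUND(SUM(LENGTH(meta_value)) / 1024 / 1024, 1) AS approx_mb
FROM wp_postmeta
GROUP BY meta_key
ORDER BY row_count DESC
LIMIT 20;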
The Diagnosis: Architectural Collision
The root cause was a collision between Modern WordPress Core (6.9) and Legacy Plugin Architecture.
- The Plugins: We were running analytics and “Popular Post” plugins (Jetpack Stats and WordPress Popular Posts) that stored hit counters and event data directly in the WordPress database (wp_postmeta).
- The Trigger: WordPress 6.9 introduced more aggressive REST API usage and block introspection.
- The Crash: Every time the Page Builder (Cornerstone) or the Block Editor loaded, WordPress tried to autoload metadata. Because the analytics plugins had bloated the table with millions of rows, the database choked trying to retrieve this data.
Key Takeaway: The AI traffic spike was real, but it was a “Red Herring.” The true killer was the database architecture, which could no longer handle even normal traffic levels under WordPress 6.9.
The Resolution: How We Fixed It
Once the diagnosis was confirmed, we executed a surgical cleanup and infrastructure upgrade.
Step 1: Database Cleanup
We manually removed the toxic data. Using SQL, we targeted the specific keys responsible for the bloat (after taking a full backup).
SQL
DELETE FROM wp_postmeta
WHERE meta_key IN ('_wpa_event', '_wpa_old_event');
The Result: Millions of rows were removed, instantly dropping the database size.
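One caveat worth adding: depending on the storage engine configuration, a bulk DELETE does not always hand the freed space back to the filesystem on its own, so rebuilding the table afterwards is a sensible follow-up (it rewrites the table and can briefly block writes, so run it in a quiet window). On very large tables, deleting in smaller LIMIT-ed batches also keeps lock times short.

SQL
-- Optional follow-up: rebuild the table to reclaim the space freed by the DELETE.
OPTIMIZE TABLE wp_postmeta;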
Step 2: Plugin Governance
We deactivated and removed the plugins responsible for the bloat.
- Action: Removed Jetpack Stats, WordPress Popular Posts, and WP Most Popular.
- Replacement: Moved all analytics tracking to external, non-blocking platforms like Google Analytics 4 and Cloudflare Analytics.
Step 3: Implementing Redis Object Cache
To prevent future database hammering, we implemented Redis.
Because this server hosted two separate WordPress installations, we had to be careful to avoid “Cache Collision.” We configured each site’s wp-config.php with a unique cache key salt and a distinct Redis database index (a sample configuration is sketched below):
- Site A: Redis Database 0
- Site B: Redis Database 1
This ensures that queries are served from fast memory (RAM) rather than hitting the MariaDB disk every time.
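As a sketch, assuming the widely used Redis Object Cache plugin, the per-site wp-config.php entries look roughly like this (host, port, and prefix values are illustrative):

PHP
// wp-config.php for Site A; constants read by the Redis Object Cache plugin.
define( 'WP_CACHE', true );
define( 'WP_REDIS_HOST', '127.0.0.1' );
define( 'WP_REDIS_PORT', 6379 );
define( 'WP_REDIS_DATABASE', 0 );            // Site B uses index 1 to keep keyspaces apart
define( 'WP_CACHE_KEY_SALT', 'site-a:' );    // a unique prefix per site prevents cache collisions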
Results & Stability
The impact of these changes was immediate and drastic.
| Metric | Before Optimization | After Optimization |
| --- | --- | --- |
| CPU Usage | 90–100% (Saturated) | 15–25% (Normal) |
| Database Load | Critical | Nominal |
| Page Load Speed | 5–10 Seconds (Intermittent 503s) | < 1.5 Seconds |
| Admin Experience | Unusable | Snappy |
3 Lessons for WordPress Developers
If you are managing high-traffic WordPress sites, particularly on version 6.9 or higher, treat this as your warning:
- Don’t Trust the First Symptom: We saw AI bots and assumed that was the issue. It wasn’t. Always dig deeper if the “obvious fix” doesn’t work.
- Audit Your Storage: If a plugin stores logs, stats, or “views” in your database, delete it. Use external services for data that writes frequently.
- Logs Don’t Lie: We guessed it was bots. We guessed it was PHP config. But the Slow Query Log is what actually solved the problem.
Need help stabilizing your high-traffic WordPress site?
At Kha Creation, we specialize in diagnosing complex infrastructure bottlenecks. If your site is struggling after an update, contact our engineering team today.

