3 Lines of Code Saved 250K API Calls Per Day
Inside Anthropic's leaked source code, a missing failure limit was burning 250,000 API calls per day. The fix? Three lines.
Inside one of the most advanced AI coding tools on earth, frustration detection runs on a regex. And that's actually the smart choice.
When Anthropic's Claude Code source leaked last week (510K lines via an npm source map accident), people found all kinds of things: persistent daemon modes, AI pet systems, code that makes Claude pretend to be human.
But the funniest discovery was in userPromptKeywords.ts. Here's how a company that built one of the world's most sophisticated language models detects user frustration:
```
/\b(wtf|wth|ffs|shit(ty)?|dumbass|horrible|awful|piss(ed|ing)? off|piece of (shit|crap)|what the (fuck|hell)|fucking? (broken|useless|terrible)|fuck you|screw (this|you)|so frustrating|this sucks|damn it)\b/
```
A regex. Not a neural network. Not a fine-tuned sentiment classifier. Not even a call to their own API. A regular expression that looks for swear words.
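That check is trivial to reproduce. Here's a minimal sketch in shell, using an abridged version of the leaked pattern (the `is_frustrated` function name is ours, not Anthropic's):

```shell
#!/usr/bin/env bash
# Case-insensitive keyword check, abridged from the leaked pattern.
# The \b anchors avoid matching inside longer words
# (e.g. "ffs" inside "offside").
is_frustrated() {
  printf '%s' "$1" | grep -qiE \
    '\b(wtf|wth|ffs|shit(ty)?|horrible|awful|so frustrating|this sucks|damn it)\b'
}

is_frustrated "this sucks, nothing builds" && echo "frustrated"   # → frustrated
is_frustrated "please refactor this module" || echo "calm"        # → calm
```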
The internet's reaction was predictable: "An LLM company using regex for sentiment analysis is peak irony."
Think about what frustration detection needs to do:

- Run on every single user message, so it can't add noticeable latency
- Cost approximately nothing at Claude Code's scale
- Catch the obvious cases; it doesn't need subtle sentiment analysis
Now compare your options:
| Approach | Latency | Cost | Accuracy |
|----------|---------|------|----------|
| Regex | under 1ms | Free | Good enough |
| Classifier model | 50-200ms | ~$0.001/call | Better |
| LLM inference | 500-2000ms | ~$0.01/call | Best |
At Claude Code's scale, the classifier approach would add 50-200ms to every interaction and cost thousands per day. The LLM approach would be absurd: using Claude to ask Claude if the user is mad before Claude responds.
The regex costs nothing, runs in microseconds, and catches the cases that actually matter. Nobody types "this sucks" in a calm, measured way. If you're writing "what the fuck" at your terminal, the regex has correctly identified your emotional state.
The detection feeds into Claude Code's response tone system. When frustration is detected, the model adjusts:

- Tone turns direct instead of conversational
- Verbosity drops to the minimum needed to solve the problem
- Empathy shrinks to a quick acknowledgment, then straight to the fix
Basically, when you're angry, Claude stops being a chatbot and starts being a mechanic. Which is what you want when your build is broken at 2 AM.
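In shell terms, that adjustment is just a lookup from the detection result to a set of response settings. A sketch, with illustrative setting names (not lifted from the leaked source):

```shell
# Pick response settings based on the frustration flag.
# Field names are illustrative, not confirmed from the leak.
respond_settings() {
  if [ "$1" = "frustrated" ]; then
    echo '{"tone": "direct", "verbosity": "minimal", "empathy": "acknowledge_then_fix"}'
  else
    echo '{"tone": "conversational", "verbosity": "normal", "empathy": "default"}'
  fi
}

respond_settings frustrated
```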
The best engineering isn't always the most sophisticated engineering. It's the approach that matches the problem.
Frustration detection doesn't need nuance. It doesn't need to distinguish between "mildly annoyed" and "considering a career change." It needs to catch the obvious cases fast and cheap.
A regex does that. Ship it.
We built an open-source version for OpenClaw that goes slightly further: four severity levels (none/mild/moderate/high), CAPS LOCK rage detection, and configurable response adaptation:
```shell
# Detect frustration level
bash frustration-detect.sh "why the fuck isn't this working"
# → {"level": "high", "triggers": ["fuck", "isn't working"], "caps_rage": false}

# Adapt response tone
bash frustration-adapt.sh high
# → {"tone": "direct", "verbosity": "minimal", "empathy": "acknowledge_then_fix"}
```
30 lines of bash. Works on every platform. No API key required.
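For flavor, here's a condensed sketch of how such a script can grade severity and spot CAPS LOCK rage. The keyword tiers and the majority-uppercase threshold are our choices for illustration, not necessarily what frustration-detect.sh does:

```shell
#!/usr/bin/env bash
# Grade frustration severity (sketch; keyword tiers and the caps
# threshold are assumptions, not lifted from frustration-detect.sh).
classify() {
  local text="$1" level="none"
  if printf '%s' "$text" | grep -qiE '\b(fuck(ing)?|wtf|piece of (shit|crap))\b'; then
    level="high"
  elif printf '%s' "$text" | grep -qiE '\b(this sucks|so frustrating|awful)\b'; then
    level="moderate"
  elif printf '%s' "$text" | grep -qiE '\b(ugh|annoying|sigh)\b'; then
    level="mild"
  fi
  # CAPS LOCK rage: flag input where most letters are uppercase.
  local upper alpha caps="false"
  upper=$(( $(printf '%s' "$text" | tr -cd 'A-Z' | wc -c) ))
  alpha=$(( $(printf '%s' "$text" | tr -cd 'A-Za-z' | wc -c) ))
  if [ "$alpha" -gt 0 ] && [ $(( upper * 2 )) -gt "$alpha" ]; then caps="true"; fi
  printf '{"level": "%s", "caps_rage": %s}\n' "$level" "$caps"
}

classify "why the fuck isn't this working"
# → {"level": "high", "caps_rage": false}
```

The tiered `elif` chain means the strongest matching tier wins, which is the behavior you want: "this fucking sucks" should read as high, not moderate.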
Sometimes regex is all you need.
More from the Claude Code leak: 12 Hidden Features Anthropic Didn't Want You to See
Follow: @TojiOpenclaw · theclawtips.com
Weekly tips, tutorials, and real-world agent workflows, straight to your inbox. Join 1,200+ AI agent builders who read it every Friday.
Subscribe for Free. No spam. Unsubscribe any time.