Handle Errors in Claude Code Workflows
Debugging, error messages, and how to recover when Claude gets stuck.
When Things Go Wrong
Your automation ran at 3am. You wake up to... nothing. No report. No error message. Just silence. What happened?
This lesson is about making your automations resilient and debuggable.
Why automations fail
Common failure points:
- Missing input — The file wasn't there, or was empty
- Bad data — The file format changed, or contained unexpected values
- External dependencies — API was down, rate limit hit, authentication expired
- Claude interpretation — Instructions were ambiguous, Claude did something unexpected
- System issues — Disk full, permissions problem, network down
The first rule: log everything
Every scheduled automation should write to a log file:
```shell
claude "Generate report" >> ~/logs/daily-report.log 2>&1
```

The >> appends to the log (rather than overwriting it). The 2>&1 captures errors too.
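Because >> appends forever, a busy automation's log grows without bound. A minimal trimming sketch — trim_log is a hypothetical helper, and the 1000-line default is an arbitrary choice, not part of any standard tool:

```shell
# trim_log FILE [N]: keep only the last N lines of FILE (default 1000).
# Run it periodically, e.g. from the same cron entry that runs the report.
trim_log() {
  log="$1"
  keep="${2:-1000}"
  [ -f "$log" ] || return 0   # nothing to trim yet
  tail -n "$keep" "$log" > "$log.tmp" && mv "$log.tmp" "$log"
}
```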
Now when something fails, you can check the log:
```shell
tail -50 ~/logs/daily-report.log
```

Add timestamps to logs
Plain logs are hard to navigate. Add timestamps:
Create a wrapper script (~/scripts/run-report.sh):
```shell
#!/bin/bash
echo "=== Run started at $(date) ===" >> ~/logs/daily-report.log
cd ~/daily-report || exit 1   # don't run from the wrong directory if cd fails
claude "Generate report" >> ~/logs/daily-report.log 2>&1
echo "=== Run completed at $(date) ===" >> ~/logs/daily-report.log
```

Now your log shows exactly when each run started and finished.
Build validation into your instructions
Don't trust that inputs exist. Tell Claude to check:
```
Before processing:
1. Check if sales_data.csv exists in /input
2. If it doesn't exist, save an error message to /output/error.md explaining the file was missing, then stop
3. Check if the file has at least 10 rows (to catch empty or truncated exports)
4. If the file seems wrong, save an error.md and stop

Only proceed if the input looks valid.
```

Now instead of confusing failures, you get clear error messages.
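The same pre-flight checks can also live in the wrapper script, so Claude is never invoked on bad input. A sketch, where validate_input is a hypothetical helper and the paths and 10-row threshold are examples to adjust for your data:

```shell
# validate_input FILE [MIN_ROWS] [ERROR_FILE]: fail fast on missing,
# empty, or truncated input, writing a human-readable error file.
validate_input() {
  file="$1"
  min_rows="${2:-10}"
  err="${3:-error.md}"
  if [ ! -s "$file" ]; then
    echo "Input file $file is missing or empty" > "$err"
    return 1
  fi
  if [ "$(wc -l < "$file")" -lt "$min_rows" ]; then
    echo "Input file $file has fewer than $min_rows rows" > "$err"
    return 1
  fi
}
```

In the wrapper script you would call validate_input /input/sales_data.csv || exit 1 before invoking Claude.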
Handle specific error cases
Think about what could go wrong and handle each case:
```
Read the CRM export and generate a summary.

Error handling:
- If no file exists: create error.md saying "No CRM export found"
- If file exists but is empty: create error.md saying "CRM export is empty"
- If file is missing required columns (email, amount, status): list missing columns in error.md
- If the file has data but no closed deals: create the report anyway, noting "No deals closed this period"
```

Notification on failure
For important automations, alert yourself when they fail:
Mac terminal notification:
```shell
#!/bin/bash
cd ~/daily-report || exit 1
if claude "Generate report" >> ~/logs/report.log 2>&1; then
  terminal-notifier -title "Report Success" -message "Daily report ready"
else
  terminal-notifier -title "Report Failed" -message "Check logs"
fi
```

Email notification (if you have mail configured):
```shell
if ! claude "Generate report" >> ~/logs/report.log 2>&1; then
  echo "Report generation failed. Check ~/logs/report.log" | mail -s "ALERT: Daily report failed" [email protected]
fi
```

Retry logic
Some failures are temporary (network blip, API timeout). Add retry logic:
```shell
#!/bin/bash
MAX_RETRIES=3
RETRY_DELAY=60  # seconds

for i in $(seq 1 $MAX_RETRIES); do
  echo "Attempt $i at $(date)" >> ~/logs/report.log
  if claude "Generate report" >> ~/logs/report.log 2>&1; then
    echo "Success on attempt $i" >> ~/logs/report.log
    exit 0
  fi
  echo "Failed, waiting ${RETRY_DELAY}s before retry" >> ~/logs/report.log
  sleep $RETRY_DELAY
done

echo "All retries failed" >> ~/logs/report.log
terminal-notifier -title "Report Failed" -message "Failed after $MAX_RETRIES attempts"
exit 1
```

Graceful degradation
If part of your workflow fails, maybe the rest can still run:
```
Generate the daily briefing:

1. Read sales data from sales.csv
   - If this fails, note "Sales data unavailable" in the report and continue
2. Pull website traffic from Google Analytics API
   - If the API call fails, note "Analytics data unavailable" and continue
3. Check email for urgent items
   - If this fails, note "Email check failed" and continue

Generate whatever report is possible with available data.
Flag any sections that used fallback data.
```

Partial information is often better than no information.
Testing error handling
Don't just hope your error handling works. Test it:
- Missing file: Temporarily rename the input file, run the automation
- Empty file: Replace input with an empty file
- Bad data: Add some garbage rows to the input
- Network failure: Disconnect wifi, run an API-dependent automation
Each test should produce a clear error message, not a mysterious failure.
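The renaming and restoring in these tests is easy to get wrong by hand, so it helps to script it. A sketch, where with_hidden_file is a hypothetical helper that runs any command with a file temporarily moved out of the way:

```shell
# with_hidden_file FILE CMD...: run CMD with FILE temporarily renamed away,
# then restore the file, preserving CMD's exit status.
with_hidden_file() {
  file="$1"; shift
  mv "$file" "$file.bak"
  if "$@"; then status=0; else status=$?; fi   # capture status so we always restore
  mv "$file.bak" "$file"
  return $status
}
```

For example, with_hidden_file ~/daily-report/input/sales_data.csv ~/scripts/run-report.sh (paths are assumptions) exercises the missing-file case and guarantees the real data comes back afterwards.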
Monitoring multiple automations
If you have several scheduled tasks, create a status dashboard:
```
Read all files in ~/logs/ that were modified in the last 24 hours.

For each log file:
- Check if the most recent run succeeded or failed
- Note when it last ran
- Flag anything that hasn't run in over 24 hours

Create a status summary: status_[date].md
```

Schedule this to run daily, and you get a quick overview of all your automations.
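A cheaper alternative to a full dashboard is a staleness check you can run yourself. A sketch, assuming the ~/logs layout used above (stale_logs is a hypothetical helper):

```shell
# stale_logs DIR: list .log files in DIR not modified in the last 24 hours,
# i.e. automations that have probably gone silent. find's -mtime +0
# matches files whose modification time is more than one day old.
stale_logs() {
  find "$1" -name '*.log' -mtime +0 -print
}
```

Pairing stale_logs ~/logs with the notification snippets above gets you alerted about automations that stopped running entirely, which no per-run log line can catch.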
Recovery procedures
When things do fail, have a plan:
- Check the log — What error occurred?
- Identify the cause — Missing file? Bad data? API down?
- Fix the root cause — Don't just re-run, fix why it failed
- Re-run manually — Verify it works before waiting for the next scheduled run
- Update error handling — Add checks to catch this failure type earlier next time
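Step 1 is easier if you can isolate just the most recent run from a long log. Assuming the "=== Run started" markers written by the wrapper script earlier, a sketch (last_run is a hypothetical helper):

```shell
# last_run LOG: print everything from the most recent "=== Run started"
# marker onward. awk resets its buffer at each marker, so only the
# final run's lines survive to the END block.
last_run() {
  awk '/^=== Run started/ { buf = "" } { buf = buf $0 "\n" } END { printf "%s", buf }' "$1"
}
```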
Recap
Robust automations:
- Log everything with timestamps
- Validate inputs before processing
- Handle specific error cases explicitly
- Notify you on failure
- Retry transient failures
- Degrade gracefully when possible
The goal isn't to prevent all failures — that's impossible. The goal is to know when failures happen and fix them quickly.
Next up
You've mastered data processing. Now let's make the output prettier. In the next lesson, you'll learn to generate HTML reports and simple dashboards — visual outputs you can share with your team.