The minimum viable monitoring setup
Before building anything elaborate, implement these three monitoring essentials for every automation you deploy. They take 30 minutes to set up and prevent the majority of production incidents.
1. Error email notifications
In Make.com: go to Scenario Settings and enable "Email notifications on scenario error." Set it to alert on the first error and every Nth error (choose N=10 for high-volume scenarios). In n8n: configure the Error Workflow feature to send an email or Slack message on any workflow failure. In Python: use a try/except block that sends an email or Slack notification when any critical step fails.
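The Python pattern above can be sketched as follows. This is a minimal illustration, not a prescribed implementation: the webhook URL is a placeholder, and `run_step` is a hypothetical wrapper name.

```python
import json
import urllib.request

# Placeholder: substitute your own Slack incoming-webhook URL.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/..."

def format_alert(step: str, error: Exception) -> str:
    """Build the notification text for a failed pipeline step."""
    return f"Automation failed at step '{step}': {error}"

def notify_failure(step: str, error: Exception) -> None:
    """Post the alert to Slack via the incoming webhook."""
    payload = json.dumps({"text": format_alert(step, error)}).encode("utf-8")
    req = urllib.request.Request(SLACK_WEBHOOK_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

def run_step(step_name, fn, *args):
    """Wrap any critical step: alert on failure, then re-raise so the
    run is still recorded as failed rather than silently swallowed."""
    try:
        return fn(*args)
    except Exception as exc:
        notify_failure(step_name, exc)
        raise
```

Re-raising after the notification matters: swallowing the exception would make the run look successful in your execution history.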
2. Google Sheets execution log
Every automation run should write a row to a Google Sheet: timestamp, items processed, items succeeded, items failed, total API cost, and any error messages. This single sheet provides everything you need to review performance weekly. Set a 15-minute calendar reminder on Monday mornings to review last week's log — it takes 5 minutes and catches gradual degradation before it compounds.
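A minimal sketch of assembling one log row in the column order described above. The commented `gspread` call is one possible way to append it from Python; the worksheet handle and column names are assumptions, not part of the setup itself.

```python
from datetime import datetime, timezone

# Assumed column order, matching the log sheet described above.
LOG_COLUMNS = ["timestamp", "items_processed", "items_succeeded",
               "items_failed", "api_cost_usd", "error_messages"]

def build_log_row(processed, succeeded, failed, cost_usd, errors):
    """Assemble one execution-log row matching the sheet's column order."""
    return [
        datetime.now(timezone.utc).isoformat(timespec="seconds"),
        processed,
        succeeded,
        failed,
        round(cost_usd, 4),
        "; ".join(errors),
    ]

# With the gspread library (one possible Sheets client), appending is one call:
# worksheet.append_row(build_log_row(120, 117, 3, 0.42, ["timeout x3"]))
```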
3. Make.com built-in execution history
Make.com stores the last 30 days of execution history for each scenario. Use it to: verify runs are completing on schedule, trace exact data flow through failed runs (click on any failed execution to see where it stopped and what error occurred), and identify patterns in failures (same time of day, same email sender, same input format).
The four metrics that matter for AI automation
Metric 1: Straight-through rate (STR)
The percentage of runs that complete successfully without human intervention or retry. Target: 85%+ for a well-tuned automation. Below 80% indicates systematic issues requiring prompt refinement or error handling improvements. Calculate weekly: successful runs / total runs. Trend matters as much as absolute level — a declining STR signals drift that needs investigation before it reaches a failure threshold.
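The STR calculation and the trend check can be expressed directly. A sketch using the thresholds above; the status strings are illustrative.

```python
def straight_through_rate(successful: int, total: int) -> float:
    """STR = successful runs / total runs, as a percentage."""
    return 0.0 if total == 0 else 100.0 * successful / total

def str_flag(this_week: float, last_week: float,
             target: float = 85.0, drop_threshold: float = 5.0) -> str:
    """Classify this week's STR against the target and last week's level."""
    if this_week < 80.0:
        return "systematic issues: refine prompts or error handling"
    if last_week - this_week > drop_threshold:
        return "declining: investigate drift"
    return "healthy" if this_week >= target else "below target"
```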
Metric 2: AI approval rate (for automations with human review)
For automations where humans review and approve AI outputs before action, the approval rate (percentage of outputs approved as-is without editing) measures AI quality. Target: 80%+ for a production-ready automation. Below 70% means the AI is consistently producing outputs requiring human correction — the automation is saving less time than you think. Track weekly and use declining approval rate as the trigger for prompt refinement work.
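The approval-rate trigger can be made mechanical. A sketch, assuming you store one weekly rate per entry; the three-week strictly-downward rule is an illustrative choice for "declining".

```python
def approval_rate(approved_as_is: int, total_reviewed: int) -> float:
    """Percentage of AI outputs approved without editing."""
    return 0.0 if total_reviewed == 0 else 100.0 * approved_as_is / total_reviewed

def needs_prompt_refinement(weekly_rates: list[float], floor: float = 70.0) -> bool:
    """Trigger refinement when the latest rate is under the floor,
    or the last three weeks trend strictly downward."""
    if not weekly_rates:
        return False
    if weekly_rates[-1] < floor:
        return True
    recent = weekly_rates[-3:]
    return len(recent) == 3 and recent[0] > recent[1] > recent[2]
```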
Metric 3: Cost per item
Total API cost divided by items processed, tracked weekly. Stable cost per item is the expected state. Increasing cost per item signals prompt length inflation (the system prompt has grown), input token increase (longer inputs being processed), or a model change. Decreasing cost per item is usually positive: higher volume is diluting fixed prompt overhead, or prompt compression is working.
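The week-over-week check can be sketched as a small classifier. The ±10% stability band is an illustrative assumption, not a figure from the text.

```python
def cost_per_item(total_cost: float, items: int) -> float:
    """Weekly API cost divided by items processed."""
    return float("nan") if items == 0 else total_cost / items

def diagnose_cost_change(this_week: float, last_week: float,
                         tolerance: float = 0.10) -> str:
    """Classify week-over-week cost-per-item movement (±10% treated as stable)."""
    if last_week == 0:
        return "no baseline"
    change = (this_week - last_week) / last_week
    if change > tolerance:
        return "increase: check prompt length, input tokens, or model change"
    if change < -tolerance:
        return "decrease: volume dilution or prompt compression"
    return "stable"
```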
Metric 4: End-to-end latency
For time-sensitive automations (lead alerts, urgent email processing), track the time from trigger to output delivery. Use Make.com's execution duration field or add timestamps at pipeline start and end in your logging sheet. Target latency depends on your use case — email classification can tolerate 30–60 seconds; urgent lead alerts should complete in under 30 seconds.
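If you log ISO timestamps at pipeline start and end, the latency calculation is a one-liner. A minimal sketch:

```python
from datetime import datetime

def latency_seconds(start_iso: str, end_iso: str) -> float:
    """End-to-end latency from pipeline-start and output-delivery timestamps."""
    start = datetime.fromisoformat(start_iso)
    end = datetime.fromisoformat(end_iso)
    return (end - start).total_seconds()

def within_target(latency_s: float, target_s: float = 30.0) -> bool:
    """Check against a use-case target, e.g. 30 s for urgent lead alerts."""
    return latency_s <= target_s
```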
The weekly monitoring review (10 minutes)
The most effective monitoring practice is a 10-minute weekly review on a fixed day (Monday morning or Friday afternoon work well). The review checklist:
- Open your Google Sheets log. Filter for last 7 days. Check total runs vs. expected runs — are automations running on schedule?
- Calculate STR for the week: successful runs / total runs. Compare to previous week. Flag if declined more than 5 percentage points.
- Scan error messages for patterns. If the same error type appears more than 3 times, investigate and fix.
- Review Make.com execution history for any failed scenarios. Trace 1–2 recent failures to understand root cause.
- Check API costs in OpenAI dashboard. Compare to previous week. Investigate any unexpected increase.
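Most of this checklist can be computed mechanically from the log sheet. A sketch, assuming the log rows are exported as dicts with `status`, `error_type`, and `api_cost_usd` keys (a hypothetical schema matching the log columns):

```python
from collections import Counter

def weekly_review(rows, expected_runs, last_week_str=None):
    """Summarise 7 days of log rows and collect flags for the reviewer.

    rows: dicts with 'status' ('success'/'failure'), 'error_type',
    and 'api_cost_usd' keys (assumed schema).
    """
    total = len(rows)
    successes = sum(1 for r in rows if r["status"] == "success")
    strate = 0.0 if total == 0 else 100.0 * successes / total
    errors = Counter(r["error_type"] for r in rows if r["status"] == "failure")
    flags = []
    if total < expected_runs:
        flags.append(f"only {total}/{expected_runs} expected runs")
    if last_week_str is not None and last_week_str - strate > 5.0:
        flags.append("STR down more than 5 points")
    flags += [f"recurring error: {e} (x{n})" for e, n in errors.items() if n > 3]
    return {"runs": total, "str": strate,
            "cost": sum(r["api_cost_usd"] for r in rows), "flags": flags}
```

The flags list is the output that deserves your judgment; everything else is bookkeeping.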
This 10-minute review catches 90% of production issues before they become significant problems. Automate the data collection (the log already exists); the human judgment layer — pattern recognition, root cause hypothesis, prioritisation of fixes — is the irreplaceable 10-minute investment.
Advanced monitoring: automated alerts and dashboards
Automated STR alert
Add a monitoring automation that runs daily and calculates yesterday's STR from your log sheet. If STR falls below 75%, send a Slack alert or email automatically. This converts your reactive weekly review into proactive daily alerting for significant degradation. Implementation: a scheduled Make.com scenario that reads the log sheet, calculates STR, and conditionally posts to Slack if below threshold. Build time: 2–3 hours.
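The same threshold logic, sketched in Python for anyone running the daily check as a cron job rather than a scheduled Make.com scenario. The webhook URL and row schema are placeholders.

```python
import json
import urllib.request
from datetime import date, timedelta

STR_FLOOR = 75.0
WEBHOOK = "https://hooks.slack.com/services/..."  # placeholder

def yesterdays_str(rows):
    """rows: log entries as dicts with 'date' (ISO string) and 'status' keys."""
    y = (date.today() - timedelta(days=1)).isoformat()
    day = [r for r in rows if r["date"] == y]
    if not day:
        return None  # no runs yesterday is itself worth alerting on
    return 100.0 * sum(r["status"] == "success" for r in day) / len(day)

def check_and_alert(rows):
    """Post a Slack alert only when yesterday's STR falls below the floor."""
    s = yesterdays_str(rows)
    if s is not None and s < STR_FLOOR:
        msg = {"text": f"STR alert: yesterday's straight-through rate was {s:.1f}%"}
        req = urllib.request.Request(WEBHOOK, data=json.dumps(msg).encode("utf-8"),
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)
    return s
```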
Simple performance dashboard
Your Google Sheets log supports a visual dashboard with no additional tools. Add a second sheet with summary calculations (COUNTIF for success/failure counts, SUMIF for costs, AVERAGEIF for latency) and insert Google Sheets charts showing weekly trends. Share this dashboard with stakeholders who need visibility into automation performance. Build time: 1 hour. Cost: $0.
Pre-launch setup: see the AI automation pre-launch checklist, which includes the complete monitoring setup requirements to meet before deploying any automation to production.
FAQ
How often should you review automation monitoring data?
Weekly at minimum for all active production automations. Daily for new automations in their first 2 weeks and for automations processing more than 200 items per day. Monthly is insufficient: a week of degraded performance before detection is acceptable; a month is not. The 10-minute weekly review described above is the minimum effective cadence for catching issues before they compound.
What should you do when errors keep recurring?
First, determine whether the pattern is systematic (same error type, same input type, same time of day) or random (distributed across different inputs and times). Systematic errors indicate a specific failure mode that can be fixed with a targeted prompt edit or code change. Random errors at low rates (under 5% of runs) may be acceptable transient failures that retries handle. For systematic errors: identify the smallest change to the prompt or code that addresses the root cause, test on your full test set, and deploy.
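The systematic-vs-random triage can be approximated with a simple heuristic. This sketch flags a dominant error type as systematic; the 50% dominance share and minimum count of 3 are illustrative assumptions.

```python
from collections import Counter

def classify_error_pattern(error_types, total_runs):
    """Heuristic triage: a dominant error type suggests a systematic
    failure mode; a low overall failure rate with mixed types is
    likely transient and handled by retries.

    error_types: one entry per failed run, e.g. ["timeout", "parse", ...]
    """
    failures = len(error_types)
    if failures == 0:
        return "no errors"
    top_type, top_count = Counter(error_types).most_common(1)[0]
    if top_count / failures >= 0.5 and top_count >= 3:
        return f"systematic: {top_type}"
    if failures / total_runs < 0.05:
        return "random, low rate: rely on retries"
    return "mixed: investigate individually"
```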
ThinkForAI Editorial Team
Updated November 2024.

