AI Automation Glossary: Every Term Defined Clearly

Foundations·ThinkForAI Editorial Team·November 2024
AI automation has its own vocabulary — terms like tokens, embeddings, RAG, hallucination, and agentic are used constantly but rarely defined clearly. This glossary provides precise, jargon-free definitions for every term you will encounter when building or evaluating AI automation systems.
Core AI automation terms (A–L)

Agent / AI Agent: An AI system that can decide what actions to take, use tools, and work toward a goal across multiple steps — as distinct from standard automation that follows a fixed, predefined sequence.

API (Application Programming Interface): The programmatic connection that allows software to communicate. In AI automation, the OpenAI API or Anthropic API is what Make.com calls to get AI-generated outputs. You pay per API call based on tokens used.
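To make the shape of an API call concrete, here is a minimal Python sketch of a Chat Completions request body. The `role`/`content` message structure matches the OpenAI API; the helper name `build_chat_request` and the default values are illustrative, not part of any library.

```python
def build_chat_request(system_prompt, user_input, model="gpt-4o", temperature=0.2):
    """Build the request body sent to a Chat Completions-style API.
    Make.com's OpenAI module assembles the same structure from its fields."""
    return {
        "model": model,
        "temperature": temperature,
        "messages": [
            {"role": "system", "content": system_prompt},  # instructions
            {"role": "user", "content": user_input},       # the data to process
        ],
    }
```

Every such call is billed per token of input and output, which is why the token estimates later in this glossary matter.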

Chunk: A segment of text produced when splitting large documents into smaller pieces for embedding and storage in a vector database. Typically 400-800 words each, with overlap between adjacent chunks.
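A chunking step can be sketched in a few lines of Python. The word counts below sit inside the 400-800-word range mentioned above; the function name and defaults are illustrative.

```python
def chunk_text(text, chunk_size=600, overlap=100):
    """Split a document into word-based chunks, with each chunk
    repeating the last `overlap` words of the previous one so
    context is not lost at chunk boundaries."""
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break  # last chunk reached the end of the document
    return chunks
```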

Context window: The maximum amount of text (measured in tokens) that a model can process in a single call — both the input you send and the output it generates. GPT-4o has a 128,000-token context window (approximately 96,000 words).

Embedding: A numerical representation of text as a high-dimensional vector (list of numbers) that captures semantic meaning. Texts with similar meaning have similar embedding vectors, enabling semantic search.
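"Similar embedding vectors" is usually measured with cosine similarity, sketched here in plain Python (real systems use optimized libraries for this):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors:
    1.0 = identical direction (very similar meaning), 0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```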

Hallucination: When an AI model generates confident-sounding but factually incorrect information. Reduced significantly by prompt design (explicit factual constraints) and RAG (grounding in retrieved documentation).

Idempotent: An operation that produces the same result whether executed once or multiple times. Good automation design ensures write operations are idempotent to prevent duplicate records when triggers fire multiple times.
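One common way to make a write idempotent is to key every run on a stable event ID, as in this Python sketch (the function and field names are illustrative; in production the seen-ID set would be a database table):

```python
def handle_event(event, processed_ids, created_records):
    """Process each event at most once, even if the trigger fires twice."""
    if event["id"] in processed_ids:
        return "skipped"  # already handled on a previous run
    processed_ids.add(event["id"])
    created_records.append(event["payload"])
    return "created"
```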

LLM (Large Language Model): The AI model type that powers most AI automation — trained on large amounts of text and capable of understanding and generating human language. GPT-4o, Claude, and Gemini are LLMs.

Core AI automation terms (M–Z)

Monitoring log: A record of every automation run — timestamp, inputs, outputs, success/failure, and costs. The essential foundation for catching automation failures and tracking performance over time.

Operation (Make.com): The unit Make.com uses to measure usage. Each module execution in a scenario run counts as one operation. Make.com Core includes 10,000 operations/month.

Prompt / System prompt: The instructions given to an AI model. The system prompt defines the AI's role, task, output format, constraints, and examples. Quality of the system prompt is the primary determinant of output quality.

RAG (Retrieval-Augmented Generation): An architecture where relevant content is retrieved from a knowledge base and included in the AI prompt as context, reducing hallucination and enabling responses grounded in specific documentation.
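The "include retrieved content in the prompt" step amounts to string assembly. A minimal Python sketch (the wording of the instructions is illustrative):

```python
def build_rag_prompt(question, retrieved_chunks):
    """Assemble a grounded prompt: retrieved chunks become the context,
    and the instructions constrain the model to that context."""
    context = "\n\n".join(retrieved_chunks)
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
```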

ReAct: Reason + Act — the standard AI agent architecture where the model explicitly reasons about what to do, calls a tool, observes the result, and continues until the task is complete.
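The reason/act/observe cycle can be sketched as a loop. This is a toy illustration, not a production agent framework; `llm` stands in for a model call and `tools` is a dict of callables.

```python
def react_loop(goal, llm, tools, max_steps=5):
    """Reason -> act -> observe, repeating until the model emits a final answer."""
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        decision = llm(history)  # reason: the model picks the next action
        if decision["action"] == "final":
            return decision["answer"]
        observation = tools[decision["action"]](decision["input"])  # act
        history.append(f"Observation: {observation}")                # observe
    return None  # step budget exhausted without a final answer
```

In real agents, `llm` is an API call and the step cap prevents runaway loops (and runaway token costs).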

Shadow mode: Running an automation on real inputs without taking real actions — logging what it would do rather than doing it. Used during testing to evaluate performance before production deployment.
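In code, shadow mode is a guard in front of every side-effecting action. A minimal Python sketch, with illustrative names:

```python
def send_reply(draft, shadow, log, deliver):
    """In shadow mode, record what the automation would have sent
    instead of actually sending it."""
    if shadow:
        log.append(f"WOULD SEND: {draft}")
        return "logged"
    deliver(draft)  # the real side effect, only in production mode
    log.append(f"SENT: {draft}")
    return "sent"
```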

Straight-through rate (STR): The percentage of automation runs that complete successfully without human intervention. Target: 85%+ for a well-tuned production automation.
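STR is computed directly from a monitoring log. A sketch, assuming each run record carries a `status` and an `escalated` flag (the field names are illustrative):

```python
def straight_through_rate(runs):
    """Share of runs that finished successfully with no human intervention."""
    auto = [r for r in runs if r["status"] == "success" and not r["escalated"]]
    return len(auto) / len(runs)
```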

Temperature: A parameter (0.0-2.0) controlling output randomness. Lower temperature (0.0-0.2) produces more consistent, deterministic outputs — appropriate for classification and extraction. Higher temperature (0.5-0.8) produces more varied, creative outputs — appropriate for content generation.

Token: The unit AI models use to process text. Approximately 4 characters or 0.75 words per token. You pay per token for API usage. A typical email is 200-400 tokens; a long document might be 3,000-10,000 tokens.
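The 4-characters-per-token rule of thumb gives a quick budget estimate before any API call is made. A one-line Python sketch (real tokenizers such as OpenAI's `tiktoken` give exact counts):

```python
def estimate_tokens(text):
    """Rough estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)
```

For example, a 400-character email estimates to about 100 tokens, consistent with the 200-400-token range above for typical emails.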

Vector database: A database that stores text embeddings and enables fast similarity search — finding the most semantically similar content to a query. Examples: Pinecone, Supabase pgvector, FAISS, Chroma.
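Conceptually, similarity search ranks every stored vector against the query. This exact brute-force Python sketch shows the idea; vector databases provide the same operation at scale using approximate indexes.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def top_k(query_vec, index, k=3):
    """Return the IDs of the k stored vectors most similar to the query.
    `index` maps document ID -> embedding vector."""
    ranked = sorted(index.items(), key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]
```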

Webhook: A real-time HTTP callback: one system sends data to another system's URL the moment an event occurs. Webhooks trigger automations instantly, unlike polling, which checks for new data on a schedule (often every 15 minutes or more).

FAQ

What is the difference between a prompt and a system prompt?

A prompt is any text input sent to an AI model. A system prompt specifically refers to the instruction block sent as the "system" role message in the Chat Completions API — it defines the AI's role, task, and constraints, and is processed before the user message. In Make.com's OpenAI module, the system prompt goes in the "System message" field. For automation, the system prompt is the most important configuration — get it right and your automation works reliably; get it wrong and it fails systematically.

What does "grounded" mean in AI automation?

An AI response is grounded when it is based on specific, verified information provided in the prompt rather than on the model's training data. RAG architecture grounds responses by retrieving relevant documentation and including it in the prompt as context. Grounding dramatically reduces hallucination because the model answers from the provided content rather than generating from statistical patterns in training data.

Keep building expertise

The complete guide covers every tool and strategy.

Complete AI Automation Guide →

ThinkForAI Editorial Team

Updated November 2024.