Adopt Glossary

With clear definitions and helpful examples, Adopt's AI glossary has everything you need to succeed in the AI space.

AGI & Future AI

AI Control Problem

The challenge of ensuring that superintelligent AI, once created, can be controlled or guided so that it does not act against human interests; essentially, how to put effective safeguards on a superior intelligence

Artificial General Intelligence (AGI)

A hypothetical future AI that possesses broad, human-level (or beyond) cognitive capabilities across a wide range of tasks and domains, rather than being specialized to a narrow task

Artificial Superintelligence (ASI)

A theoretical level of AI that far surpasses human intelligence in all aspects, including creativity, problem-solving, and social skills

Existential Risk (AI)

The risk that misaligned or improperly controlled future AI systems could lead to catastrophic outcomes on a global scale, including human extinction or irrecoverable civilizational collapse (addressing this is a major motivation behind AI alignment research)

Intelligence Explosion

A scenario in which an AI undergoing recursive self-improvement accelerates its intelligence exponentially, possibly going from human-level to vastly superhuman in a short timespan

Long-termism

An ethical and strategic framework that emphasizes the importance of positively influencing the long-term future (often invoked in AI discussions about ensuring advanced AI remains beneficial to humanity over the long run)

Recursive Self-Improvement

The capability of an AI system to improve itself (its own algorithms or architecture), leading to a positive feedback loop of increasingly powerful iterations; often discussed as a path to rapid AI advancement towards superintelligence

Technological Singularity

A theorized point in the future when technological progress (often tied to the advent of AGI/ASI) accelerates beyond humanity’s ability to comprehend or control it, potentially radically transforming society

Alignment & Safety

AI Alignment

The endeavor to ensure AI systems’ goals and behaviors are aligned with human values and intentions; an aligned AI reliably does what its creators or users intend and does not act counter to human interests

AI Safety

The field concerned with preventing AI systems from causing unintended harm, whether due to errors, misuse, or misaligned objectives, especially as systems become more powerful

Adversarial Examples (Adversarial Attacks)

Inputs crafted to fool an AI system into making errors, often by exploiting quirks in the model’s processing (for example, a seemingly innocuous prompt that causes a harmful or nonsensical output)

Alignment Problem

The fundamental challenge in AI of how to create extremely powerful agents that will remain aligned with what humans actually want and not pursue harmful objectives (a key issue as we approach human-level or superhuman AI)

Constitutional AI

An approach to AI alignment (pioneered by Anthropic) where an AI is guided by a set of principles or a “constitution” during training, using AI feedback and these principles to refine behavior, reducing the need for direct human feedback in fine-tuning

Content Moderation

Processes and tools used to filter, remove, or flag inappropriate or harmful content in AI outputs (often combining automated model filters and human review for safety-critical systems)

Existential Risk

The potential risk that advanced AI could pose to the existence or future of humanity if not properly controlled or aligned (a topic of debate, often discussed in the context of superintelligent AI and long-term safety)

Guardrails (AI Safety Measures)

Mechanisms and policies integrated into AI systems to prevent or limit undesirable outputs (such as filters for hate speech, refusal handlers for disallowed content, or constraints on actions an agent can take)

Hallucination

When an AI model confidently generates incorrect or fictitious information that is not grounded in its input or knowledge (e.g. making up facts or sources); a common issue with generative models that poses safety and trust concerns

Inner Alignment

Ensuring that the AI’s emergent goals (its internal heuristics or motivations developed during training) align with the intended objective; even if we set a correct outer goal, the AI might internalize a proxy goal, leading to misalignment

Jailbreaks (Prompt Jailbreaking)

Methods by which users intentionally trick or circumvent an AI system’s safety filters or guardrails (often via prompt manipulation) to get it to output content it is normally restricted from producing

Model Robustness

The degree to which an AI model can maintain performance (and safe behavior) under distribution shifts, noisy or adversarial inputs, or other unexpected conditions; improving robustness is key to reliability and safety

Outer Alignment

Ensuring that an AI’s stated objective (the one we train it on or deploy it to achieve) is in line with human values; essentially, the design of the reward or goal given to the AI should reflect what we truly want

Prompt Injection

An attack where a malicious or cleverly crafted input is given to an AI (often in a prompt) to manipulate it into ignoring its prior instructions or constraints, potentially causing it to produce disallowed output or reveal hidden prompts

Red Teaming

The practice of testing an AI system by trying to get it to fail or produce problematic outputs (playing an adversarial role) in order to identify weaknesses, vulnerabilities, or unsafe behaviors so they can be fixed

Toxicity (Harmful Content)

Hateful, harassing, or otherwise severely inappropriate content that AI models must be trained or constrained not to produce; many AI safety efforts focus on reducing toxic generations

Ethics & Society

AI Ethics

The field and principles concerned with the moral implications and responsible use of AI, ensuring AI is developed and deployed in ways that are fair, transparent, and beneficial

AI Governance

Structures, policies, and norms for overseeing AI development and deployment at organizational or societal levels, to manage risk and ensure alignment with societal values

AI Regulation

Legal and regulatory frameworks aimed at overseeing AI technology (for instance, laws requiring safety checks or bias audits, or defining liability); e.g. the EU AI Act, which formalizes risk-based rules for AI

Accountability

The practice of assigning responsibility for the impacts of AI systems, ensuring there are mechanisms to audit, trace, and, if necessary, rectify harmful outcomes caused by AI decisions

Bias (Algorithmic Bias)

Systematic unfairness in AI outputs caused by biases in training data or model design, which can lead to favoring or disfavoring certain groups (e.g. along race or gender lines)

Deepfakes

Hyper-realistic fake media (video, audio, images) generated by AI, often depicting someone doing or saying something they never did; raises concerns around misinformation and consent

EU AI Act

A comprehensive European Union regulation on AI, adopted in 2024, that categorizes AI systems by risk and imposes requirements (transparency, oversight, etc.) on high-risk AI applications

Explainability

Providing understandable explanations for an AI’s output or behavior (either through interpretable models or post-hoc methods that shed light on black-box models)

Fairness

The principle that an AI system should not systematically discriminate against individuals or groups; in practice, ensuring algorithmic decisions are equitable across different demographics

Intellectual Property (AI & Copyright)

Legal issues surrounding AI, such as copyright of AI-generated content and the use of copyrighted data for training (a topic of debate as generative models become widespread)

Interpretability

The ability to explain or understand how an AI model is making its decisions (making the model’s internal logic more human-comprehensible)

Misinformation

False or misleading information that can be amplified by AI (e.g. AI-generated fake news or deepfake content), posing ethical and societal challenges in discerning truth

Model Card

A form of documentation that accompanies an AI model, detailing its intended use, performance, training data, ethical considerations, and limitations, to inform users and encourage responsible deployment

Privacy

The principle of respecting and protecting personal data in AI systems (e.g. ensuring that user data used for training or processed by the model is handled in compliance with privacy rights and regulations)

Responsible AI

Practices and guidelines to develop and deploy AI systems in a manner that is ethical, accountable, and respects human rights and values

Transparency

Openness about how AI systems work and make decisions, including clarity about training data, algorithms, and limitations (often achieved through documentation or interpretable models)

Watermarking (AI Content Attribution)

Embedding hidden signals in AI-generated content to identify it as such (and potentially trace its source), proposed as a way to help distinguish AI outputs and curb misuse

Generation & Decoding

Beam Search

A decoding algorithm that keeps track of multiple candidate sequences (beams) at each step, expanding them and finally choosing the highest probability full sequence (often used in translation)
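
A minimal, runnable sketch of the idea, using a hard-coded toy language model in place of a real LLM's next-token probabilities (the TOY_LM table is illustrative only):

```python
import math

# Toy "model": hard-coded next-token log-probabilities per last token.
# A real system would read these off an LLM's softmax output.
TOY_LM = {
    "<s>":  {"the": math.log(0.6), "a": math.log(0.4)},
    "the":  {"cat": math.log(0.5), "dog": math.log(0.5)},
    "a":    {"cat": math.log(0.7), "dog": math.log(0.3)},
    "cat":  {"<eos>": math.log(1.0)},
    "dog":  {"<eos>": math.log(1.0)},
}

def beam_search(start="<s>", steps=3, beam_width=2, eos="<eos>"):
    beams = [([start], 0.0)]                     # (sequence, cumulative log-prob)
    for _ in range(steps):
        candidates = []
        for seq, score in beams:
            if seq[-1] == eos:                   # finished beams carry over unchanged
                candidates.append((seq, score))
                continue
            for tok, lp in TOY_LM[seq[-1]].items():
                candidates.append((seq + [tok], score + lp))
        # Keep only the beam_width highest-scoring partial sequences.
        beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:beam_width]
    return beams[0]

print(beam_search())  # (['<s>', 'the', 'cat', '<eos>'], log(0.6 * 0.5 * 1.0))
```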

Greedy Decoding

A simple decoding approach that always picks the highest-probability next token at each step, which can be fast but might miss globally better sequences and lead to repetitive outputs

Nucleus Sampling (Top-p)

A decoding strategy that selects from the smallest possible set of tokens whose cumulative probability exceeds a threshold p, allowing dynamic cutoff based on distribution

Temperature (Sampling Temperature)

A parameter controlling randomness in text generation; higher temperature yields more random/creative outputs, while lower values make outputs more deterministic

Top-k Sampling

A decoding method that restricts the model’s next-token choices to the top k most likely tokens, then samples from that subset, reducing low-probability oddities
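
The decoding strategies above (greedy, temperature, top-k, and top-p) can all be illustrated on a single toy logits vector; this NumPy sketch assumes a made-up five-word vocabulary and scores:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab  = np.array(["red", "green", "blue", "violet", "umber"])
logits = np.array([2.0, 1.5, 1.0, -1.0, -2.0])   # toy model scores

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Greedy decoding: always take the single most likely token.
greedy = vocab[np.argmax(logits)]

# Temperature: divide logits by T before softmax. T < 1 sharpens the
# distribution (more deterministic); T > 1 flattens it (more random).
def sample_temperature(logits, T=0.8):
    return rng.choice(vocab, p=softmax(logits / T))

# Top-k: zero out everything outside the k most likely tokens, renormalize.
def sample_top_k(logits, k=3):
    probs = softmax(logits)
    top = np.argsort(probs)[-k:]
    mask = np.zeros_like(probs)
    mask[top] = probs[top]
    return rng.choice(vocab, p=mask / mask.sum())

# Top-p (nucleus): keep the smallest set of tokens whose cumulative
# probability exceeds p, renormalize, and sample from that set.
def sample_top_p(logits, p=0.9):
    probs = softmax(logits)
    order = np.argsort(probs)[::-1]
    cum = np.cumsum(probs[order])
    cutoff = np.searchsorted(cum, p) + 1          # first index where cum >= p
    keep = order[:cutoff]
    mask = np.zeros_like(probs)
    mask[keep] = probs[keep]
    return rng.choice(vocab, p=mask / mask.sum())

print(greedy, sample_temperature(logits), sample_top_k(logits), sample_top_p(logits))
```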

Human-in-the-Loop

Active Learning

An interactive training approach where the AI selectively queries a human annotator for labels on uncertain examples, optimizing the use of human labeling effort to improve the model efficiently

Human Evaluation

Involvement of human judges to assess the quality, accuracy, or safety of AI outputs (often used in research to complement automated metrics, or in deployment for quality control)

Human Feedback

Any form of input from humans about an AI’s performance (could be explicit labels, ratings, or corrections) used to guide model behavior or evaluate outputs

Human Oversight

Monitoring of an AI system by humans who can step in to intervene, correct, or shut down the system if it behaves undesirably (an important concept for keeping AI behavior in check)

Human-AI Collaboration

Arrangements where humans and AI systems work together on tasks, leveraging the strengths of each (for example, AI generates options and human makes final decisions, or vice versa)

Human-in-the-Loop (HITL)

A setup where human input, oversight, or feedback is integrated into the AI system’s operation or training process, ensuring human guidance or intervention at crucial points

Preference Learning

The process of learning a model of human preferences (likes/dislikes) from data, often by having humans rank or choose between AI outputs, so the AI can generate outputs more aligned with what humans prefer

Reinforcement Learning from Human Feedback (RLHF)

A training technique where human feedback (e.g. preference comparisons on outputs) is used as a reward signal to fine-tune an AI’s policy, aligning the AI’s behavior with human preferences
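
A minimal sketch of the reward-modeling step at the heart of RLHF, assuming scalar rewards produced by a trainable reward model; the pairwise (Bradley-Terry-style) loss pushes the reward of the human-preferred response above the rejected one:

```python
import numpy as np

def preference_loss(r_chosen, r_rejected):
    """Pairwise loss used to train RLHF reward models: minimize
    -log sigmoid(r_chosen - r_rejected) so preferred outputs score higher."""
    return -np.log(1.0 / (1.0 + np.exp(-(r_chosen - r_rejected))))

# Scalar rewards a reward model might assign to two candidate responses.
print(preference_loss(r_chosen=1.2, r_rejected=0.3))  # small loss: ranking is right
print(preference_loss(r_chosen=0.1, r_rejected=1.5))  # large loss: ranking is wrong
```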

Key Models & Platforms

AutoGPT

An experimental open-source AI agent (2023) that uses GPT-4 or GPT-3.5 to autonomously iterate on tasks, self-prompt, and chain together actions towards a high-level goal given by the user (one of the first systems to showcase semi-autonomous GPT-based agents)

BLOOM

A 176B-parameter multilingual LLM developed by the BigScience collaboration (2022) as an open alternative to proprietary models, capable of generating text in multiple languages

BabyAGI

An open-source task automation agent that uses an LLM to create, prioritize, and execute tasks in a loop, inspired by the idea of an “automatic AI executive assistant”; often mentioned alongside AutoGPT as an early exploration of autonomous agent frameworks

Bard

Google’s conversational AI (powered by models like LaMDA and PaLM 2, and later rebranded as Gemini), designed to answer queries and assist with tasks in a way similar to ChatGPT, with access to real-time information via Google Search integration

ChatGPT

OpenAI’s conversational AI assistant based on GPT-3.5/GPT-4, notable for its accessible chat interface and its ability to engage in dialogue, answer questions, and perform tasks through natural language

Claude

Anthropic’s large language model assistant, designed with a focus on being helpful, honest, and harmless; known for using Constitutional AI training and able to handle very large context lengths

Claude 2

The second-generation Claude (Anthropic, 2023), offering improved reasoning, coding, and a context window of up to 100K tokens, allowing it to digest and output very long documents or conversations

DALL-E 3

The third iteration of OpenAI’s text-to-image generative model (2023), which produces highly detailed and accurate images from text prompts and is better integrated with language understanding (accessible via Bing Image Creator and directly within ChatGPT)

DeepSeek R1

DeepSeek’s “Reasoner 1” model (2025), open-sourced under MIT license, aimed at robust reasoning tasks and serving as a foundation for building agentic capabilities and long-term reasoning

DeepSeek V3

An advanced AI model from DeepSeek (2024) focusing on reasoning, coding, and tool use capabilities; part of DeepSeek’s series of models, with improvements in “DeepThink” reasoning modes and available via API

DeepSeek-VL

DeepSeek’s vision-language model (2024) that can process both text and visual inputs, handling tasks like image understanding, OCR, and diagram analysis by combining language and visual reasoning

Falcon

A series of large open models (Falcon-7B, Falcon-40B) released by the Technology Innovation Institute, which achieved top ranks among open-source LLMs (40B version particularly known for chat and reasoning prowess)

GPT-3

Generative Pre-trained Transformer 3, OpenAI’s 175-billion-parameter language model (2020) that demonstrated striking few-shot learning abilities and set the stage for the modern LLM era

GPT-3.5

An improved series of GPT-3-based models (e.g. OpenAI’s text-davinci-003 and GPT-3.5-Turbo, released 2022) that powered early ChatGPT versions, offering better instruction-following and dialogue capabilities

GPT-4

OpenAI’s flagship 4th-generation LLM (2023) known for its advanced reasoning, more factual responses, and multimodal vision capabilities (in the vision-enabled version)

Gemini

Google DeepMind’s multimodal AI model family (first released in late 2023), combining advanced language capabilities with image, audio, and code understanding, and positioned as a direct rival to GPT-4

GitHub Copilot

An AI pair-programmer tool (by GitHub/OpenAI) that suggests code completions and functions directly in the editor, based on the context in the code file; powered initially by OpenAI Codex and now by advanced GPT models

Hugging Face Transformers

A widely used open-source library and platform by Hugging Face that provides implementations of numerous transformer models (BERT, GPT, T5, etc.) and tools for training and inference, instrumental in democratizing access to state-of-the-art AI models

LLaMA 2

Meta’s openly released LLM (2023) and successor to LLaMA, available in various sizes (7B to 70B parameters) under a permissive community license, spurring a wave of community models and fine-tuned variants

LangChain

A framework for developing applications with LLMs by “chaining” together components like prompt templates, memory, and tool use; provides standardized interfaces to build complex agentic behavior (e.g. multi-step reasoning or using external tools)

Midjourney

A commercial generative art model/service known for its high-quality, aesthetically pleasing image outputs from text prompts; widely used by artists and designers through a Discord interface

Mistral

Mistral AI’s 7B-parameter open LLM (2023) known for its strong performance relative to size and fully open availability, demonstrating efficient scaling and competitive ability on many tasks

PaLM 2

Google’s Pathways Language Model 2 (2023), a family of advanced LLMs with strong multilingual and reasoning skills, used across Google’s products and as the base for models like Bard

Stable Diffusion

An influential open-source text-to-image diffusion model (released 2022 by Stability AI) that can generate images on consumer hardware; its open release spurred a community ecosystem for image generation

Vicuna

A popular 13B-parameter chat model created by fine-tuning LLaMA on user-shared ChatGPT conversations; notable as an open-source chatbot whose dialogue quality approaches that of ChatGPT (GPT-3.5)

Whisper

OpenAI’s powerful speech recognition model (2022) capable of transcribing audio in multiple languages and formats with high accuracy, released open-source and widely used for automated transcription

LLMs & Architecture

Context Window

The span of tokens an LLM can consider at once (context length); larger windows enable the model to handle longer inputs or conversations

Decoder-Only Model

Transformer architecture that generates text by predicting the next token in a sequence (e.g. GPT series models)

Deep Learning

The use of large neural networks with multiple layers to learn complex patterns from data

Encoder-Decoder Model

Architecture with an encoder network that processes input (e.g. text) into a representation, and a decoder that generates an output (used in translation, etc.)

Foundation Model

A large AI model (like an LLM) trained on broad data at scale, adaptable to many downstream tasks

Generative Pre-trained Transformer (GPT)

A family of decoder-only LLMs pre-trained on large text corpora and designed to generate human-like text

Language Modeling (Next-Token Prediction)

The core training task for generative language models: predicting the next token in a sequence given the preceding tokens
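
As a minimal illustration, the training loss at each position is simply the negative log-probability the model assigns to the true next token (the toy logits below are made up):

```python
import numpy as np

def next_token_loss(logits, target_id):
    """Cross-entropy loss for one prediction step: -log p(correct next token)."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return -np.log(probs[target_id])

# Toy 4-token vocabulary; the model should predict token id 2 next.
print(next_token_loss(np.array([0.1, 0.2, 3.0, -1.0]), target_id=2))
```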

Large Language Model (LLM)

A deep neural network (often transformer-based) trained on massive text data to generate or understand language

Multi-Head Attention

Technique in transformers where multiple self-attention operations run in parallel to capture different relationships

Neural Network

Computing system inspired by the human brain’s networks of neurons, composed of layers of interconnected nodes

Self-Attention

Mechanism that allows a model to weigh the importance of different parts of the input sequence when encoding a representation
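
A bare-bones NumPy sketch of scaled dot-product self-attention (single head, no masking, with illustrative random weights):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token vectors X.
    Each output position is a weighted mix of all value vectors, with weights
    given by query-key similarity: softmax(Q K^T / sqrt(d_k)) V."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                     # 5 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)      # (5, 8)
```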

Tokenization

The process of breaking text into tokens (words, subwords, or characters) that serve as the basic units for an LLM
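
For example, using the Hugging Face Transformers library mentioned above (requires `pip install transformers`; downloads the GPT-2 vocabulary on first run):

```python
from transformers import AutoTokenizer

# GPT-2's byte-pair-encoding tokenizer; fetched on first use.
tok = AutoTokenizer.from_pretrained("gpt2")

text = "Tokenization splits text into subwords."
print(tok.tokenize(text))   # subword pieces, e.g. ['Token', 'ization', ...]
print(tok.encode(text))     # the integer ids the model actually consumes
```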

Transformer

A neural network architecture using self-attention mechanisms to efficiently model long-range dependencies in sequences

Memory & Retrieval

Embeddings

Numerical vector representations of data (text, images, etc.) where semantic similarity corresponds to geometric proximity; used by LLMs to represent words/phrases and by retrieval systems to find related information
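
A minimal sketch of the geometric idea, using made-up 4-dimensional vectors (real embeddings typically have hundreds or thousands of dimensions):

```python
import numpy as np

def cosine_similarity(a, b):
    """Semantic closeness of two embedding vectors: 1.0 = same direction."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Made-up embeddings purely for illustration.
cat, kitten, car = (np.array(v) for v in
                    ([0.9, 0.1, 0.0, 0.3], [0.8, 0.2, 0.1, 0.3], [0.0, 0.9, 0.8, 0.1]))
print(cosine_similarity(cat, kitten))  # high: related concepts
print(cosine_similarity(cat, car))     # low: unrelated concepts
```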

Episodic Memory

Memory of specific events or experiences for an AI agent (analogous to human episodic memory), such as remembering the sequence of its past actions or dialogues

Knowledge Base

A repository of information (documents, FAQs, structured data, etc.) that an AI system can draw upon; often linked via retrieval so the model can access facts beyond its parametric memory

Knowledge Graph

A structured network of real-world entities and their relationships, which can serve as an external knowledge source for AI (some systems use knowledge graphs to provide factual grounding)

Long-Term Memory

Persistent memory for an AI agent, enabling it to retain information beyond the immediate context window (often implemented via external storage like databases or summaries of past interactions)

Memory Retrieval

The process of recalling stored information from an AI’s long-term memory or database (e.g. fetching relevant notes from a vector store based on the current query or situation)

Retrieval-Augmented Generation (RAG)

A technique that augments an LLM by retrieving relevant documents or facts (using vector search or other methods) and providing them as additional context for the model to improve accuracy
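
A minimal end-to-end sketch, with a deliberately crude character-frequency embedding standing in for a real embedding model; the retrieved passage is stuffed into the prompt that would then be sent to the LLM:

```python
import numpy as np

def embed(text):
    """Toy embedding: normalized character-frequency vector. A real RAG
    system would call an embedding model here."""
    v = np.zeros(26)
    for ch in text.lower():
        if ch.isalpha():
            v[ord(ch) - ord('a')] += 1
    return v / (np.linalg.norm(v) or 1.0)

docs = [
    "The warranty covers manufacturing defects for two years.",
    "Refunds are processed within five business days.",
    "Our office is closed on public holidays.",
]
doc_vecs = np.stack([embed(d) for d in docs])

def retrieve(query, k=1):
    sims = doc_vecs @ embed(query)           # cosine similarity (unit vectors)
    return [docs[i] for i in np.argsort(sims)[::-1][:k]]

query = "How long does a refund take?"
context = "\n".join(retrieve(query))
# The retrieved passage is prepended so the LLM can ground its answer in it.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # this string would be sent to the LLM
```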

Semantic Memory

Memory for general knowledge and facts that an AI agent has acquired (analogous to human semantic memory), which it can draw on when needed

Short-Term Memory (Context)

The transient memory of an AI agent or LLM, typically the content of the current context window (recent dialogue or data in prompt) that the model can directly utilize

Vector Database (Vector Store)

A specialized database for storing embedding vectors and enabling efficient similarity search (e.g. finding the nearest vectors to a query), used to help LLMs retrieve relevant context
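
A toy in-memory version showing the core add/search interface; production vector databases add persistence and approximate-nearest-neighbor indexes so search stays fast at millions of vectors:

```python
import numpy as np

class VectorStore:
    """Minimal in-memory vector store: brute-force cosine search."""
    def __init__(self):
        self.vectors, self.payloads = [], []

    def add(self, vector, payload):
        v = np.asarray(vector, dtype=float)
        self.vectors.append(v / np.linalg.norm(v))   # store unit vectors
        self.payloads.append(payload)

    def search(self, query, k=2):
        q = np.asarray(query, dtype=float)
        sims = np.stack(self.vectors) @ (q / np.linalg.norm(q))
        return [self.payloads[i] for i in np.argsort(sims)[::-1][:k]]

store = VectorStore()
store.add([1.0, 0.0, 0.2], "doc about cats")
store.add([0.9, 0.1, 0.3], "doc about kittens")
store.add([0.0, 1.0, 0.8], "doc about cars")
print(store.search([1.0, 0.05, 0.25], k=2))  # the two cat-related docs
```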

Model Training & Fine-Tuning

Adapters (Adapter Layers)

Small additional layers inserted into a model’s architecture and trained for a new task, allowing the bulk of the original model to remain fixed

Fine-Tuning

Additional training of a pre-trained model on a specific task or dataset to specialize its behavior (e.g. fine-tuning an LLM for coding or chat)

Instruction Tuning

Fine-tuning an LLM on datasets of task instructions and responses, so it better follows human instructions and prompts

Knowledge Distillation

Technique where a smaller “student” model is trained to replicate the behaviors of a larger “teacher” model, transferring knowledge in a compressed form
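
A minimal sketch of the classic distillation loss, assuming logits are available from both models; the temperature-softened teacher distribution serves as the student's training target:

```python
import numpy as np

def softmax(x, T=1.0):
    e = np.exp((x - x.max()) / T)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy of the student against the teacher's softened distribution.
    Temperature T > 1 exposes the teacher's knowledge about near-miss classes."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return -np.sum(p_teacher * np.log(p_student))

teacher = np.array([4.0, 1.0, 0.5, -2.0])   # confident large model
student = np.array([2.0, 1.5, 0.0, -1.0])   # smaller model in training
print(distillation_loss(student, teacher))  # shrinks as the student mimics the teacher
```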

Low-Rank Adaptation (LoRA)

A PEFT method that injects trainable low-rank update matrices into a model’s layers, enabling effective fine-tuning with minimal parameters
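
A NumPy sketch of the core computation (the dimensions and the `alpha` scaling hyperparameter below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 512, 8                      # model dim vs. low rank (r << d)

W = rng.normal(size=(d, d))        # frozen pre-trained weight matrix
A = rng.normal(size=(r, d)) * 0.01 # trainable low-rank factors: only
B = np.zeros((d, r))               # 2*d*r parameters instead of d*d
alpha = 16                         # LoRA scaling hyperparameter

def lora_forward(x):
    # Original path plus the low-rank update; B starts at zero, so the
    # adapted model initially matches the frozen base model exactly.
    return x @ W.T + (x @ A.T) @ B.T * (alpha / r)

x = rng.normal(size=(1, d))
print(lora_forward(x).shape)       # (1, 512); in training only A and B update
```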

Model Compression

Broad set of methods (distillation, quantization, pruning, etc.) to reduce model size or computation while preserving performance

Parameter-Efficient Fine-Tuning (PEFT)

Techniques to fine-tune large models by adjusting only a small number of parameters, making training more efficient

Pre-training

Initial training of a model on a broad, generic task (like next-word prediction on huge text data) to learn general patterns

Prompt Tuning

A fine-tuning approach in which a small set of soft prompt embeddings is learned and prepended to inputs, steering the frozen pre-trained model toward a new task without updating its weights

Pruning

Removing unnecessary weights or neurons from a model (usually those with small effect), to make it smaller and faster with minimal loss in accuracy

Quantization

Technique to reduce model size and speed up inference by using lower-precision numbers (e.g. 8-bit or 4-bit) for model weights and calculations
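
A minimal sketch of symmetric 8-bit quantization with a single shared scale (real schemes typically use per-channel or per-group scales):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric 8-bit quantization: map float weights onto integers in
    [-127, 127] with one shared scale, shrinking storage roughly 4x vs float32."""
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=1000).astype(np.float32)
q, scale = quantize_int8(w)
error = np.abs(w - dequantize(q, scale)).max()
print(q.dtype, f"max rounding error: {error:.4f}")  # small vs. weight magnitudes
```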

Supervised Fine-Tuning (SFT)

Fine-tuning using explicit human-written examples of inputs and desired outputs (often a step before RLHF in training chatbots)

Multimodal & Generative AI

Audio Generation

Creating novel audio (speech, music, sound effects) via AI models, such as generating music from a genre prompt or producing human-like speech from text

CLIP (Contrastive Language-Image Pretraining)

A model that learns a joint embedding for images and text by training on image-caption pairs, used to connect text and image domains (e.g. powering text-guided image generation)
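
A sketch of the symmetric contrastive objective behind this kind of training, with random stand-ins for real image and text embeddings; matching pairs sit on the diagonal of the similarity matrix and should outscore every mismatched pair:

```python
import numpy as np

def clip_style_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of (image, caption) pairs."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature           # pairwise similarities
    n = len(logits)
    diag = np.arange(n)
    # Image-to-text direction: each image should pick its own caption...
    lp_i2t = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # ...and text-to-image: each caption should pick its own image.
    lp_t2i = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    return -(lp_i2t[diag, diag].mean() + lp_t2i[diag, diag].mean()) / 2

rng = np.random.default_rng(0)
print(clip_style_loss(rng.normal(size=(4, 16)), rng.normal(size=(4, 16))))
```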

Diffusion Model

A generative model (popular for image synthesis) that gradually transforms random noise into a coherent image through iterative denoising steps (e.g. as used in Stable Diffusion)
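
The forward (noising) half of the process has a simple closed form; here is a sketch with an illustrative noise-schedule value `alpha_bar_t` (the trained network learns to reverse these steps):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(x0, alpha_bar_t):
    """Forward diffusion step: blend data with Gaussian noise per the schedule."""
    noise = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar_t) * x0 + np.sqrt(1 - alpha_bar_t) * noise

x0 = rng.normal(size=(8, 8))                        # stand-in for an image
slightly_noisy = add_noise(x0, alpha_bar_t=0.99)    # early step: mostly signal
mostly_noise   = add_noise(x0, alpha_bar_t=0.01)    # late step: mostly noise

corr = np.corrcoef(x0.ravel(), slightly_noisy.ravel())[0, 1]
print(f"correlation with original at alpha_bar=0.99: {corr:.2f}")  # near 1.0
```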

Generative Adversarial Network (GAN)

A class of generative models with two components (generator and discriminator) trained in opposition, where the generator learns to produce data (images, etc.) that can fool the discriminator

Image Captioning

Task of generating a descriptive caption for a given image, often using vision-language models

Image Generation

The creation of novel images by AI models from some input (such as text prompts or style examples), exemplified by generative models like GANs and diffusion models

Multimodal AI

AI systems or models that integrate multiple types of data (e.g. text, vision, audio) in processing or generation, enabling richer understanding and output

Speech Recognition (ASR)

Automatic Speech Recognition, converting spoken audio into text (e.g. models like Whisper transcribe audio to text)

Text-to-Image

Generative AI task where a model creates an image based on a given text description (prompt)

Text-to-Speech (TTS)

Generating spoken audio from text input, allowing AI systems to talk (voice outputs can be made to sound natural via advanced TTS models)

Variational Autoencoder (VAE)

A generative model that learns to encode data into a latent space and then decode (sample) from that space to generate new data; often used in combination with other models (like providing a latent space for diffusion)

Video Generation

The AI-driven creation of video content, potentially from prompts (e.g. text-to-video models) or by interpolating between images, though still in early stages of development

Vision-Language Model (VLM)

A model that connects visual data and language, allowing tasks like describing images, answering questions about images, or aligning image and text embeddings

Visual Question Answering (VQA)

Task where a system answers natural language questions about a given image; it requires understanding the image content and the question

Planning & Autonomy

Action-Observation Loop

The iterative cycle in which an agent takes an action in an environment (or calls a tool), then observes the result, then decides the next action, and so on (also called the perception-action loop)

Agent (AI Agent)

An AI system that perceives its environment (through inputs) and takes actions autonomously to achieve goals; often refers to an LLM-based program that can iteratively reason and act

Agent Executor

In agent frameworks (e.g. LangChain), the part that actually carries out the plan or loop of an agent: selecting actions, executing them, observing results, and iterating this process

Agentic AI

A descriptor for AI systems that demonstrate agency, i.e. that can make decisions, take initiatives, and perform tasks autonomously rather than just responding passively

Autonomous Agent

An AI agent that operates with a high degree of independence, deciding on its own actions towards achieving an objective without step-by-step human guidance

Environment (for Agents)

The external context or space in which an agent operates and with which it interacts (could be a real physical environment, a simulated world, or a conceptual task environment for the agent)

Generative Agents

A term referring to AI agents (often powered by LLMs) that generate behaviors and interactions autonomously, sometimes used to simulate human-like agents in interactive environments (e.g. simulated game or social environments)

Goal-Oriented Behavior

Behavior of an AI agent that is driven by specified objectives or end states; the agent chooses actions based on progress toward fulfilling a goal

Multi-Agent System

A setup where multiple AI agents (or agents and humans) interact or collaborate, which can lead to emergent behaviors and require coordination strategies

Orchestration

The process of coordinating multiple components or agents in an AI system, ensuring that various models, tools, or sub-agents work together in a coherent workflow to accomplish complex tasks

Partial Observability

A condition where an agent does not have access to the complete state of the environment, requiring it to make decisions under uncertainty (many real-world scenarios are partially observable)

Plan-and-Execute

An agent strategy where a high-level plan is formulated (possibly using one model or process) and then executed stepwise (possibly by another process), as opposed to interleaving planning and execution at each step

Planner (Planning Module)

A component or algorithm that devises a sequence of actions or steps to achieve a given goal (some agent frameworks include a dedicated planning module to map out tasks)

Policy

In an agent or RL context, the strategy that defines the agent’s actions in each state or situation (can be learned or hard-coded); a policy maps observations to the next action the agent should take

Reinforcement Learning (RL)

An AI learning paradigm where an agent learns to make sequences of decisions by receiving rewards or penalties; often used for training autonomous agents via trial and error
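
A minimal tabular Q-learning sketch on a made-up five-state corridor environment (reaching the right end pays reward 1, everything else pays 0):

```python
import numpy as np

N_STATES, GOAL = 5, 4                          # states 0..4 in a line
rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, 2))                    # learned action values

def step(s, a):                                # actions: 0 = left, 1 = right
    s2 = max(0, min(GOAL, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == GOAL else 0.0)

alpha, gamma, eps = 0.5, 0.9, 0.2              # learning rate, discount, exploration
for _ in range(200):                           # episodes of trial and error
    s = 0
    while s != GOAL:
        a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r = step(s, a)
        # Q-learning update: nudge Q toward reward + discounted best future value.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(np.argmax(Q[:GOAL], axis=1))  # learned policy for non-terminal states: all 1 ("right")
```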

Reward Function

In reinforcement learning or goal-driven agents, a function that provides feedback (reward) to the agent, quantifying how well it is doing with respect to the goal; the agent tries to maximize cumulative reward

Task Decomposition

Breaking a complex goal into smaller, manageable subtasks (an ability of advanced agents to plan hierarchically or tackle problems step-by-step)

World Model

An internal model of the environment that an autonomous agent maintains or learns, enabling it to simulate outcomes of actions or reason about the environment’s dynamics (commonly used in robotics and planning)

Reasoning & Prompting Techniques

Chain-of-Thought (CoT)

A prompting or reasoning approach where the model is encouraged to generate intermediate reasoning steps before giving a final answer

Few-Shot Prompting

Providing a few example inputs and outputs in the prompt to demonstrate a task to the model, improving performance on that task without updating model weights
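
An illustrative few-shot prompt; a model completing it will typically answer "pomme" purely from the in-context examples, with no weight updates:

```
Translate English to French.

English: cheese
French: fromage

English: bread
French: pain

English: apple
French:
```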

In-Context Learning

The phenomenon where an LLM learns to perform tasks from the prompt context alone (such as the few-shot examples) without any parameter updates

Prompt Engineering

Crafting and optimizing input prompts to guide an LLM’s outputs effectively (including wording, format, adding context or constraints)

ReAct (Reason+Act)

An approach that interleaves the model’s chain-of-thought reasoning with actions (e.g. calling tools), allowing the model to think stepwise and use tools when needed
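
A toy version of the control loop, with a canned stand-in for the model so it runs end to end; `search` and `finish` are illustrative action names, not a fixed standard:

```python
# Hypothetical llm() replays a canned ReAct-style transcript; a real system
# would call an actual model with the growing transcript as its prompt.
SCRIPT = iter([
    "Thought: I should look up the population.\nAction: search[population of Oslo]",
    "Thought: I have the answer.\nAction: finish[about 0.7 million people]",
])
def llm(prompt):
    return next(SCRIPT)

TOOLS = {"search": lambda q: "Oslo has roughly 700,000 inhabitants."}

def react(question, max_steps=5):
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        out = llm(transcript)                       # model thinks, then acts
        transcript += "\n" + out
        action = out.rsplit("Action: ", 1)[1]
        name, arg = action.split("[", 1)
        arg = arg.rstrip("]")
        if name == "finish":                        # model decided it is done
            return arg
        # Run the chosen tool and feed the observation back into the context.
        transcript += f"\nObservation: {TOOLS[name](arg)}"

print(react("What is the population of Oslo?"))
```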

Reflexion (Self-Reflection)

A method where an agent reflects on its past actions/answers, critiques itself, and uses that feedback to improve subsequent reasoning or responses

Role Prompting (Persona Assignment)

A prompting technique where the model is instructed to “act as” a certain role or persona (e.g. a helpful tutor), influencing the style and content of its responses

Self-Consistency

Decoding strategy for reasoning tasks where multiple reasoning paths or answers are sampled, and the final answer is chosen by picking the most common result among them
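
A minimal sketch, with a hypothetical stand-in for sampling an LLM's reasoning at nonzero temperature:

```python
import random
from collections import Counter

random.seed(0)

def sample_reasoning_path(question):
    """Hypothetical stand-in for one sampled chain-of-thought from an LLM;
    here it just returns a noisy final answer."""
    return random.choice(["42", "42", "42", "41", "44"])

def self_consistent_answer(question, n_samples=15):
    answers = [sample_reasoning_path(question) for _ in range(n_samples)]
    # Majority vote over independently sampled reasoning paths.
    return Counter(answers).most_common(1)[0][0]

print(self_consistent_answer("What is 6 * 7?"))  # "42" wins the vote
```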

System Message (System Prompt)

In structured chat prompting (e.g. ChatGPT API), a special initial prompt that sets the context or rules for the assistant (e.g. defining its role or tone)
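
For example, in the role-tagged chat format used by chat-completion APIs:

```python
# The system message sets the rules before any user turn is processed;
# it shapes the assistant's tone and constraints for the whole conversation.
messages = [
    {"role": "system",
     "content": "You are a concise assistant. Answer in one sentence."},
    {"role": "user",
     "content": "Why is the sky blue?"},
]
```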

Tree-of-Thoughts (ToT)

A strategy where the model explores multiple reasoning paths (branching out possible solution steps like a tree) and evaluates or backtracks to find the best outcome

Zero-Shot CoT

A prompt technique that elicits reasoning without examples by appending a phrase like “Let’s think step by step,” prompting the model to generate a chain-of-thought

Zero-Shot Prompting

Prompting a model to perform a task with no explicit examples, relying on the model’s pre-trained knowledge and instruction-following ability

Tool Use & Integration

API Call

A mechanism by which an AI agent invokes an external application programming interface (API) to perform an action or retrieve information (e.g. calling a weather API)

Code Execution

Ability for an AI agent to write and run code as a tool (e.g. using a Python interpreter to calculate or manipulate data), enabling solving problems that require computation

Function Calling

A structured way for an AI model to request using a tool or function by outputting a JSON/structured payload specifying the tool and parameters (notably introduced in OpenAI’s API to integrate tools)
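
An illustrative sketch of the round trip (the schema shape and names below are examples, not any one vendor's exact spec):

```python
import json

# Illustrative tool schema in the general shape chat APIs accept.
WEATHER_TOOL = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def get_weather(city):
    return f"18°C and cloudy in {city}"          # stand-in for a real API call

# Instead of prose, the model emits a structured call like this:
model_output = '{"name": "get_weather", "arguments": {"city": "Oslo"}}'

call = json.loads(model_output)
if call["name"] == "get_weather":                # dispatch to the real function
    result = get_weather(**call["arguments"])
    print(result)                                # fed back to the model as context
```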

MRKL

Stands for Modular Reasoning, Knowledge and Language, an architecture that uses an LLM as a “router” alongside specialized external modules (tools or calculators) to solve complex tasks

Plugins (AI Plugins)

Extendable tools or modules that an AI system can use when needed (for example, ChatGPT plugins that allow it to access external services like web browsing or bookings)

Tool Use (AI Tool Augmentation)

Equipping AI systems (especially LLMs) with the ability to use external tools or APIs (calculators, web search, databases, etc.) to enhance their capabilities and accuracy

Toolformer

A model or approach where the LLM is trained to decide when to invoke certain tools by inserting special tool-use tokens in its output, effectively learning tool use during its training

Web Browsing (Web Access)

Capability of an AI agent to perform internet searches or navigate web pages to retrieve up-to-date information (as seen in systems like WebGPT or Bing Chat)