
MCP prompts: A complete introductory guide

Learn what MCP prompts are, how they work with tools and resources, and how to use them to streamline software testing workflows.


Agentic AI is changing software testing, moving from simple assistants to intelligent systems that plan, run, and analyze workflows on their own.

A 2026 survey of 500 execs at $100M+ companies shows everyone plans to expand agentic AI, but scaling it remains a challenge—results can be inconsistent, and processes often lack repeatability.

For QA and DevOps teams, MCP (Model Context Protocol) prompts offer a solution by ensuring predictability and structured output from prompts, which can also be reused across workflows.

This guide will introduce MCP prompts, explain how they work, and show practical ways to integrate them into testing workflows from test generation to defect analysis and regression evaluation.

What are MCP prompts?

TL;DR: MCP prompts are reusable templates on an MCP server. They structure how models receive input and produce responses.

If MCP feels a bit abstract, let me ground it. MCP is a standardized way for AI clients to discover and safely call the capabilities exposed by MCP servers. It defines how context, tools, and interactions flow between them.

Prompts, tools, and resources are all first-class capability primitives of MCP—the lowest-level, standardized building blocks through which MCP servers expose functionality to AI clients.

MCP prompts are reusable templates defined on an MCP server that consistently direct what an AI model receives and responds to. They’re listed via prompts/list and fetched with prompts/get, where typed arguments get injected at runtime.

As the MCP docs put it, “Prompts allow servers to provide structured messages and instructions for interacting with language models. Clients can discover available prompts, retrieve their contents, and provide arguments to customize them.”

Critically, prompts are user-controlled, meaning they’re intentionally selected and never auto-triggered by the model at runtime.


In clients, they can be triggered with slash commands or other client-defined UI patterns, and the server returns a structured messages array (role-tagged, content-typed), ready to be injected directly into the LLM context.

How MCP prompts are structured

TL;DR: MCP prompts follow a defined schema with name, description, arguments, and optional metadata. They return structured messages with typed content.

MCP prompts follow a well-defined schema that makes them predictable and also easy to evaluate across MCP clients and MCP servers.

At its core, each prompt defines:

  1. name: A unique, machine-readable identifier used for programmatic access.
  2. title: An optional, human-friendly label you can display.
  3. description: An optional short explanation of the prompt’s purpose and expected behavior.
  4. icons: An optional array of visual assets for rendering the prompt in client interfaces.
  5. arguments: An optional typed input schema defining the parameters required.
Here’s what that looks like in a prompts/list response:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "prompts": [
      {
        "name": "code_review",
        "title": "Request Code Review",
        "description": "Asks the LLM to analyze code quality and suggest improvements",
        "arguments": [
          {
            "name": "code",
            "description": "The code to review",
            "required": true
          }
        ],
        "icons": [
          {
            "src": "https://example.com/review-icon.svg",
            "mimeType": "image/svg+xml",
            "sizes": [
              "any"
            ]
          }
        ]
      }
    ]
  }
}
```

Arguments are marked with required: true/false. Required arguments must be supplied for prompts/get to resolve, while optional ones may be omitted. So if a prompt requires test_id, you must pass it, and you can skip optional ones.
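To make that concrete, here is a minimal, SDK-agnostic sketch in Python of how a client might check the arguments it is about to send against a prompt’s advertised metadata before calling prompts/get. The optional style_guide argument is hypothetical and added only for illustration; the code_review prompt itself is the example shown above.

```python
def missing_required_args(prompt: dict, supplied: dict) -> list[str]:
    """Return the names of required arguments that were not supplied."""
    return [
        arg["name"]
        for arg in prompt.get("arguments", [])
        if arg.get("required") and arg["name"] not in supplied
    ]


# Prompt metadata as a client would receive it from prompts/list
code_review_prompt = {
    "name": "code_review",
    "arguments": [
        {"name": "code", "description": "The code to review", "required": True},
        {"name": "style_guide", "description": "Hypothetical optional argument", "required": False},
    ],
}

# Missing the required "code" argument: prompts/get should not be called yet
print(missing_required_args(code_review_prompt, {"style_guide": "PEP 8"}))  # ['code']

# Required argument present, optional one omitted: fine to call prompts/get
print(missing_required_args(code_review_prompt, {"code": "def hello(): ..."}))  # []
```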

When you retrieve a prompt via prompts/get, you get back a structured messages array. Each message supports:

  1. Text: Plain text is the primary format for instructions and input.
  2. Image: Base64-encoded data with a valid media type (e.g., image/png), allowing visual information to be included in messages.
  3. Audio: Base64-encoded data with a valid media (MIME) type, allowing audio information to be included in messages.
  4. Embedded resources: Server-side artifacts (documentation, code samples, or other reference material) injected directly into context.
For example, a prompts/get response carrying a single text message looks like this:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "description": "Code review prompt",
    "messages": [
      {
        "role": "user",
        "content": {
          "type": "text",
          "text": "Please review this Python code:\ndef hello():\n    print('world')"
        }
      }
    ]
  }
}
```

Prompts also support role: user and role: assistant, enabling controlled multi-turn interaction patterns within a single execution.
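As a rough sketch of how those pieces combine, here is how a server built with the Python MCP SDK (the same mcp package used later in this guide) might return a multi-turn prompt result that mixes both roles with an embedded resource. The log file URI and its contents are invented purely for illustration.

```python
from mcp.types import (
    EmbeddedResource,
    GetPromptResult,
    PromptMessage,
    TextContent,
    TextResourceContents,
)

# A multi-turn prompt result: a user request, a scripted assistant reply,
# and a server-side log file embedded directly into the context.
result = GetPromptResult(
    description="Analyze a failing test with its log attached",
    messages=[
        PromptMessage(
            role="user",
            content=TextContent(type="text", text="Why is this test failing?"),
        ),
        PromptMessage(
            role="assistant",
            content=TextContent(type="text", text="Let me check the attached log first."),
        ),
        PromptMessage(
            role="user",
            content=EmbeddedResource(
                type="resource",
                resource=TextResourceContents(
                    uri="file:///logs/test_run.log",  # illustrative URI
                    mimeType="text/plain",
                    text="AssertionError: expected 200, got 500",
                ),
            ),
        ),
    ],
)
```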


Discovering and using MCP prompts

TL;DR: Clients first list available prompts, then retrieve them with arguments. The result is a ready-to-use message array for the model.

Consuming prompts in an MCP setup follows a two-step protocol flow: list, then get.

Step 1: prompts/list

You start by discovering what exists:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "prompts/list",
  "params": {
    "cursor": "optional-cursor-value"
  }
}
```

Then, you get back prompt metadata (name, title, description, arguments), plus nextCursor for pagination if there’s more.
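If the server has more prompts than fit in one page, you keep requesting pages until nextCursor disappears. Here’s a minimal sketch of that loop in Python; send_request is a hypothetical helper standing in for however your client actually transports JSON-RPC messages.

```python
def list_all_prompts(send_request) -> list[dict]:
    """Collect every prompt by following nextCursor until the server stops returning one."""
    prompts, cursor, request_id = [], None, 1
    while True:
        response = send_request({
            "jsonrpc": "2.0",
            "id": request_id,
            "method": "prompts/list",
            "params": {"cursor": cursor} if cursor else {},
        })
        result = response["result"]
        prompts.extend(result["prompts"])
        cursor = result.get("nextCursor")
        if not cursor:
            return prompts
        request_id += 1
```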

Step 2: prompts/get

Once you know the prompt, you resolve it by passing concrete argument values:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "prompts/get",
  "params": {
    "name": "code_review",
    "arguments": {
      "code": "def hello():\n    print('world')"
    }
  }
}
```

This returns a fully resolved messages array ready for the model.

If you’re now wondering how clients stay updated, MCP servers emit notifications/prompts/list_changed (if enabled), allowing MCP clients to re-fetch without polling.

Prompt arguments also support auto-completion via the completion API, suggesting valid values as you type.
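Putting the two steps together, here is a minimal client-side sketch using the official Python MCP SDK (the same mcp package the server example below uses). It assumes a stdio server exposing the code_review prompt can be launched with the command shown; adjust the command and arguments to match your own setup.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Assumed launch command for a local MCP server exposing prompts over stdio
server_params = StdioServerParameters(command="python", args=["mcp_server.py"])


async def main():
    async with stdio_client(server_params) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()

            # Step 1: discover what prompts exist
            listed = await session.list_prompts()
            print([p.name for p in listed.prompts])

            # Step 2: resolve one of them with concrete argument values
            resolved = await session.get_prompt(
                "code_review",
                arguments={"code": "def hello():\n    print('world')"},
            )
            for message in resolved.messages:
                print(message.role, message.content)


asyncio.run(main())
```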


Building an MCP prompt server

TL;DR: An MCP server defines prompts and returns structured instructions. It uses handlers to list prompts and generate responses dynamically.

The best way to understand MCP prompts is to build one. Below is one in Python (version 3.12) that exposes a single prompt—a defect summary generator that takes a bug description and severity level, then instructs the model to produce a structured defect report.

Note that this doesn’t run your tests or execute any code. It only shapes the instruction the model receives.

Quick setup

Use uv for a clean, isolated environment:

```bash
uv init --app && uv add mcp
```

Create your server file (e.g., mcp_server.py) and then define two handlers: one registered with @app.list_prompts() to advertise what prompts exist, and one with @app.get_prompt() to serve the fully resolved instruction when an MCP client requests it.

The server code

```python
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Prompt, PromptArgument, PromptMessage, TextContent, GetPromptResult

import anyio
# Initialize the MCP server with a name your clients will recognize
app = Server("qa-prompt-server")


@app.list_prompts()
async def list_prompts():
    # This is what clients see when they call prompts/list
    return [
        Prompt(
            name="defect_summary",
            title="Defect Summary Generator",
            description="Generates a structured defect report from raw bug notes",
            arguments=[
                PromptArgument(
                    name="bug_description",
                    description="Raw notes or observations about the bug",
                    required=True  # Client must supply this
                ),
                PromptArgument(
                    name="severity",
                    description="Bug severity: critical, high, medium, or low",
                    required=False  # Optional — prompt still works without it
                ),
            ],
        )
    ]


@app.get_prompt()
async def get_prompt(name: str, arguments: dict | None):
    # Arguments may be omitted by the client, so fall back to an empty dict
    arguments = arguments or {}
    bug = arguments.get("bug_description", "")
    severity = arguments.get("severity", "unspecified")

    # Build the instruction that gets sent to the model
    instruction = f"""
    You are a QA engineer writing a defect report.
    Severity: {severity}
    Bug Notes: {bug}

    Produce a structured report with: Summary, Steps to Reproduce,
    Expected vs Actual Result, and Suggested Priority.
    """

    return GetPromptResult(
        messages=[
            PromptMessage(
                role="user",
                content=TextContent(type="text", text=instruction.strip())
            )
        ]
    )


async def main():
    async with stdio_server() as (read_stream, write_stream):
        await app.run(
            read_stream,
            write_stream,
            app.create_initialization_options()
        )


# Boot the server
if __name__ == "__main__":
    anyio.run(main)
```

Why arguments make this reusable

Notice that bug_description is required while severity is optional. This design decision makes the prompt work across completely different scenarios. A critical production crash and a low-priority UI issue can both go through the same template, with only the inputs changing.

Connect to Claude Desktop

To connect your MCP server to Claude Desktop, add the server details to the Claude Desktop config file, located on macOS at ~/Library/Application Support/Claude/claude_desktop_config.json:

```json
{
  "mcpServers": {
    "qa-prompt-server": {
      "command": "/your/path/to/uv",
      "args": [
        "--directory",
        "/your/path/to/project-folder",
        "run",
        "mcp_server.py"
      ]
    }
  },
  "preferences": {
    "coworkScheduledTasksEnabled": false,
    "sidebarMode": "chat",
    "coworkWebSearchEnabled": true,
    "ccdScheduledTasksEnabled": false
  }
}
```

Using it in Claude Desktop

  1. Restart Claude Desktop after saving the config.
  2. Click the “+” icon in the message box.
  3. Select “Connectors.”
  4. Go to “Add from qa-prompt-server” and select “Defect Summary Generator.”
  5. Fill in the arguments form and submit.

The result

After filling in the form with something like:

  • bug_description: Checkout button unresponsive after applying a discount code on mobile Safari
  • severity: high

you’ll see a context card attached to your message input like this:


You can send this directly, or add more context first, then hit the send arrow (the orange upload/send icon in the bottom right) to get your structured report.

The model receives your resolved instruction and responds in the “Summary, Steps to Reproduce, Expected vs Actual Result, and Suggested Priority” structured format we described in our app.get_prompt() handler.


MCP prompts best practices and common pitfalls to avoid

TL;DR: Keep prompts focused, use arguments, and validate inputs. Avoid vague prompts and skipping validation.

MCP prompts are simple to set up but also easy to get wrong. Here’s what works and what to watch out for to avoid inconsistent outputs, broken pipelines, or vulnerabilities:

Best practices

  1. Keep prompts focused on a single task to ensure predictable outputs.
  2. Write a clear, specific description to make prompt discovery easy for systems and users.
  3. Use arguments instead of hard-coding context for flexibility and reusability.
  4. Validate inputs server-side to prevent errors and guard against prompt injection attacks (see the sketch after this list).
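As one way to apply the last point, here’s a minimal validation sketch in the spirit of the defect summary server built earlier. The length limit and allowed severity values are illustrative choices, not part of the MCP spec; the handler would call this helper before interpolating anything into the instruction.

```python
ALLOWED_SEVERITIES = {"critical", "high", "medium", "low", "unspecified"}
MAX_DESCRIPTION_LENGTH = 4000  # illustrative limit, tune to your context window


def validate_defect_arguments(arguments: dict | None) -> dict:
    """Check and normalize arguments before they are interpolated into the prompt text."""
    arguments = arguments or {}

    bug = arguments.get("bug_description", "").strip()
    if not bug:
        raise ValueError("bug_description is required")
    if len(bug) > MAX_DESCRIPTION_LENGTH:
        raise ValueError("bug_description is too long")

    severity = arguments.get("severity", "unspecified").strip().lower()
    if severity not in ALLOWED_SEVERITIES:
        raise ValueError(f"severity must be one of {sorted(ALLOWED_SEVERITIES)}")

    return {"bug_description": bug, "severity": severity}
```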

Pitfalls to avoid

  1. Avoid creating vague prompts that produce inconsistent results.
  2. Never skip input validation. It can introduce security risks.
  3. Don’t ignore listChanged notifications because that will cause clients to miss updates in dynamic environments.

MCP prompts in software testing workflows

TL;DR: MCP prompts standardize AI usage in QA workflows. They ensure consistent outputs for test generation, defect analysis, and regression.

Most QA teams adopt AI and still get inconsistent results because their prompts vary, which breaks the repeatability QA depends on. MCP prompts fix this by standardizing how models are used across workflows. Here are a few practical examples.

1. Generating test cases from requirements

A test_case_generator prompt could be defined to accept a feature ticket as a required argument and an optional framework (e.g., pytest, Cypress), and return structured scenarios that consistently cover happy paths and edge cases, as sketched below.
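For instance, the prompt metadata for such a generator might look like this with the Python MCP SDK; the argument names and wording here are illustrative assumptions, not a defined standard.

```python
from mcp.types import Prompt, PromptArgument

# Illustrative metadata for a test case generation prompt
test_case_generator = Prompt(
    name="test_case_generator",
    title="Test Case Generator",
    description="Turns a feature ticket into structured test scenarios covering happy paths and edge cases",
    arguments=[
        PromptArgument(
            name="feature_ticket",
            description="The requirement or user story to generate tests for",
            required=True,
        ),
        PromptArgument(
            name="framework",
            description="Optional target framework, e.g., pytest or Cypress",
            required=False,
        ),
    ],
)
```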

2. Summarizing defects for triage

A defect_triage prompt could take an error_log and severity, then return a consistent summary of the root-cause hypothesis, affected component, and suggested priority. With a shared prompt, everyone analyzes and summarizes defects the same way.

3. Standardizing regression analysis

A regression_analysis prompt could also take test_run_id and baseline_build, ensuring failures are always evaluated the same way: identifying what broke, what changed, and where issues cluster.


Agentic AI and MCP prompts in testing

TL;DR: Agentic AI runs testing workflows autonomously. MCP prompts provide structure to make these workflows predictable.

Agentic AI refers to systems that can plan, decide, and execute tasks across tools on their own. In testing, that means agents running end-to-end QA workflows with minimal human intervention.

Now here’s the problem: without structure, those agents become unpredictable. That’s where MCP prompts come in. They act as the instruction layer, making sure every step—test generation, execution, analysis—follows the same defined pattern.

Tricentis leads here with secure remote MCP servers. Tricentis Tosca and NeoLoad expose MCP servers for automation and performance testing, qTest MCP structures test management workflows, and SeaLights provides MCP for coverage intelligence.

As Kevin Thompson, CEO of Tricentis, put it, “With MCP and Tricentis Agentic Test Automation…AI doesn’t just assist—it acts to drive productivity, reduce risk, and transform how testing gets done.”

Scaling MCP-powered agentic testing with Tricentis

TL;DR: MCP prompts enable consistent, reusable workflows. Tricentis helps operationalize this for scalable testing.

MCP prompts bring structure, consistency, and reusability to how teams interact with AI, turning one-off instructions into standardized, production-ready workflows.

By separating intent (prompts), execution (tools), and context (resources), they enable reliable, agentic testing systems.

For teams ready to scale this, Tricentis (an agentic quality engineering platform) gives you a way to operationalize MCP, standardizing prompts, coordinating workflows, and making testing outcomes dependable.

To see this in practice, explore how Tricentis can help you scale agentic automation and make your software testing more reliable.

This post was written by Inimfon Willie. Inimfon is a computer scientist with skills in JavaScript, Node.js, Dart, Flutter, and Go. He is very interested in writing technical documents, especially those centered on general computer science concepts, Flutter, and backend technologies, where he can use his strong communication skills and ability to explain complex technical ideas in an understandable and concise manner.

Author:

Guest Contributors

Date: Apr. 21, 2026

FAQs

Can I use MCP prompts with any AI model?

Yes, MCP prompts work with any AI model that can process structured instructions and return predictable outputs.

What makes MCP prompts different from regular AI prompts for software teams?

MCP prompts are server-defined, structured, and reusable, ensuring consistent instructions and reliable results across workflows.

How do MCP prompts help reduce testing errors in software testing?

They enforce input validation and structured outputs, minimizing inconsistencies and mistakes in automated and manual testing workflows.
