Elena's AI Blog

Cursor AI with MCP tools

29 Apr 2026 (updated: 22 Apr 2026) / 8 minutes to read

Elena Daehnhardt


Midjourney 7.0: Three glowing monitors: green Gemini, orange ChatGPT, purple Claude interfaces and an ergonomic keyboard for geeks, HD


TL;DR:
  • MCP turns Cursor into a tool-using assistant by exposing external capabilities through controlled server endpoints, reducing context switching and improving implementation speed.

Previous: Part 8 — Gemini CLI versus Claude CLI

Introduction

MCP (Model Context Protocol) tools connect Cursor AI to external systems. Think databases, APIs, and logging systems.

Why MCP Matters

Usually, debugging means jumping between tools:

  1. Check code in editor
  2. Look at logs in terminal
  3. Query database in another tool
  4. Check API docs in browser
  5. Back to editor

MCP eliminates this context switching by bringing external information into your AI assistant.

MCP servers are what make this possible: they connect Cursor's AI assistant to external systems and data sources, allowing the model to use tools that interact with APIs, databases, and more. Instead of manually explaining a project's structure, you integrate directly with your tools, giving the AI real-time, up-to-date context.

How It Works

An MCP server exposes a set of “tools” that the language model can invoke automatically based on your prompts. Cursor supports several transport methods for these servers, including stdio (for local command-line servers) and Streamable HTTP/SSE (for remote servers).

You configure an MCP server by adding a JSON configuration to a file, either globally (~/.cursor/mcp.json) or for a specific project (.cursor/mcp.json). This configuration specifies the server’s name, command/URL, and any necessary authentication like API keys.
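As a sketch, a global ~/.cursor/mcp.json registering one local stdio server and one remote server might look like this (the server names, package, and URL below are placeholders, not real servers):

```json
{
  "mcpServers": {
    "local-tools": {
      "command": "npx",
      "args": ["-y", "some-mcp-server"],
      "env": {
        "API_KEY": "<your-key>"
      }
    },
    "remote-tools": {
      "url": "https://example.com/mcp"
    }
  }
}
```

Entries with a `command` are launched locally over stdio; entries with a `url` use the remote transport.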

For security, the protocol is designed to keep a “human in the loop,” meaning Cursor should provide visual indicators when a tool is invoked and may require user confirmation for sensitive operations.

Examples of Usage and Setup

Here are examples of how to set up different types of MCP servers, based on common use cases:

1. Apidog MCP Server for API Documentation

This server allows the AI assistant to fetch and interpret your API documentation in real time. This is useful for generating client-side code, validation rules, or error handling based on your API schema.

To install:

  1. Generate an API access token and find your Project ID in Apidog.
  2. Add a JSON configuration to your Cursor mcp.json file. For a Windows user, it might look like this:
{
  "mcpServers": {
    "API specification": {
      "command": "cmd",
      "args": [
        "/c",
        "npx",
        "-y",
        "apidog-mcp-server@latest",
        "--project-id=<project-id>"
      ],
      "env": {
        "APIDOG_ACCESS_TOKEN": "<access-token>"
      }
    }
  }
}
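On macOS or Linux, the `cmd /c` wrapper is unnecessary and npx can be invoked directly (same placeholder project ID and access token as above):

```json
{
  "mcpServers": {
    "API specification": {
      "command": "npx",
      "args": [
        "-y",
        "apidog-mcp-server@latest",
        "--project-id=<project-id>"
      ],
      "env": {
        "APIDOG_ACCESS_TOKEN": "<access-token>"
      }
    }
  }
}
```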

After saving and restarting Cursor, you can use prompts like:

  • “Generate TypeScript interfaces for all data models in our API documentation.”
  • “Create a Python client for the authentication endpoints according to our API documentation.”
  • “Add comments for each field in the Product class based on the API documentation.”

2. Stripe MCP Server for Payment Processing

The Stripe MCP server exposes tools for interacting with the Stripe API, such as creating customers, managing invoices, and creating payment links. This lets you build features on top of the Stripe API using natural language.

To install:

  • You can often use a direct installation link provided by the server developer.
  • Alternatively, you can manually add the server’s configuration to your mcp.json file. For a remote server, the configuration would specify the URL.
{
  "mcpServers": {
    "stripe": {
      "url": "https://mcp.stripe.com"
    }
  }
}

Once configured, the AI agent automatically detects the available tools, such as list_invoices or create_payment_link, and calls them when you ask a related question in the chat.

3. Figma Dev Mode MCP Server for Design-to-Code Workflows

This server connects Cursor to Figma’s Dev Mode, enabling the AI to access design information, components, and layout data. This is perfect for generating code from design files.

To install:

  1. In the Figma desktop app, enable “Dev Mode MCP Server” under your preferences.
  2. Add a new global MCP server in Cursor settings. Because the server runs locally inside the Figma desktop app, the configuration points at a local URL (at the time of writing, it is served on port 3845):
{
  "mcpServers": {
    "Figma": {
      "url": "http://127.0.0.1:3845/mcp"
    }
  }
}

With this server running, you can use prompts to generate code from a selected Figma frame, extract design context like variables and components, and ensure consistency with your codebase by using Code Connect.

Key Benefits

  • Real-Time Context: The AI is always working with the most current data, whether it’s from an API, a local file, or a design document.
  • Reduced Context Switching: You can keep your entire workflow within your IDE, eliminating the need to switch between different applications to find documentation or manage external services.
  • Automated Workflows: The AI can perform complex, multi-step tasks by invoking a sequence of tools without explicit instructions from you at each step.
  • Security: By using mcp.json files and requiring authentication, you can control which tools the AI has access to and ensure secure access to sensitive data.

Simple MCP Example

Create a database query tool:

# tools/db_query.py
import json
import sys
import sqlite3

def query_database(query: str) -> dict:
    """Execute database query and return results."""
    conn = sqlite3.connect('app.db')
    cursor = conn.cursor()
    
    try:
        cursor.execute(query)
        results = cursor.fetchall()
        columns = [description[0] for description in cursor.description]
        
        return {
            "status": "success",
            "data": [dict(zip(columns, row)) for row in results]
        }
    except Exception as e:
        return {"status": "error", "message": str(e)}
    finally:
        conn.close()

if __name__ == "__main__":
    query = sys.stdin.read().strip()
    result = query_database(query)
    print(json.dumps(result))
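Before wiring the tool into Cursor, you can sanity-check the same query-and-serialize logic against a throwaway database (the demo.db file and users table below are made-up examples, not part of the tool):

```python
import json
import sqlite3

def query_database(query: str, db_path: str) -> dict:
    """Same logic as tools/db_query.py, with the database path made a parameter."""
    conn = sqlite3.connect(db_path)
    cursor = conn.cursor()
    try:
        cursor.execute(query)
        # Column names come from the cursor's result description.
        columns = [d[0] for d in cursor.description]
        return {
            "status": "success",
            "data": [dict(zip(columns, row)) for row in cursor.fetchall()],
        }
    except Exception as e:
        return {"status": "error", "message": str(e)}
    finally:
        conn.close()

# Seed a throwaway database, then run a query through the tool logic.
conn = sqlite3.connect("demo.db")
conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("DELETE FROM users")
conn.execute("INSERT INTO users (name) VALUES ('ada'), ('grace')")
conn.commit()
conn.close()

result = query_database("SELECT name FROM users ORDER BY name", "demo.db")
print(json.dumps(result))  # {"status": "success", "data": [{"name": "ada"}, {"name": "grace"}]}
```

A malformed query takes the error branch instead of crashing, which keeps the tool's stdout parseable for the assistant.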

Configure it in your mcp.json file (note that a plain stdin/stdout script like this is a simplified illustration of the tool pattern; a production server would speak the MCP protocol itself, for example via the official Python SDK):

{
  "mcpServers": {
    "database-query": {
      "command": "python",
      "args": ["tools/db_query.py"]
    }
  }
}

Now you can ask:

Use the database-query tool to show me all users created today, then check if there were authentication errors for these users

Cursor AI executes the tool and incorporates results. Pretty cool!

Did you like this post? Please let me know if you have any comments or suggestions about your experience with AI-powered development tools. I am always happy to learn from your experiences, too!


About Elena

Elena, a PhD in Computer Science, simplifies AI concepts and helps you use machine learning.

Citation
Elena Daehnhardt. (2026) 'Cursor AI with MCP tools', daehnhardt.com, 29 April 2026. Available at: https://daehnhardt.com/blog/2026/04/29/cursor-ai-creating-a-mcp-server/