apidog-mcp-server
0.0.8 • Public • Published
The Apidog MCP Server allows you to use your API documentation from Apidog projects as a data source for AI-powered IDEs like Cursor. This means Agentic AI can directly access and work with your API documentation, speeding up development and making your workflow more efficient.
With Apidog MCP Server, developers can leverage the AI assistant to:
- Generate or modify code based on API documentation.
- Search through API documentation content.
- And much more! Let your imagination and your team’s creativity guide you. 😜
🎯 How the Apidog MCP Server Works
Once the Apidog MCP Server is set up, it automatically reads and caches all API documentation data from your Apidog project on your local machine. The AI can then retrieve and utilize this data seamlessly.
Simply instruct the AI on what you’d like to achieve with the API documentation. Here are some examples:
- Generate Code: “Use MCP to fetch the API documentation and generate Java records for the ‘Product’ schema and related schemas”.
- Update DTOs: “Based on the API documentation, add the new fields to the ‘Product’ DTO”.
- Add Comments: “Add comments for each field in the ‘Product’ class based on the API documentation”.
- Create MVC Code: “Generate all the MVC code related to the endpoint ‘/users’ according to the API documentation”.
✅ Prerequisites
- Node.js (version 18 or higher, preferably the latest LTS version).
- An IDE that supports MCP, such as:
- Cursor
- VS Code + Cline plugin
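Before continuing, it can be worth confirming the Node.js requirement from a terminal. A minimal sketch of the version check (the hardcoded version string is only an example; in practice substitute the output of `node --version`):

```shell
# Parse a Node.js version string and check it against the required major version (18+).
version="v20.11.1"    # example value; in practice use: version=$(node --version)
major=${version#v}    # strip the leading "v" -> "20.11.1"
major=${major%%.*}    # keep only the major component -> "20"
if [ "$major" -ge 18 ]; then
  echo "Node.js OK: major version $major"
else
  echo "Node.js too old: need 18+, found $major"
fi
```

If the check fails, install the latest LTS release before configuring the server.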
Step 1. Generate an Access Token in Apidog
- Open Apidog, hover over your avatar at the top-right corner, and click Account Settings > API Access Token.
- Create a new API access token (refer to the help documentation for details).
- Copy the access token and use it as the `APIDOG_ACCESS_TOKEN` value in the configuration file below.
Step 2. Get the Apidog Project ID
- Open the desired project in Apidog.
- Click Settings in the left sidebar and copy the Project ID from the Basic Settings page.
- Use the project ID as the `--project-id` value in the configuration file below.
Step 3. Configure the IDE
- Copy the following JSON configuration code to add to the MCP configuration file in your IDE:
```json
{
  "mcpServers": {
    "API documentation": {
      "command": "npx",
      "args": ["-y", "apidog-mcp-server@latest", "--project-id=<project-id>"],
      "env": { "APIDOG_ACCESS_TOKEN": "<access-token>" }
    }
  }
}
```
If you’re on Windows and the configuration file above isn’t working, try using the configuration file below instead:
```json
{
  "mcpServers": {
    "API documentation": {
      "command": "cmd",
      "args": ["/c", "npx", "-y", "apidog-mcp-server@latest", "--project-id=<project-id>"],
      "env": { "APIDOG_ACCESS_TOKEN": "<access-token>" }
    }
  }
}
```
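To troubleshoot outside the IDE, you can launch the same command the MCP configuration invokes from a terminal. A sketch with the placeholders left in (substitute the real values from Steps 1 and 2 before running; the server's exact startup output is not documented here):

```shell
# Manually run the same command the MCP configuration above invokes.
# Replace the quoted placeholders with your real token and project ID first.
export APIDOG_ACCESS_TOKEN="<access-token>"
npx -y apidog-mcp-server@latest --project-id="<project-id>"
```

If `npx` reports that the package cannot be found or the server exits immediately, recheck the token and project ID before debugging the IDE configuration.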
- Add the copied JSON configuration to the MCP configuration file in your IDE:
  - For Cursor: add it to the global `~/.cursor/mcp.json` or the project-specific `.cursor/mcp.json`.
  - For Cline: open the Cline panel > MCP Server > Configure MCP Server.
- Fill in your personal Apidog API access token (`APIDOG_ACCESS_TOKEN`) and project ID (`--project-id`) in the configuration.
- Name the MCP server something like “API Documentation” or “xxx API Documentation” to help the AI recognize its purpose. Avoid generic names like “Apidog”.
13 Comments
zaleria
documentation is for humans.
let the LLMs read code.
hariharasudhan
I want to run it locally, not from your servers via a token. After all, the docs are already open.
khvirabyan
Very cool. Almost all of the API platforms offering API abstractions can now offer one MCP server to connect to all of their existing API integrations. It will be interesting to see how things progress from here — for example, Zapier released their own MCP server (https://zapier.com/mcp). How adoption grows over time is the open question: vendors such as GitHub may provide their own official MCP servers, the community may step in to support the integrations, and documentation generators might well generate the MCP server too. Great work!
koakuma-chan
This only works if your API is built on their "Apidog" commercial API building platform. I wonder how this compares to just scraping documentation from a regular documentation website.
smokel
MCP (Model Context Protocol) is an open protocol that standardizes how applications provide context to LLMs.
https://modelcontextprotocol.io/
jascha_eng
Is anyone actually using MCPs productively? I feel like everyone and their mother are releasing specialized ones but I have yet to find them really useful.
The stuff that's cool works with purpose-built tools like what cursor and claude code do to edit files, run commands etc. Bring your own function hasn't really been useful to me.
That being said, I do believe that giving current, relevant documentation to the context does help with results. I just haven't yet seen anyone do that very successfully in an automated fashion.
Matsta
Looks cool. The only similar one I've seen so far is: https://github.com/cyberagiinc/DevDocs
But every time I've tried to run DevDocs, I've had issues: either the scraper or the MCP server fails to run.
tb1989
I think this article explains it all:
MCP Isn’t the USB-C of AI — It’s Just a USB-Claude Dongle
https://dev.to/internationale/mcp-is-not-ai-usb-c-its-usb-cl…
Nonetheless, I think your work is very good and it looks like a very useful dongle
musha68k
What have you actually built and deployed using this "vibe coding" methodology?
So far, I've mostly seen proof-of-concept applications that might qualify as decent game jam entries – contexts where experimental implementation is expected ("jank allowed").
I'm curious about production applications: Has anyone developed robust, maintainable agent systems using this approach that didn't require significant refactoring or rewrites even?
What's been your experience with its viability in real-world environments beyond prototyping?
z3t4
Have you obfuscated/minified the code? Why!?
franky47
I saw the domain and thought that NPM itself built an MCP to let agents read package docs and type definitions to stop hallucinating APIs that don't exist. Sadly, no.
We have .d.ts for machines (tsc), we have JSDoc & README.md for humans, can we get those LLMs to actually stick to those sources of truth, without having to do the work a third time (like llms.txt / cursor rules)?
nindalf
Is this useful? It seems really specific to their paid tool.
What would be more helpful is an MCP that exposed devdocs.io or similar. Cache selected languages/framework documentation locally. Then expose an MCP server that searched the local files with some combination of vector/BM25 search. Expose that as a tool to the LLM.