AI integration
Use opendata as the data plane behind MCP and agent workflows.
The mental model is clean: opendata provides the machine-readable map and the stable HTTP API; your MCP server or agent runtime owns tool naming, auth storage, prompts, and downstream orchestration.
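That division of labor can be sketched in a few lines. This is a minimal illustration, not the MCP SDK: the tool names, the endpoint URL, and the `OPENDATA_API_KEY` environment variable are all assumptions, and the function only builds a request description rather than sending it. The point is that naming and key storage live in your runtime, while opendata only supplies the HTTP surface the tools call.

```python
import os
from typing import Callable

# Your runtime owns the tool registry and its naming scheme.
TOOLS: dict[str, Callable[..., dict]] = {}

def tool(name: str):
    """Register a callable under a name your agent runtime controls."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("opendata.search_products")
def search_products(query: str) -> dict:
    # Illustrative request shape only; the real call hits the documented
    # opendata endpoint with an X-API-Key header.
    return {
        "method": "GET",
        "url": "https://api.example-opendata.test/v1/search",  # placeholder host
        "headers": {"X-API-Key": os.environ.get("OPENDATA_API_KEY", "")},
        "params": {"q": query},
    }
```

Whatever layer you use on top (an MCP server, a function-calling schema, a plain dispatcher), the opendata side of the contract stays the same read-only HTTP call.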
Start from the public map
Use llms.txt when the agent only needs the shortest possible public overview of entry points, products, solutions, and crawler guidance.
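An agent can turn llms.txt into entry points with a small parser. The snippet below uses a made-up sample file (the real file's sections, products, and URLs will differ) and extracts the markdown-style link lines that llms.txt files conventionally use.

```python
import re

# Illustrative llms.txt excerpt; contents are placeholders, not the real map.
sample = """\
# opendata

> Machine-readable map of products, solutions, and API entry points.

## Products
- [Air Quality](https://example.test/products/air-quality): hourly readings
- [Transit Feeds](https://example.test/products/transit): GTFS exports
"""

# Each "- [Title](url)" line becomes a (title, url) entry point.
links = re.findall(r"-\s*\[([^\]]+)\]\(([^)]+)\)", sample)
```

From here the agent holds a short list of named URLs instead of raw prose, which is usually all a compact overview needs to provide.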
Load the expanded reference
Use llms-full.txt when the agent needs the fuller machine-readable catalog of solutions, packaged products, raw catalog entries, plans, and compare pages.
Call the API directly
Use the same X-API-Key header across product detail, schema, search, export, billing, and usage flows instead of scraping site HTML.
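One small helper can attach the key uniformly. This is a sketch under stated assumptions: the base URL and paths are placeholders (use the documented ones), and the key is read from a hypothetical `OPENDATA_API_KEY` environment variable. The requests are only constructed here, never sent.

```python
import os
import urllib.request

API_BASE = "https://api.example-opendata.test/v1"  # placeholder base URL

def authed_request(path: str) -> urllib.request.Request:
    """Attach the same X-API-Key header to every read, whatever the endpoint."""
    req = urllib.request.Request(API_BASE + path)
    req.add_header("X-API-Key", os.environ.get("OPENDATA_API_KEY", ""))
    return req

# One helper covers product detail, schema, search, export, billing, and usage:
schema_req = authed_request("/products/air-quality/schema")  # illustrative path
usage_req = authed_request("/account/usage")                 # illustrative path
```

Centralizing the header in one place also makes key rotation a one-line change when you later move the key into real secret storage.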
Recommended integration flow
For most teams, the fastest path is to load the public machine-readable files, select one product, inspect its schema, then expose only the narrow search or export tools your agents actually need.
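That flow can be sketched as three stubbed steps. Everything below is illustrative: the product slugs, schema fields, and tool shapes are invented stand-ins for what the real map, schema endpoint, and your own tool layer would return.

```python
def load_map() -> list[str]:
    # Step 1: in practice, fetch llms.txt and parse the product entries.
    return ["air-quality", "transit"]

def inspect_schema(product: str) -> dict:
    # Step 2: in practice, GET the product's schema endpoint with X-API-Key.
    return {"product": product, "fields": ["station_id", "timestamp", "value"]}

def build_tools(product: str, schema: dict) -> dict:
    # Step 3: expose only the narrow tools agents need -- search and export here.
    return {
        f"{product}.search": {"params": ["q"], "fields": schema["fields"]},
        f"{product}.export": {"params": ["format"]},
    }

products = load_map()
schema = inspect_schema(products[0])
tools = build_tools(products[0], schema)
```

Keeping step 3 narrow is the main point: agents get two well-scoped tools per product rather than a generic "call any endpoint" capability.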
Good fit
- Claude, GPT, Cursor, or internal copilots that need stable read-only public-data tools
- Teams wrapping opendata behind their own MCP server, agent runtime, or orchestration layer
- Search and retrieval workflows that need schema discovery before writing tools
Keep in mind
- opendata is an HTTP API and documentation surface, not a hosted MCP server endpoint
- Live API reads still require an X-API-Key even when docs and llms files are public
- Anonymous public search is a teaser path, not the contract you should build production tools on
Open the right entry point
Start from llms.txt if the agent only needs a compact map. Move to llms-full.txt when it needs the fuller catalog. Use the Playground or raw product schema pages when you are deciding which concrete tool calls to expose.