conda-meta-mcp: Expert Conda Ecosystem Data for AI Agents

AI agents reason well, but only with good data. conda-meta-mcp bridges the gap between frozen training data and live ecosystem metadata.
Modern AI agents like Claude, Cursor, OpenCode, and Zed can fetch web content, run shell commands, and even install packages. But they lack direct access to the rich, structured metadata embedded in conda packages, information that's essential for solving complex packaging problems. conda-meta-mcp provides that missing link.
The Problem: Knowledge Cutoffs Meet Rich Metadata
When you ask an AI agent about conda, you hit two walls:
Knowledge cutoffs. Models trained in mid-2024 know nothing about September releases, new conda features, or recent CVEs. They guess based on stale training data.
Inaccessible metadata. Even with web access, agents can't efficiently query the structured data embedded in conda packages: recipes, dependency graphs, run_exports, build provenance. This information is buried in package archives, not exposed through simple web APIs.
Consider these questions:
- "Is
scikit-learn>=1.5available on conda-forge for linux-64?" - "What packages depend on numpy in conda-forge?"
- "Which conda package provides the
cv2Python import?" - "What run_exports does openssl declare?"
- "Which package ships
libcuda.so?" - "Which feedstock commit bumped pyspark to version v4.1.0?"
These aren't reasoning failures. They're data access problems. AI agents reason well with good information. Without live, authoritative metadata, they can't solve real packaging problems reliably.
Enter conda-meta-mcp
conda-meta-mcp is an MCP (Model Context Protocol) server that exposes authoritative, read-only conda ecosystem metadata directly to AI agents. The project is currently in incubation within the conda-incubator GitHub organization.
The Model Context Protocol is an open standard (introduced by Anthropic in 2024) for connecting AI models and agents to external data sources and tools. MCP servers can expose three types of primitives: tools (executable functions), resources (contextual data), and prompts (interaction templates). Agents invoke tools based on their descriptions; those descriptions are injected into the agent's context and effectively guide its behavior.
Key benefit: agents query live data without embedding it in their training.
Instead of relying on stale training data, agents can now call structured tools:
Agent: "Is scikit-learn>=1.5 available on conda-forge for linux-64?"
↓
package_search("scikit-learn>=1.5", channel="conda-forge", platform="linux-64", limit=1)
↓
Returns: {"results": [{"version": "1.8.0", "url": "..."}], "total": 62}
↓
Agent: "Yes, 62 versions match. Newest is 1.8.0."
The user doesn't need to know how to call MCP tools or when to use them. The LLM selects appropriate tools based on their descriptions and synthesizes results into human-readable answers.
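Under the hood, the agent's MCP client does the plumbing. Here is a minimal sketch of that client side using the official mcp Python SDK; the launch command ("uvx conda-meta-mcp") and the exact argument names are illustrative assumptions, not the project's documented interface:

import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Hypothetical launch command; see the project README for the real one.
    server = StdioServerParameters(command="uvx", args=["conda-meta-mcp"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()  # the tool descriptions the LLM reads
            result = await session.call_tool(
                "package_search",
                {"spec": "scikit-learn>=1.5", "channel": "conda-forge",
                 "platform": "linux-64", "limit": 1},
            )
            print([t.name for t in tools.tools], result)

asyncio.run(main())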
Why Conda Metadata Is Uniquely Rich
Every conda package embeds an info/ metadata tarball containing:
- Recipe and build scripts (meta.yaml/recipe.yaml, build.sh, bld.bat): source provenance, dependency declarations, build commands
- Rendered metadata (run_exports.json, index.json, about.json): resolved constraints, feedstock URL and commit SHA, CI job identifiers
This is far richer than what wheels or sdists provide. The embedded provenance enables auditable reproducibility: you can rebuild a binary from first principles using metadata stored in the package.
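To make this concrete, here is a small sketch of reading that embedded provenance without downloading the full artifact, using conda_package_streaming (one of the libraries listed in the Engineering section below). The package URL is a placeholder:

import json
from conda_package_streaming.url import stream_conda_info

url = "https://conda.anaconda.org/conda-forge/noarch/<package>.conda"  # placeholder
for tar, member in stream_conda_info(url):  # streams only the info/ metadata
    if member.name == "info/about.json":
        about = json.load(tar.extractfile(member))
        extra = about.get("extra", {})
        print(extra.get("remote_url"), extra.get("sha"))  # feedstock URL + commit SHA
        break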
This same metadata powers conda-forge's migration infrastructure, the heavy machinery that builds dependency graphs and continuously rebuilds thousands of packages (as described in Part 3 of our conda series).
Most conda-meta-mcp tools query indexed extracts of this embedded metadata: repodata indexes package specs, conda-forge indexes import mappings and PyPI names, and Quansight's API indexes file paths.
The Tools
conda-meta-mcp exposes ecosystem knowledge through seven composable tools. Agents chain them to answer complex questions.
package_search: Find Packages by Spec
Search packages by spec, channel, and platform. Results ordered by newest version.
package_search("scipy", channel="conda-forge", platform="linux-64", limit=1)
→ {"results": [{"version": "1.16.3", "url": "...scipy-1.16.3-py314hf5b80f4_1.conda"}], "total": 472}
package_insights: Deep Package Inspection
Inspect a package's info/ tarball: rendered recipe, run_exports, source provenance. Foundation for SBOM generation and CVE analysis.
package_insights(url="...openssl-3.6.0-h26f9b46_0.conda", file="info/run_exports.json")
→ {"info/run_exports.json": {"weak": ["openssl >=3.6.0,<4.0a0"]}}
repoquery: Dependency Graph Navigation
Query what depends on a package (whoneeds) or what a package depends on (depends).
repoquery("whoneeds", "numpy", channel="conda-forge", limit=1)
→ {"query": {"total": 286778}, ...}
cli_help: Conda CLI Documentation
Introspect conda's command-line interface dynamically. Supports regex filtering.
cli_help(tool="conda", grep="create", limit=10)
→ "conda create: Create a new conda environment from a list of specified packages..."
import_mapping: Python Import to Package
Map Python imports to conda packages.
import_mapping("cv2")
→ {"best_package": "opencv", "candidate_packages": ["libopencv", "opencv", "opencv-python-headless"]}
pypi_to_conda: PyPI Name to Conda Package
Map PyPI distribution names to conda equivalents.
pypi_to_conda("PyYAML")
→ {"pypi_name": "PyYAML", "conda_name": "pyyaml", "changed": false}
file_path_search: Find Packages by File Path
Search for packages containing a specific file path.
file_path_search("bin/conda", limit=5)
→ {"artifacts": ["cf/linux-64/conda-4.6.9-py36_0", ...], "total": 1000}
All tools support limit and offset for pagination. Most support get_keys for field filtering to protect agent context windows.
Real-World Workflows
Cross-Ecosystem Package Discovery
PyPI and conda use different naming conventions. An agent can resolve these mappings:
Agent: "I need to install opencv-python from PyPI using conda"
↓
pypi_to_conda("opencv-python")
→ {"conda_name": "opencv", "changed": true}
↓
import_mapping("cv2")
→ {"best_package": "opencv", "candidate_packages": ["libopencv", "opencv", ...]}
↓
Agent: "The PyPI package 'opencv-python' is 'opencv' on conda-forge.
It provides the 'cv2' import. Install with: conda install opencv"
Understanding Ecosystem Impact
Agent: "How critical is openssl in the conda-forge ecosystem?"
↓
repoquery("whoneeds", "openssl", channel="conda-forge", limit=1)
↓
Returns: total=30,673 packages depend on openssl
↓
package_insights(openssl_url, file="info/run_exports.json")
↓
Returns: {"weak": ["openssl >=3.6.0,<4.0a0"]}
↓
Agent: "OpenSSL is critical infrastructure. 30,673 packages depend on it.
Its weak run_export means packages built against it get a flexible
dependency (>=3.6.0,<4.0a0), allowing minor version upgrades without
rebuilds. A major version bump (4.0) would trigger ecosystem-wide rebuilds."
Reviewing Dependency Updates
When Renovate proposes a version bump, you want to know what actually changed beyond the version number. Did the maintainers add patches? Change dependencies? Modify build scripts?
Agent: "Show me the feedstock commit that built pyspark 4.1.0"
↓
package_search("pyspark=4.1.0", channel="conda-forge", platform="noarch", limit=1)
↓
package_insights(url, file="info/about.json", get_keys="extra")
↓
Returns: {"extra": {"sha": "3ff347891a4b2fca9da76005c99d547087a13dc7",
"remote_url": "https://github.com/conda-forge/pyspark-feedstock"}}
↓
Agent: "Here's the commit: https://github.com/conda-forge/pyspark-feedstock/commit/3ff347891a..."
The URL lets a user or agent inspect in detail what was changed and by whom.
CVE Analysis with Grype MCP
Regulations like NIS2, DORA, and the Cyber Resilience Act require organizations to track their software supply chain. The Grype MCP can scan conda environments for CVEs. When combined with conda-meta-mcp, agents can perform deeper analysis to determine if a CVE actually affects your code.
Here's a concrete example: an agent scans its own MCP server environment:
1. Agent uses Grype MCP to scan conda environment of a local project
→ Grype reports: CVE-2025-8291 (Medium) affects python 3.13.9
"The 'zipfile' module would not check the validity of the ZIP64
End of Central Directory record offset..."
2. Agent calls package_insights() to inspect the Python recipe
→ No patches applied for CVE-2025-8291, vanilla upstream build
3. Agent searches the local project codebase for zipfile usage
→ No imports of zipfile found
4. Agent concludes: "CVE-2025-8291 is present in the installed Python version,
but the local project does not use the zipfile module. The vulnerable code
path is never executed. Risk: None for this application."
This deep analysis is only possible by combining multiple tools: Grype for CVE detection, conda-meta-mcp for package inspection, and code search for call graph analysis. A CVE scanner alone would flag the vulnerability without this context.
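Step 3, the codebase search, is plain static analysis. A minimal sketch using Python's ast module (the project root "." is a placeholder, and real call graph analysis would also follow indirect uses):

import ast
from pathlib import Path

def uses_zipfile(root: str) -> bool:
    """Return True if any Python file under root imports the zipfile module."""
    for path in Path(root).rglob("*.py"):
        tree = ast.parse(path.read_text(encoding="utf-8"), filename=str(path))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                if any(a.name.split(".")[0] == "zipfile" for a in node.names):
                    return True
            elif isinstance(node, ast.ImportFrom):
                if (node.module or "").split(".")[0] == "zipfile":
                    return True
    return False

print(uses_zipfile("."))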
See also QuantCo's blog on conda regulation support for how syft consumes conda-meta/ files to generate SBOMs that standard CVE scanners can process.
Engineering: Built on Battle-Tested Foundations
conda-meta-mcp wraps existing, proven conda ecosystem libraries rather than reinventing the wheel:
- conda_package_streaming.url.stream_conda_info(): Streams metadata from CDN (~100ms, no full download)
- conda.api.SubdirData.query_all(): Queries repodata.json using SAT solvers
- conda.cli.conda_argparse.generate_parser(): Exposes conda's CLI documentation dynamically
- conda_forge_metadata: Provides PyPI to conda and import to package mappings
- libmambapy: Powers repoquery with fast dependency graph traversal
- conda-forge-paths database: Powers file path search via Quansight's API
Built with FastMCP for minimal boilerplate and async support.
Managed with pixi for fast, reproducible environments with lockfiles.
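As an illustration of the FastMCP pattern (not the project's actual source), registering a read-only tool looks roughly like this; the docstring is what the agent reads when deciding whether to call the tool:

from fastmcp import FastMCP
from conda.api import SubdirData
from conda.models.version import VersionOrder

mcp = FastMCP("conda-meta-mcp-sketch")

@mcp.tool()
def package_search(spec: str, channel: str = "conda-forge",
                   platform: str = "linux-64", limit: int = 10) -> dict:
    """Search conda packages by match spec, channel, and platform."""
    records = SubdirData.query_all(spec, channels=[channel], subdirs=[platform, "noarch"])
    newest_first = sorted(records, key=lambda r: VersionOrder(r.version), reverse=True)
    return {"results": [{"version": r.version, "url": r.url} for r in newest_first[:limit]],
            "total": len(records)}

if __name__ == "__main__":
    mcp.run()  # stdio transport by default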
Design Principles
- Read-only by contract: Never mutates environments. Safe to host publicly.
- Context-aware: Pagination, filtering, and windowing protect agent context.
- Fast startup: pixi's lockfile ensures reproducible, secure dependency fetches in seconds.
Getting Started
See the conda-meta-mcp README for installation instructions covering Claude Desktop, Cursor, OpenCode, Zed, VSCode, and GitHub Copilot.
conda-meta-mcp can be combined with other MCP servers for cross-ecosystem queries. The MCP protocol enables this composability by design.
Outlook
conda-meta-mcp is use case agnostic: it exposes ecosystem facts without judgment, and works with many AI agents. What will you build? CVE analysis, dependency audits, migration planning, compliance reporting? The examples in this post are starting points.
Share your prompts and use cases with the project:
- conda-meta-mcp on GitHub
- Open an issue or discussion with your discoveries
Happy experimenting!
conda-meta-mcp provides data from upstream APIs without warranty of accuracy or completeness. We make no guarantees about LLM interpretation or actions based on this data. Third-party MCP servers (such as Grype MCP) are not provided, verified, or supported by us. Don't trust, verify!
