Every AI code assistant today has the same problem: it generates UI by guessing from training data. It doesn't know your component library's actual API. It doesn't know which props are required, which combinations are invalid, or what accessibility attributes your team requires.
The result? AI-generated code that looks plausible but doesn't match your design system. Your team spends more time fixing AI output than they saved by using it.
LLMs learn component APIs from public documentation, blog posts, and open-source code. This creates three failure modes: hallucinated or outdated props, prop combinations that are invalid in your current version, and output that ignores the accessibility attributes your team requires.
What if AI assistants could query your component library's actual contracts at generation time?
That's the core idea behind .contract.json files in Fragments. Every component ships with a co-located metadata file containing:
```json
{
  "name": "Button",
  "props": {
    "variant": {
      "type": "primary | secondary | outline | ghost",
      "default": "primary"
    },
    "size": {
      "type": "sm | md | lg",
      "default": "md"
    }
  },
  "usage": {
    "when": ["Primary actions", "Form submissions"],
    "whenNot": ["Navigation (use Link)", "Toggle state (use Switch)"]
  }
}
```

This isn't documentation for humans; it's a machine-readable contract that AI agents can query via MCP tools.
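The contract's shape lends itself to a typed model. Here is a minimal TypeScript sketch (the interface names and the validation helper are illustrative, not Fragments' actual schema) showing how an agent, or a CI check, could validate a proposed prop value against the union type declared in the contract:

```typescript
// Hypothetical TypeScript model of a .contract.json file.
// Field names mirror the JSON example above; the real Fragments
// schema may differ.
interface PropContract {
  type: string;        // union as a string, e.g. "sm | md | lg"
  default?: string;
}

interface ComponentContract {
  name: string;
  props: Record<string, PropContract>;
  usage?: { when: string[]; whenNot: string[] };
}

// Check a proposed prop value against the contract's union type.
function isValidPropValue(
  contract: ComponentContract,
  prop: string,
  value: string
): boolean {
  const spec = contract.props[prop];
  if (!spec) return false; // unknown prop: reject outright
  const allowed = spec.type.split("|").map((t) => t.trim());
  return allowed.includes(value);
}

const button: ComponentContract = {
  name: "Button",
  props: {
    variant: { type: "primary | secondary | outline | ghost", default: "primary" },
    size: { type: "sm | md | lg", default: "md" },
  },
};

console.log(isValidPropValue(button, "variant", "ghost"));  // true
console.log(isValidPropValue(button, "variant", "danger")); // false
```

The same check works in reverse: when the model proposes `variant="danger"`, the agent can see the rejection and the allowed values in one round trip, instead of the error surfacing in review.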
Fragments ships with nine MCP tools that let AI assistants query these contracts directly: look up a component's props and defaults, check which values are valid, and pull the when/whenNot usage guidance before generating a single line of code.
The result is AI that generates code your team would actually approve — because it's working from real contracts, not guesses.
As AI takes on a larger share of every development workflow, the gap between "AI-generated code" and "production-ready code" becomes the biggest bottleneck. Structured metadata closes that gap by giving AI the same context your senior engineers have.
The teams that invest in making their design systems AI-native today will have a compounding advantage as AI tooling improves.