Friction is a significant barrier to data contract adoption. If contracts require learning a new tool, writing unfamiliar YAML, and adding a deployment step, uptake will be slow even when teams are conceptually bought in. The cultural challenge is real, but the mechanics matter too. Every additional step a producer must take to adopt a contract is an attrition point.
Meet engineers where they already work
One team in Sanderson’s Data Contracts book created an SDK that made contract-compliant data production one line of code. At Convoy, software engineers used an SDK to define and push new versions of contracts; backward-incompatible changes surfaced in the existing GitHub pull request flow.
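A minimal sketch of what that producer-side experience might look like, assuming a hypothetical SDK (the ShipmentCreated type, the emit function, and the field names below are illustrative, not Convoy's actual API): the contract is defined once as a typed class, and publishing a compliant event is a single call at the call site.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Hypothetical contract definition: the typed class stands in for the contract,
# so the producer doesn't hand-write YAML or maintain a separate registry entry here.
@dataclass(frozen=True)
class ShipmentCreated:
    shipment_id: str
    carrier_id: str
    created_at: str  # ISO 8601 timestamp

def emit(event) -> None:
    """Stand-in for an SDK publish call: serialize the typed event and hand it downstream."""
    print(json.dumps({"type": type(event).__name__, "payload": asdict(event)}))

# The producer's entire obligation at the call site is one line.
emit(ShipmentCreated("s-123", "c-9", datetime.now(timezone.utc).isoformat()))
```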
Contracts should integrate into tools engineers already use: if the team lives in GitHub PRs, contract violations should appear as PR checks; if they use Slack, alerts go to Slack; if they deploy through CI/CD, contract validation is a stage in that pipeline. Asking engineers to use a separate portal, learn new YAML syntax, and run an unfamiliar CLI maximizes attrition.
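A hedged sketch of the PR-check side, assuming schemas are stored as simple field-to-type maps in the repo (the data and function below are invented for illustration): the check diffs the proposed schema against the published contract and exits nonzero on backward-incompatible changes, so violations surface directly in the pull request rather than in a separate portal.

```python
import sys

# Illustrative schemas: the published contract and the schema proposed in the PR.
published = {"shipment_id": "string", "carrier_id": "string", "created_at": "timestamp"}
proposed  = {"shipment_id": "string", "carrier": "string",   "created_at": "timestamp"}

def breaking_changes(old: dict, new: dict) -> list[str]:
    """Flag backward-incompatible changes: removed/renamed fields and type changes."""
    problems = []
    for field, dtype in old.items():
        if field not in new:
            problems.append(f"field removed or renamed: {field}")
        elif new[field] != dtype:
            problems.append(f"type changed: {field} {dtype} -> {new[field]}")
    return problems

if __name__ == "__main__":
    issues = breaking_changes(published, proposed)
    for issue in issues:
        print(f"CONTRACT VIOLATION: {issue}")
    sys.exit(1 if issues else 0)  # nonzero exit fails the PR check
```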
Tailor the message to the audience
Different stakeholders respond to different arguments for contracts. The concept is the same, but the framing needs to match what each audience already cares about.
Leadership responds to incident post-mortems that quantify the cost of data failures. “We spent 40 hours last quarter fixing breakages from upstream schema changes in the payments data” is a concrete argument that maps to budgets and headcount. Abstract talk about data quality standards doesn’t land in budget conversations. The dollar cost of data incidents does.
For engineering teams, developer experience framing works better. “You manage the rest of your software as code. Why not your data?” This resonates because it connects contracts to practices engineers already value: versioned APIs, schema validation, CI/CD enforcement. You’re not asking them to do something foreign. You’re pointing out a gap in what they already believe in.
Data consumers care about fewer broken dashboards. The analyst who spends Monday mornings debugging why last week’s revenue numbers don’t match cares about contracts the moment you explain that contracts prevent the category of breakage that ruins their Mondays.
Compliance teams care about automated governance with less manual overhead. Contracts formalize data-sharing agreements in a machine-readable format, which maps directly to regulatory requirements around data lineage and accountability.
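A rough illustration of what "machine-readable" buys a compliance team, with made-up names and fields: ownership, PII classification, and retention live in the contract itself, so lineage and audit tooling can query them instead of relying on tribal knowledge.

```python
# Illustrative contract record (hypothetical fields, not a specific standard).
payments_contract = {
    "name": "payments.transactions",
    "version": "2.1.0",
    "owner": "payments-team@example.com",
    "consumers": ["finance-analytics", "fraud-detection"],
    "fields": {
        "transaction_id": {"type": "string", "pii": False},
        "customer_email": {"type": "string", "pii": True, "retention_days": 365},
        "amount_cents":   {"type": "integer", "pii": False},
    },
    "sla": {"freshness_hours": 24},
}

# An audit tool can answer "which fields hold PII, and who owns this dataset?" directly.
pii_fields = [f for f, spec in payments_contract["fields"].items() if spec["pii"]]
print(payments_contract["owner"], pii_fields)
```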
Post-mortem data as leverage
Post-mortem data is particularly effective for building the case. When a dashboard breaks because someone renamed a column upstream, trace the full cost: the hours spent debugging, the delayed report, the wrong number that reached the executive team. These incidents make an abstract concept feel concrete and urgent to the people who can prevent the next one.
The practice is simple: every time an upstream schema change breaks your pipeline, document it. How long did it take to detect? How long to fix? Who was affected? What decisions were delayed or made with wrong data? Over a quarter, this log becomes the business case for contracts. You don’t need to argue in theory. You have receipts.
This also creates the feedback loop that makes the case for contracts self-reinforcing. When upstream schema changes break your models, make the cost visible to the producing team rather than silently fixing it yourself. File incidents. Track repair time. Producers who never see the downstream consequences of their changes have no reason to care. Producers who see “your column rename caused 6 hours of debugging and delayed the board report” start caring quickly.
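A minimal sketch of such an incident log, with invented numbers: each breakage becomes a structured record, and the quarterly rollup of lost engineering hours is the receipt you show both leadership and the producing team.

```python
from dataclasses import dataclass

# Hypothetical incident record: one entry per upstream breakage, captured when it happens.
@dataclass
class SchemaIncident:
    date: str
    upstream_change: str
    hours_to_detect: float
    hours_to_fix: float
    affected: list[str]

incidents = [
    SchemaIncident("2024-04-02", "payments.amount renamed to amount_cents", 6.0, 4.0,
                   ["revenue dashboard", "board report"]),
    SchemaIncident("2024-05-17", "orders.created_at changed to epoch millis", 2.0, 3.5,
                   ["daily orders model"]),
]

# The quarterly rollup is the business case: total engineering hours lost to upstream changes.
total_hours = sum(i.hours_to_detect + i.hours_to_fix for i in incidents)
print(f"{len(incidents)} incidents, {total_hours:.1f} engineering hours lost this quarter")
```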
Embed contracts in engineering KPIs
Including data quality metrics in engineering KPIs shifts incentives more durably than evangelism alone. If the payments team is measured only on payment processing latency and error rates, data quality is nobody’s responsibility. If their scorecard includes “data incidents caused by schema changes,” they have a structural reason to maintain contracts.
This requires executive sponsorship. An analytics engineer can’t unilaterally add data quality metrics to another team’s KPIs. But the post-mortem data provides the justification, and the KPI change provides the sustained incentive. Evangelism alone fades. KPIs persist.
Training on contracts during onboarding works better than trying to retrofit adoption onto teams with established workflows. New engineers have no habits to unlearn. If contract maintenance is part of how they learn to develop features, it becomes part of their default workflow rather than an extra task.
The Data Product Manager role
Sanderson proposes a Data Product Manager role as the organizational bridge between producers and consumers. The ideal candidates are “business-inclined data developers: analysts, analytics engineers, data scientists, or data engineers that enjoy working with customers,” people who understand both the technical implementation and the business context of the data.
This role fills a gap that usually goes unaddressed. Producers don’t know what consumers need. Consumers don’t know what producers can deliver. The contract negotiation process (the collaborative model) needs someone to facilitate it. Without a dedicated person, the negotiation either doesn’t happen or happens ad hoc and inconsistently.
Analytics engineers are often positioned for this work: they deal with the downstream impact of schema changes daily and have enough upstream context to translate between producers and consumers. The formal Data Product Manager role matters most at scale. In a small organization, informal relationships handle the coordination; once dozens of data products with contracts span multiple teams, someone needs to own the practice itself, not just individual contracts.