Organization-wide data contract rollouts frequently stall: a company-wide standard is proposed, a specification document is written, and six months later the only contracts that exist are the ones the proposer wrote themselves. The anti-patterns are well documented. Sanderson, Freeman, and Schmidt structure their change management framework around Kotter’s eight-step model, starting with urgency and coalition-building before scaling. That model applies to data contracts because the challenge is organizational, not technical.
Create urgency by making the cost visible
When an upstream schema change breaks models, most data teams silently absorb the cost: two hours of debugging, a base-model adjustment, test updates, and then they move on. The producing team never learns anything happened. This removes the feedback loop that would create demand for contracts. The producing team has no reason to change its behavior because, from its perspective, nothing went wrong.
The alternative: file incidents. Track repair time. Make the cost visible to the producing team rather than absorbing it yourself. “Your column rename on Friday caused 6 hours of debugging and delayed the weekly board report” is a concrete statement that creates urgency. Do this consistently over a quarter and the case for contracts makes itself.
Two datasets, not a mandate
The practical starting point is small. Pick two high-impact datasets with clear consumers. Not twenty datasets. Not “all production sources.” Two.
The selection criteria: choose datasets that break your pipelines most often or that feed your most critical dashboards. These are the datasets where the cost of not having a contract is already tangible and where you can demonstrate value fastest.
For those two datasets, the implementation is straightforward:
- Define a basic contract template stored in version control
- Instrument a minimal dashboard tracking completeness and freshness
- Add validation to producer pipelines (or at the load boundary if you can’t reach production systems)
- Integrate checks into CI/CD
Get those working. Collect feedback from both sides. Then expand.
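As a minimal sketch of the validation step, assume the contract lives in Git as a simple field specification and is checked at the load boundary. The dataset name, field names, and contract shape below are illustrative, not a specific tool’s format:

```python
# Illustrative contract: the dataset, fields, types, and nullability
# rules are assumptions for this sketch, not a standard format.
CONTRACT = {
    "dataset": "orders",
    "fields": {
        "order_id": {"type": str, "nullable": False},
        "amount_cents": {"type": int, "nullable": False},
        "created_at": {"type": str, "nullable": False},
    },
}

def validate_batch(rows, contract):
    """Return a list of violations; an empty list means the batch conforms."""
    violations = []
    for i, row in enumerate(rows):
        for name, spec in contract["fields"].items():
            if row.get(name) is None:
                if not spec["nullable"]:
                    violations.append(f"row {i}: missing required field '{name}'")
                continue
            if not isinstance(row[name], spec["type"]):
                violations.append(
                    f"row {i}: field '{name}' expected "
                    f"{spec['type'].__name__}, got {type(row[name]).__name__}"
                )
    return violations

batch = [
    {"order_id": "o-1", "amount_cents": 1200, "created_at": "2024-05-01"},
    {"order_id": "o-2", "amount_cents": "12.00", "created_at": "2024-05-01"},
]
for violation in validate_batch(batch, CONTRACT):
    print(violation)
```

The same function can run in producer pipelines or, when production systems are out of reach, against data as it lands in the warehouse.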
What “expand” means varies by organization. At GoCardless, adoption was faster for inter-service communication events than for analytics-specific data, because the event producers already understood API versioning. The concept of “this is an interface with consumers who depend on its shape” mapped directly to something they already practiced. Some teams will adopt contracts naturally once they see the pattern working. Others need direct engagement.
Each expansion should come from demonstrated value, not from a top-down mandate. “The payments team adopted contracts and their data incidents dropped by 60%” is a better argument for the next team than “leadership says everyone needs contracts by Q3.”
GoCardless and Convoy: two models of adoption
GoCardless deployed about 30 contracts within six months, covering 50-60% of their asynchronous inter-service communication. Their key cultural learning was about sustained communication: “We’ve found it’s important to provide regular communications to remind people why we’re doing this. Data Contracts is a big culture shift… it can be easy to become just something people feel like they have to do, rather than something they want to do.”
When contracts feel like a mandate, teams comply minimally. When contracts address a problem teams actually experience, adoption is self-sustaining.
Convoy measured success differently. Instead of tracking contract coverage as a number, Sanderson tracked “the number of conversations created between data and dev teams.” The outcome: “Data awareness/literacy across the organization improved virtually overnight.” That metric captures something that YAML compliance numbers don’t. A team can have 100% contract coverage and zero cross-team understanding. Conversely, the conversations that happen during contract negotiation can improve data quality even before the contracts are enforced.
The three phases of success measurement
For teams starting out, success measurement follows three phases:
Viability: can we implement this at all? The first phase is about proving the concept works in your specific organization. Can you define a contract for a real dataset? Can you get a producing team to agree to it? Can you enforce it in CI/CD without breaking existing workflows? If the answer to any of these is no, you have technical or organizational blockers to address before thinking about scale.
Repeatability: can other teams adopt it without constant support? Once the first contract is working, the question shifts to whether the pattern is transferable. Can a team that wasn’t involved in the initial implementation adopt contracts without the original champion hand-holding them through every step? If adoption requires a dedicated person to sit with each new team, it won’t scale. The tooling and documentation need to be good enough for self-service adoption.
Scale: is it producing organization-wide impact? This is the long-term goal, but it’s a trailing indicator. You get here by succeeding at viability and repeatability, not by mandating scale upfront.
Useful metrics across all phases: data incident rate (are incidents caused by schema changes declining?), mean-time-to-detection (are you catching problems faster?), change lead time (are schema changes being coordinated instead of discovered?), and SLO compliance (are data products meeting their freshness and quality commitments?).
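As a sketch of how the first two metrics could be computed from an incident log, assuming each incident records when the break occurred, when it was detected, and whether a schema change caused it (the record shape is hypothetical, not from any specific tracker):

```python
from datetime import datetime, timedelta

# Hypothetical incident records for one reporting period.
incidents = [
    {"caused_by_schema_change": True,
     "occurred": datetime(2024, 5, 3, 9, 0),
     "detected": datetime(2024, 5, 3, 15, 30)},
    {"caused_by_schema_change": False,
     "occurred": datetime(2024, 5, 10, 11, 0),
     "detected": datetime(2024, 5, 10, 11, 20)},
    {"caused_by_schema_change": True,
     "occurred": datetime(2024, 5, 17, 8, 0),
     "detected": datetime(2024, 5, 17, 9, 0)},
]

schema_incidents = [i for i in incidents if i["caused_by_schema_change"]]

# Data incident rate: schema-change-caused incidents in the period.
incident_rate = len(schema_incidents)

# Mean time to detection across those incidents.
mttd = sum(
    ((i["detected"] - i["occurred"]) for i in schema_incidents),
    timedelta(),
) / len(schema_incidents)

print(incident_rate, mttd)
```

Tracking these per quarter, before and after contracts land, is what turns “contracts help” into a defensible claim.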
The documentation-to-enforcement gap
Data Engineering Weekly framed the underlying challenge well: “Software engineering treated specifications as executable constraints. The data industry often treated contracts as descriptive artifacts.” The shift from documentation to enforcement is where the value lives.
A contract that is only a document is a wiki page. A contract that is validated in CI, enforced at write time, and monitored for SLA compliance is infrastructure. The change management work is about moving the organization from the first state to the second, and that transition is incremental. You don’t jump from no contracts to full enforcement. You start with documentation (because you need the conversation to happen), add CI validation (because you need the feedback loop), add runtime monitoring (because you need to track SLAs), and eventually reach a state where the contract is as much a part of the deployment process as the code itself.
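A CI gate for the middle of that progression might diff a proposed schema against the contracted one and block breaking changes. The column names and the breaking/compatible classification below are illustrative assumptions for the sketch:

```python
def breaking_changes(contracted, proposed):
    """Compare two {column: type} schemas. Removals and type changes
    break consumers; added columns are treated as backward-compatible."""
    problems = []
    for col, typ in contracted.items():
        if col not in proposed:
            problems.append(f"removed column: {col}")
        elif proposed[col] != typ:
            problems.append(f"type change on {col}: {typ} -> {proposed[col]}")
    return problems

contracted = {"order_id": "string", "amount_cents": "int"}
proposed = {"order_id": "string", "amount": "int", "currency": "string"}

for problem in breaking_changes(contracted, proposed):
    print("BREAKING:", problem)
# In a real CI job, a non-empty result would exit nonzero
# (e.g. sys.exit(1)) so the pipeline blocks the change.
```

Renaming `amount_cents` to `amount` reads as a removal plus an addition, which is exactly the kind of change the check exists to catch before consumers discover it in production.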
The teams that succeed treat the organizational change as the primary work and the tooling as secondary. Writing YAML is straightforward. Getting two teams to have a structured conversation about data expectations, and maintaining that agreement over time, is where the value is created.