Who is building this, and on what standing.
R.I.V.E.R. is a thesis. Until the research program produces formal evidence, the framework's credibility rests partly on the practitioner who proposed it. This page is the standing.
A framework at this stage has to disclose its author.
Most measurement frameworks earn their public standing through years of structured research before they earn their public name. R.I.V.E.R. is not in that posture yet, and pretending otherwise would be the kind of overreach the framework itself is built to avoid.
What the framework currently has is pattern recognition from field practice, expressed rigorously, written down, and put on a research roadmap that will produce the evidentiary basis the framework eventually needs. Until that roadmap delivers, R.I.V.E.R.'s claims are practitioner-grounded. A reader evaluating those claims is entitled to know whose practice grounded them.
This page is that disclosure. It names the author, the relevant background, the institutional affiliation, the independence position the framework takes, and the research program that is being built to retire the dependency on any one practitioner's standing.
Where the pattern recognition came from.
I am a Staff Solutions Architect at LaunchDarkly. Before that, most of my career was spent in vendor professional services working with large enterprise clients: roughly two years at MITRE and a longer stretch at Red Hat as both an architect and a consultant in their professional services organization.
I have spent more than a decade in consultative roles supporting developers, partnering with operations teams, and trying to define the flow of value through software organizations, evaluate that flow, and communicate it back to the parts of the organization that fund and direct the work. At MITRE I was part of a consultative IT organization that built rapid-prototype infrastructure for government research programs; short engagements with rotating teams forced me to align quickly to each contract's goal and let that alignment inform the technology choices I made for the developers I was supporting. At Red Hat I worked with IT organizations adopting open-source platforms, where the recurring conversation was about helping those organizations locate themselves in the value flow so they did not build things their parent organizations would not use. LaunchDarkly is the smallest company I have worked for, and it has put me closer to the developer side of the same problem than the previous roles did.
Across all of those engagements, the pattern I kept picking up is the same. Teams are routinely unaware of where they sit in the flow of value, which means their motivations and their problem-solving instincts drift out of alignment with the larger company initiative. Teams that produce real value but cannot articulate that value get defunded, with downstream consequences for the organization that defunded them. Cultural change initiatives stall not because the ideas are bad but because nobody can answer the question "how does this make us more money?" in language the people who decide funding can use. R.I.V.E.R. is my attempt to formalize the instincts and institutional knowledge that consultative roles produce, and to give the industry a shared way of asking and answering that question.
What changed, and what I am trying to do about it.
Field observation across more than a decade has produced trends I am confident enough about to write down. It has not produced trends I am confident enough in to publish as established findings, and the difference matters.
What changed is that the trends became urgent. AI agents are entering software workflows now, and one of the things that fact makes inarguable is that an enormous amount of organizational knowledge passes through humans implicitly: the guardrails, the routing, the value-context that human workers carry without being asked to articulate it. Agents do not carry that context. When they take over a piece of work, what gets lost is exactly the thing that the consultative roles I have spent my career in were trying to surface and make explicit in the first place.
I built R.I.V.E.R. to make those flows, and the value-context they carry, explicit. To define them precisely enough that they can be tracked, communicated, and operated on, and explicit enough that AI agents working alongside human teams can be bound by the same context the humans are. The framework's measurement claims are one surface of that work. The operating discipline of declaring release intent, attaching hypotheses, scoping success, and evaluating outcomes against declared cohorts is the deeper substrate. If AI workflows are going to inherit the value-context human teams currently carry, that substrate is what they need to be embedded in.
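To make the operating discipline above concrete, here is a minimal sketch of what a release-intent artifact might look like as a data shape. Everything in it is illustrative: the field names, the `ReleaseIntent` and `Hypothesis` types, and the example values are my shorthand for this page, not the framework's canonical schema.

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    statement: str          # what we expect this release to change
    metric: str             # the signal that would confirm or refute it
    success_threshold: str  # scoped in advance, not after the data arrives

@dataclass
class ReleaseIntent:
    name: str
    intent_type: str                  # one of the framework's intent types
    cohorts: list[str]                # who the release is evaluated against
    hypotheses: list[Hypothesis] = field(default_factory=list)

    def is_evaluable(self) -> bool:
        # A release intent is only evaluable if it declares both a cohort
        # and at least one scoped hypothesis before the release ships.
        return bool(self.cohorts) and bool(self.hypotheses)

# Hypothetical example values, for illustration only.
intent = ReleaseIntent(
    name="checkout-redesign",
    intent_type="experiment",
    cohorts=["new-users-eu"],
    hypotheses=[Hypothesis(
        statement="Redesign reduces checkout abandonment",
        metric="checkout_abandonment_rate",
        success_threshold=">= 2% absolute reduction within 14 days",
    )],
)
print(intent.is_evaluable())  # True
```

The point of a shape like this is not the schema itself but the constraint it enforces: an agent, like a human team, can be refused a release whose intent, cohort, and success criteria were never declared.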
This is also worth saying plainly. I have spent my career in consultative work, not in research. The trends I have picked up across more than a decade are anecdotal in the strict methodological sense, and proving them out requires what I cannot build alone: structured research, a wider participant pool, and the kind of methods rigor I do not have. A research collaboration is being formed to anchor Phase 1. Its purpose, in part, is to pull me from practitioner toward researcher, and to make sure the data the program eventually gives back to the industry is solid, vetted, and applicable.
Field observation, then formalization.
R.I.V.E.R. is not a top-down construction. The framework's six intent types, seven metric families, five maturity levels, and the central release-intent artifact were not designed from first principles and then offered to the industry. They were observed, repeatedly, across software organizations operating in the release era, and then named.
Each named element of the framework corresponds to a phenomenon I have watched teams either produce, struggle to produce, or operate without. The release intent is the artifact that cross-functional teams keep accidentally inventing under different names when they get serious about declaring what a release is for. The maturity ladder is the trajectory I have watched teams climb, and stall on, and skip steps in. The asymmetry between DORA's four metrics and R.I.V.E.R.'s seven families is the asymmetry of what the modern stack actually produces in operational data.
Pattern recognition is not the same as evidence. The framework's coherence across the sample of organizations I have observed is high, and consistent enough that I am confident writing it down. It is not yet empirically formalized. Formalization is what the research program produces, and the research program is staged to produce it on a timeline that earns it.
The framework, and the affiliation.
A measurement framework's credibility depends on its independence from any single vendor's product, and R.I.V.E.R. is no exception. The framework's tool-neutrality, stated in the canonical page and built into the thesis, is what allows it to function as industry vocabulary rather than as a marketing asset. That independence is not negotiable.
It is also worth being plain about the position I write from. I currently work at LaunchDarkly as a Staff Solutions Architect. LaunchDarkly's product is built on the operational separation of deploy from release, and watching that separation play out across many customer organizations is what made it clear to me that the deploy-to-release boundary, important as it is, does not capture the whole shape of what release means in practice.
In R.I.V.E.R.'s framing, release is the full measure of value from ideation through adoption and iteration, not the moment a flag flips on. The framework extends from that observation.
The framework is mine. It is also true that LaunchDarkly's vantage point is what made it possible for me to see what the framework names. Both of those things hold. The platform that prompted the observation makes R.I.V.E.R. instrumentation materially easier; it is not required, and the framework is constructed to be expressible on any combination of platforms and internal systems that produce the necessary data.
DORA originated at Puppet, became an independent firm, and was acquired by Google in 2018. Across all three vendor homes the framework retained credibility, because the methodology and the longitudinal data were independent of any one vendor's product. The framework's portability across platforms was a structural property, not a marketing claim.
R.I.V.E.R. is positioned to follow the same pattern. The research program is designed to produce data that does not depend on any single platform's instrumentation, and the framework's portability is enforced at the definitional level rather than at the marketing level. The independence question matters more for R.I.V.E.R. than it did for DORA, because LaunchDarkly's commercial position benefits from the framework in a way Google's did not, and the framework's structure has to make that benefit incidental rather than load-bearing.
The current stage, and the program built to advance it.
R.I.V.E.R. as published today is a thesis at version 0.1. The framework is internally coherent, grounded in field observation across a sample of practitioners, and written to survive scrutiny. What it does not yet have is structured empirical evidence behind any of its specific claims. That is the gap the research program is built to close.
The program proceeds in phases, and each phase corresponds to a different level of evidentiary claim the framework can defensibly make. A framework that overstates its evidentiary basis is a framework that collapses under its first published benchmark. R.I.V.E.R. is positioned to earn the evidentiary basis it needs, in stages, on the timeline that earns it.
How to get in touch.
The best way to engage with R.I.V.E.R. depends on what kind of engagement you have in mind. Below are the channels worth knowing about.
- Email
- Site: cortsystem.dev. Personal profile and writing. R.I.V.E.R. is live at river-framework.dev.
- LinkedIn
- Phase 1 / Interviews: If you lead engineering, product, or SRE/operations at an organization operating in the release era and want to be interviewed for Phase 1, the participation page has the details and the signup: cortsystem.dev/participate.