Committed

SPECTRA.
AI-assisted engineering, made predictable.

Seven disciplines, applied as a system. Distilled from research and refined across enterprise deployments where being wrong has clinical, regulatory, or financial consequences.

SPECTRA methodology system diagram
30+
Government hospitals running on Committed-built platforms
500K+
Citizens served by Committed-built government services
1,000+
IoT devices monitored under SPECTRA-governed pipelines
11 mo
Median POC-to-production for SPECTRA-led programs

We believe you can build with AI agents quickly, without losing control. We believe in organizations that operate by method, not in organizations that depend on a star developer who knows how to phrase a good prompt. We believe real productivity comes from measurement, not from declaration.

The failure mode is rarely the model

Across roughly forty production deployments we have watched the same arc. An AI-assisted team ships fast for the first ten pull requests. By PR fifty, velocity has collapsed: inconsistent patterns, half-finished refactors, tests that pass but mean nothing, and a growing pile of code that nobody on the team fully understands. The next engineer takes three weeks to onboard instead of three days.

The model is fine. What collapses is the organizational scaffolding around it. Specifications live in someone's head. Policy is enforced by personality. Reviews are skipped under deadline. Audits exist, but only after something breaks. The pattern is not a tooling problem. It is the absence of method - and at AI-assisted velocity, the absence of method does in months the damage that would otherwise take years.

What we value

A written specification - over a verbal prompt.
Codified policy - over personal preference.
A scoped agent - over an agent with imagination.
Knowledge shared across the organization - over knowledge locked in one developer's head.
Automated quality gates - over good intentions.
Human review calibrated to risk - over review that gets skipped under deadline.
Audit by default - over audit after something breaks.

There is value in the right side of each pair. We value the left side more.

Seven disciplines. One method. SPECTRA.

Figure 01

The seven disciplines, as a closed system.

SPECTRA is not a checklist. It is a closed loop where every output feeds the next discipline and every discipline reinforces the spec. The dashed feedback line is what keeps the system honest: an audit event that surfaces a recurring class of error becomes a spec change, which becomes a policy update, which becomes a test in CI.

SPECTRA seven-discipline loop: Spec → Policy → Engine → Context → Tests → Review → Audit, with audit feeding back into Spec
Fig. 01 - The closed-loop methodology. Audit feeds back into Spec.

Each discipline, in detail

S
Spec
A written specification before the first commit.

Not a verbal prompt, not a Slack thread. The specification is the contract between intent and code, versioned in the same repo and updated when the design changes. Without a written spec, every PR is an interpretation - and AI accelerates the rate at which interpretations diverge from each other.

P
Policy
Organizational policy enforced in code.

Naming conventions, security requirements, data-handling rules, model selection, retry behavior, the redaction policy your CISO signed off on - encoded as checks that run on every PR. Policy that lives only in someone's head will lapse the moment that person is on vacation. Policy enforced in CI never lapses.
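Enforced-in-code policy can be as small as a script that scans the PR diff and turns a rule into a red CI signal. A minimal sketch, assuming two hypothetical rules - a PII-logging ban and a pinned-model allowlist; the rule names, regex patterns, and pinned model string are illustrative, not a real policy set:

```python
import re

# Hypothetical rules: (rule id, forbidden pattern on added lines, message).
# The patterns and the pinned model prefix are illustrative assumptions.
POLICY_RULES = [
    ("no-raw-pii-log",
     re.compile(r"log\w*\(.*(ssn|patient_id)", re.IGNORECASE),
     "PII must pass through the redaction layer before logging"),
    ("pinned-model-only",
     re.compile(r'model\s*=\s*"(?!gpt-4o-2024)'),
     "model versions must come from the approved, pinned list"),
]

def check_diff(diff_text: str) -> list[str]:
    """Return one violation message per rule hit on lines the PR adds."""
    violations = []
    for line in diff_text.splitlines():
        if not line.startswith("+"):  # only lines added by this PR are in scope
            continue
        for rule_id, pattern, message in POLICY_RULES:
            if pattern.search(line):
                violations.append(f"{rule_id}: {message}")
    return violations

# In CI, a thin wrapper would feed `git diff` into check_diff() and exit
# non-zero on any violation, failing the PR's pipeline.
```

Because the check runs on every PR, the rule holds whether or not its author is in the room.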

E
Engine
A bounded, defined agent architecture.

Agents have explicit scope, explicit tools, and explicit boundaries. They do not "figure it out." They call documented APIs with documented arguments and return values, with retries and idempotency that have been thought through. An agent with imagination is a liability the auditor cannot bound.
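One way to make "explicit scope, explicit tools" concrete is a registry that rejects any call outside the documented surface. A sketch under assumed names - the tool, its handler, and its arguments are hypothetical:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Tool:
    name: str
    handler: Callable[..., object]
    allowed_args: frozenset[str]  # the documented argument names, nothing else

class BoundedAgent:
    """An agent that can only call registered tools with registered arguments."""

    def __init__(self, tools: list[Tool]):
        self._tools = {t.name: t for t in tools}

    def call(self, name: str, **kwargs):
        tool = self._tools.get(name)
        if tool is None:
            raise PermissionError(f"tool '{name}' is outside this agent's scope")
        unknown = set(kwargs) - tool.allowed_args
        if unknown:
            raise TypeError(f"undocumented arguments for '{name}': {sorted(unknown)}")
        return tool.handler(**kwargs)

# Illustrative registration: one tool, one documented argument.
agent = BoundedAgent([
    Tool("lookup_patient",
         lambda patient_id: {"id": patient_id},
         frozenset({"patient_id"})),
])
```

Anything not registered fails loudly at the call site, which is exactly the boundary an auditor can reason about.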

C
Context
A shared organizational knowledge layer.

Specifications, decisions, runbooks, and prior incidents - retrievable by every agent, every engineer, and every reviewer in the organization. Not knowledge locked inside individuals. The cost of rebuilding context every time a new engineer joins the team is the largest hidden cost in AI-assisted engineering.

T
Tests
Automated quality gates on every output.

Eval harnesses tied to the spec, regression gates that fail CI when behavior drifts, integration tests that run before merge. Tests are how the team agrees on what "done" means without arguing in PR comments. Without them, "done" is a vibe - and vibes do not survive an audit.
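A regression gate that fails CI on drift can be sketched as a comparison against a committed baseline. The metric names, baseline values, and tolerance below are illustrative assumptions, not numbers from a real spec:

```python
# Hypothetical baseline committed alongside the spec; CI fails on drift.
BASELINE = {"extraction_accuracy": 0.94, "refusal_rate": 0.02}
TOLERANCE = 0.01  # allowed drift before the gate goes red

def regression_gate(current: dict[str, float]) -> list[str]:
    """Return one failure message per metric that drifted past tolerance."""
    failures = []
    for metric, baseline in BASELINE.items():
        value = current.get(metric)
        if value is None:
            failures.append(f"{metric}: missing from eval run")
        elif metric.endswith("_rate") and value > baseline + TOLERANCE:
            # rates are "lower is better": rising past tolerance fails
            failures.append(f"{metric}: {value:.3f} rose past {baseline + TOLERANCE:.3f}")
        elif not metric.endswith("_rate") and value < baseline - TOLERANCE:
            # scores are "higher is better": falling below tolerance fails
            failures.append(f"{metric}: {value:.3f} fell below {baseline - TOLERANCE:.3f}")
    return failures
```

An empty list is the green signal; anything else fails the merge.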

R
Review
Human review calibrated to risk.

Not the same review for every change. A docs typo and a billing-flow change should not have the same review path. The riskier the change, the more eyes - and the rules for what counts as risky are written down, not improvised. Review that gets skipped under deadline pressure is the same as no review at all.

A
Audit
Full transparency on prompt, decision, output, and cost.

Every model interaction recorded with its inputs, its outputs, the model and version that produced it, the cost, and the chain of decisions around it. Not as a feature flag. By default. This is what lets you defend the system to a regulator, an internal auditor, or a customer who is asking why the model said what it said.
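The shape of such a record can be sketched as one JSON line per interaction; the field names here are assumptions for illustration, not a published SPECTRA schema:

```python
import json
import time
import uuid
from dataclasses import dataclass, field, asdict

# Illustrative record schema - field names are assumptions, not a published format.
@dataclass
class AuditRecord:
    prompt: str
    output: str
    model: str                  # exact model identifier and version
    cost_usd: float
    decision_chain: list[str]   # spec section, policy hits, reviewer, etc.
    record_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    recorded_at: float = field(default_factory=time.time)

def write_audit(record: AuditRecord, sink: list[str]) -> None:
    """Append one JSON line per model interaction - on by default, not a flag."""
    sink.append(json.dumps(asdict(record), sort_keys=True))
```

Because every interaction writes the same fields, the answer to "why did the model say that" is a query, not an investigation.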

Figure 02

Review is calibrated to risk, not to ceremony.

A docs typo and a billing-flow change should not travel the same review path. Risk tiers are defined up front, encoded in the merge automation, and applied to every PR. The signed audit log captures the outcome of every tier, every time - so every decision is traceable, regardless of how routine it looked.

Risk-calibrated review: low-risk auto-merges, medium-risk needs one senior reviewer, high-risk needs two reviewers and audit sign-off; all tiers are recorded in the signed audit log
Fig. 02 - Risk-tiered review. Every tier writes to the same signed audit.
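Tiering of this kind can be encoded in merge automation as an ordered rule table that maps changed paths to a tier. The path patterns and tier requirements below are hypothetical, not a real repo layout:

```python
from fnmatch import fnmatch

# Hypothetical, ordered highest-tier-first; patterns are illustrative.
TIER_RULES = [
    ("high",   ["billing/**", "auth/**"], {"reviewers": 2, "audit_signoff": True}),
    ("medium", ["src/**"],                {"reviewers": 1, "audit_signoff": False}),
    ("low",    ["docs/**", "*.md"],       {"reviewers": 0, "audit_signoff": False}),
]

def classify(changed_paths: list[str]) -> str:
    """A PR takes the highest tier that any changed file matches."""
    for tier, patterns, _requirements in TIER_RULES:
        for path in changed_paths:
            if any(fnmatch(path, pattern) for pattern in patterns):
                return tier
    return "medium"  # unknown paths default upward, never downward
```

Writing the rules down as data is what makes "what counts as risky" reviewable rather than improvised.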
Figure 03

How it gets adopted - day, week, month, quarter.

SPECTRA is teachable in a day, applicable on a project, and embeddable in a quarter. Adoption follows a deliberate cadence: day one is tooling, the first week is workflow, the first month is cadence, the first quarter is culture. We embed engineers into your team for that quarter and exit when SPECTRA is enforced without us in the room.

Adoption cadence: Day 1 tooling, Week 1 workflow, Month 1 cadence, Quarter 1 culture
Fig. 03 - Adoption cadence. Tooling, workflow, cadence, culture.

What is different from "good engineering practice"

Most of these disciplines existed before AI. What changes with AI-assisted engineering is the velocity at which their absence becomes catastrophic. A team that ships features twice as fast, with twice the surface area per PR, can erode a codebase in months that would have taken years. SPECTRA is not new rules. It is the rules made non-negotiable, encoded into the system, and measured.

We did not design SPECTRA in the abstract. It was distilled across deployments where the cost of getting it wrong was real - hospitals where a misread document is a clinical event, regulated finance where a mis-routed transaction is a regulatory event, citizen services where the cost of an error is a person whose status is wrong in a database. The constraints in those environments are what shaped each discipline.

Where it has shipped

SPECTRA underpins programs we have run with the Israeli Ministry of Health (multilingual patient platforms across thirty-plus government hospitals), the Population and Immigration Authority (citizen services for half a million users), and a global cash-logistics leader (predictive maintenance and back-office automation across thousands of connected devices and hundreds of millions of transactions). In each case, the methodology survived the audit, the cutover, and the first incident.

What CTOs ask us

Is this another framework that will collect dust on a Confluence page?

No. SPECTRA is enforced in code on day one - spec templates, policy checks in CI, an evaluation harness, and an audit pipeline. If a discipline cannot be measured by a green or red CI signal, it is not in the methodology yet. The deliverable is running infrastructure, not a deck.

How does this play with our existing engineering process?

It augments rather than replaces. Most enterprises already have code review, security review, and change management. SPECTRA defines how AI-assisted work plugs into those - which gates apply, when human review is required, what the audit trail looks like - so AI engineering does not become a parallel process that bypasses the controls already in place.

Does SPECTRA require us to use specific tools or cloud providers?

No. We have shipped SPECTRA on AWS, Azure, GCP, and on the Israeli Landing Zone, with model providers ranging from Anthropic to OpenAI to Bedrock-hosted open-weight models. The methodology is independent of the substrate. The substrate is your decision.

How do we know it is working?

Three signals. First, time-to-onboard new engineers drops from weeks to days because the spec, the policy, and the audit all tell the same story. Second, the rate of incidents traceable to "we did not know" goes to zero - everything is in the audit log. Third, your auditors stop asking unscheduled questions, because the answers are already documented.

What does the engagement actually look like?

Day one through quarter end, our principal engineers are embedded with your team. Not a vendor presence - on your standups, on your repo, on your on-call rotation. We hand the methodology over progressively: tooling first, workflow next, culture last. By the end of the quarter, your team is enforcing SPECTRA without us in the room.

Want SPECTRA running in your engineering org this quarter?

The first call is a 30-minute working session with a principal engineer who has shipped SPECTRA into regulated production. Not a sales rep, not a deck.

Talk to an architect · See the service