Stable public release • v2.6.1
IOA Core is the open source governance kernel for AI workflows.
Policy checks, evidence bundles, immutable audit trails, memory-backed orchestration,
and quorum-style review patterns, all inside the execution path.
7 System Laws in the governance layer
PyPI package live
Audit and evidence artifacts inspectable
Install
`pip install ioa-core`
What one governed action looks like
01 Request enters runtime
Task, policy context, and model/provider choices arrive together.
02 System Laws and policies evaluate
Checks run before the action proceeds, not only after output exists.
03 Execution and review happen
Provider-neutral and quorum-style review patterns can participate in the decision.
04 Audit and evidence emit
The action leaves behind inspectable records instead of forcing reconstruction later.
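The four steps above can be sketched as a thin wrapper. This is purely illustrative: the names `check_policy`, `emit_evidence`, and `governed_action` are hypothetical stand-ins, not IOA Core's API.

```python
import json
import uuid

# Hypothetical policy check: runs BEFORE the action proceeds (step 02).
def check_policy(request: dict) -> str:
    if request.get("contains_sensitive_data"):
        return "allow_with_mitigation"
    return "allow"

# Hypothetical evidence emitter: leaves an inspectable record (step 04).
def emit_evidence(request: dict, decision: str) -> dict:
    return {
        "evidence_id": f"ev-{uuid.uuid4().hex[:8]}",
        "action_type": request["action_type"],
        "decision": decision,
    }

def governed_action(request: dict, execute) -> dict:
    decision = check_policy(request)           # step 02: evaluate first
    if decision == "deny":
        return emit_evidence(request, decision)
    output = execute(request)                  # step 03: execution/review
    record = emit_evidence(request, decision)  # step 04: audit artifact
    record["output"] = output
    return record

record = governed_action(
    {"action_type": "clinical_summary", "contains_sensitive_data": True},
    execute=lambda req: "summary text",
)
print(json.dumps(record, indent=2))
```

The point of the shape, not the names: the check gates execution up front, and the evidence record is produced as part of the same call, not reconstructed afterwards.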
The hook is not governance rhetoric; it is inspectable runtime output.
This page answers one practical question fast: what do I actually get if I wire this in?
The answer is a governed execution path with concrete artifacts you can inspect, persist, and route upstream.
```python
from ioa_core.governance import PolicyEngine

engine = PolicyEngine()

result = engine.validate_action({
    "action_type": "clinical_summary",
    "jurisdiction": "EU",
    "contains_sensitive_data": True,
})

print(result.decision)
print(result.audit_id)
```
```json
{
  "policy": "demo-governed",
  "decision": "allow_with_mitigation",
  "audit_chain_verified": true,
  "system_laws_applied": ["Law 1", "Law 5", "Law 7"],
  "evidence_id": "ev-0001"
}
```
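The `audit_chain_verified` field implies that audit records are linked so tampering is detectable. A minimal illustration of the general hash-chaining technique (a sketch only, not IOA Core's internal record format):

```python
import hashlib
import json

def append_record(chain: list, payload: dict) -> None:
    """Link each record to the previous one by hash (tamper-evident)."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev_hash": prev_hash, "payload": payload}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify_chain(chain: list) -> bool:
    """Recompute every hash; any edited record breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {"prev_hash": rec["prev_hash"], "payload": rec["payload"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev_hash"] != prev or rec["hash"] != digest:
            return False
        prev = rec["hash"]
    return True

chain = []
append_record(chain, {"decision": "allow_with_mitigation", "evidence_id": "ev-0001"})
append_record(chain, {"decision": "allow", "evidence_id": "ev-0002"})
print(verify_chain(chain))  # True
chain[0]["payload"]["decision"] = "allow"  # tamper with the first record
print(verify_chain(chain))  # False
```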
Are the seven System Laws still part of the story?
Yes. IOA Core still carries the seven-law governance framing in code, docs, and evidence output.
They should support the runtime narrative, not replace it.
Why mention them
They make the governance layer legible and give teams a stable vocabulary for policy, fairness, oversight, and sustainability checks.
Why not lead only with them
Developers adopt based on concrete behavior. The laws are more persuasive once the page shows what enforcement and evidence actually look like.
Where to go deeper
Read the System Laws page if you want the full framing behind the runtime controls.
Why it exists
Most AI tooling focuses on prompts, outputs, tracing, or post-hoc monitoring. IOA Core focuses on the runtime path itself:
what policy was applied, what evidence was produced, and how a decision can be inspected later.
Policy
Put governance checks in the execution path instead of relying only on dashboards and after-the-fact review.
Evidence
Generate evidence artifacts and audit records as part of governed actions, not as reconstructed summaries later.
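One common way to make such artifacts durable and routable is an append-only JSON Lines sink. The path and record shape below are illustrative assumptions, not part of the package:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical sink location; a real deployment would route this upstream.
EVIDENCE_LOG = Path(tempfile.mkdtemp()) / "evidence.jsonl"

def persist_evidence(record: dict) -> None:
    """Append one evidence record per line: easy to tail, grep, and ship."""
    with EVIDENCE_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, sort_keys=True) + "\n")

persist_evidence({"evidence_id": "ev-0001", "decision": "allow_with_mitigation"})
persist_evidence({"evidence_id": "ev-0002", "decision": "allow"})

# Read the log back as structured records.
records = [json.loads(line) for line in EVIDENCE_LOG.read_text().splitlines()]
print(len(records))  # 2
```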
Neutrality
Use provider-neutral and quorum-oriented patterns instead of locking governance to one model vendor.
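A quorum-oriented pattern in miniature: each reviewer could wrap a different model vendor, and the action passes only if enough independent reviewers approve. The reviewer functions and threshold here are hypothetical, sketched to show the shape rather than IOA Core's implementation:

```python
from collections import Counter

# Hypothetical reviewers; in practice each could call a different provider.
def reviewer_a(action: dict) -> str:
    return "approve"

def reviewer_b(action: dict) -> str:
    return "reject" if action.get("contains_sensitive_data") else "approve"

def reviewer_c(action: dict) -> str:
    return "approve"

def quorum_review(action: dict, reviewers, threshold: int = 2) -> str:
    """Provider-neutral quorum: pass only with enough independent approvals."""
    votes = Counter(reviewer(action) for reviewer in reviewers)
    return "approve" if votes["approve"] >= threshold else "reject"

decision = quorum_review(
    {"action_type": "clinical_summary", "contains_sensitive_data": True},
    reviewers=[reviewer_a, reviewer_b, reviewer_c],
)
print(decision)  # approve (2 of 3 reviewers approved)
```

Raising `threshold` to the full reviewer count turns the same function into a unanimity check, which is why the quorum shape stays neutral across vendors.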
Best fit
IOA Core is most useful when AI workflows affect compliance, safety, privacy, or material decisions.
Who benefits most
- AI platform and ML infrastructure teams
- regulated-domain teams in healthcare, legal, finance, and public sector
- teams building multi-model or multi-agent review flows
- developers who want an OSS runtime layer before buying a hosted control plane
What this page is not claiming
- not a claim that OSS alone is a complete hosted operator platform
- not a claim of MCP support unless an MCP adapter is shipped
- not a claim of A2A support unless that integration is implemented
- not a blanket compliance-certification claim for the OSS package alone
Public release status
The current release line is live, installable, and aligned across package, docs, and website surfaces.
Package
`ioa-core` `v2.6.1` is published on PyPI and verified from a fresh install path.
Validation
Core GitHub Actions are green and the public docs were updated to match the shipped release.
Next step
Install it, run the examples, inspect the audit and evidence outputs, and give technical feedback.
Start with the real thing
Use the package, inspect the repo, and read the docs. That is a stronger developer experience than a pure marketing pitch.