AI governance is moving from slideware and policy PDFs into implementation deadlines. IOA Core v2.6.1 is our OSS answer to that shift.
IOA Core v2.6.1 is now live on PyPI under Apache-2.0. It is not a compliance certificate and it is not a full hosted control plane. It is the runtime layer: the part that decides whether governance is actually present when an AI workflow executes.
`pip install ioa-core`

The AI governance conversation has changed. A year ago, many teams could still treat governance as something to revisit after the product worked. In 2026, that posture is much harder to defend.
In the European Union, the AI Act entered into force on 1 August 2024. According to the European Commission's implementation timeline, prohibited AI practices and AI literacy obligations began to apply on 2 February 2025, obligations for general-purpose AI models began to apply on 2 August 2025, and the general date of application is 2 August 2026.
That does not mean every AI team needs a massive compliance program tomorrow. It does mean the era of completely hand-wavy governance is ending. More teams need a concrete answer to operational questions: where do policy checks actually run, what evidence does an AI action leave behind, and who can audit it afterwards?
IOA Core is an open-source governance kernel for AI workflows. The center of gravity is not dashboards or policy theater. It is the execution path itself.
- Governed workflows can run with explicit policy checks instead of relying only on post-hoc review.
- Governed actions produce inspectable evidence artifacts as part of execution.
- Hash-chained audit logging makes it harder to treat critical decisions as undocumented side effects.
- Context and persistence live in the same runtime substrate rather than as an afterthought.
- Use OpenAI, Anthropic, Gemini, DeepSeek, XAI, or Ollama without tying governance semantics to one vendor.
- Multi-model review is treated as a runtime pattern, not just a presentation-layer trick.
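Hash-chaining is a simple property to state concretely. The sketch below is *not* IOA Core's implementation, just a minimal self-contained illustration of the pattern: each log entry commits to the hash of the previous one, so editing any record invalidates every hash after it.

```python
import hashlib
import json

def append_entry(log, event):
    """Append an event, chaining it to the hash of the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"event": event, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})
    return log

def verify_chain(log):
    """Recompute every hash; a tampered entry breaks the rest of the chain."""
    prev_hash = "0" * 64
    for entry in log:
        body = {"event": entry["event"], "prev_hash": entry["prev_hash"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != digest:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"action": "model_call", "decision": "allow"})
append_entry(log, {"action": "tool_use", "decision": "deny"})
assert verify_chain(log)

log[0]["event"]["decision"] = "deny"   # rewrite history...
assert not verify_chain(log)           # ...and verification fails
```

The point of the pattern is that the audit trail becomes tamper-evident by construction, not by trusting whoever holds the log file.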
Does this make a system compliant? Not by itself.
The honest answer is narrower and more useful: IOA Core helps teams operationalize some of the runtime mechanics that AI governance regimes make harder to ignore. It gives developers a place to put policy checks, evidence generation, audit continuity, and provider-neutral review logic in code they can run and inspect.
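To make "a place to put policy checks and evidence generation" concrete, here is a minimal sketch of the shape in plain Python. All names here (`governed`, `no_pii`, `EVIDENCE`) are hypothetical and illustrative, not IOA Core's actual API: the policy runs in the execution path, and an evidence record is produced whether the action is allowed or blocked.

```python
import datetime
import functools

EVIDENCE = []  # stand-in for a real evidence/audit sink

def governed(policy):
    """Run a policy check before the wrapped action; record evidence either way."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            allowed, reason = policy(*args, **kwargs)
            EVIDENCE.append({
                "action": fn.__name__,
                "allowed": allowed,
                "reason": reason,
                "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            })
            if not allowed:
                raise PermissionError(f"{fn.__name__} blocked: {reason}")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

def no_pii(prompt):
    ok = "ssn" not in prompt.lower()
    return ok, "ok" if ok else "possible PII in prompt"

@governed(no_pii)
def call_model(prompt):
    return f"model output for: {prompt}"

call_model("summarize this contract")        # allowed; evidence recorded
try:
    call_model("lookup SSN 123-45-6789")     # blocked; evidence still recorded
except PermissionError:
    pass
assert [e["allowed"] for e in EVIDENCE] == [True, False]
```

The structural point is the same one the paragraph makes: the check and the evidence live in code on the execution path, where they can be run and inspected, rather than in a document.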
That does not mean installing `ioa-core` makes a system automatically compliant with the EU AI Act, GDPR, HIPAA, or anything else. Legal classification, risk assessment, organizational controls, documentation obligations, and domain-specific procedures still exist outside the package.
But if a team is trying to move from abstract governance goals to runtime implementation, that is exactly where IOA Core is meant to help.
What the `v2.6.1` release actually covered:

- The package, docs, website, and release messaging now line up around the shipped `v2.6.1` public release.
- The main GitHub Actions path, SPDX checks, and PyPI publication path were hardened and verified.
- Dependent repos were checked, and compatibility fixes landed where they were needed.
- A fresh virtual-environment install and import smoke test passed against the published PyPI package.
Who this release is aimed at:

- Teams that need policy and evidence in the runtime path instead of only in tracing or monitoring tools.
- Healthcare, legal, finance, and public-sector teams that need stronger auditability around AI actions.
- Teams building review, escalation, or consensus workflows across providers and local models.
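The review-and-escalation pattern in that last group can be sketched without any particular provider SDK. The reviewers below are stand-in callables where real OpenAI, Anthropic, or Ollama calls would go, and the quorum-based escalation rule is an assumption for illustration, not IOA Core's behavior.

```python
from collections import Counter

def consensus_review(answer, reviewers, quorum=2):
    """Ask several independent reviewers; escalate when no verdict reaches quorum."""
    verdicts = [review(answer) for review in reviewers]
    verdict, count = Counter(verdicts).most_common(1)[0]
    if count >= quorum:
        return {"verdict": verdict, "votes": verdicts, "escalate": False}
    return {"verdict": "needs_human_review", "votes": verdicts, "escalate": True}

# Stand-ins for calls to different providers (hosted APIs, a local Ollama model, ...)
reviewers = [
    lambda a: "approve" if "refund" not in a else "reject",
    lambda a: "approve",
    lambda a: "approve" if len(a) < 200 else "reject",
]

result = consensus_review("close ticket as resolved", reviewers)
assert result["verdict"] == "approve" and not result["escalate"]
```

Treating this as a runtime pattern means the vote, the individual verdicts, and the escalation decision are all ordinary values that can flow into the same evidence and audit machinery as any other governed action.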
IOA Core is the kernel, not the whole operator stack. Teams that move beyond local evaluation usually also want governance observability, structured documentation exports, review workflows, and domain-specific control mappings.
Those layers belong above the kernel: in hosted operator tooling, commercial workflow surfaces, and QIX domain packs. This release matters because it establishes a real runtime substrate for that next layer rather than another slide-deck architecture.
The most useful reaction to this release is not abstract agreement. It is technical critique. Install it, run the examples, inspect the audit and evidence output, and tell us where the model breaks down in real deployments.