How the IOA Quorum Works
Multi-model consensus governance that provides explainable, auditable, and enforceable AI decisions backed by cryptographic evidence.
Overview
The Quorum is IOA's consensus mechanism that evaluates high-risk AI decisions across multiple models simultaneously. Instead of relying on a single model's judgment, Quorum aggregates responses from 3-5 models (e.g., GPT-4, Claude, Gemini, internal policy engines) to produce a majority vote with full evidence trails.
Bias Mitigation
Single-model biases are reduced through diverse model consensus
Audit Trails
Every vote is recorded with SHA256 signatures for regulatory compliance
Parallel Execution
Models evaluate simultaneously, minimizing latency (2-5s typical)
High Reliability
Automatic fallback if any model fails or times out
How It Works
When a high-risk operation requires governance enforcement, IOA's Quorum mechanism orchestrates a consensus vote across multiple models.
Prompt Enters IOA Engine
User request or AI operation triggers a governance check. IOA identifies the operation as high-risk (e.g., PHI access, financial disclosure, legal citation).
operation: "phi_redaction"
jurisdiction: "US-HIPAA"
risk_level: "HIGH"
Multiple Models Evaluate in Parallel
IOA dispatches the same governance query to 3-5 models simultaneously. Each model evaluates based on its training and policy context.
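The fan-out step can be sketched with asyncio: the same query goes to every model concurrently, so total latency is bounded by the slowest responder (or the timeout), and a model that fails or times out is simply excluded from the vote. The model callables and query shape here are illustrative stand-ins, not IOA's actual adapter API.

```python
import asyncio

# Hypothetical model adapters; real integrations would call vendor APIs.
async def gpt4_vote(query):   await asyncio.sleep(0.01); return "ALLOW"
async def claude_vote(query): await asyncio.sleep(0.01); return "DENY"
async def gemini_vote(query): await asyncio.sleep(0.01); return "DENY"

async def dispatch_quorum(query, models, timeout=10.0):
    """Fan one governance query out to all models in parallel."""
    async def guarded(model):
        try:
            # Per-model latency budget: slow models are dropped, not awaited.
            return await asyncio.wait_for(model(query), timeout)
        except Exception:
            return None  # failed or timed out: excluded from the vote
    results = await asyncio.gather(*(guarded(m) for m in models))
    return [v for v in results if v is not None]

votes = asyncio.run(dispatch_quorum(
    {"operation": "phi_redaction", "risk_level": "HIGH"},
    [gpt4_vote, claude_vote, gemini_vote],
))
print(votes)  # ['ALLOW', 'DENY', 'DENY']
```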
Consensus Vote Recorded
Models return votes (ALLOW / DENY / WARN). IOA aggregates results, applies consensus logic (majority, unanimous, or weighted), and generates a signed evidence bundle.
Evidence ID: EVID-2025-00123
Hash: b1946ac9...
Example Use Cases
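One way to picture the evidence bundle step: serialize the prompt, votes, consensus result, and context into canonical JSON, then seal it with a SHA256 digest so any later tampering is detectable. The field names and structure below are an assumption for illustration, not IOA's actual bundle schema, and a production system would also sign the digest with a private key.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_evidence_bundle(prompt, votes, consensus, jurisdiction):
    """Assemble an evidence record and seal it with a SHA256 digest."""
    bundle = {
        "prompt": prompt,
        "votes": votes,
        "consensus": consensus,
        "jurisdiction": jurisdiction,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # Canonical JSON (sorted keys) makes the hash reproducible on replay.
    canonical = json.dumps(bundle, sort_keys=True).encode()
    bundle["sha256"] = hashlib.sha256(canonical).hexdigest()
    return bundle

b = build_evidence_bundle(
    "Include patient names in a public dataset?",
    {"gpt4": "DENY", "claude": "DENY", "gemini": "ALLOW"},
    "DENY",
    "US-HIPAA",
)
print(b["sha256"][:8])  # truncated digest, like the Hash shown above
```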
Quorum is typically enabled for high-stakes decisions where a single model's error could have serious regulatory, ethical, or financial consequences.
HIPAA PHI Redaction
Scenario: A healthcare AI assistant is asked to include patient names in a public dataset.
Financial Audit Anomaly
Scenario: An AI accounting assistant is asked to hide a revenue discrepancy in quarterly reports.
Legal Citation Validation
Scenario: A legal AI is citing a case that has been overturned. Quorum validates citation status.
Performance & Reliability
Does Quorum introduce delay?
Yes, but minimal. Quorum runs models in parallel, so latency is determined by the slowest model, typically 2-5 seconds for a 3-model quorum. Enterprises can configure:
- Quorum depth: 1 (single model) → 3 (standard) → 5 (high-risk)
- Latency budget: Auto-timeout for slow models (default: 10s)
- Selective quorum: Enable only for high-risk operations
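Those three knobs could be expressed as a configuration object along the following lines. The key names and the list of high-risk operations are hypothetical, chosen to mirror the options above rather than IOA's documented schema.

```python
# Illustrative quorum configuration; keys are assumptions, not IOA's schema.
QUORUM_CONFIG = {
    "depth": 3,              # 1 = single model, 3 = standard, 5 = high-risk
    "latency_budget_s": 10,  # per-model timeout before automatic fallback
    "selective": {
        # Quorum only where a wrong call has regulatory consequences.
        "enabled_for": ["phi_access", "financial_disclosure", "legal_citation"],
        "default": "single_model",
    },
}
```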
How costly is Quorum?
Approximately 3× the cost of a single LLM call for the same token count (e.g., $0.06 vs $0.02 for 1k tokens with GPT-4). Most enterprises enable Quorum only for:
- High-risk operations (PHI, financial disclosure, legal citations)
- Production deployments (disable in dev/staging)
- Specific user roles (admins, auditors, compliance officers)
IOA's Consensus Pack add-on ($299/mo for 10k consensus requests) provides predictable pricing. View pricing →
What if models disagree?
IOA applies configurable consensus logic:
- Majority: 2/3 or 3/5 votes determine the result (default)
- Unanimous: All models must agree (strictest)
- Weighted: Assign higher weight to internal policy engines or domain-specific models
- Tie-breaking: DENY always wins ties (fail-safe)
Disagreements are logged as high-priority evidence for human review.
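The consensus modes above can be sketched as a single aggregation function: unanimous requires every vote to match, majority and weighted modes tally (optionally weighted) votes, and any tie resolves to DENY as the fail-safe. This is a minimal sketch of the described behavior, not IOA's implementation.

```python
from collections import Counter

def resolve(votes, mode="majority", weights=None):
    """Aggregate ALLOW/DENY/WARN votes into one decision.

    weights lets, e.g., an internal policy engine count for more
    than a general-purpose LLM. Ties always resolve to DENY.
    """
    if mode == "unanimous":
        return votes[0] if len(set(votes)) == 1 else "DENY"
    weights = weights or [1] * len(votes)
    tally = Counter()
    for vote, w in zip(votes, weights):
        tally[vote] += w
    top = tally.most_common()
    # Fail-safe: DENY wins any tie for the highest weight.
    if len(top) > 1 and top[0][1] == top[1][1]:
        return "DENY"
    return top[0][0]

print(resolve(["ALLOW", "DENY", "DENY"]))                     # DENY
print(resolve(["ALLOW", "DENY"], weights=[1, 1]))             # DENY (tie)
print(resolve(["ALLOW", "ALLOW", "DENY"], mode="unanimous"))  # DENY
```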
Who audits the models?
Each model's behavior is continuously assessed using Aletheia-aligned ethics scoring. IOA tracks:
- Bias metrics: Demographic parity, equal opportunity
- Consistency: Same prompt → same vote across runs
- Integrity: No hallucinations or policy violations
Models with declining performance are automatically downweighted or removed from quorum pools. Learn about Aletheia alignment →
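Downweighting and removal might look like the sketch below: models carry a rolling quality score, anything under a removal threshold leaves the pool, and the rest vote with score-proportional weight. The thresholds and field names are hypothetical, not IOA defaults.

```python
def reweight(models, floor=0.2, drop_below=0.5):
    """Derive quorum voting weights from rolling quality scores.

    models: {name: score in [0, 1]}. Scores under drop_below remove the
    model from the pool; survivors keep a weight no lower than floor.
    Thresholds here are illustrative assumptions.
    """
    return {name: max(score, floor)
            for name, score in models.items()
            if score >= drop_below}

pool = reweight({"gpt4": 0.92, "claude": 0.88, "drifting-model": 0.41})
print(sorted(pool))  # ['claude', 'gpt4']
```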
Bias Mitigation & Aletheia Alignment
Single-model AI systems are vulnerable to inherited biases from training data. IOA's Quorum mechanism actively mitigates bias through diverse model consensus and Aletheia v2.0 alignment.
Model Diversity
By combining models from different vendors (OpenAI, Anthropic, Google), IOA reduces single-source bias. Each model brings different training data, architectures, and value alignments.
Fairness Scoring
Every quorum decision is evaluated against Aletheia fairness facets:
- Demographic parity (equal outcomes across groups)
- Equal opportunity (equal true positive rates across groups)
- Individual fairness (similar cases → similar outcomes)
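As a concrete illustration of the first facet, a demographic parity check compares ALLOW rates across groups in the decision log; a gap near zero suggests the quorum treats groups similarly, while a large gap flags potential disparate impact. The log format here is an assumption for the sketch.

```python
def demographic_parity_gap(decisions):
    """Absolute gap in ALLOW rates between demographic groups.

    decisions: list of (group, vote) pairs taken from a decision log
    that carries demographic metadata.
    """
    by_group = {}
    for group, vote in decisions:
        by_group.setdefault(group, []).append(vote == "ALLOW")
    rates = [sum(v) / len(v) for v in by_group.values()]
    return max(rates) - min(rates)

log = [("A", "ALLOW"), ("A", "ALLOW"), ("A", "DENY"),
       ("B", "ALLOW"), ("B", "DENY"), ("B", "DENY")]
print(round(demographic_parity_gap(log), 3))  # 0.333
```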
Continuous Auditing
IOA logs every quorum vote with demographic metadata (when available). Auditors can replay decisions and test for:
- Protected attribute leakage
- Disparate impact
- Consistency violations
Human Oversight
High-risk disagreements (e.g., 2/3 split votes) trigger human review workflows. Compliance officers can override quorum decisions with signed justifications.
Aletheia v2.0 Integration
IOA automates approximately 65% of Aletheia assessment facets at runtime, including transparency (evidence logging), reliability (consensus voting), accountability (signed audit trails), and fairness (bias detection).
Learn More About Aletheia →
Glossary
- Quorum
- A consensus mechanism that aggregates decisions from multiple AI models to produce a single, auditable governance decision. Named after the minimum number of members required for a valid legislative vote.
- Consensus
- The agreement logic applied to quorum votes. Common modes: majority (2/3 or 3/5), unanimous (all agree), weighted (models have different voting power), and federated (distributed quorum across organizations).
- Enforce Mode
- One of IOA's three runtime modes. In Enforce mode, quorum decisions are binding: DENY votes block operations immediately. Other modes: Shadow (log only) and Graduated (warnings + throttling).
- Evidence Bundle
- A cryptographically signed package containing: the original prompt, all model votes, consensus result, timestamps, jurisdiction context, and SHA256 hash. Evidence bundles are immutable and stored for regulatory audits (7+ years for HIPAA, SOX).
- Federated Quorum
- A quorum that spans multiple organizations or data centers. For example, a hospital and insurance company might run a federated quorum to jointly approve a sensitive data-sharing operation, with each party contributing models and requiring majority consensus.
- Aletheia Scoring
- IOA's implementation of the Aletheia v2.0 ethical AI framework. Each quorum decision receives scores across 6 facets: Transparency, Reliability, Accountability, Ethics, Fairness, and Sustainability. Low scores trigger alerts or model replacement.
Explore Evidence Samples
See real evidence bundles from production quorum decisions, including PHI redaction, financial audit, and legal citation validation examples.