CLARA OODA Kill Web Closed-Loop Engagement with AR-Governed Decision Cycles
1. Overview
CLARA (Composable Learned Assured Reasoning Architecture) composes engagement decisions from published doctrine sources using a directed acyclic graph (DAG) of 109 rules across 16 namespaces. The OODA Kill Web demo runs a closed-loop Observe-Orient-Decide-Act-Assess cycle over tactical scenarios, with BDA (Battle Damage Assessment) feedback driving re-engagement decisions.
Key property: Rules from published doctrine (LOAC, CDE, ROE) compose through a precedence hierarchy. Higher-precedence constraints cannot be overridden by lower-level rules or mission objectives. This invariant is formally verified by ErgoAI.
The system is ML-agnostic -- the same rule composition works with any classifier (CNN, Logistic Regression, or a CNN+LR composite). The AR layer composes on top of whatever classification the ML model produces, applying doctrine checks regardless of the ML architecture.
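The ML-agnostic composition can be illustrated with a minimal sketch: any classifier exposing a predict-style interface can sit under the AR layer, and the doctrine checks run identically regardless of the model. All names here (`ar_decision`, `predict`, the check signature) are illustrative, not the demo's actual API.

```python
def ar_decision(classifier, target_features, doctrine_checks):
    """Apply doctrine checks on top of whatever the ML model classifies.
    The predict() -> (class, confidence) interface is an assumption."""
    target_class, confidence = classifier.predict(target_features)
    for check in doctrine_checks:          # identical checks for every model
        if not check(target_class, confidence):
            return "hold"
    return "engage"

class TinyClassifier:                      # stand-in for CNN / LR / composite
    def predict(self, features):
        return "armor", 0.9

# LOAC distinction check (confidence >= 0.6, per the precedence table below)
distinction = lambda cls, conf: conf >= 0.6
print(ar_decision(TinyClassifier(), {}, [distinction]))  # engage
```

Swapping `TinyClassifier` for any other model leaves `doctrine_checks` untouched, which is the point of the composition.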
2. The OODA Loop
Each cycle executes five phases:
| Phase | What Happens | Key Output |
|---|---|---|
| Observe | Load/update target list from scenario and BDA feedback. Relocated targets get new positions. | Surviving target list |
| Orient | Run the 16-namespace doctrine DAG. Each target gets a composed decision (engage/hold/escalate) from 109 rules. | Per-target decisions + doctrine flags |
| Decide | Weapon-Target Assignment (WTA) solver assigns platforms and weapons to engage-cleared targets. One round per target per cycle (shoot-look-shoot). | Engagement plan |
| Act | Commit engagements, deduct munitions from platform state. | Rounds fired, munitions remaining |
| Assess | BDA simulation determines outcomes (destroyed/damaged/relocated/missed). Loop controller evaluates 7 rules (L1-L7) to decide: CONTINUE, TERMINATE, or RE-OBSERVE. | BDA results, ESTV update, loop decision |
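The five phases above can be sketched as a driver loop. The phase callables are supplied by the caller, so this shows only the cycle structure and the loop-controller protocol; none of these names are the demo's real API.

```python
def run_ooda(observe, orient, decide, act, assess, scenario, max_cycles=10):
    """Run Observe-Orient-Decide-Act-Assess until the loop controller
    says TERMINATE, or max_cycles is reached. Returns cycles executed."""
    feedback = None
    for cycle in range(max_cycles):
        targets = observe(scenario, feedback)   # Observe: targets + BDA feedback
        decisions = orient(targets)             # Orient: doctrine DAG per target
        plan = decide(decisions)                # Decide: WTA solver
        act(plan)                               # Act: commit, deduct munitions
        feedback, verdict = assess(plan)        # Assess: BDA + rules L1-L7
        if verdict == "TERMINATE":
            return cycle + 1
        # CONTINUE and RE-OBSERVE both feed BDA results back into Observe
    return max_cycles
```

Note that both CONTINUE and RE-OBSERVE loop back through Observe with the BDA feedback attached; only TERMINATE exits.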
Loop Control Rules
| Rule | Type | Condition |
|---|---|---|
| L1 threats_neutralized | TERMINATE | ESTV reduced by 99%+ from initial |
| L2 munitions_exhausted | TERMINATE | Zero remaining munitions across all platforms |
| L3 max_cycles | TERMINATE | Cycle count reaches configured maximum |
| L4 roe_change | TERMINATE | ROE changed to weapons_hold (ceasefire) |
| L5 marginal_value | CONTINUE | ESTV reduction last cycle exceeds threshold |
| L6 diminishing_returns | TERMINATE | ESTV reduction is near-zero (positive but below threshold) |
| L7 force_reobserve | RE-OBSERVE | Unassessed targets past the re-observation window, or ESTV increased (relocated targets raised threat) |
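A minimal evaluator for the seven rules might look like the following. The evaluation order (terminations first, then re-observe, then the marginal-value check) and the thresholds are assumptions for illustration, not the demo's configured values.

```python
def loop_decision(estv_now, estv_prev, estv_initial, munitions, cycle,
                  max_cycles, roe, unassessed_past_window,
                  marginal_threshold=1.0):
    """Evaluate L1-L7 and return CONTINUE, TERMINATE, or RE-OBSERVE."""
    if estv_now <= 0.01 * estv_initial:                 # L1 threats_neutralized
        return "TERMINATE"
    if munitions == 0:                                  # L2 munitions_exhausted
        return "TERMINATE"
    if cycle >= max_cycles:                             # L3 max_cycles
        return "TERMINATE"
    if roe == "weapons_hold":                           # L4 roe_change
        return "TERMINATE"
    if unassessed_past_window or estv_now > estv_prev:  # L7 force_reobserve
        return "RE-OBSERVE"
    if estv_prev - estv_now > marginal_threshold:       # L5 marginal_value
        return "CONTINUE"
    return "TERMINATE"                                  # L6 diminishing_returns
```

Placing L7 before L5/L6 ensures a relocated target that *raised* ESTV triggers re-observation rather than being read as diminishing returns.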
ESTV (Expected Surviving Threat Value)
ESTV measures remaining threat: ESTV = sum(threat_value * (1 - combined_Pk)) across all targets. It starts high (initial threat) and drops as targets are destroyed or damaged. The ESTV curve across cycles shows engagement effectiveness. The loop terminates when further cycles produce negligible ESTV reduction.
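The formula above is a direct sum; a sketch with illustrative field names:

```python
def estv(targets):
    """Expected Surviving Threat Value: each target's threat value
    weighted by its probability of surviving all assigned shots."""
    return sum(t["threat_value"] * (1.0 - t["combined_pk"]) for t in targets)

targets = [
    {"threat_value": 100.0, "combined_pk": 0.75},  # heavily engaged
    {"threat_value": 50.0,  "combined_pk": 0.0},   # not engaged this cycle
]
print(estv(targets))  # 100*0.25 + 50*1.0 = 75.0
```

A target with no assigned shots (combined Pk of zero) contributes its full threat value, so ESTV starts at the scenario's total threat.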
3. Doctrine DAG (16 Namespaces)
The doctrine DAG organizes 109 rules into 16 namespaces connected by 54 directed edges. Each engagement decision traverses the full DAG via 10-hop BFS.
| Namespace | Rules | Precedence | Source |
|---|---|---|---|
| target_assessment | 5 | Tactical | JP 3-60 |
| weapons_pairing | 8 | Tactical | FM 3-09 |
| no_strike_list | 15 | NSL (Level 2) | CJCSI 3160.01 Encl B |
| collateral_objects | 8 | CDE | CJCSI 3160.01 Table B-1/B-2 |
| roe_compliance | 5 | ROE (Level 3) | ROE matrix |
| cde_level_1 through cde_level_5 | 31 | CDE (Level 4) | CJCSI 3160.01 Encl D |
| engagement_authority | 6 | Authority | JP 3-60 II-30 |
| loac_compliance | 5 | LOAC (Level 1) | DoD Law of War Manual; AP I |
| tactical_priority | 7 | Tactical | FM 3-09; JP 3-60 |
| mission_objectives | 4 | Objective | Mission-specific |
| bda_assessment | 8 | OODA | JP 3-60 BDA |
| loop_control | 7 | OODA | OODA spec |
4. Rule Architecture and Precedence
Rules follow a strict 6-level precedence hierarchy. Higher levels cannot be overridden by lower levels.
| Level | Category | Override Policy | Example |
|---|---|---|---|
| 1 | LOAC | Non-derogable. Cannot be overridden by any rule. | Distinction requirement (confidence >= 0.6) |
| 2 | NSL | Override requires dual-use confirmation + commander auth | Category I NSL entity within collateral radius |
| 3 | ROE | Commander can adjust within theater ROE bounds | Weapons tight requires positive ID |
| 4 | CDE | Adjustable per mission profile | CDE Level 1-5 methodology checks |
| 5 | Tactical | Fully adjustable | Target priority scoring |
| 6 | User | Can only ADD restrictions, never weaken protections | Custom engagement range limits |
Safety invariant: User rules (Level 6) can add restrictions (hard caps, soft penalties) but cannot boost scores in ways that conflict with higher-precedence constraints. This is enforced at rule validation time and verified by ErgoAI's \overrides/2 non-interference proofs.
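Rule-validation-time enforcement of the Level 6 invariant could be as simple as a type whitelist. The rule shape and field names below are assumptions; the actual validator also feeds ErgoAI's non-interference proofs.

```python
# Level 6 (user) rules may only add restrictions -- never boosts.
ALLOWED_USER_TYPES = {"hard_cap", "soft_penalty"}

def validate_user_rule(rule):
    """Reject any user rule that could weaken higher-precedence protections."""
    if rule["precedence_level"] != 6:
        raise ValueError("user rules must be Level 6")
    if rule["type"] not in ALLOWED_USER_TYPES:
        raise ValueError(f"rule type {rule['type']!r} not allowed at "
                         "Level 6: restrictions only")
    return rule
```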
Rule Types
- Hard cap (constraint): Blocks engagement entirely if condition is met. E.g., LOAC distinction failure.
- Soft penalty (constraint): Reduces engagement score. E.g., CDE collateral risk.
- Soft boost (objective): Increases engagement score. E.g., mission priority for air defense targets.
- Loop decision (OODA): Controls the OODA loop. E.g., terminate on munitions exhausted.
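One way the three scoring rule types might combine into a single per-target decision (the 0.5 threshold and the weights are illustrative; the real composition runs over the full 16-namespace DAG):

```python
def compose_decision(base_score, fired_rules):
    """Fold fired rules into an engage/hold decision. A hard cap blocks
    outright; penalties and boosts adjust the score."""
    score = base_score
    for rule in fired_rules:
        if rule["type"] == "hard_cap":
            return "hold", 0.0               # e.g. LOAC distinction failure
        if rule["type"] == "soft_penalty":
            score -= rule["weight"]          # e.g. CDE collateral risk
        elif rule["type"] == "soft_boost":
            score += rule["weight"]          # e.g. mission priority
    return ("engage" if score >= 0.5 else "hold"), score
```

Because a hard cap short-circuits before any boost is applied, no objective rule can rescue a constraint-blocked engagement.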
5. Mission Profiles and Objectives
Mission profiles activate subsets of the mission_objectives namespace rules to shape engagement scoring for specific operational contexts. Objective rules use soft boosts to prioritize mission-relevant targets without modifying constraint rules.
| Profile | Active Objectives | Effect |
|---|---|---|
| SEAD Mission | obj_prioritize_air_defense, obj_time_critical | Boosts air defense targets, elevates time-sensitive targeting |
| Urban Protection | obj_urban_protection | Increases collateral sensitivity, tighter CDE thresholds |
| Time Critical | obj_time_critical | Elevates fleeting/time-sensitive targets in priority scoring |
| Defensive | obj_defensive_posture | Favors defensive engagement, penalizes offensive overreach |
Constraint invariance: The SHA-256 hash of the constraint rule set is identical across all mission profiles. Objective rules compose on top of constraints at precedence Level 5 -- they cannot override LOAC (Level 1), NSL (Level 2), ROE (Level 3), or CDE (Level 4). This is verified by ErgoAI's \overrides/2 non-interference proofs.
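The hash check works because only constraint rules enter the digest, in a canonical order, so adding objective rules for a profile cannot perturb it. A sketch of how such a check might be built (rule shapes are illustrative):

```python
import hashlib
import json

def constraint_hash(rules):
    """SHA-256 over the canonicalized constraint subset of a rule set."""
    constraints = [r for r in rules if r["type"] in ("hard_cap", "soft_penalty")]
    canonical = json.dumps(sorted(constraints, key=lambda r: r["id"]),
                           sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

base = [{"id": "loac_distinction", "type": "hard_cap"}]
sead = base + [{"id": "obj_prioritize_air_defense", "type": "soft_boost"}]
# Activating a profile's objective rules leaves the constraint hash unchanged
assert constraint_hash(base) == constraint_hash(sead)
```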
To use a mission profile in the demo: select a profile from the Pipeline page before running the OODA loop. The profile's objective rules will be loaded into the doctrine DAG for every cycle. The constraint hash badge after the run confirms constraint rules were unchanged.
6. Formal Verification (ErgoAI)
The rule set is independently verified by ErgoAI (Flora-2 / XSB), a logic programming engine that uses F-logic with defeasible reasoning.
Verification Checks
| Check | What It Proves | Method |
|---|---|---|
| Rule Consistency | No contradictory rules within the same namespace | contradictory_same_namespace/3 |
| Constraint Invariance | Objective namespaces cannot override constraint namespaces | \overrides/2 over 96 namespace-level facts |
| Per-Target Proof | ErgoAI's final_decision/2 matches the pipeline's decision | Insert target features, query final_decision/2, cross-check |
Decision Traces vs. Formal Proofs: The pipeline produces decision traces (audit trails showing which checks each decision traversed). These are not independent proofs -- they are the pipeline documenting its own reasoning. ErgoAI provides the actual formal verification by independently re-deriving the decision in F-logic.
7. BDA and Loop Control
BDA Outcomes
After each engagement cycle, BDA simulation determines target outcomes based on the effective Pk (probability of kill), modified by environmental conditions:
- Destroyed -- target eliminated (threat value = 0)
- Damaged -- target degraded (threat value reduced by 50%)
- Relocated -- target moved to new position (threat value unchanged, requires re-observation)
- Missed -- no effect (target unchanged)
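A BDA draw consistent with the four outcomes could look like the following. The kill branch fires with probability equal to the effective Pk; how the surviving probability mass splits between damaged, relocated, and missed is an assumption here.

```python
import random

def simulate_bda(target, effective_pk, rng):
    """Draw one BDA outcome for an engaged target (split ratios assumed)."""
    roll = rng.random()
    if roll < effective_pk:
        target["threat_value"] = 0.0          # destroyed: threat eliminated
        return "destroyed"
    if roll < effective_pk + 0.15:
        target["threat_value"] *= 0.5         # damaged: degraded by 50%
        return "damaged"
    if roll < effective_pk + 0.25:
        return "relocated"                    # needs re-observation (L7)
    return "missed"                           # no effect
```

Passing an explicit `rng` keeps scenario runs reproducible from a seed, matching the seeded-scenario design in Section 8.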
Environmental Conditions
Scenarios can specify environmental conditions that degrade weapon effectiveness:
| Condition | Pk Multiplier |
|---|---|
| Night operations | x 0.85 |
| Dust / smoke | x 0.70 |
| Fog / rain | x 0.80 |
| Reduced visibility | x 0.75 |
| Degraded visibility | x 0.60 |
These modifiers compound multiplicatively. A night operation in dust with reduced visibility has an effective Pk multiplier of 0.85 x 0.70 x 0.75 ≈ 0.45, roughly halving weapon effectiveness and forcing more OODA cycles to neutralize threats.
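The compounding is a straight product over active conditions, using the multipliers from the table above (the condition keys and function name are illustrative):

```python
import math

PK_MULTIPLIERS = {
    "night": 0.85,
    "dust_smoke": 0.70,
    "fog_rain": 0.80,
    "reduced_visibility": 0.75,
    "degraded_visibility": 0.60,
}

def effective_pk(base_pk, conditions):
    """Scale a weapon's base Pk by the product of condition multipliers."""
    return base_pk * math.prod(PK_MULTIPLIERS[c] for c in conditions)

# Night + dust + reduced visibility: 0.85 * 0.70 * 0.75 = 0.44625
degraded = effective_pk(1.0, ["night", "dust_smoke", "reduced_visibility"])
```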
Shoot-Look-Shoot
The WTA solver assigns at most one round per target per cycle. After firing, BDA assesses the outcome before the next cycle decides whether to re-engage. This prevents munitions waste on already-destroyed targets and forces the loop to iterate.
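The shoot-look-shoot constraint reduces to a simple filter in the Decide phase: one round per engage-cleared target, and never a round at a target BDA already scored as destroyed. The data shapes below are illustrative, not the WTA solver's real structures.

```python
def build_engagement_plan(decisions, bda_status):
    """One (target_id, rounds) assignment per surviving engage-cleared
    target per cycle; destroyed targets are skipped entirely."""
    plan = []
    for target_id, decision in decisions.items():
        if decision != "engage":
            continue
        if bda_status.get(target_id) == "destroyed":
            continue                       # never re-fire at a dead target
        plan.append((target_id, 1))        # exactly one round this cycle
    return plan
```

This is what makes the zero-wasted-engagements result in Section 9 structural rather than probabilistic.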
8. Scenarios and Conditions
| Scenario | Targets | ROE | Challenge |
|---|---|---|---|
| Open Terrain | 20-50 | Weapons Free | Efficiency under degraded conditions |
| Urban CAS | 15-30 | Weapons Tight | Collateral avoidance, NSL entities |
| Mixed Threat | 30-60 | Weapons Tight | Target prioritization across types |
| Defended Airspace | 10-20 | Weapons Hold | Full doctrine chain, escalation |
Each scenario includes target positions, platform positions with munitions, NSL entities, protected structures, and environmental conditions. Scenarios are seeded for reproducibility (40 seeds per type used in evaluation).
9. Evaluation Results
40-scenario evaluation across all scenario types:
- Zero ROE violations across all 40 scenarios and all OODA cycles.
- Average 1.6 cycles per scenario (range: 1-4 depending on conditions).
- 32% average ESTV reduction per scenario.
- Zero wasted engagements (no rounds fired at already-destroyed targets).
The evaluation data is pre-computed and available in the demo's Evaluation tab under the System section.
10. Demo Interface Guide
The demo organizes tabs into two groups:
Scenario Group (run-dependent)
These tabs require selecting and running a scenario.
| Tab | What It Shows | Requires |
|---|---|---|
| Select | Four scenario cards with target counts, ROE level, and challenge description | -- |
| Situation | Tactical map with target/platform positions, target list, platform loadouts | Select a scenario |
| Pipeline | Mission profile selector, ML model selector, OODA loop controls (max cycles, run loop / single cycle), results summary | Select a scenario |
| DAG | 16-namespace DAG visualization with edge weights, per-target DAG walk | Run pipeline |
| Verify | ErgoAI formal verification (consistency, invariance, per-target proof) at top. Decision traces with namespace tags below. | Run pipeline |
| OODA Cycles | Cycle 0 (initial state) through final cycle. Per-target decision table with class, priority, prior BDA, outcome. BDA results summary per cycle. | Run pipeline |
| ESTV Curve | SVG chart of ESTV reduction across cycles with BDA annotations. Per-cycle breakdown table with destroyed/damaged/relocated/missed counts. | Run pipeline |
| BDA Overlay | Tactical map colored by BDA status with cycle slider (C0 through final). Status bars, ESTV, and platform munitions state per cycle. | Run pipeline |
| Loop Trace | Per-cycle table of all 7 loop control rules (L1-L7) with their evaluated values and fired/not-fired status. | Run pipeline |
System Group (pre-computed)
These tabs show system-level data independent of the selected scenario.
| Tab | What It Shows | Data Source |
|---|---|---|
| Rules | Precedence hierarchy diagram. All 109 rules browsable by namespace. Mission objective rules per profile. User rule editor with precedence validation. | YAML rule files + ErgoAI |
| Evaluation | 40-scenario aggregate: avg cycles, total engagements, ESTV reduction, zero violations. Per-scenario results table. | ooda_evaluation.json |
| AR-ML | How AR doctrine rules shape ML training: cost matrix, adaptive thresholds, hard examples, training comparison. | Pre-computed eval data |
Recommended Demo Flow
- Select -- pick Open Terrain scenario
- Situation -- show the tactical layout (22 targets, 4 platforms)
- Pipeline -- optionally select a mission profile (SEAD), run the OODA loop
- OODA Cycles -- walk through C0 (initial) to C4 (terminate), show BDA feedback driving re-engagement
- ESTV Curve -- show threat reduction across cycles
- BDA Overlay -- slide through cycles on the tactical map
- Verify -- click ErgoAI consistency check, show constraint invariance, run a per-target formal proof
- Rules -- show precedence hierarchy, try adding a user rule (restriction accepted, permissive rule blocked by LOAC)
- Evaluation -- show 40-scenario zero-violation results