CLARA OODA Kill Web Closed-Loop Engagement with AR-Governed Decision Cycles

Contents

1. Overview
2. The OODA Loop
3. Doctrine DAG (16 Namespaces)
4. Rule Architecture and Precedence
5. Mission Profiles and Objectives
6. Formal Verification (ErgoAI)
7. BDA and Loop Control
8. Scenarios and Conditions
9. Evaluation Results
10. Demo Interface Guide

1. Overview

CLARA (Composable Learned Assured Reasoning Architecture) composes engagement decisions from published doctrine sources using a directed acyclic graph (DAG) of 109 rules across 16 namespaces. The OODA Kill Web demo runs a closed-loop Observe-Orient-Decide-Act-Assess cycle over tactical scenarios, with BDA (Battle Damage Assessment) feedback driving re-engagement decisions.

Key property: Rules from published doctrine (LOAC, CDE, ROE) compose through a precedence hierarchy. Higher-precedence constraints cannot be overridden by lower-level rules or mission objectives. This invariant is formally verified by ErgoAI.

The system is ML-agnostic -- the same rule composition works with any classifier (CNN, Logistic Regression, or a CNN+LR composite). The AR layer composes on top of whatever classification the ML model produces, applying doctrine checks regardless of the ML architecture.

2. The OODA Loop

Each cycle executes five phases:

| Phase | What Happens | Key Output |
|---|---|---|
| Observe | Load/update target list from scenario and BDA feedback. Relocated targets get new positions. | Surviving target list |
| Orient | Run the 16-namespace doctrine DAG. Each target gets a composed decision (engage/hold/escalate) from 109 rules. | Per-target decisions + doctrine flags |
| Decide | Weapon-Target Assignment (WTA) solver assigns platforms and weapons to engage-cleared targets. One round per target per cycle (shoot-look-shoot). | Engagement plan |
| Act | Commit engagements, deduct munitions from platform state. | Rounds fired, munitions remaining |
| Assess | BDA simulation determines outcomes (destroyed/damaged/relocated/missed). Loop controller evaluates 7 rules (L1-L7) to decide: CONTINUE, TERMINATE, or RE-OBSERVE. | BDA results, ESTV update, loop decision |

Loop Control Rules

| Rule | Type | Condition |
|---|---|---|
| L1 threats_neutralized | TERMINATE | ESTV reduced by 99%+ from initial |
| L2 munitions_exhausted | TERMINATE | Zero remaining munitions across all platforms |
| L3 max_cycles | TERMINATE | Cycle count reaches configured maximum |
| L4 roe_change | TERMINATE | ROE changed to weapons_hold (ceasefire) |
| L5 marginal_value | CONTINUE | ESTV reduction last cycle exceeds threshold |
| L6 diminishing_returns | TERMINATE | ESTV reduction is near-zero (positive but below threshold) |
| L7 force_reobserve | RE-OBSERVE | Unassessed targets past reobservation window, or ESTV increased (relocated targets raised threat) |
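As a rough illustration, the L1-L7 evaluation could be sketched as below. The check ordering, threshold fields, and state shape are assumptions for illustration, not the demo's actual controller.

```python
# Hypothetical sketch of the L1-L7 loop-control rules from the table above.
# Field names, thresholds, and check ordering are assumptions.
def loop_decision(state):
    if state["estv"] <= 0.01 * state["initial_estv"]:
        return ("TERMINATE", "L1 threats_neutralized")  # 99%+ ESTV reduction
    if state["munitions_remaining"] == 0:
        return ("TERMINATE", "L2 munitions_exhausted")
    if state["cycle"] >= state["max_cycles"]:
        return ("TERMINATE", "L3 max_cycles")
    if state["roe"] == "weapons_hold":
        return ("TERMINATE", "L4 roe_change")           # ceasefire
    if state["unassessed_past_window"] or state["estv_delta"] < 0:
        # Negative delta means ESTV rose (relocated targets raised threat).
        return ("RE-OBSERVE", "L7 force_reobserve")
    if state["estv_delta"] > state["marginal_threshold"]:
        return ("CONTINUE", "L5 marginal_value")
    return ("TERMINATE", "L6 diminishing_returns")      # positive but below threshold
```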

ESTV (Expected Surviving Threat Value)

ESTV measures remaining threat: ESTV = sum(threat_value * (1 - combined_Pk)) across all targets. It starts high (initial threat) and drops as targets are destroyed or damaged. The ESTV curve across cycles shows engagement effectiveness. The loop terminates when further cycles produce negligible ESTV reduction.
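A minimal sketch of that formula; the dict keys are illustrative assumptions, not the demo's actual schema:

```python
# ESTV = sum(threat_value * (1 - combined_Pk)) over all surviving targets.
# Field names here are illustrative assumptions.
def estv(targets):
    return sum(t["threat_value"] * (1.0 - t["combined_pk"]) for t in targets)

targets = [
    {"threat_value": 100.0, "combined_pk": 0.9},  # nearly neutralized
    {"threat_value": 50.0,  "combined_pk": 0.0},  # not yet engaged
]
# 100*(1 - 0.9) + 50*(1 - 0.0) = 60
```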

3. Doctrine DAG (16 Namespaces)

The doctrine DAG organizes 109 rules into 16 namespaces connected by 54 directed edges. Each engagement decision traverses the full DAG via 10-hop BFS.
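The bounded-depth traversal can be pictured with a toy edge list (the real DAG has 54 edges); this sketch is an assumption about the traversal mechanics, not the demo's code.

```python
# Bounded-depth BFS over a namespace DAG, mirroring the 10-hop traversal
# described above. The edge list is a toy stand-in, not the real 54-edge DAG.
from collections import deque

def bfs_namespaces(edges, start, max_hops=10):
    """Return namespaces reachable from `start` within `max_hops` edges."""
    adj = {}
    for src, dst in edges:
        adj.setdefault(src, []).append(dst)
    seen, order = {start}, [start]
    frontier = deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue  # hop budget exhausted on this path
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                order.append(nxt)
                frontier.append((nxt, depth + 1))
    return order

edges = [("target_assessment", "weapons_pairing"),
         ("weapons_pairing", "roe_compliance"),
         ("roe_compliance", "engagement_authority")]
```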

| Namespace | Rules | Precedence | Source |
|---|---|---|---|
| target_assessment | 5 | Tactical | JP 3-60 |
| weapons_pairing | 8 | Tactical | FM 3-09 |
| no_strike_list | 15 | NSL (Level 2) | CJCSI 3160.01 Encl B |
| collateral_objects | 8 | CDE | CJCSI 3160.01 Table B-1/B-2 |
| roe_compliance | 5 | ROE (Level 3) | ROE matrix |
| cde_level_1 through cde_level_5 | 31 | CDE (Level 4) | CJCSI 3160.01 Encl D |
| engagement_authority | 6 | Authority | JP 3-60 II-30 |
| loac_compliance | 5 | LOAC (Level 1) | DoD Law of War Manual; AP I |
| tactical_priority | 7 | Tactical | FM 3-09; JP 3-60 |
| mission_objectives | 4 | Objective | Mission-specific |
| bda_assessment | 8 | OODA | JP 3-60 BDA |
| loop_control | 7 | OODA | OODA spec |

4. Rule Architecture and Precedence

Rules follow a strict 6-level precedence hierarchy. Higher levels cannot be overridden by lower levels.

| Level | Category | Override Policy | Example |
|---|---|---|---|
| 1 | LOAC | Non-derogable. Cannot be overridden by any rule. | Distinction requirement (confidence >= 0.6) |
| 2 | NSL | Override requires dual-use confirmation + commander auth | Category I NSL entity within collateral radius |
| 3 | ROE | Commander can adjust within theater ROE bounds | Weapons tight requires positive ID |
| 4 | CDE | Adjustable per mission profile | CDE Level 1-5 methodology checks |
| 5 | Tactical | Fully adjustable | Target priority scoring |
| 6 | User | Can only ADD restrictions, never weaken protections | Custom engagement range limits |

Safety invariant: User rules (Level 6) can add restrictions (hard caps, soft penalties) but cannot boost scores in ways that conflict with higher-precedence constraints. This is enforced at rule validation time and verified by ErgoAI's \overrides/2 non-interference proofs.
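A sketch of what that validation-time check might look like; the rule schema and effect names are assumptions for illustration, not the demo's actual validator.

```python
# Hypothetical Level-6 validation: user rules may add restrictions
# (hard caps, soft penalties) but never score boosts that could weaken
# higher-precedence protections. The rule dict shape is an assumption.
ALLOWED_USER_EFFECTS = {"hard_cap", "soft_penalty"}

def validate_user_rule(rule):
    """Accept a user rule only if it is restriction-only at Level 6."""
    if rule.get("precedence_level") != 6:
        raise ValueError("user rules must declare precedence level 6")
    if rule.get("effect") not in ALLOWED_USER_EFFECTS:
        raise ValueError(f"user rules may not apply effect {rule.get('effect')!r}")
    return True
```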

Rule Types

Rules fall into two families. Constraint rules (Levels 1-4) gate or hard-block engagement decisions; objective rules (Level 5) apply soft boosts to priority scoring without modifying constraints. User rules (Level 6) may only add restrictions such as hard caps and soft penalties.

5. Mission Profiles and Objectives

Mission profiles activate subsets of the mission_objectives namespace rules to shape engagement scoring for specific operational contexts. Objective rules use soft boosts to prioritize mission-relevant targets without modifying constraint rules.

| Profile | Active Objectives | Effect |
|---|---|---|
| SEAD Mission | obj_prioritize_air_defense, obj_time_critical | Boosts air defense targets, elevates time-sensitive targeting |
| Urban Protection | obj_urban_protection | Increases collateral sensitivity, tighter CDE thresholds |
| Time Critical | obj_time_critical | Elevates fleeting/time-sensitive targets in priority scoring |
| Defensive | obj_defensive_posture | Favors defensive engagement, penalizes offensive overreach |

Constraint invariance: The SHA-256 hash of the constraint rule set is identical across all mission profiles. Objective rules compose on top of constraints at precedence Level 5 -- they cannot override LOAC (Level 1), NSL (Level 2), ROE (Level 3), or CDE (Level 4). This is verified by ErgoAI's \overrides/2 non-interference proofs.

To use a mission profile in the demo: select a profile from the Pipeline page before running the OODA loop. The profile's objective rules will be loaded into the doctrine DAG for every cycle. The constraint hash badge after the run confirms constraint rules were unchanged.
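The constraint-invariance check amounts to hashing a canonical serialization of the constraint rules and comparing the digest across profiles. This is a sketch under assumed rule shapes, not the demo's implementation:

```python
# Sketch of the SHA-256 constraint-hash check described above.
# The rule dict shape ("id", "level") is a hypothetical stand-in.
import hashlib
import json

def constraint_hash(constraint_rules):
    """SHA-256 over a canonical (sorted, key-ordered) serialization."""
    canonical = json.dumps(sorted(constraint_rules, key=lambda r: r["id"]),
                           sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

rules = [{"id": "loac_distinction", "level": 1},
         {"id": "nsl_cat1", "level": 2}]
# The hash must be order-independent and stable across mission profiles:
baseline = constraint_hash(rules)
```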

6. Formal Verification (ErgoAI)

The rule set is independently verified by ErgoAI (Flora-2 / XSB), a logic programming engine that uses F-logic with defeasible reasoning.

Verification Checks

| Check | What It Proves | Method |
|---|---|---|
| Rule Consistency | No contradictory rules within the same namespace | contradictory_same_namespace/3 |
| Constraint Invariance | Objective namespaces cannot override constraint namespaces | \overrides/2 over 96 namespace-level facts |
| Per-Target Proof | ErgoAI's final_decision/2 matches the pipeline's decision | Insert target features, query final_decision/2, cross-check |

Decision Traces vs. Formal Proofs: The pipeline produces decision traces (audit trails showing which checks each decision traversed). These are not independent proofs -- they are the pipeline documenting its own reasoning. ErgoAI provides the actual formal verification by independently re-deriving the decision in F-logic.

7. BDA and Loop Control

BDA Outcomes

After each engagement cycle, BDA simulation determines each target's outcome (destroyed/damaged/relocated/missed) based on the effective Pk (probability of kill), which environmental conditions modify as described below.
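One way to picture the outcome draw, using the four outcome classes the demo reports; the damage and relocation bands here are pure assumptions, not the demo's actual BDA model.

```python
# Hypothetical BDA outcome draw over the four reported classes.
# The 0.15 damage band and 0.10 relocation band are assumed values.
import random

def bda_outcome(effective_pk, rng):
    roll = rng.random()
    if roll < effective_pk:
        return "destroyed"
    if roll < effective_pk + 0.15:   # assumed damage band
        return "damaged"
    if roll < effective_pk + 0.25:   # assumed relocation band
        return "relocated"
    return "missed"
```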

Environmental Conditions

Scenarios can specify environmental conditions that degrade weapon effectiveness:

| Condition | Pk Multiplier |
|---|---|
| Night operations | x 0.85 |
| Dust / smoke | x 0.70 |
| Fog / rain | x 0.80 |
| Reduced visibility | x 0.75 |
| Degraded visibility | x 0.60 |

These modifiers compound. A night operation in dust with reduced visibility has an effective Pk multiplier of 0.85 x 0.70 x 0.75 ≈ 0.45, roughly halving weapon effectiveness and forcing more OODA cycles to neutralize threats.
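The compounding arithmetic is a straight product over the table's multipliers; the condition keys below are illustrative names, not the demo's identifiers.

```python
# Compound Pk degradation from the environmental-conditions table above.
# Condition key names are illustrative assumptions.
from math import prod

MULTIPLIERS = {
    "night": 0.85,
    "dust_smoke": 0.70,
    "fog_rain": 0.80,
    "reduced_visibility": 0.75,
    "degraded_visibility": 0.60,
}

def effective_pk(base_pk, conditions):
    """Base Pk times the product of all active condition multipliers."""
    return base_pk * prod(MULTIPLIERS[c] for c in conditions)

# night + dust + reduced visibility: 0.85 * 0.70 * 0.75 = 0.44625
```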

Shoot-Look-Shoot

The WTA solver assigns at most one round per target per cycle. After firing, BDA assesses the outcome before the next cycle decides whether to re-engage. This prevents munitions waste on already-destroyed targets and forces the loop to iterate.
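A greedy sketch of the one-round-per-target constraint; the priority-ordered scoring and data shapes are assumptions, not the demo's actual WTA solver.

```python
# Greedy sketch of shoot-look-shoot assignment: each engage-cleared target
# gets at most one (platform, target) pairing per cycle. Data shapes and
# the priority-first ordering are assumptions for illustration.
def assign_rounds(targets, platforms):
    plan = []
    remaining = {p["id"]: p["munitions"] for p in platforms}
    for t in sorted(targets, key=lambda t: -t["priority"]):
        shooter = next((p for p in platforms if remaining[p["id"]] > 0), None)
        if shooter is None:
            break  # no shooters with rounds left this cycle
        remaining[shooter["id"]] -= 1
        plan.append((shooter["id"], t["id"]))  # exactly one round per target
    return plan, remaining

targets = [{"id": "t1", "priority": 2}, {"id": "t2", "priority": 1}]
platforms = [{"id": "p1", "munitions": 1}]
```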

8. Scenarios and Conditions

| Scenario | Targets | ROE | Challenge |
|---|---|---|---|
| Open Terrain | 20-50 | Weapons Free | Efficiency under degraded conditions |
| Urban CAS | 15-30 | Weapons Tight | Collateral avoidance, NSL entities |
| Mixed Threat | 30-60 | Weapons Tight | Target prioritization across types |
| Defended Airspace | 10-20 | Weapons Hold | Full doctrine chain, escalation |

Each scenario includes target positions, platform positions with munitions, NSL entities, protected structures, and environmental conditions. Scenarios are seeded for reproducibility (40 seeds per type used in evaluation).

9. Evaluation Results

40-scenario evaluation across all scenario types:

Zero ROE violations across all 40 scenarios and all OODA cycles.

Average 1.6 cycles per scenario (range: 1-4 depending on conditions).

32% average ESTV reduction per scenario.

Zero wasted engagements (no rounds fired at already-destroyed targets).

The evaluation data is pre-computed and available in the demo's Evaluation tab under the System section.

10. Demo Interface Guide

The demo organizes tabs into two groups:

Scenario Group (run-dependent)

These tabs require selecting and running a scenario.

| Tab | What It Shows | Requires |
|---|---|---|
| Select | Four scenario cards with target counts, ROE level, and challenge description | -- |
| Situation | Tactical map with target/platform positions, target list, platform loadouts | Select a scenario |
| Pipeline | Mission profile selector, ML model selector, OODA loop controls (max cycles, run loop / single cycle), results summary | Select a scenario |
| DAG | 16-namespace DAG visualization with edge weights, per-target DAG walk | Run pipeline |
| Verify | ErgoAI formal verification (consistency, invariance, per-target proof) at top; decision traces with namespace tags below | Run pipeline |
| OODA Cycles | Cycle 0 (initial state) through final cycle; per-target decision table with class, priority, prior BDA, outcome; BDA results summary per cycle | Run pipeline |
| ESTV Curve | SVG chart of ESTV reduction across cycles with BDA annotations; per-cycle breakdown table with destroyed/damaged/relocated/missed counts | Run pipeline |
| BDA Overlay | Tactical map colored by BDA status with cycle slider (C0 through final); status bars, ESTV, and platform munitions state per cycle | Run pipeline |
| Loop Trace | Per-cycle table of all 7 loop control rules (L1-L7) with their evaluated values and fired/not-fired status | Run pipeline |

System Group (pre-computed)

These tabs show system-level data independent of the selected scenario.

| Tab | What It Shows | Data Source |
|---|---|---|
| Rules | Precedence hierarchy diagram; all 109 rules browsable by namespace; mission objective rules per profile; user rule editor with precedence validation | YAML rule files + ErgoAI |
| Evaluation | 40-scenario aggregate: avg cycles, total engagements, ESTV reduction, zero violations; per-scenario results table | ooda_evaluation.json |
| AR-ML | How AR doctrine rules shape ML training: cost matrix, adaptive thresholds, hard examples, training comparison | Pre-computed eval data |

Recommended Demo Flow

  1. Select -- pick Open Terrain scenario
  2. Situation -- show the tactical layout (22 targets, 4 platforms)
  3. Pipeline -- optionally select a mission profile (SEAD), run the OODA loop
  4. OODA Cycles -- walk through C0 (initial) to C4 (terminate), show BDA feedback driving re-engagement
  5. ESTV Curve -- show threat reduction across cycles
  6. BDA Overlay -- slide through cycles on the tactical map
  7. Verify -- click ErgoAI consistency check, show constraint invariance, run a per-target formal proof
  8. Rules -- show precedence hierarchy, try adding a user rule (restriction accepted, permissive rule blocked by LOAC)
  9. Evaluation -- show 40-scenario zero-violation results

CLARA OODA Kill Web -- Open Demo