Cormorant Foraging Framework
Cross-Reference to Established Fields
Purpose
This document maps the Cormorant Foraging Framework to established decision-making, control theory, and intelligence frameworks. It serves to:
- Position CF within existing knowledge
- Demonstrate completeness through parallel structure
- Clarify what is borrowed vs. unique
- Provide familiar entry points for practitioners from different fields
Executive Summary
| Aspect | Status |
|---|---|
| Feedback loop structure | Standard (borrowed) |
| Three orthogonal dimensions | Unique formulation |
| Biomimetic grounding | Unique |
| Layered derivation (0 → 1 → 2) | Less common |
| Specific formulas | Unique |
| Observable anchoring principle | Unique |
The loop is borrowed. The bird is yours.
Framework Comparison Matrix
| Framework | Domain | Loop Structure | CF Equivalent |
|---|---|---|---|
| OODA | Military Strategy | Observe → Orient → Decide → Act | Sense → Measure → Act → Loop |
| PID Controller | Engineering | Measure → Compare → Correct | 3D → DRIFT → Fetch |
| Cybernetics | Systems Theory | Input → Process → Output → Feedback | Foundation → Derivation → Action → Loop |
| Reinforcement Learning | Machine Learning | State → Action → Reward → Update | Sense → Fetch → Outcome → Re-sense |
| Scientific Method | Science | Observe → Hypothesize → Test → Revise | Sense → DRIFT → Fetch → Learn |
| PDCA (Deming) | Quality Management | Plan → Do → Check → Act | Measure → Fetch → Outcome → Adjust |
| System 1/2 | Cognitive Psychology | Fast → Slow → Decision | Chirp (fast) → DRIFT (slow) → Fetch |
| Double-Loop Learning | Organizational Theory | Action → Result → Reflect → Reframe | Fetch → Outcome → DRIFT → Foundation |
| Bayesian Inference | Statistics | Prior → Evidence → Update → Posterior | Wake → Chirp/Perch → DRIFT → Updated Wake |
Detailed Cross-References
1. OODA Loop (Military Strategy)
Origin: Colonel John Boyd, US Air Force
Domain: Combat decision-making, competitive strategy
The OODA Structure
Observe → Orient → Decide → Act
   ↑                            │
   └────────────────────────────┘

Mapping to Cormorant Foraging
| OODA | CF Layer | CF Component | Function |
|---|---|---|---|
| Observe | Layer 0 | Chirp + Perch + Wake | Gather signals, structure, context |
| Orient | Layer 1 | DRIFT | Assess position relative to goal |
| Decide | Layer 2 | Fetch calculation | Determine if/how to act |
| Act | Layer 2 | Fetch execution | Perform action |
Key Differences
| OODA | Cormorant Foraging |
|---|---|
| "Orient" is implicit/cultural | DRIFT is explicit/calculated |
| No formula for decision | Fetch = Chirp × |DRIFT| × Confidence |
| Binary (decide or don't) | Threshold-based (execute/confirm/queue/wait) |
| No dimensional decomposition | Three orthogonal dimensions |
What CF Adds
- Quantification: OODA describes; CF calculates
- Thresholds: CF provides explicit decision gates (see the sketch after this list)
- Decomposition: "Observe" breaks into Chirp (signal), Perch (structure), Wake (memory)
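A minimal sketch of these decision gates, in Python. The band boundaries (0.75 / 0.50 / 0.25) are illustrative assumptions, not values taken from the framework:

```python
# Hypothetical decision gates for a normalized Fetch score (0..1).
# The band boundaries are illustrative, not published CF tuning.

def decide(fetch_score: float) -> str:
    if fetch_score >= 0.75:
        return "execute"   # act now
    if fetch_score >= 0.50:
        return "confirm"   # act after a confirming signal
    if fetch_score >= 0.25:
        return "queue"     # hold for the next loop iteration
    return "wait"          # keep sensing; do not act

print(decide(0.82))  # execute
print(decide(0.31))  # queue
```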
2. PID Controller (Control Engineering)
Origin: Control theory, early 20th century
Domain: Industrial automation, robotics, process control
The PID Structure
Setpoint ──→ [Error] ──→ [P + I + D] ──→ Output
                ↑                          │
                └───────── Feedback ───────┘

Components:
- P (Proportional): React to current error
- I (Integral): React to accumulated error
- D (Derivative): React to rate of change
Mapping to Cormorant Foraging
| PID | CF Layer | CF Component | Function |
|---|---|---|---|
| Setpoint | — | Goal/Target | What you're aiming for |
| Current State | Layer 0 | Chirp + Perch + Wake | Current measurement |
| Error | Layer 1 | DRIFT | Gap between current and goal |
| P (Proportional) | Layer 2 | Chirp in Fetch | Immediate response to signal |
| I (Integral) | Layer 0 | Wake | Accumulated history |
| D (Derivative) | Layer 1 | DRIFT change over time | Rate of gap closure |
| Output | Layer 2 | Fetch decision | Action taken |
The PID ↔ CF Formula Parallel
PID Output:
u(t) = Kp·e(t) + Ki·∫e(t)dt + Kd·de(t)/dt

CF Fetch:

Fetch = Chirp × |DRIFT| × Confidence
      = Signal × Gap × Readiness

Key Differences
| PID | Cormorant Foraging |
|---|---|
| Continuous numerical control | Threshold-based decisions |
| Single error signal | Three-dimensional sensing |
| Abstract variables | Biomimetic grounding |
| Tuning via Kp, Ki, Kd | Weighting via dimension scores |
What CF Adds
- Confidence gating: PID always outputs; CF can choose not to act (see the sketch below)
- Dimensional decomposition: Error isn't monolithic—it has structure
- Semantic grounding: "Wake" vs "Integral" carries meaning
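A small contrast sketch: the PID step always emits an output, while the CF step can decline to act. The gains, the 0.5 threshold, and the 0-1 value ranges are illustrative assumptions:

```python
# PID always produces a control output; CF can return None (no action).
# Gains, threshold, and value ranges are illustrative assumptions.

def pid_step(error, integral, derivative, kp=1.0, ki=0.1, kd=0.05):
    return kp * error + ki * integral + kd * derivative  # always outputs

def cf_step(chirp, drift, confidence, threshold=0.5):
    fetch = chirp * abs(drift) * confidence  # Fetch = Chirp × |DRIFT| × Confidence
    return fetch if fetch >= threshold else None  # below threshold: choose not to act

print(round(pid_step(error=0.2, integral=1.0, derivative=-0.1), 3))  # 0.295 (always something)
print(cf_step(chirp=0.9, drift=0.3, confidence=0.4))                 # None (gated out)
```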
3. Cybernetics (Systems Theory)
Origin: Norbert Wiener, 1948
Domain: Systems theory, communication, control
The Cybernetic Loop
Input → Processor → Output
↑ │
  └───── Feedback ─────┘

Core principle: Goal-directed behavior through negative feedback
Mapping to Cormorant Foraging
| Cybernetics | CF Layer | CF Component |
|---|---|---|
| Input | Layer 0 | Chirp + Perch + Wake |
| Comparator | Layer 1 | DRIFT |
| Effector | Layer 2 | Fetch |
| Output | — | Action/Outcome |
| Feedback | Loop | Re-measurement |
| Goal | — | DRIFT = 0 |
Cybernetic Concepts in CF
| Cybernetic Concept | CF Implementation |
|---|---|
| Negative feedback | Fetch reduces DRIFT toward zero |
| Homeostasis | Loop continues until DRIFT ≈ 0 |
| Variety | Three dimensions provide requisite variety |
| Black box | Each layer can be treated as black box |
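A minimal sketch of the negative-feedback and homeostasis rows above, assuming each Fetch closes half of the remaining gap; the closure rate and the 0.05 tolerance are illustrative:

```python
# Negative feedback: each Fetch reduces DRIFT; the loop continues until DRIFT ≈ 0.
# The 0.5 closure rate and 0.05 tolerance are illustrative assumptions.

drift = 0.8          # initial gap between methodology and performance
iteration = 0
while abs(drift) > 0.05:        # homeostasis: loop until DRIFT ≈ 0
    fetch_effect = 0.5 * drift  # assumed: each Fetch closes half the gap
    drift -= fetch_effect
    iteration += 1
    print(f"iteration {iteration}: DRIFT = {drift:.3f}")
```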
Ashby's Law of Requisite Variety
"Only variety can absorb variety."
CF's three dimensions provide variety to match environmental complexity:
- Sound (Chirp) → Temporal/urgency variety
- Space (Perch) → Structural variety
- Time (Wake) → Historical variety
4. Reinforcement Learning (Machine Learning)
Origin: Sutton & Barto, computational learning theory
Domain: AI, robotics, game playing
The RL Structure
  ┌───────────────────────────────────┐
  │                                   │
  ▼                                   │
State → Agent → Action → Environment ─┘
                             │
                             ▼
                           Reward
                             │
                             └──→ Update Policy

Mapping to Cormorant Foraging
| RL | CF Layer | CF Component | Function |
|---|---|---|---|
| State | Layer 0 | Chirp + Perch + Wake | Current perception |
| Policy | Layer 2 | Fetch formula | Decision rule |
| Action | Layer 2 | Fetch execution | What the agent does |
| Reward | — | DRIFT reduction | Feedback signal |
| Value function | Layer 1 | DRIFT | Expected distance to goal |
Key Parallel: Value as Distance
RL Value Function:
V(s) = Expected cumulative reward from state s
CF DRIFT:
DRIFT = Distance from current state to goal state
Both measure "how far from goal" — RL in reward space, CF in methodology space.
Key Differences
| Reinforcement Learning | Cormorant Foraging |
|---|---|
| Learns policy from experience | Policy is explicit formula |
| Requires training | Works immediately |
| Optimizes for reward | Optimizes for gap closure |
| State is abstract vector | State is three semantic dimensions |
| Exploration vs exploitation | Threshold-based confidence gating |
What CF Adds
- Interpretability: CF dimensions are human-readable
- No training required: Formulas work out of the box
- Explicit confidence: RL must explore to learn its values; CF gates action on confidence from the start
5. Scientific Method
Origin: Ancient roots, formalized in the 17th century
Domain: Knowledge generation
The Scientific Loop
Observation → Hypothesis → Experiment → Analysis → Revision
↑ │
      └──────────────────────────────────────────────┘

Mapping to Cormorant Foraging
| Scientific Method | CF Layer | CF Component |
|---|---|---|
| Observation | Layer 0 | Chirp + Perch + Wake |
| Hypothesis | Layer 1 | DRIFT ("I think the gap is X") |
| Experiment | Layer 2 | Fetch (test the action) |
| Analysis | Loop | Measure new DRIFT |
| Revision | Loop | Update foundation |
Parallel: Observable Anchoring
Scientific principle:
Claims must be testable against observation
CF principle:
"Every measurement ties to observable behavior, not speculation"
Both reject unfalsifiable assertions.
6. PDCA / Deming Cycle (Quality Management)
Origin: W. Edwards Deming, 1950s
Domain: Quality management, continuous improvement
The PDCA Structure
Plan → Do → Check → Act
↑ │
  └──────────────────┘

Mapping to Cormorant Foraging
| PDCA | CF Layer | CF Component |
|---|---|---|
| Plan | Layer 0 + 1 | Sense + DRIFT calculation |
| Do | Layer 2 | Fetch execution |
| Check | Loop | Re-measure DRIFT |
| Act | Loop | Adjust approach |
Key Difference
PDCA is prescriptive (tells you to plan). CF is descriptive (tells you where you are).
7. Kahneman's System 1/2 (Cognitive Psychology)
Origin: Daniel Kahneman, "Thinking, Fast and Slow"
Domain: Cognitive psychology, behavioral economics
The Dual System
| System 1 | System 2 |
|---|---|
| Fast | Slow |
| Automatic | Deliberate |
| Intuitive | Analytical |
| Low effort | High effort |
Mapping to Cormorant Foraging
| System | CF Component | Reasoning |
|---|---|---|
| System 1 | Chirp (high) | Fast signal detection, urgency |
| System 2 | DRIFT + Confidence | Deliberate gap analysis |
| Override | Fetch threshold | System 2 can block System 1 impulse |
The Override Mechanism
High Chirp (impulse to act)
↓
Low Confidence (Perch or Wake weak)
↓
Fetch score below threshold
↓
System 2 blocks action

CF provides explicit System 2 override through the confidence multiplier.
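A minimal sketch of this override, taking confidence as min(Perch, Wake) (consistent with the Fetch formula later in this document) and a hypothetical 0.4 threshold:

```python
# High Chirp (System 1 impulse) blocked by weak confidence (System 2 check).
# The 0.4 threshold and the example scores are illustrative assumptions.

chirp, perch, wake = 0.9, 0.3, 0.8   # strong signal, weak structure, solid memory
drift = 0.6

confidence = min(perch, wake)        # deliberate System 2 check
fetch = chirp * abs(drift) * confidence

if fetch < 0.4:
    print("System 2 blocks action:", round(fetch, 3))   # 0.162 -> blocked
else:
    print("Act:", round(fetch, 3))
```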
8. Double-Loop Learning (Organizational Theory)
Origin: Chris Argyris, 1970s
Domain: Organizational learning, management
Single vs Double Loop
Single Loop:  Action → Result → Adjust action
Double Loop:  Action → Result → Question assumptions → Reframe
Mapping to Cormorant Foraging
| Loop Type | CF Implementation |
|---|---|
| Single Loop | Fetch → Outcome → Adjust Fetch |
| Double Loop | Fetch → Outcome → Re-evaluate DRIFT → Re-evaluate 3D weights |
CF Supports Both
- Single loop: Keep same formula, adjust inputs
- Double loop: Question whether Chirp/Perch/Wake weights are correct (see the sketch below)
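A minimal sketch of the two loops, using hypothetical per-dimension weights. The framework's Fetch formula is unweighted, so the weights exist here only to show what double-loop revision touches:

```python
# Single loop: keep the formula and weights, adjust inputs next iteration.
# Double loop: question the weights themselves. The weights are hypothetical.

weights = {"chirp": 1.0, "perch": 1.0, "wake": 1.0}

def fetch(chirp, perch, wake, drift):
    confidence = min(weights["perch"] * perch, weights["wake"] * wake)
    return weights["chirp"] * chirp * abs(drift) * confidence

# Single loop: re-run with fresh inputs, same assumptions.
print(round(fetch(chirp=0.7, perch=0.6, wake=0.8, drift=0.4), 3))

# Double loop: the outcome suggests history is over-trusted, so reframe:
# down-weight Wake before the next sensing pass.
weights["wake"] = 0.7
```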
9. Bayesian Inference (Statistics)
Origin: Thomas Bayes, 18th century
Domain: Probability, statistics, epistemology
The Bayesian Update
P(H|E) = P(E|H) × P(H) / P(E)
Prior × Likelihood → Posterior

Mapping to Cormorant Foraging
| Bayesian | CF Component | Function |
|---|---|---|
| Prior | Wake | What we believed before |
| Evidence | Chirp + Perch | New observations |
| Likelihood | DRIFT change | How much evidence shifts belief |
| Posterior | Updated Wake | New belief state |
The Parallel
Both frameworks update beliefs based on evidence:
- Bayesian: Mathematical probability update
- CF: Wake (memory) updates based on outcomes (sketched below)
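A minimal sketch of this update, using an exponential moving average as a stand-in for the update rule; the 0.3 rate and the EMA form are assumptions, not a published CF formula:

```python
# Wake as prior, outcomes as evidence, updated Wake as posterior.
# The exponential moving average and the 0.3 rate are illustrative assumptions.

def update_wake(wake_prior: float, outcome: float, rate: float = 0.3) -> float:
    return (1 - rate) * wake_prior + rate * outcome

wake = 0.5                         # prior belief from memory
for outcome in (0.9, 0.8, 0.85):   # observed Fetch outcomes
    wake = update_wake(wake, outcome)
print(round(wake, 3))              # posterior Wake, shifted toward the evidence
```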
Dimensional Comparison
How Other Frameworks Decompose "State"
| Framework | State Decomposition |
|---|---|
| OODA | Implicit (not decomposed) |
| PID | Single error signal |
| RL | Abstract state vector s |
| Cybernetics | Input signal |
| CF | Three orthogonal dimensions |
CF's Unique Decomposition
State = Sound × Space × Time
= Chirp × Perch × Wake
      = Signal × Structure × Memory

Why this matters:
- Each dimension can be measured independently
- Bottleneck identification (which dimension is weak? see the sketch below)
- Targeted improvement (strengthen weak dimension)
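A tiny sketch of the bottleneck check, with illustrative scores:

```python
# Measure each dimension independently and report the weakest one.
# The scores are illustrative.

scores = {"Chirp": 0.8, "Perch": 0.35, "Wake": 0.7}
bottleneck = min(scores, key=scores.get)
print(f"Weakest dimension: {bottleneck} ({scores[bottleneck]})")  # Perch (0.35)
```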
Formula Comparison
Gap/Error Measurement
| Framework | Gap Formula |
|---|---|
| PID | e(t) = setpoint − current |
| RL | δ = r + γV(s') − V(s) |
| CF | DRIFT = Methodology − Performance |
Action Calculation
| Framework | Action Formula |
|---|---|
| PID | u = Kp·e + Ki·∫e + Kd·de/dt |
| RL | a = argmax Q(s,a) or π(s) |
| CF | Fetch = Chirp × |DRIFT| × min(Perch,Wake)/100 |
CF's Unique Properties
- Multiplicative gating: Any zero blocks action (see the sketch below)
- Confidence floor: min(Perch, Wake) creates conservative bound
- Threshold decisions: Not continuous output, but discrete states
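A minimal sketch of this formula and its gating behavior. Perch and Wake are treated as 0-100 scores (implied by the /100); treating Chirp and DRIFT as 0-1 values is an assumption:

```python
# Fetch = Chirp × |DRIFT| × min(Perch, Wake) / 100
# The formula is from the table above; value ranges for Chirp and DRIFT are assumed.

def fetch_score(chirp: float, drift: float, perch: float, wake: float) -> float:
    confidence = min(perch, wake) / 100      # confidence floor: weakest of the two
    return chirp * abs(drift) * confidence   # multiplicative: any zero blocks action

print(round(fetch_score(chirp=0.9, drift=0.8, perch=85, wake=60), 3))  # 0.432
print(fetch_score(chirp=0.9, drift=0.8, perch=85, wake=0))             # 0.0 -> blocked
```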
Uniqueness Analysis
What CF Borrows
| Element | Source |
|---|---|
| Feedback loop structure | Control theory, cybernetics |
| Gap measurement concept | PID error, RL value |
| Learning through iteration | Scientific method, RL |
| Dimensional thinking | Linear algebra, factor analysis |
What CF Contributes
| Element | Uniqueness |
|---|---|
| Biomimetic grounding | All components map to cormorant behavior |
| Semantic dimensions | Chirp/Perch/Wake vs x₁/x₂/x₃ |
| Observable anchoring | Explicit rejection of speculation |
| Layered derivation | Level 0 → 1 → 2 dependency chain |
| Multiplicative confidence gate | Action requires alignment across dimensions |
| Threshold-based decisions | Execute/Confirm/Queue/Wait states |
| Emerged, not designed | Framework discovered through use |
Completeness Proof via Cross-Reference
A complete decision framework must handle:
| Requirement | Standard Solution | CF Solution | ✓ |
|---|---|---|---|
| Sensing | Input/Observation | Chirp + Perch + Wake | ✅ |
| State representation | State vector | Three orthogonal dimensions | ✅ |
| Goal comparison | Error/Value function | DRIFT | ✅ |
| Decision logic | Policy/Controller | Fetch formula | ✅ |
| Action gating | — | Confidence threshold | ✅ |
| Feedback | Loop closure | Re-measurement | ✅ |
| Learning | Update/Revision | Loop iteration | ✅ |
CF addresses all requirements that established frameworks address.
Entry Points by Background
If You Come From...
| Background | Start Here | Familiar Parallel |
|---|---|---|
| Military/Strategy | OODA mapping | Observe = Sense, Orient = DRIFT |
| Engineering | PID mapping | Error = DRIFT, Output = Fetch |
| Data Science | RL mapping | State = 3D, Policy = Fetch formula |
| Psychology | System 1/2 mapping | Chirp = fast, DRIFT = slow |
| Quality/Ops | PDCA mapping | Check = DRIFT, Act = Fetch |
| Science | Scientific method | Hypothesis = DRIFT, Experiment = Fetch |
Summary Table
| Dimension | Control Theory | Cybernetics | RL | CF |
|---|---|---|---|---|
| Input | Sensor reading | Input signal | State s | Chirp + Perch + Wake |
| Goal | Setpoint | Reference | Reward | DRIFT = 0 |
| Gap | Error e(t) | Deviation | Value V(s) | DRIFT |
| Decision | Controller | Processor | Policy π | Fetch |
| Output | Actuator | Effector | Action a | Action |
| Feedback | Sensor loop | Feedback loop | State update | Re-sense |
Conclusion
The Position
Cormorant Foraging is not a replacement for established frameworks. It is a biomimetic implementation of universal decision-making principles with specific additions:
- Grounding: Abstract math becomes observable behavior
- Decomposition: Single state becomes three orthogonal dimensions
- Gating: Action requires confidence alignment
- Emergence: Discovered through practice, not designed in theory
The Claim
"A biomimetic decision framework that implements established control theory principles through three orthogonal sensing dimensions, layered derivation, explicit confidence gating, and observable anchoring—grounded in cormorant foraging behavior."
The Invitation
Practitioners from any field can enter CF through their familiar framework:
- The loop is the same
- The structure is parallel
- The grounding is new
- The bird makes it memorable
References
Established Frameworks
| Framework | Key Reference |
|---|---|
| OODA Loop | Boyd, J. "Patterns of Conflict" (1986) |
| PID Control | Åström & Murray, "Feedback Systems" (2008) |
| Cybernetics | Wiener, N. "Cybernetics" (1948) |
| Reinforcement Learning | Sutton & Barto, "RL: An Introduction" (2018) |
| PDCA | Deming, W.E. "Out of the Crisis" (1986) |
| System 1/2 | Kahneman, D. "Thinking, Fast and Slow" (2011) |
| Double-Loop Learning | Argyris, C. "On Organizational Learning" (1999) |
Cormorant Foraging Framework
| Resource | URL |
|---|---|
| Main Framework | cormorantforaging.dev |
| DRIFT | drift.cormorantforaging.dev |
| Fetch | fetch.cormorantforaging.dev |
| Research | semanticintent.dev/papers |
| DOI | 10.5281/zenodo.17114972 |
"The loop is borrowed. The bird is yours." 🦅