Creative Abduction with NQD

Pattern B.5.2.1 · Stable Part B - Trans-disciplinary Reasoning Cluster

Status. Normative binding to B.5.2 Abductive Loop that delegates candidate generation to Γ_nqd.generate (C.18 NQD-CAL) and exploration/exploitation policy to E/E-LOG (C.19); the kernel remains unchanged.

Non‑duplication & parsimony. This pattern introduces no new kernel primitives; it reuses the CHR kit (A.17/A.18) to define measurable Characteristics and introduces no new eligibility conditions. Application is permitted only when USM coverage holds for the target slice and the performer’s RSG state is enactable (eligibility), without prescribing any team workflow. Per A.11 Ontological Parsimony, only a context‑local CHR import and a Method are added; no changes to Γ/LOG. All generation is performed via Γ_nqd.* (C.18) and all exploration/exploitation control via E/E-LOG (C.19).

Terminology discipline. Use NQD consistently (Novelty–Quality–Diversity). Treat S/I as secondary metrics unless explicitly promoted by policy (see §3, §5).

Keywords

  • NQD
  • novelty
  • quality
  • diversity
  • open-ended search
  • Pareto front
  • E/E-LOG

Content

Problem Frame

  • Conceptual binding: B.5.2 Abductive Loop (this pattern specifies the how for Steps 2–3).
  • FPF pattern: a domain‑neutral Creativity‑CHR (C‑cluster) that declares the Characteristics used here (see §2). (No change to Γ/LOG.) This binding also references C.18 NQD-CAL (operators Γ_nqd.*) and C.19 E/E-LOG (EmitterPolicy).
  • Manager’s mental model (informative): “We add measurable characteristics for newness, spread, and fit, then use a generator that explores widely and returns a Pareto set (not a single winner) of non‑dominated options.”
  • Operational loops: compatible with B.4 Canonical Evolution Loop (ideas generated here flow into Run→Observe→Refine→Deploy) and with B.5 Canonical Reasoning Cycle (ADI), preserving abductive primacy.
  • Agency note. Decisions are taken by a system in role. Contexts publish measurement spaces and admissible policies as semantic frames; they do not enact choices.

Intent & Problem

Intent. Turn Step 2 (generate) and Step 3 (filter) of the Abductive Loop from ad‑hoc brainstorming into a disciplined, instrumented exploration that can (i) produce many distinct, plausible hypotheses and (ii) surface the few worth pursuing, without bloating the kernel or forcing a specific creative method.

Problem. Unstructured ideation routinely fails on two fronts: it either produces too little variety (pet ideas win by seniority) or too little plausibility (grand theories with no testable predictions). B.5.2 names these failure modes; this pattern adds a minimal, measurable counter‑mechanism aligned to FPF’s assurance lanes and state machine.

The Creativity‑CHR (references only; no re‑definitions here)

This binding references the context‑local Creativity‑CHR (see C.17) and does not restate measurement templates. The primary coordinates are:

  • Novelty@context (C.17 §5.1),
  • ΔDiversity_P (marginal; C.17 §5.5), and
  • Q components (per A.18).

Surprise and Illumination are secondary: Illumination is report‑only telemetry (published as IlluminationSummary over Diversity_P); both act as tie‑breakers unless explicitly promoted by policy (C.19).
Use‑Value (alias: ValueGain) is informative for decision lenses (Decsn‑CAL) and MUST NOT enter NQD dominance by default (see C.17 §5.2).

All listed Characteristics are context‑local with explicit units/ranges and polarity↑. They are measurements, not eligibility conditions; eligibility conditions are supplied by USM/RSG. (Complies with A.18 measurement discipline; does not overload assurance semantics.)

Lexical discipline. The items above are Characteristics in the sense of A.17/A.18; avoid reserved names such as “validity” or “operation.”

Normalization note. If a QualityVector has heterogeneous units, Contexts SHALL normalize or nondimensionalize each component before Pareto analysis (see CC‑B.5.2.1‑6).

D vs I (normative). D = ΔDiversity_P (marginal gain) and is eligible for the primary dominance test. I is portfolio illumination (report/visual); it SHALL NOT be part of the primary dominance test and is usable only as an explicit tie‑break per policy.

Measurement invariants. Distances, grids, and transforms MUST be declared once per run, versioned, and referenced from provenance (§3, §5).
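The D-vs-I distinction can be made concrete in a few lines. The following is a minimal Python sketch under assumed conventions (descriptors normalized to [0, 1] per dimension, a k‑nearest‑neighbour novelty metric, grid‑cell coverage as the Diversity_P kernel); all function names are illustrative, not normative.

```python
import math

def novelty(descriptor, pool, k=3):
    """Novelty@context sketch: mean Euclidean distance to the k nearest
    pool descriptors (the actual metric is Context-declared per C.17)."""
    if not pool:
        return float("inf")
    dists = sorted(math.dist(descriptor, p) for p in pool)
    return sum(dists[:k]) / min(k, len(dists))

def diversity_p(pool, granularity=4):
    """Diversity_P sketch: fraction of CoverageGrid cells occupied,
    assuming descriptors normalized to [0, 1] per dimension."""
    if not pool:
        return 0.0
    dims = len(pool[0])
    cells = {tuple(min(int(x * granularity), granularity - 1) for x in d)
             for d in pool}
    return len(cells) / granularity ** dims

def delta_diversity_p(descriptor, pool, granularity=4):
    """D = ΔDiversity_P(h | Pool): marginal coverage gain from adding h."""
    return diversity_p(pool + [descriptor], granularity) - diversity_p(pool, granularity)
```

Note that `diversity_p` over the whole pool is the report-only illumination quantity (I), while only the marginal `delta_diversity_p` is eligible for the primary dominance test.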

Solution — Binding to Γ_nqd.generate (C.18)

Method name (Plain/Unified Tech). NQD‑Generate — a U.Method that, given (i) a HypothesisSpace and (ii) a CharacteristicSpace with a CoverageGrid, returns a finite, non‑dominated set of candidate hypotheses that maximize Quality (per‑component) while maintaining Diversity and encouraging Novelty.

Minimal signature.

  • Inputs (declared in MethodDescription): HypothesisSpace, CharacteristicSpace, Seeds?, Budget (time/compute), EmitterPolicy (E/E-LOG policy id), QualityMeasures (Q components), NoveltyMetric, CoverageGrid/Granularity, CellCapacity K? (default=1), EpsilonDominance ε? (default=0), TieBreakPolicy? (S/I), DedupThreshold?, Policy(TimeWindow), DeterminismSeed?

  • Outputs: CandidateSet = {h_i: (desc_i, Q_i, N_i, D_i:=ΔDiversity_P(h_i | Pool), S_i, I_i, UseValue_i?), genealogy_i?, provenance_i (including DHCMethodRef.edition and policyId from E/E-LOG)} where Q_i is a vector and provenance_i captures generator settings and evaluation sources. If Use‑Value is present, include the objective id / acceptanceSpec, counterfactual method (if predicted), and model edition per C.17. Note: S and I are tie-breakers only unless promoted by explicit Context policy; Use-Value is informative for decision lenses and SHALL NOT enter the dominance set.
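For illustration, one h_i of the CandidateSet could be carried as a small record; every field name below is a hypothetical mirror of the signature above, not a normative schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass(frozen=True)
class Candidate:
    """One h_i in the CandidateSet (illustrative shape, not normative)."""
    desc: str                            # human-readable hypothesis description
    Q: Tuple[float, ...]                 # QualityVector, one entry per Q component
    N: float                             # Novelty@context
    D: float                             # ΔDiversity_P(h | Pool), marginal gain
    S: float = 0.0                       # Surprise (tie-break only by default)
    I: float = 0.0                       # Illumination share (report-only)
    use_value: Optional[float] = None    # informative; never enters dominance
    genealogy: List[str] = field(default_factory=list)
    provenance: Dict[str, str] = field(default_factory=dict)  # e.g. DHCMethodRef.edition, policyId
```

A frozen dataclass is used so that scored candidates are immutable once emitted, which keeps provenance references stable.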

Strategy (notation‑neutral).

  1. Seeding. Initialize with seeds (known solutions, random draws, or prior L0 artifacts).
  2. Iterated illumination. Propose variations, evaluate Q (per‑component); maintain up to K elites per cell (or descriptor bucket); compute N/D/S/I on the fly; deduplicate by DedupThreshold in CharacteristicSpace.
  3. Budget‑bounded loop. Iterate until budget or coverage‑convergence; return the (ε‑)Pareto front over {Q₁…Q_k, N, D} (do not collapse to a single scalar). Illumination is excluded from the dominance set by default; Surprise and Illumination act only as tie-breakers unless a Context policy explicitly promotes them. Use-Value may appear as a side note for decision discussions but MUST NOT be mixed into the NQD dominance set.
  4. Traceability. Emit a Design Rationale Record (DRR): grids/metrics versions, seed(s), policy and TimeWindow, which cells were filled, why items were dominated (list Characteristics), and how the final set was produced (including ε, K, and dedup). (Lightweight DRR is permitted per B.4 guidance.)
  5. Algorithmic freedom (informative). Implementations MAY use MAP‑Elites/illumination, novelty search with local competition, Bayesian/surrogate‑assisted search, or deterministic enumerations; ε‑dominance or knee‑point thinning MAY be used after recording the full front in provenance.
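The five steps can be sketched as a single budget-bounded loop. This is a deliberately small MAP‑Elites‑flavoured illustration (one admissible choice under step 5): `mutate`, `evaluate`, and `describe` are Context-supplied callables with assumed signatures, and the N/D computations are crude stand-ins, not the normative metrics.

```python
import math
import random

def dominates(a, b):
    """True if vector a weakly dominates b and is strictly better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def nqd_generate(seeds, mutate, evaluate, describe,
                 budget=200, granularity=4, K=1, seed=0):
    """Sketch of NQD-Generate. Assumed callables:
      mutate(h, rng) -> variant, evaluate(h) -> Q tuple (polarity up),
      describe(h) -> descriptor in [0, 1]^d (CharacteristicSpace embedding)."""
    rng = random.Random(seed)                    # DeterminismSeed
    archive = {}                                 # CoverageGrid cell -> up to K elites

    def cell_of(d):
        return tuple(min(int(x * granularity), granularity - 1) for x in d)

    def admit(h):                                # step 2: iterated illumination
        q, d = tuple(evaluate(h)), tuple(describe(h))
        elites = archive.setdefault(cell_of(d), [])
        if any(d == e[2] for e in elites):       # crude dedup in CharacteristicSpace
            return
        elites.append((h, q, d))
        elites.sort(key=lambda e: e[1], reverse=True)  # illustrative per-cell ranking
        del elites[K:]

    for h in seeds:                              # step 1: seeding
        admit(h)
    for _ in range(budget):                      # step 3: budget-bounded loop
        parent = rng.choice([e for es in archive.values() for e in es])
        admit(mutate(parent[0], rng))

    pool = [e for es in archive.values() for e in es]
    scored = []
    for h, q, d in pool:
        others = [e[2] for e in pool if e[2] != d]
        n = min((math.dist(d, o) for o in others), default=1.0)  # novelty stand-in
        dv = 1.0 / len(archive[cell_of(d)])      # coarse stand-in for ΔDiversity_P
        scored.append((h, q + (n, dv)))
    return [c for c in scored                    # Pareto front over (Q..., N, D)
            if not any(dominates(o[1], c[1]) for o in scored if o is not c)]
```

A real binding would additionally emit the DRR (step 4): seeds, grid edition, policy id, and the dominance notes for pruned candidates.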

No kernel growth. This is a Method (C.4 Method‑CAL) plus a CHR import; no new Γ‑operator is added (per A.11).

Implementation & Binding into B.5.2 (two injection points)

Step 2 — Generate candidates. Precondition (USM+RSG). Generation is permitted only when the Claim/Work Scope covers the TargetSlice (USM) and the performer’s RoleAssignment is in an enactable RSG state (Green-Gate law).

When the pattern is imported, replace or supplement freeform brainstorming with NQD‑Generate; the output is a pool of L0 hypotheses annotated by {N, D, Q, S, I, UseValue?} plus provenance/DRR refs. The abductive step remains abduction (a conjecture), now instrumented and diverse by construction.

Step 3 — Plausibility filters. Apply B.5.2’s plausibility criteria, now with explicit hooks:

  • Falsifiability → filter out ideas with no testable predictions in the Shaping/Evidence states (B.5 alignment).
  • Explanatory power → prioritize candidates whose Q‑improvements (and attached rationales) align with the framed anomaly.

The selected “prime hypothesis” proceeds exactly as in B.5.2: formalize it as a new U.Episteme at L0, then move to Deduction/Induction.

Primary dominance test: compute the (ε-)Pareto front over {Q components}. By default, N (Novelty@context) and ΔDiversity_P act only as tie-breakers unless a policy explicitly promotes them into the dominance set; S (Surprise) and I (Illumination) are also tie-break/report-only by default; Use-Value remains non-dominant.
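As an illustration of this default, the following sketch normalizes Q components (per CC‑B.5.2.1‑6), computes the Pareto front over Q only, and then orders ties by Novelty@context and ΔDiversity_P. The candidate dict shape and function names are assumptions for the example, not prescribed.

```python
def minmax_normalize(vectors):
    """Per-component min-max normalization of heterogeneous-unit QualityVectors
    (one admissible choice; any declared monotone transform also complies)."""
    lo = [min(v[i] for v in vectors) for i in range(len(vectors[0]))]
    hi = [max(v[i] for v in vectors) for i in range(len(vectors[0]))]
    return [tuple((x - l) / (h - l) if h > l else 0.0
                  for x, l, h in zip(v, lo, hi)) for v in vectors]

def primary_front(candidates):
    """Default dominance test: Pareto front over Q components only (ε=0);
    N and ΔDiversity_P act afterwards as tie-breakers.
    candidates: list of dicts with keys 'Q' (tuple), 'N', 'D' (assumed shape)."""
    qs = minmax_normalize([c["Q"] for c in candidates])

    def dominated(i):
        return any(all(a >= b for a, b in zip(qs[j], qs[i])) and
                   any(a > b for a, b in zip(qs[j], qs[i]))
                   for j in range(len(qs)) if j != i)

    front = [c for i, c in enumerate(candidates) if not dominated(i)]
    # Tie-break ordering only; N/D never eliminate a front member here.
    return sorted(front, key=lambda c: (c["N"], c["D"]), reverse=True)
```

Promoting N or ΔDiversity_P into the dominance set, as a policy may do, would mean appending them to the vectors handed to the dominance check rather than to the sort key.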

Defaults (if policy is unspecified)

  • Dominance: {Q components}, with ConstraintFit=pass as eligibility gate.
  • Tie‑breakers: Novelty@context, ΔDiversity_P, and Surprise; IlluminationSummary (telemetry summary over Diversity_P) remains report‑only unless a CAL policy promotes it.
  • Archive: K=1, ε=0, deduplication in CharacteristicSpace.
  • Policy: UCB‑class with moderate temperature; explore_share ≈ 0.3–0.5.
  • Provenance (minimum): record DescriptorMapRef.edition, DistanceDefRef.edition, EmitterPolicyRef, TimeWindow, Seeds.
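One possible encoding of these defaults as a policy record follows; the keys and structure are illustrative, and a real Context would bind them to its own CAL policy ids.

```python
# Illustrative encoding of the unspecified-policy defaults (keys are hypothetical).
DEFAULT_POLICY = {
    "dominance": ["Q"],                         # Q components only
    "eligibility_gate": "ConstraintFit=pass",
    "tie_breakers": ["Novelty@context", "ΔDiversity_P", "Surprise"],
    "illumination": "report-only",              # IlluminationSummary never dominates
    "archive": {"K": 1, "epsilon": 0.0, "dedup_space": "CharacteristicSpace"},
    "emitter": {"class": "UCB", "temperature": "moderate", "explore_share": 0.4},
    "provenance_min": ["DescriptorMapRef.edition", "DistanceDefRef.edition",
                       "EmitterPolicyRef", "TimeWindow", "Seeds"],
}
```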

Scope‑of‑claim annotation (descriptive). Record the BoundedContext and TimeWindow that delimit where each N/Q/D measurement is intended to hold; this is for reasoning traceability only (no operational gates).

Note — Surprise status (scope and default role): By default in B.5.2.1, Surprise functions solely as a secondary tie‑break among candidates that are otherwise Pareto‑equivalent on the Context’s primary characteristics. A Context policy MAY elevate Surprise into the dominance set, allowing it to enter the CreativitySpace dominance alongside the primary characteristics. If no Context policy is specified, the default tie‑break role applies.

Conformance Checklist (normative)

CC‑B.5.2.1‑1 (CHR discipline). If a Context uses this pattern, it SHALL declare the Creativity‑CHR Characteristics with A.18‑style templates (type, unit/range, polarity). No new kernel terms are introduced.

CC‑B.5.2.1‑2 (Instrumented generation). Step 2 of B.5.2 SHALL either (a) invoke NQD‑Generate or (b) justify a Context‑specific generator of equivalent effect (diversity + quality + novelty with measurable Characteristics).

CC‑B.5.2.1‑3 (Diversity coupling). When this pattern is used, D MUST be ΔDiversity_P computed against the current candidate Pool using the C.17 definition of Diversity_P under the same Context, CharacteristicSpace, kernel, and TimeWindow.

CC‑B.5.2.1‑Eligibility. Eligibility requires (i) ConstraintFit = pass for the candidate (Norm‑CAL must‑set), then (ii) USM coverage for the TargetSlice and (iii) an enactable RSG state for the performer; only then may calls to Γ_nqd.* occur.

CC‑B.5.2.1‑4 (Non‑dominated shortlist). The CandidateSet MUST include the Pareto front over {Q₁…Q_k, N, D}; any pruned candidate MUST carry a DRR note (“dominated by … on {Characteristics}”).

CC‑B.5.2.1‑5 (Abductive primacy preserved). The pattern MUST NOT bypass the ADI ordering mandated by B.5: induction may not start before deduction; abductive L0 creation remains the start.

CC‑B.5.2.1‑6 (Normalization for Pareto). When Q has multiple components with different units/scales, Contexts SHALL normalize or use declared utility‑free monotone transforms before dominance tests.

CC‑B.5.2.1‑7 (Use‑Value separation). If Use‑Value (C.17 §5.2) is recorded, it SHALL remain outside Assurance scores; it MAY inform decision lenses (Decsn‑CAL). Do not alter R/G semantics based on Use‑Value. (See C.17 §5.2 for the Use‑Value / ValueGain definition.)

CC‑B.5.2.1‑8 (Provenance). Each h_i in the CandidateSet MUST reference its provenance_i sufficient to reproduce scores given the same Policy(TimeWindow), score/metric versions, and DeterminismSeed?.

CC‑B.5.2.1‑9 (Secondary metrics). I (illumination) and S (surprise) SHALL be used only for tie‑breaking/reporting unless explicitly promoted by policy; the primary dominance test is over {Q components} by default.

CC‑B.5.2.1‑10 (Cell capacity & ε). If K>1 or ε>0 are used, the values MUST be declared and recorded in provenance; any thinning after recording the front SHALL be documented in the DRR.

CC‑B.5.2.1‑11 (Dominance set). By default the dominance set SHALL be {Q components}; N (Novelty@context) and ΔDiversity_P act as tie‑breakers unless explicitly promoted by policy (record the policy‑id in provenance).

Cognitive Load & Kernel Growth Budget

For engineers/managers (user cognitive load).

  • Added steps: selecting descriptor Characteristics & granularity; reading a Pareto table (non‑statisticians tip: scan the “front” row; ignore dominated rows).
  • Mitigations: provide a one‑screen “NQD Cards” template analogous to RSG cards; default grids and metrics per Context. (Keep ≤ 7 visible Characteristics—mirrors RSG human‑scale guidance.)
  • Reader quickstart (engineer‑manager): (1) Pick 2–3 Q characteristics aligned to the anomaly + a simple CharacteristicSpace (2–4 dimensions). (2) Accept defaults for NoveltyMetric, grid granularity, and K=1. (3) Run NQD‑Generate to a fixed budget; read the front row first. (4) Apply Step 3 filters; log decisions in the DRR.

For the framework (kernel growth).

  • Zero new primitives; only a CHR import and a Method. Passes A.11 minimal‑sufficiency.

Placement in the Reasoning Cycle (ADI)

This pattern only structures hypothesis exploration (Abduction) and does not define or imply any operational gates. It respects ADI ordering (Abduct → Deduct → Induct) and leaves deployment/readiness concerns to patterns outside this spec.

Context‑Level KPIs (optional, informative)

Contexts may monitor these—not as gates, but to improve practice:

  1. Generativity (Gv). Fraction of abductive cycles whose selected candidate reaches L1/L2 within policy windows (time‑to‑L1; time‑to‑evidence). (Maps onto state transitions driven by B.5.)
  2. Frontier‑Hit Rate (FHR). % of cycles where the chosen candidate lies on the Pareto front of {Q, N, D} at selection time.
  3. Coverage Gain (ΔI, report). Change in the illumination summary (coverage map/%filled cells) per cycle (how much of the descriptor space is now “lit”).
  4. Exploration Cost Ratio (ECR). Compute/time spent in NQD‑Generate divided by downstream Shape/Evidence cost saved (tracks whether the pattern pays for itself).
  5. Refutation Learning Yield (RLY). Among refuted candidates, % that added new coverage or raised SurpriseScore—turning “failures” into map‑building.
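Two of these KPIs admit trivial computations once per-cycle logs are kept; the logging shapes assumed below (pair tuples, optional times) are illustrative, not part of the pattern.

```python
def frontier_hit_rate(cycles):
    """FHR sketch: share of cycles whose chosen candidate was on the Pareto
    front at selection time. `cycles`: list of (chosen_id, front_ids) pairs
    (a hypothetical logging shape)."""
    if not cycles:
        return 0.0
    return sum(1 for chosen, front in cycles if chosen in front) / len(cycles)

def generativity(times_to_l1, window):
    """Gv sketch: fraction of cycles whose selected candidate reached L1/L2
    within the policy window. `times_to_l1`: per-cycle time-to-L1, or None
    if the candidate never reached L1."""
    if not times_to_l1:
        return 0.0
    return sum(1 for t in times_to_l1 if t is not None and t <= window) / len(times_to_l1)
```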

Trade‑offs & mitigations

  • Cognitive effort. Interpreting Pareto sets and coverage maps adds thinking overhead. Mitigation: standard “NQD Card” + default grids; keep Characteristics small in number (≤ 7). Manager shortcut: pick 2–3 Q characteristics that reflect the anomaly, then run with defaults.
  • Locality. Novelty/diversity are context‑local; Cross‑context reuse requires re‑measurement or an explicit mapping. This pattern does not define Cross‑context operational controls.
  • Not a magic idea machine. Abduction remains human/agentic; the pattern structures search, it does not automate insight. B.5’s abductive primacy stands.
  • Metric gaming & collinearity. Avoid making N and S redundant by policy; when strong collinearity is detected, freeze one as informative only and record rationale in the DRR.

Relations

  • Extends: B.5.2 Abductive Loop (Step 2/3 operationalization).
  • Driven by / feeds: B.5 Canonical Reasoning Cycle (Abduction→Deduction→Induction), B.4 Evolution Loop (Observe/Refine).
  • Uses: A.17/A.18 for characteristic discipline and B.5 ADI ordering. May refer to Context‑specific MAP‑Elites/novelty‑search implementations in the MethodDescription. No operational gating is in scope here. C.17 (Use‑Value / ValueGain, normative definition).
  • Respects: A.11 (no kernel growth beyond CHR template import + Method).

B.5.2.1:End