Creativity‑CHR — Characterising Generative Novelty & Value

Pattern C.17 · Stable Part C - Kernel Extension Specifications

Status. Mechanism specification (CHR) — normative where stated.
Depends on. A‑kernel (A.1–A.15); CHR‑CAL (C.7); MM‑CHR measurement infrastructure (C.16); KD‑CAL and Sys‑CAL for carriers and holons; Decsn‑CAL (utility); Norm‑CAL (constraints/ethics).
Coordinates with. B.5.2.1 NQD (abductive generator) for search instrumentation; Agency‑CHR (C.9) for agential capacity; B‑cluster trust/assurance (B.3); Canonical Evolution Loop (B.4); Role Assignment & Enactment Cycle (Six‑Step) (F.6); Naming Discipline for U.Types & Role Names (F.5).
Guard‑rails. Obeys E‑cluster authoring rules (Notational Independence; DevOps Lexical Firewall; Unidirectional Dependency).

What this pattern provides (exports):

This pattern exports Characteristics and measurement templates only. It does not export any Γ_* operators, portfolio composition rules, or selection/scalarisation policies; those live in C.18 NQD-CAL and C.19 E/E-LOG (or Decsn-CAL for decision lenses). A Context publishes the measurement space and admissible policies; a decision is taken by an agent in role using a named lens within that space.

  • U.CreativitySpace — a CharacteristicSpace (CHR) with named Characteristics and scale metadata for evaluating creative work/outcomes inside a U.BoundedContext.
  • U.CreativityProfile — a vector of coordinates in U.CreativitySpace attached by a U.Evaluation to a specific Outcome (usually an U.Episteme produced by U.Work).
  • Core Characteristics (kernel nucleus; Context‑extensible):
  1. Novelty@context — distance from a ReferenceBase in the current Context/time window; ∈ [0, 1].
  2. Use‑Value (alias: ValueGain) — measured or predicted improvement against a declared objective; interval/ratio scale per Context.
  3. Surprise — negative log‑likelihood under a GenerativePrior; bits or nats.
  4. ConstraintFit — degree of must‑constraint satisfaction (Norm‑CAL / Service acceptance); ∈ [0, 1].
  5. Diversity_P (portfolio-level) — coverage/dispersion (set-level). Illumination is a report-metric over Diversity_P (coverage/QD-score summaries). It is report-only and never part of the primary dominance test.
  6. AttributionIntegrity — provenance/licensing discipline for lawful, transparent recombination; ∈ [0, 1].
  7. FamilyCoverage — (count, polarity ↑, scope=portfolio, unit=families, provenance: F1‑Card)
  8. MinInterFamilyDistance — (ratio [0,1] or metric units, polarity ↑, scope=portfolio, DistanceDef@F1‑Card)
  9. AliasRisk — (ratio [0,1], polarity ↓, diagnostic; flag as Near‑Duplicate when ≥3 of the 5 dSig characteristics collide)
  10. U.DomainDiversitySignature (dSig) — 5‑tuple over discrete characteristics [Sector, Function, Archetype, Regime, MetricFamily] attached to each U.BoundedContext. Used for Near‑Duplicate diagnostics and AliasRisk. Policy: flag as Near‑Duplicate when ≥3 characteristics match; see F.1 invariants and SCR‑F1‑S08..S09.
  11. Note (AliasRisk binding). AliasRisk MAY be computed using dSig collision diagnostics; a Context MUST declare the collision rule and policy id in DescriptorMap provenance when AliasRisk is reported.
  • Supporting types (linking points):

    • U.ReferenceBase — the domain‑local corpus (by Context & time window) used to compute Novelty@context.
    • U.SimilarityKernel — a declared similarity metric class for the Context (text/image/design/code/etc.), with invariance notes.
    • U.GenerativePrior — a predictive model over the Context’s artifacts/behaviours used to compute Surprise.
    • U.CreativeEvaluation — a specialisation of U.Evaluation that yields a U.CreativityProfile and the Evidence Graph Ref.
    • EffortCost (advisory) — resource outlay to achieve the outcome; from WorkLedger (Resrc‑CAL). (For normalization and planning; not itself “creativity.”)
  • Relations (informative; not exported): composeProfiles (set → portfolio view), dominates (partial order in the space), frontier (Pareto set), normaliseByEffort (effort‑normalised comparison). C.17 exports no operators; these are mathematical relations only. Operator algebras and selection policies live in C.18 NQD‑CAL and C.19 E/E‑LOG, per the exports note above.

Scope note. This pattern does not define who is “a creative person.” It characterises creative outcomes and episodes as observed in Work and expressed as Epistemes. Agency (capacity to originate) is measured in Agency‑CHR (C.9); here we measure what came out and how it scores against stated goals and references. A Context publishes the measurement space and admissible policies; a decision is made by an agentic system in role, using a named lens within that space. CHR exports no Γ‑operators and no team workflow rules.


Keywords

  • creativity
  • novelty
  • value
  • surprise
  • innovation
  • ideation

Relations

  • C.17 → explicit reference → Creative Abduction with NQD (B.5.2.1)
  • C.17 → explicit reference → Agency‑CHR (C.9)
  • C.17 → explicit reference → Canonical Evolution Loop (B.4)
  • C.17 → explicit reference → Domain‑Family Landscape Survey
  • C.17 → explicit reference → Role Taxonomy
  • C.17 → explicit reference → Evidence Graph Referring (C-4)
  • C.17 → explicit reference → KD‑CAL

Content

Motivation & Intent (manager’s read‑first)

Problem we solve. Teams talk past each other about “creativity”: some prize novelty, others business value, others originality or risk‑managed invention. Without a shared, context‑local measurement space, reviews derail, portfolios drift, and safety constraints are waived ad‑hoc.

Intent. Provide a small, universal measurement kit that turns “this is creative” into checkable, context‑local statements — grounded in evidence, aligned to objectives, and composable from individuals to portfolios.

Manager’s one‑screen summary (what you can do with it):

  1. Score a design/code/theory change on Novelty–Value–Surprise–ConstraintFit with declared references and models.
  2. Compare options in a Pareto sense (no single magic score forced).
  3. Treat constraints as a coordinate in the space; compare options on frontiers while keeping high‑novelty options in contention.
  4. Track a portfolio’s Diversity to avoid local maxima and groupthink.
  5. Defend decisions with an auditable CreativeEvaluation that cites what was new relative to which base, how value was measured, and why this counts here.

Forces

Each force names a tension we must resolve:

  • Universality vs. domain detail. One kit must serve hardware design, software, policy, and science, yet let each Context pick similarity kernels, priors, and value models.
  • Invention vs. constraint. Creative leaps are valuable; safety, ethics, and acceptance are non‑negotiable.
  • Local truth vs. cross‑context reuse. Meaning is context‑local (A.1.1); yet we need Bridges to compare across organisations/disciplines.
  • Single score vs. frontier. Management wants a number; reality is multi‑objective.
  • Randomness vs. intention. Random noise looks “novel” yet useless; planned recombination can be highly creative.

Design answer. A context‑local CreativitySpace with a small set of characteristics, each with clear measurement templates and Evidence Graph Ref; composition uses frontiers and partial orders, not forced scalarisation.

Solution Overview — The context‑local CreativitySpace

Idea. Creativity is not a type; it is a profile measured on an outcome (episteme) or episode (set of works) inside a bounded context. The context supplies the ReferenceBase, SimilarityKernel, GenerativePrior, objective function(s), and acceptance constraints.

Objects in play (A‑kernel alignment):

  • A system (person, team, service) performs U.Work under a role (A.2).
  • That work yields a carrier (doc/model/design/code), i.e., an U.Episteme.
  • We apply a U.CreativeEvaluation to that episteme (and linked work) to produce a U.CreativityProfile with evidence.

CreativitySpace (first‑class CHR): U.CreativitySpace(Context) := 〈Novelty@context, ValueGain, Surprise, ConstraintFit, Diversity_P, AttributionIntegrity, EffortCost?〉 with scale/unit metadata from MM‑CHR (C.16), and Context‑specific measurement methods bound by MethodDescription.

Design/run split (A.4):

  • Design‑time: score concepts or specs against surrogate value models and priors; record assumptions (USM scopes; A.2.6).
  • Run‑time: recompute ValueGain and ConstraintFit from Work evidence (service acceptance, KPIs) and refresh Surprise if priors update.

Vocabulary (CHR terms & D‑stubs)

Names are context‑local; below are kernel terms. Roles like “Designer/Reviewer” are contextual (A.2). Documents don’t act (A.7/A.12); they are evaluated.

  1. U.ReferenceBase (D). A curated, versioned set of artifacts (epistemes) and/or behaviours that define “what exists already” in this Context and time window. Conformance (RB‑1): must declare inclusion criteria, time span (TimeWindow), and coverage notes.

  2. U.SimilarityKernel (D). A declared metric family with invariances (e.g., text: cosine over embeddings, image: LPIPS, code: AST graph edit). Conformance (SK‑1): must cite MethodDescription and test corpus; state limits.

  3. U.GenerativePrior (D). A model that yields likelihood of artifacts given the Context’s history (n‑gram/LM, design grammar, trend model). Conformance (GP‑1): must publish training slice, fit method, perplexity/fit metrics, and refresh policy.

  4. U.CreativeOutcome (D). Any U.Episteme put forward for creative evaluation (e.g., new design, algorithm, spec, policy draft). Note. If the outcome is a system change without a single carrier, attach the evaluation to a bundle (set) of carriers referenced from Work.

  5. U.CreativeEvaluation (D). A U.Evaluation that outputs a U.CreativityProfile and anchors to ReferenceBase, Kernel/Prior, objective(s), acceptance tests, and Work evidence.

  6. U.CreativityProfile (D). The coordinate tuple in U.CreativitySpace with provenance to the above inputs and USM scopes. Conformance (CP‑1): profile must include scales/units, scopes, confidence bands (B.3), and the edition of space definitions.

The Core Characteristics (kernel nucleus)

Each characteristic is specified per MM‑CHR (C.16) with: name, intent, carrier, polarity, scale type, measurement template, evidence, scope (USM), and didactic cues. Context profiles MAY add characteristics; kernel characteristics MAY NOT be removed without a Bridge.

Novelty@context — “How unlike the known set is this?”

  • Intent. Quantify distinctness of the outcome relative to U.ReferenceBase (global or targeted slice).

  • Carrier. U.Episteme (the outcome).

  • Polarity. Higher is “more novel.”

  • Scale. [0, 1]; ratio (0 = duplicate under kernel; 1 = maximally distant).

  • Measurement template (normative pattern; a code sketch follows this subsection):

    1. Declare ReferenceBase B and TimeWindow window.
    2. Declare SimilarityKernel σ and its invariances.
    3. Compute Novelty@context := 1 − max_{b∈B} sim_σ(outcome, b), or a robust variant (top‑k mean).
    4. Publish sensitivity note (how results shift with kernel/B).
  • Evidence. Kernel/version id; top‑k neighbours with distances; ablation on invariances.

  • Scope hooks (USM). B must be a declared slice; Cross‑context use needs a Bridge with CL and loss notes.

  • Didactic cues.

    • Not “randomness.” Noise has high novelty, low value.
    • Local, not global. Novelty is to this Context now, not timeless originality.
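
A minimal sketch of steps 1–3 above, assuming the outcome and each entry of the ReferenceBase B are already encoded as numeric vectors and that `sim` implements the declared SimilarityKernel σ; `cosine_sim` is only an illustrative kernel, not one this pattern endorses:

```python
import math
from typing import Callable, Sequence

Vec = Sequence[float]

def cosine_sim(a: Vec, b: Vec) -> float:
    # One possible kernel for embedded artifacts; the Context declares its own sigma.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def novelty_at_context(outcome: Vec, base: Sequence[Vec],
                       sim: Callable[[Vec, Vec], float] = cosine_sim,
                       top_k: int = 1) -> float:
    # Novelty@context := 1 - max_{b in B} sim(outcome, b);
    # top_k > 1 gives the robust top-k-mean variant (step 3).
    if not base:
        raise ValueError("ReferenceBase must be non-empty (RB-1)")
    sims = sorted((sim(outcome, b) for b in base), reverse=True)[:top_k]
    return 1.0 - sum(sims) / len(sims)
```

The sensitivity note (step 4) amounts to re‑running this with alternative kernels or base slices and reporting the spread.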

Use‑Value (alias: ValueGain) — “What good did this add under our objective?”

  • Intent. Quantify benefit vs a baseline objective (Decsn‑CAL utility, Service acceptance, KPI).

  • Carrier. Outcome (episteme) with Work evidence.

  • Polarity. Higher is better.

  • Scale. Interval/ratio, unit declared by the Context (e.g., ΔSNR, % defects, profit/period).

  • Measurement templates (pick one; a code sketch follows this subsection):

    • Measured: ValueGain := metric_after − metric_before (declare counterfactual method).

    • Predicted: E[ValueGain | model] with error bars; update post‑run.

  • Evidence. Declared objective/criterion; measurements or credible predictions; counterfactual method (A/B, back‑test, causal inference).

  • Scope. State the context window used for the objective; claims outside that window are informative only.

  • Didactic cues.

    • Value is relative to the stated objective; if the objective is wrong, the value reflects it.

    • Keep counterfactual discipline; otherwise “gain” is storytelling.
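
A minimal sketch of the “Measured” variant, assuming repeated observations of the declared metric in each arm; the ±2·SE band is a crude illustration, and the counterfactual method itself is declared separately per CC‑CR‑3:

```python
from statistics import mean, stdev

def measured_value_gain(after: list[float], before: list[float]) -> tuple[float, float]:
    # ValueGain := mean(metric_after) - mean(metric_before), with an
    # approximate +/- 2*SE error band (assumes >= 2 samples per arm).
    gain = mean(after) - mean(before)
    se = (stdev(after) ** 2 / len(after) + stdev(before) ** 2 / len(before)) ** 0.5
    return gain, 2 * se
```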

Surprise — “How improbable under our learned world?”

  • Intent. Capture unexpectedness given U.GenerativePrior.

  • Carrier. Outcome.

  • Polarity. Higher surprise = more unexpected.

  • Scale. bits or nats: Surprise := −log p_prior(outcome).

  • Measurement template (a code sketch follows this subsection):

    1. Declare GenerativePrior (training slice, model class).
    2. Encode outcome for the prior; compute likelihood proxy.
    3. Publish calibration curve (reliability diagram / PIT histogram).
  • Evidence. Model cards; fit metrics; OOD diagnostics; refresh policy.

  • Scope. Training slice declared as ContextSlice; Bridges penalise R (trust), not the value itself (A.2.6).

  • Didactic cues.

    • Novelty vs Surprise: high novelty under one kernel may be low surprise under a broad prior; publish both.
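
A minimal sketch, assuming the declared GenerativePrior exposes a likelihood (or likelihood proxy) for the encoded outcome; the log base fixes the unit:

```python
import math

def surprise(likelihood: float, unit: str = "bits") -> float:
    # Surprise := -log p_prior(outcome); log base 2 yields bits, ln yields nats.
    if not 0.0 < likelihood <= 1.0:
        raise ValueError("likelihood proxy must lie in (0, 1]")
    return -math.log2(likelihood) if unit == "bits" else -math.log(likelihood)
```

Calibration (step 3) is reported separately; a poorly calibrated prior makes these numbers decorative.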

ConstraintFit — “Did it honour the non‑negotiables?”

  • Intent. Ensure mandatory constraints (safety, ethics, standards, SLOs) are satisfied.
  • Carrier. Outcome + Work evidence.
  • Polarity. Higher is better (1 = all mandatory satisfied).
  • Scale. [0, 1], ratio or pass/fail.
  • Measurement template: declare set C_must (Norm‑CAL / Service acceptance), compute ConstraintFit := |{c∈C_must : pass(c)}| / |C_must|; optionally weight per criticality.
  • Evidence. Checklists, tests, audits; Who/Role performed the SpeechActs (approvals/waivers).
  • Scope. Constraints are context‑local; Cross‑context requires Bridge; waivers are SpeechAct Work with RSG gates (A.2.5).
  • Interpretation note. Low ConstraintFit signals tension with declared must‑constraints and warrants reframing or redesign; this pattern does not prescribe go/no‑go rules.
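
A minimal sketch of the template above, assuming must‑constraint check results arrive as pass/fail flags keyed by constraint id, with optional declared criticality weights:

```python
from typing import Dict, Optional

def constraint_fit(results: Dict[str, bool],
                   weights: Optional[Dict[str, float]] = None) -> float:
    # ConstraintFit := |{c in C_must : pass(c)}| / |C_must|,
    # optionally weighted by declared criticality.
    if not results:
        raise ValueError("C_must is empty; declare must-constraints first")
    if weights is None:
        return sum(results.values()) / len(results)
    total = sum(weights[c] for c in results)
    return sum(weights[c] for c, ok in results.items() if ok) / total
```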

Diversity_P (portfolio‑level) — “Are we exploring the space?”

  • Intent. At the set level, avoid myopic exploitation; promote coverage.
  • Carrier. A set of outcomes.
  • Polarity. Higher means broader coverage (not “better” per se).
  • Scale. Set‑functional; Context defines metric (e.g., average pairwise distance, k‑cover over features).
  • Template. Declare kernel and covering policy; compute score and coverage map (illumination); relate to USM ClaimScopes.
  • Alignment note. The illumination/coverage view corresponds to IlluminationScore used by B.5.2.1 NQD‑Generate; no separate characteristic is introduced here—measure it as part of Diversity_P.
  • Evidence. Distance matrix/cover plots; sensitivity to kernel.
  • Didactic cue. Use Diversity_P to shape portfolios, not to pick single winners.
  • Marginal gain (for generators) — normative. For a candidate h and current set S, ΔDiversity_P(h | S) := Diversity_P(S ∪ {h}) − Diversity_P(S). Contexts using NQD SHALL compute D as this marginal and publish the Diversity_P definition alongside the CharacteristicSpace/kernel and TimeWindow.
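
A minimal sketch of the marginal‑gain rule, using average pairwise distance as one admissible set‑functional; the Context declares its actual Diversity_P definition and kernel:

```python
from itertools import combinations
from typing import Callable, List, Sequence

Vec = Sequence[float]

def diversity_p(s: List[Vec], dist: Callable[[Vec, Vec], float]) -> float:
    # Example set-functional: average pairwise distance; 0 for |S| < 2.
    pairs = list(combinations(s, 2))
    return sum(dist(a, b) for a, b in pairs) / len(pairs) if pairs else 0.0

def marginal_diversity(h: Vec, s: List[Vec],
                       dist: Callable[[Vec, Vec], float]) -> float:
    # Delta Diversity_P(h | S) := Diversity_P(S u {h}) - Diversity_P(S).
    return diversity_p(s + [h], dist) - diversity_p(s, dist)
```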

Heterogeneity Characterisation

  • FamilyCoverage (polarity ↑) — count of distinct domain‑families covered by a portfolio/triad; unit: families; window: declared.
  • MinInterFamilyDistance (polarity ↑) — min distance between selected families in DescriptorMap; unit: per DistanceDef; window: declared.
  • AliasRisk (polarity ↓) — collinearity/near‑duplicate risk indicator for contextual signatures; unit: score (0–1) with policy id.

Lexical special case (F.18 naming).
For lexical CandidateSets used by Name Cards (F.18), Diversity_P SHALL be computed over head-term families, not over raw strings. Variants that share the same lexical head (e.g., “Reference plane”, “Plane of reference”, “Planar reference”) MUST be treated as one family for coverage and distance; only candidates with distinct heads contribute to lexical Diversity_P. This aligns lexical use of Diversity_P with FamilyCoverage / AliasRisk and prevents inflating diversity by near-synonyms of a single head.
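
A minimal sketch, assuming the Context's head‑extraction policy is available as a declared mapping from candidate string to head term (`head_of` is a hypothetical input standing in for that policy):

```python
from collections import defaultdict

def lexical_families(candidates: list[str],
                     head_of: dict[str, str]) -> dict[str, list[str]]:
    # Group candidates by declared lexical head; only distinct heads count
    # toward lexical Diversity_P / FamilyCoverage.
    families: dict[str, list[str]] = defaultdict(list)
    for c in candidates:
        families[head_of[c]].append(c)
    return dict(families)

# "Reference plane", "Plane of reference", "Planar reference" all map to
# head "plane", so they form one family and contribute once to coverage.
```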

AttributionIntegrity — “Did we credit sources and licences correctly?”

  • Intent. Discourage “novelty theft”; ensure recombination is lawful and transparent.
  • Carrier. Outcome + provenance graph.
  • Polarity. Higher is better.
  • Scale. [0, 1]; fraction of required attributions/licence duties satisfied.
  • Template. Trace graph coverage against Context policy; licence constraints as Norm‑CAL rules.
  • Evidence. PROV‑style links; licence scans; acknowledgements.
  • Didactic cue. High AttributionIntegrity signals lawful and transparent recombination; low values indicate unacceptable practice in most Contexts.
  • Default role. AttributionIntegrity is measurable but non‑dominant. It MAY serve as a policy filter/tie‑break (C.19). If certain attribution duties are must‑constraints, they belong to ConstraintFit (Norm‑CAL) and act as eligibility gates. It is not part of the default dominance set.
  • Dominance & gating note (normative). AttributionIntegrity is a measurable Characteristic; it is not in the default dominance set. Contexts MAY use it as a filter or tie‑break via policy (C.19). Legal/ethical must‑fit checks live in ConstraintFit (Norm‑CAL); failing those blocks eligibility before dominance.

EffortCost (advisory) — “What did it take?”

  • Intent. Normalise comparisons by cost; not part of “creativity” per se.
  • Carrier. WorkLedger.
  • Polarity. Lower is better when used as denominator.
  • Scale. Resource units (hours, energy, $).
  • Template. Sum cost categories over Work that produced the outcome.
  • Evidence. Time/resource logs; BOM deltas.
  • Didactic cue. Use CreativityPerCost := f(Novelty@context, ValueGain, Surprise)/EffortCost for operations planning, not for excellence awards.

Conformance Checklist (first tranche)

Each item states the normative requirement, followed by a purpose / audit hint:

  • CC‑CR‑1 (context‑locality). Every CreativityProfile MUST name the U.BoundedContext and the edition of U.CreativitySpace. Audit hint: prevents Cross‑context slippage.
  • CC‑CR‑2 (Declared bases). Novelty@context claims MUST declare ReferenceBase, SimilarityKernel, and TimeWindow; Surprise claims MUST declare GenerativePrior and its training slice. Audit hint: makes “new to whom?” and “unexpected under what?” explicit.
  • CC‑CR‑3 (Objective anchor). ValueGain MUST reference the objective (KPI/utility) and counterfactual method (if predicted, the model). Audit hint: stops free‑form value stories.
  • CC‑CR‑4 (Must‑fit). If must constraints exist, ConstraintFit MUST be present; enactment decisions SHALL treat ConstraintFit < 1 as fail, unless an explicit waiver SpeechAct exists. Audit hint: keeps safety & ethics non‑negotiable.
  • CC‑CR‑5 (Evidence). Each coordinate MUST have an Evidence Graph Ref (neighbours, tests, logs, model cards). Audit hint: enables audit & replication.
  • CC‑CR‑6 (Scopes). Profiles MUST include USM scopes (ClaimScope/WorkScope) relevant to measurement; off‑scope claims are advisory. Audit hint: ties numbers to where they hold.
  • CC‑CR‑7 (No scalarisation by default). The pattern SHALL NOT force a single scalar “creativity score.” If a Context defines one, it MUST publish the weighting and its drift policy. Audit hint: keeps decisions on a Pareto frontier unless a policy opts in.
  • CC‑CR‑8 (Bridge discipline). Cross‑context comparisons MUST use a Bridge with CL and recorded losses; any mapped coordinate MUST note penalties in the R lane, not silently alter the value. Audit hint: honest portability.

Manager’s Quick‑Start (apply in 5 steps)

  1. Name the Context (context + edition).
  2. Pick measurement defaults (kernel, prior, objective, constraints) from the Context’s handbook.
  3. Score the outcome: Novelty@context, Use‑Value, Surprise, ConstraintFit.
  4. Decide by frontier: shortlist non‑dominated options; use ConstraintFit as a gate; apply policy if a scalar is approved.
  5. Record a CreativeEvaluation with evidence; if crossing Contexts, attach the Bridge id.

Mental check. New to our base? Helpful to our objective? Unexpected under our model? Safe & licenced? If any answer is “unknown,” you are not done measuring.

Archetypal Grounding (three domains)

(a) Manufacturing design change. Outcome. New impeller geometry for Pump‑37. Context. PlantHydraulics_2026. Novelty@context. 0.42 (shape‑descriptor kernel vs last 5 years). ValueGain. +6.8% flow @ same power (bench Work). Surprise. 1.3 bits (within evolutionary trend prior). ConstraintFit. 1.0 (materials, safety, noise). Decision. Frontier winner: modest novelty, clear value, safe. Portfolio keeps Diversity_P by also funding one high‑surprise concept for exploration.

(b) Software architecture refactor. Outcome. New concurrency model for ETL. Context. DataPlatform_2026. Novelty@context. 0.27 (AST/edit kernel vs internal corpus). ValueGain. −20% latency, −35% p95 stalls (A/B Work). Surprise. 0.5 bits (trend prior expected co‑routines). ConstraintFit. 0.83 (fails SoD—same author as reviewer). Decision. Return for SoD fix; then likely adopt. Creativity is not a waiver over governance.

(c) Scientific hypothesis. Outcome. A new scaling law claim. Context. GraphDynamics_2026. Novelty@context. 0.66 (formula kernel vs literature base). ValueGain. Predicted: explains 12 prior anomalies (model check). Surprise. 3.7 bits (strongly unexpected under prior). ConstraintFit. 1.0 (ethics N/A; evidence roles bound with decay windows). Decision. Fund replication Work; track R decay per policy.

Anti‑Patterns (fast fixes)

Each anti‑pattern is given with why it fails and the fix with this FPF pattern:

  • “Creativity = randomness.” Why it fails: noise yields high Novelty@context, low ValueGain, and often low ConstraintFit. Fix: evaluate all four characteristics; require ConstraintFit = 1 for musts.
  • Global originality claims. Why it fails: ignores context‑local meaning and the current corpus. Fix: declare Context & ReferenceBase; cross Contexts only via Bridge.
  • One magic score. Why it fails: hides trade‑offs; fragile under drift. Fix: decide on the Pareto frontier; publish a scalar only with explicit weights/policy.
  • Hand‑wavy value. Why it fails: no objective → no audit. Fix: tie to Service/KPI or utility; state the counterfactual.
  • Silent borrowing. Why it fails: legal/ethical risk; reputational damage. Fix: track AttributionIntegrity; licence scans in evidence.

Relations

  • A.2 Role & A.15 Run‑alignment. Creative Work is performed by systems in roles; outcomes are epistemes. Creativity is measured by U.Evaluation, not “done by a document.”
  • B.3 Trust/Assurance. Coordinates carry confidence bands; Bridges lower R by CL. Evidence roles (A.2.4) bind datasets/benchmarks used in measurements.
  • C.9 Agency‑CHR. Agency measures capacity to originate; a high‑agency system may still output low‑creativity outcomes (and vice versa with strong scaffolding).
  • A.2.6 USM (Scope). All measurements sit on ContextSlices; G‑ladder is explicitly not used (C.17 follows A.2.6’s set‑valued scopes).
  • D‑cluster ethics. ConstraintFit is where must constraints, ethics, and safety bind the evaluation; waivers are explicit SpeechActs.

Authoring Aids (didactic cards)

  • Write the Context. Context + edition on every profile.
  • Name the base & kernel. Without them, Novelty@context is undefined.
  • State the objective. Value without a KPI is a story.
  • Publish priors. Surprise needs a trained model with cards.
  • Gate by musts. ConstraintFit < 1 blocks enactment unless waived.
  • Prefer frontiers. Shortlist non‑dominated options; let governance decide trade‑offs.
  • Bridge explicitly. Cross‑context talk needs CL and loss notes.

CSLC recap and the Creativity CharacteristicSpace

Purpose. Ground “creativity” as a measurable family of characteristics (CHR) rather than a role, capability, or virtue. Each characteristic is scoped to a U.BoundedContext, evaluated on U.Work (episodes), artifacts (epistemes, e.g., design sketches, models), or holders (systems/teams) via MM‑CHR exports (U.DHCMethodRef, U.Measure, U.Unit, U.EvidenceStub), using the CSLC discipline (Characteristic / Scale / Level / Coordinate).

Strict Distinction (A.7) reminders. Creativity is not a Role (no one “plays CreativityRole”). It’s a characterisation of outcomes/process. Creativity is not Work (no resource deltas). Work produces artifacts we later characterise. Creativity is not a service promise clause (no external promise). Promise clauses are judged from Work; creativity may correlate with value.

The Creativity CharacteristicSpace (CHR‑SPACE)

The core characteristics below are kernel‑portable names; Contexts specialise them (rename if needed, but keep semantics). Each characteristic declares: what we measure, on what carrier, typical scale, and where it lives in FPF.

Each characteristic (kernel name) is listed as: what it captures (intuitive); measured on; typical scale (CSLC); lives with / checked by.

  • Novelty@context. Distance from known ideas in this Context. Measured on: artifact / Work set. Scale: ratio or bounded [0..1] via similarity→distance. Lives with: KD‑CAL corpus + U.BoundedContext.
  • Use‑Value. Benefit vs a declared objective. Measured on: artifact / Evaluation. Scale: ordinal (Fail/Partial/Pass) or scalar KPI. Lives with: B.3 Evidence & U.Evaluation.
  • Surprise. Unexpectedness under the Context’s GenerativePrior. Measured on: artifact. Scale: bits or nats (−log‑likelihood). Lives with: prior cards & calibration.
  • ConstraintFit. Degree of must‑constraints satisfied while exploring. Measured on: Work / artifact. Scale: % satisfied (0–100). Lives with: Norm‑CAL + step guards.
  • Diversity_P. Portfolio coverage/dispersion (incl. coverage map view). Measured on: set of artifacts. Scale: set‑functional; coverage index. Lives with: Γ_ctx fold + USM ClaimScopes.
  • AttributionIntegrity. Lawful & transparent provenance/licensing. Measured on: artifact + provenance. Scale: [0,1]. Lives with: PROV + Norm‑CAL.

Locality. Every characteristic is context‑local (e.g., Novelty@context). Cross‑context claims must use a Bridge and record CL penalties (B.3). No global novelty.

Context extensions & policy‑level characteristics (non‑kernel)

The following context‑local characteristics remain available but are not part of the kernel nucleus; use them as derived or policy measures:

  • ReframeDelta — change in the problem frame that improves solvability (episteme‑pair; ordinal).
  • Compositionality — degree of re‑use and new relations among parts (artifact; boolean + structure score).
  • Transferability@X — portability to Context X via a Bridge (artifact; ordinal + CL penalty).
  • DiversityOfSearch — breadth of approach classes tried (work set; count/rate).
  • Time‑to‑First‑Viable — elapsed time to first Use‑Value = Pass (work; duration).
  • Risk‑BudgetedExperimentation — planned vs realized exploration share (workplan vs work; ratio; policy gate).

Compatibility note. This split removes duplicate “core lists” and aligns C.17 with B.5.2.1 NQD and C.16/A.17–A.18: the kernel nucleus captures creativity qualities; the items above instrument process/policy or portfolio shaping.

Scale choices (CSLC discipline)

For each characteristic, declare the scale explicitly (nominal / ordinal / interval / ratio). Do not average ordinal scores; fold with medians or distributional summaries. Choose units (when applicable) and coordinate semantics (e.g., what “distance” means).

  • Novelty@context. Coordinate = 1 − max_similarity(candidate, corpus) with a declared encoder (text, graph, CAD). Unitless in [0..1]. Document encoder & corpus freeze (A.10 Evidence Graph Ref).
  • Use‑Value. Pass iff acceptanceSpec (from U.PromiseContent or Decision KPI) is met from Work evidence; else Partial/Fail. For scalar KPIs, publish mean ± CI and the acceptance threshold; predicted values carry error bars and are updated post‑run.
  • ConstraintFit. Ratio = satisfied / declared must constraints. Constraints are Norm‑CAL rules; count only declared ones (no unspoken “norms”).

Metric templates (normative kernels + manager‑ready variants)

Template syntax (MM‑CHR): U.DHCMethod { name, context, carrierKind, definition, unit?, scale, EvidencePin, acceptanceHook? } Note: Data instances carry DHCMethodRef pointing to this template.
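
One possible concrete rendering of this record, sketched under the assumption that a Python dataclass is an acceptable carrier; field names follow the syntax line above, and all types are illustrative rather than a normative schema:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass(frozen=True)
class DHCMethod:
    name: str               # e.g. "MT.Novelty@context"
    context: str            # U.BoundedContext id + edition
    carrier_kind: str       # "Artifact", "Work", "WorkOutput", ...
    definition: str         # the normative formula, as declared text
    scale: str              # nominal | ordinal | interval | ratio | set-valued
    evidence_pin: dict = field(default_factory=dict)  # frozen hook ids (corpus, encoder, acceptanceSpec)
    unit: Optional[str] = None
    acceptance_hook: Optional[str] = None
```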

Templates (kernel definitions)

  1. MT.Novelty@context
  • carrierKind: Artifact|WorkOutput.
  • definition: 1 − max_sim(encode(x), encode(y)) over y in ReferenceSet@Context.
  • scale: ratio [0..1].
  • EvidencePin: {ReferenceSetId, EncoderId, Version}; frozen by A.10.
  • notes: Publish encoder & corpus drift in RSCR.
  2. MT.Use‑Value
  • carrierKind: Work (fulfillment) → artifact (decision memo).
  • definition: Evaluation of an outcome against a declared objective/criterion for the current context (or predicted value with explicit model & error).
  • scale: ordinal {Fail, Partial, Pass} or scalar KPI.
  • EvidencePin: links to the U.Work that fulfils the PromiseContent; cite acceptanceSpec edition.
  3. MT.ConstraintFit
  • carrierKind: Work / Artifact.
  • definition: |{c∈C_must : pass(c)}| / |C_must| within the MethodDescription scope; optional weighting by criticality allowed if declared.
  • scale: ratio [0..1].
  • EvidencePin: constraint list from Norm‑CAL; checks from Work telemetry.
  4. MT.ReframeDelta
  • carrierKind: Episteme pair (ProblemStatement v0→v1).
  • definition: Categorise frame change as {None, Local, BoundaryShift, Systemic}; justify with a Scope diff (A.2.6 U.ContextSlice delta) and causal map simplification.
  • scale: ordinal 0–3.
  • EvidencePin: diff artifact + Bridge notes if Cross‑context.
  5. MT.DiversityOfSearch
  • carrierKind: Work set (episode).
  • definition: Count of distinct approach classes tried (domain‑local typology) / time.
  • scale: count; derived rate.
  • EvidencePin: tagged Work items; typology lives in the Context glossary.
  6. MT.Compositionality
  • carrierKind: Artifact.
  • definition: set aggregator (Compose‑CAL) of reused components ≥ K and presence of novel relation among ≥ 2 parts.
  • scale: boolean + secondary “structure score” (e.g., depth or edge novelty).
  • EvidencePin: component graph + provenance of parts.
  7. MT.Transferability@X
  • carrierKind: Artifact.
  • definition: Applicability in target Context X via a Bridge; report CL and residual scope slice.
  • scale: ordinal {not portable, portable with loss, near‑iso}; record CL (0–3).
  • EvidencePin: Bridge id + pilot Work in X.
  8. MT.Time‑to‑First‑Viable
  • carrierKind: Work episode.
  • definition: elapsed wall‑clock to first UsefulnessEvidence = Pass.
  • scale: duration.
  • EvidencePin: first passing U.Work id.
  9. MT.Risk‑BudgetedExperimentation
  • carrierKind: WorkPlan vs Work.
  • definition: (Planned exploratory spend) / (Allowed risk budget) and realised counterpart; flag overrun.
  • scale: ratio + policy gate (pass/fail).
  • EvidencePin: WorkPlan ledger vs WorkLedger.

Manager’s quick checks (plain‑language adapters)

  • Novelty without a frozen corpus is storytelling—freeze corpus, fix encoder, then score.
  • Use‑Value without a consumer‑facing acceptance is a proxy—bind to a Service or explicit Objective.
  • Diversity counts approach classes, not color‑swap variants—publish your typology.

Novelty & transfer are context‑local (Bridges mandatory)

Rule N‑1 (Locality). Novelty@context is defined only within its U.BoundedContext. Never compare scores across Contexts without an Alignment Bridge (F.9).

Rule N‑2 (Directional mapping). A Bridge may assert a directional substitution (e.g., Novelty@DesignLab → Novelty@Manufacturing with CL = 2, loss: aesthetics encoder absent). Reverse mapping is not implied.

Rule N‑3 (Penalty to R, not to G). Cross‑context novelty does not change scope G; it reduces R (reliability) by the CL penalty (B.3), unless validated by pilot Work in the target Context.

Practical pattern. Publish novelty with its Context tag and—when reused—attach the Bridge id and target‑context pilot outcomes.

Anti‑Goodhart guard (use creativity metrics safely)

Goodhart’s Law: “When a measure becomes a target, it ceases to be a good measure.” — We bake in guards so creativity scoring improves outcomes instead of gaming them.

Guard‑rails (normative)

  • G‑1 Paired appraisal. Never assess Novelty in isolation; pair it with Use‑Value or ConstraintFit to avoid proxy myopia.
  • G‑2 Frozen references. Novelty requires frozen corpus + encoder; changes create a new edition and RSCR rerun. Portfolio/selection heuristics are policy-level (see C.19); do not “reward” Illumination beyond its role as a report-metric.
  • G‑3 Time‑lag sanity. Include a post‑fact check (e.g., 30–90‑day retention or cost‑to‑serve delta) before celebrating “creative wins.”
  • G‑4 Exploration budget. Tie DiversityOfSearch to Risk‑BudgetedExperimentation; flag overspend.
  • G‑5 No ordinal averaging. Do not average ordinal scales; use distributions/medians or convert only under declared models.

Conformance Checklist — CC‑C17‑M (metrics & guards)

Each item states the requirement and a practical test:

  • CC‑C17‑M.1. Each metric instance MUST cite its Context, edition, and evidence hooks (corpus/encoder, acceptanceSpec, constraint set). Practical test: the scorecard lists ContextId, Edition, and hook ids resolvable via A.10.
  • CC‑C17‑M.2. Novelty scores MUST NOT be used to approve Work without a paired gate (Use‑Value or ConstraintFit). Practical test: find decisions referencing novelty; check the co‑gate is present.
  • CC‑C17‑M.3. Cross‑context reuse MUST cite a Bridge and record CL; R is penalised accordingly. Practical test: scorecards with a foreign Context tag lacking a Bridge → fail.
  • CC‑C17‑M.4. Ordinal metrics MUST be summarised with medians/distributions, not means, unless a declared model justifies numeric treatment. Practical test: reports using a mean on ordinal data without a model → fail.
  • CC‑C17‑M.5. Metric templates MUST be versioned; changing the encoder, reference set, or acceptanceSpec creates a new edition. Practical test: a diff shows changed hooks without an edition bump → fail.

Worked mini‑cases (engineer‑manager focus)

All names are context‑local; bridges and editions are explicit. We show (a) what is measured, (b) who acts, (c) what is accepted, and (d) how evidence flows.

Case A — Hardware ideation sprint (manufacturing design)

  • Context. DesignLab_2026.
  • Objective. Reduce fastener count by ≥ 30 % without tooling changes.
  • MethodDescription. “Morphological matrix ideation v2.”
  • Work. 1‑day sprint, 6 sessions.
  • Metrics. Novelty@context (encoder: CAD‑graph v1; ReferenceSet: in‑house assemblies), ConstraintFit (no‑tooling‑change), Use‑Value (acceptance: Pass if sim shows ≤ +5 % assembly time).
  • Roles. Performers = design cell (#TransformerRole); Observer = methods coach (#ObserverRole ⊥).
  • Outcome. 22 candidates; 4 Pass usefulness; best Novelty=0.41 with 100 % constraints respected; Time‑to‑First‑Viable = 3 h 40 m.
  • Evidence. Scorecard episteme holds metrics; links to Work ids; acceptance tied to internal promise content “Design‑for‑Assembly Simulation”.

Manager’s read. “We didn’t just produce ‘novel’ shapes; 4 passed the sim and respected constraints, within the day.”

Case B — Data‑science hypothesis generation (health analytics)

  • Context. Cardio_2026.
  • Objective. Find a new risk factor candidate for readmission (< 30 days).
  • MethodDescription. “Causal discovery v3 + clinician review.”
  • Metrics. DiversityOfSearch (approach classes: feature ablation, IVs, DAG‑learners), Novelty@context (text encoder over prior hypotheses), Use‑Value (AUROC uplift ≥ 0.03 on hold‑out), Transferability@Hospital_B (Bridge CL=2).
  • Roles. SRE pipeline (#ObserverRole) computes metrics; clinicians (#ReviewerRole) set acceptance; data squad (#TransformerRole) performs experiments.
  • Outcome. Two candidates; one meets AUROC uplift; Transferability requires follow‑up (CL penalty).
  • Evidence. Episteme bundle: model cards, hold‑out plots, Bridge note.

Manager’s read. “One candidate works here; plan a pilot at Hospital B (we recorded CL=2).”

Case C — Product squad reframing (software UX)

  • Context. SaaS_Onboarding_2026.
  • Objective. Reduce time‑to‑value (TTV) by 20 %.
  • MethodDescription. “JTBD interviews + onboarding flow experiments.”
  • Metrics. ReframeDelta (BoundaryShift: split onboarding into ‘job setup’ and ‘first result’), Use‑Value (TTV ‑22 % on A/B), Risk‑BudgetedExperimentation (within cap), Compositionality (reuse of existing workflow widgets).
  • Roles. UX researcher (#ObserverRole), squad (#TransformerRole), product ops (#ReviewerRole).
  • Outcome. Frame changed; TTV target passed; experiments within budget.
  • Evidence. Reframing episteme with Scope diff + A/B report.

Manager’s read. “We changed the problem frame and proved the value drop—within risk limits.”

C.17:15.4 What these cases illustrate (tie‑backs)

  • Locality. All novelty/usefulness claims are Context‑tagged; Cross‑context steps use Bridges with CL.
  • Dual‑gate. Novelty never acts alone; usefulness/constraints co‑gate decisions.
  • SoD & Evidence. Observers are separate from performers; metrics live on epistemes with frozen hooks; Work proves fulfillment.

Working examples

Software (algorithmic/architectural ideation)

Kernel characteristics (↑/↓/gate). Novelty↑ (algorithmic / compositional), Use‑Value↑ (targeted user/job metric), ConstraintFit=gate (resource/latency envelope), Cost‑to‑Probe↓ (hours to runnable spike), Evidence‑Level↑ (tests/benchmarks confidence), Option‑Value↑ (paths unlocked), RegretRisk↓ (blast radius if wrong).

Priors.

  • Novelty prior skeptical beyond nearest known family (discount by conceptual distance).
  • Evidence prior at L0 (B.3) until benchmarks exist; regression tests act as ObserverRole evidence.

Context card (one screen).

  • Γ_bundle: Cost = sum; ConstraintFit = AND; Novelty = subadditive; Evidence = min (chain) / SpanUnion (indep).

Hardware (mechanical/electro‑mechanical concepting)

Kernel characteristics. Novelty↑ (principle/material), Use‑Value↑ (performance delta), ConstraintFit=gate (manufacturability window), Time‑to‑Probe↓ (bench jig), Cost‑to‑Probe↓, SafetyRisk↓ (hazard), Evidence‑Level↑ (bench data), Option‑Value↑ (platform reuse).

Priors.

  • SafetyRisk has WLNK priority (R must cover hazard chain).
  • ConstraintFit must pass manufacturing gate before frontier inclusion.

Context card.

  • Γ_bundle: Hazard = max; ConstraintFit = AND; Cost = sum+coupling; Evidence = min on chain; Scope via WorkScope (A.2.6).

Policy design (rules/standards/programs)

Kernel characteristics. Novelty↑ (institutional), Use‑Value↑ (measurable social/operational effect), ConstraintFit=gate (legal/operational), Cost‑to‑Probe↓ (pilot), Evidence‑Level↑ (triangulated), EthicalRisk↓ (D‑cluster), Option‑Value↑ (coalitions/pathways), Scope (ClaimScope G) explicit.

Priors.

  • EthicalRisk uses status‑only eligibility conditions; Evidence aging (decay) is fast; cross‑context Bridges carry CL penalties.

Context card.

  • Γ_bundle: EthicalRisk = max; ConstraintFit = AND (legal & operational); Cost = sum; Evidence = min/SpanUnion; Scope = ClaimScope (A.2.6).

Consequences & fit (for engineer‑managers)

  • You can reason on paper about creativity: compare with dominance, pick along a frontier, and steer exploration with a few policy characteristics.
  • Changes to the space (scales, eligibility conditions, operators) are handled by RSCR, so decisions are explainable over time.
  • The Context handbooks are a thinking OS: one screen to start ideating without importing tool stacks or management playbooks.

Relations

  • Builds on: B.1 Γ‑algebra (WLNK/COMM/IDEM/MONO), B.3 Trust & Assurance (F–G–R, CL), A.2.6 USM (Claim/Work scopes), A.10 Evidence Graph Referring.
  • Coordinates with: A.2 Role suite (Observer/Evidence roles for probes), A.15 (Work & plans for probes), C.16 MM‑CHR (scale polarity & units). C.18 NQD-CAL (generation/illumination operators Γ_nqd.*) and C.19 E/E-LOG (policies, selection, and portfolio rules). This CHR remains measurement-only.
  • Defers to: F.9 Bridges for Cross‑context transfers; D‑cluster for ethical/speech‑act gates.

Quick reference cards (tear‑out)

  • Dominance test: apply signs + eligibility conditions + trust; then partial order.
  • Frontier use: show frontier + name the lens that picked your choice.
  • Portfolio policy: keep ExploreShare and WildBetQuota; set BackstopConfidence; rebalance on cadence.

Conformance Checklist (pattern‑level, normative)

Pass these and your CS modelling remains a thinking architecture, not a team‑management manual.

CC‑C17‑1 (context‑local CS). Every CreativitySpace (the characteristic set where ideation and selection are measured) MUST be defined inside one U.BoundedContext; all characteristics and their scales are local to that Context. (Bridges with CL penalties are required across Contexts; see §C.17.16.)

CC‑C17‑2 (Characteristics, not “dimensions”). Each CS dimension SHALL be a named Characteristic per MM‑CHR, with kind (qualitative, ordinal, interval, ratio, or set‑valued), unit/scale, polarity, and admissible operations. No free‑floating coordinates. (A.CHR‑NORM / A.CSLC‑Kernel.)

CC‑C17‑3 (Profile ≠ plan). A Profile is a state description over characteristics (what the option is in CS); a Plan or Method is how you will act. Never encode choices or schedules into the profile.

CC‑C17‑4 (Portfolio = set + rule). A Portfolio is a set of candidate profiles plus a selection rule (objective + constraints) declared in the same Context. Presenting only a scatterplot is non‑conformant.

CC‑C17‑5 (Dominance operator well‑typed). A dominance claim MUST name the characteristic subset and polarity under which it is evaluated. Dominance on incomparable scales (or mixed polarities without explicit transformation) is invalid.

CC‑C17‑6 (Frontier from rule, not from taste). A Frontier (Pareto or constraint‑bound) SHALL be computed from the declared selection rule; drawing a “nice hull” by eye fails conformance.
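
A minimal sketch of a well‑typed dominance test and a rule‑derived frontier under these two requirements, assuming profiles are mappings of comparable (interval/ratio) coordinates and polarity declares +1 (higher is better) or −1 per characteristic; eligibility gates and trust lanes apply before this test:

```python
from typing import List, Mapping

def dominates(a: Mapping[str, float], b: Mapping[str, float],
              polarity: Mapping[str, int]) -> bool:
    # a dominates b iff a is no worse on every declared characteristic
    # (after sign-adjusting by polarity) and strictly better on at least one.
    diffs = [polarity[k] * (a[k] - b[k]) for k in polarity]
    return all(d >= 0 for d in diffs) and any(d > 0 for d in diffs)

def frontier(profiles: List[Mapping[str, float]],
             polarity: Mapping[str, int]) -> List[Mapping[str, float]]:
    # Pareto set under the declared selection rule: keep every profile
    # that no other candidate dominates (CC-C17-6).
    return [p for p in profiles
            if not any(dominates(q, p, polarity) for q in profiles if q is not p)]
```

For example, polarity = {"Novelty@context": +1, "EffortCost": -1} makes dominance read “at least as novel, no more costly, strictly better somewhere.”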

CC‑C17‑7 (Search–Exploit as dynamics, not policy dogma). Exploration/exploitation MUST be expressed as a dynamics on the portfolio measure(s) (e.g., exploration share as a function of marginal value of information), not as a prescriptive budget recipe. (Design‑time statements belong to Decsn‑CAL; see §C.17.16.)

CC‑C17‑8 (Evidence Graph Referring for scores). Any numeric score in a profile MUST cite its MeasurementTemplate (MM‑CHR) and the observation/evaluation that yielded it. No anonymous numbers.

CC‑C17‑9 (Separable uncertainty lanes). Keep aleatory vs epistemic uncertainty separate on characteristics; their combination rule MUST be stated (e.g., interval arithmetic, conservative bound).

CC‑C17‑10 (Time is explicit). Comparisons across iterations MUST state TimeWindow (snapshot window) and whether drift or refit occurred (§C.17.14). “Latest” is not a time selector.

CC‑C17‑11 (No proxy collapse). If a composite “creativity index” is used, its aggregation algebra (weights, monotone transforms) MUST be declared; the primitive characteristics remain queryable.

CC‑C17‑12 (Work stays on Work). Resource/time actuals and run logs live on U.Work; CS never carries actuals. We reason about profiles/portfolios; we do not audit operations here.

Worked‑Context Handbooks (concept cards, not runbooks)

Each Context publishes one page per card. These are thinking kernels: priors, objectives, admissible characteristics, and example transforms. No staffing, no process charts.

(a) Kernel Card — “What is a creative win here?”

  • Context: <Context/Edition>
  • Purpose Characteristic(s): what “win” means (e.g., Novelty, Usefulness, Adoptability), with polarity and admissible ops.
  • Constraint Characteristics: Risk, Cost of change, Time to learn, etc.
  • Objective (Decsn‑CAL pointer): Maximise <purpose> subject to declared constraints.
  • Frontier Rule: Pareto over {purpose ↑, risk ↓, cost ↓, time ↓}.
  • Evidence Hooks: which observations/evaluations populate each characteristic.

(b) Priors Card — “What we believe before seeing data.”

  • Default priors on uncertainty for each characteristic (e.g., Beta for adoption probability).
  • Bridge policy: minimal CL acceptable for imported profiles.
  • Exploration prior: initial exploration share as a function of prior entropy.

(c) Objective Variants Card — “Admissible objective shapes.”

  • Catalog the few objective forms this Context allows (lexicographic tie‑break, ε‑constraint, max‑min fairness), with didactic pictures of their frontiers.
  • State when to switch objective (e.g., during bootstrapping vs exploitation).

(d) Ready‑to‑use transforms (MM‑CHR aligned)

  • Monotone maps (e.g., log utility), normalizations, ordinal→interval “do & don’t” (only with evidence of order‑to‑interval validity).
  • Forbidden transforms list (e.g., averaging ordinal ranks).

These cards are conceptual fixtures; Tooling may implement them, Pedagogy may teach them, but C.17 only standardises their content as thinking affordances.

Placement sanity‑check across the pattern language (avoid scope creep)

  • MM‑CHR (C.16): defines Characteristic/Scale/Unit/Measure and the characterisation discipline. All CS dimensions live there; C.17 uses them, never re‑defines scales.
  • A.CHR‑SPACE (A.19): exports CharacteristicSpace & Dynamics hooks; C.17 is a Contexted specialisation for creative reasoning (profiles/portfolios/selection).
  • Decsn‑CAL (C.11): hosts objective functions, constraints, preference orders, utility proofs, and the search–exploit dynamics as decision policies. C.17 only names the hooks (objective, rule), keeps policy math out.
  • KD‑CAL (C.2) & B.3 (Trust): carry evidence provenance, assurance and congruence penalties (CL) for Cross‑context reuse. C.17 requires anchors; it does not invent confidence calculus.
  • Compose‑CAL (C.13): governs set/union/slice aggregation; the portfolio set is a Γ_m.set over profiles; frontier is derived without ad‑hoc geometry.
  • B.4 Canonical Evolution Loop: where Run→Observe→Refine→Deploy sits. C.17 supplies the view in which refinement is judged.

Out of scope here: team staffing, budgeting workflows, data‑governance procedures, ticket states, any “how to manage people”. This pattern organises thought, not teams.

Anti‑patterns & canonical rewrites (conceptual hygiene)

  1. Dimension‑speak. “Along the novelty dimension…” → Rewrite: “Along the Novelty characteristic (ordinal; higher is better)…”.
  2. Pretty hulls. Drawing a convex hull and calling it a frontier → Rewrite: compute Pareto under declared characteristic polarities.
  3. Ordinal arithmetic. Averaging ranks or Likert values → Rewrite: either treat as ordinal and use order‑safe operators, or justify an interval mapping via MM‑CHR evidence.
  4. Proxy tyranny. Single composite index driving choice unseen → Rewrite: publish primitive characteristics, index formula, and sensitivity.
  5. Policy‑as‑math. “10% wild bets” as a rule → Rewrite: declare an exploration dynamics tied to value‑of‑information; if keeping a heuristic, label it as such.
  6. Global meaning. Porting a profile from another Context by name → Rewrite: attach a Bridge with CL and loss notes; adjust trust, not scales.
  7. Plan‑profile blur. Putting milestones into profiles → Rewrite: move schedules to U.WorkPlan; keep CS for how options compare, not how to execute.

Minimal didactic cards (one screen each)

(1) Profile Card

  • Option id & Context
  • Characteristics table (value, unit/scale, uncertainty split)
  • Evidence Graph Ref (Observation/Evaluation ids)
  • Notes (bridges used, CL penalties)

(2) Portfolio‑with‑Rule Card

  • Set of candidate profiles (refs)
  • Objective & constraints (Decsn‑CAL pointer)
  • Dominance subset & Frontier snapshot (with TimeWindow)
  • Delta vs previous (entered/exited/moved)

(3) Search–Exploit Card (conceptual)

  • Exploration share as function of marginal VOI (symbolic)
  • Update cadence (TimeWindow policy)
  • Stop conditions (e.g., VOI below threshold; risk bound reached)

(4) RSCR Summary Card

  • What changed? (refit/Δ±)
  • Sentinels status
  • Frontier churn
  • Bridge CL drift

These cards are thinking scaffolds; they do not prescribe org process.

Consequences (informative)

Each benefit is paired with why it matters:

  • Context‑local rigour. Creative comparison is made decidable where meaning lives; Cross‑context reuse is explicit and penalised only in trust, not scale.
  • Frontier honesty. Decisions rest on declared characteristics and polarities; frontiers follow rules, not taste.
  • Temporal comparability. RSCR prevents silent drift; “better/worse” claims retain meaning over iterations.
  • Method independence. Any tooling can implement the cards; C.17 remains a conceptual API for thought.
Trade‑offs: upfront ceremony (declare characteristics, polarity, TimeWindow) and disciplined bridges. The payoff is comparability and explainability.

C.17:26 Open questions (non‑normative, research hooks)

  • Information geometry of CS: can certain Contexts justify canonical distance metrics across characteristics without violating MM‑CHR parsimony?
  • Multi‑agent exploration: how to couple individual CS frontiers into a co‑exploration equilibrium without importing team governance?
  • Learning‑to‑rank vs measurement: what minimal evidence suffices to treat an ordinal characteristic as interval for the purpose of frontier estimation?

C.17:End