A.CSLC-KERNEL — Minimal CSLC in Kernel (Characteristic/Scale/Level/Coordinate)
Pattern A.18 · Stable Part A - Kernel Architecture Cluster
Aliases (for narrative use only): “Axis” (≈ Characteristic), “Point” (≈ Coordinate). (These colloquial aliases may be used in plain-language explanations, but never in formal identifiers or normative text.)
Keywords
- CSLC
- scale
- level
- coordinate
- measurement standard
Content
Problem Frame
We often need to characterize some aspect of a subject (be it a single artefact or a relationship between artefacts) in a rigorous way. Whether it’s recording a physical quantity, an architectural property, or a performance rating, the characterization must:
- remain domain-neutral (work for engineering metrics, subjective scores, etc.),
- ensure that two measurements are comparable if and only if they share the same defined aspect and scale, and
- accommodate both ordered tiers (qualitative levels like Low/Medium/High) and numeric magnitudes (continuous or interval values) without mixing them up.
In FPF’s kernel, the CSLC pattern (Characteristic–Scale–Level–Coordinate) provides the minimal vocabulary and constraints to achieve this. It defines how one Characteristic ties to one Scale, and how any measured value can be treated as a Coordinate on that scale (with an optional named Level if the scale is discrete or tiered). The context here is the need for a unified Standard so that every single measurement can be interpreted and compared on common grounds.
Problem
Uninterpretable values. A raw number or label means nothing without knowing what aspect it measures and how it is measured. The string “4”, the label “High”, or the real number 9.81 convey no insight unless we know which Characteristic they pertain to and the Scale that gives them meaning. In cross-disciplinary work this ambiguity is magnified: a “5” could be a risk rank (ordinal), a length in meters (ratio), or a satisfaction score (perhaps interval). Common failure modes include:
- In ordinal settings (e.g. expertise levels Novice < Skilled < Expert), one can rank values but not meaningfully add or average them. Treating ordinal labels like numbers (e.g. averaging Novice=1, Expert=3) produces invalid results.
- In cardinal settings (e.g. seconds, meters, kelvins), arithmetic operations do make sense – but only if units are respected and zero is meaningful (for ratio scales). If we strip away units or mix scales (seconds vs. minutes), we again get nonsense.
Without a strict Standard, one team might treat “High” and “Medium” as having a numeric gap, another might average 4 (on a 5-star scale) with 4 (as 4 seconds) because both are “4”. Inconsistent practices make cross-domain reasoning impossible. We need a kernel-level solution that fixes: (a) the aspect being measured, (b) the scheme by which it’s measured, and (c) the type of scale structure (ordinal vs. metric), and that ensures each reported value is bound to that scheme. At the same time, the Standard should not force artificial numeric detail where it isn’t applicable (e.g. we shouldn’t assign meaningless numbers to purely qualitative tiers just to satisfy a structure).
Forces
- F1 – Transdisciplinarity. The pattern must uniformly handle measurements in physical domains (e.g. length, time, temperature), system attributes (e.g. a module’s coupling or reliability), and human judgments (e.g. user satisfaction scores). It needs to be neither overly quantitative (alienating softer domains) nor overly qualitative (lacking precision for hard science).
- F2 – Comparability vs. freedom. We want to compare “like with like” – e.g. two readings of the same Characteristic on the same Scale – with absolute confidence. At the same time, the system should allow different Scales for the same Characteristic when necessary (for example, one project might measure Quality on a 0–5 star scale, another on a 0–100 percentage scale). The pattern must permit such flexibility without letting those differing scales be conflated.
- F3 – Ordinal vs. cardinal integrity. The Standard should preserve the nature of the data: order-only vs. order+distance. If something is ordinal (ranks, grades), the framework should prevent unwarranted numeric operations on it. If it’s cardinal (real-valued with units), the framework should enable arithmetic but still keep track of units and zero. In essence, it must protect ordinal data from “leaking” into interval arithmetic.
- F4 – Named tiers vs. continuous magnitudes. In many domains, named Levels (tiers or grades) are useful – e.g. Technology Readiness Levels or bond credit ratings – whereas in others, a continuous scale is needed. The pattern should support optional Level labels (for tiered scales) without forcing every scale to have such labels. In other words, Levels are an add-on for discrete/tiered scales, not a requirement for truly continuous measures.
- F5 – Method agnosticism. The kernel Standard should say what must be defined (Characteristic, Scale, etc.) but not prescribe how measurements are obtained. Whether a value comes from a sensor reading, a simulation, or an expert judgment is up to the respective patterns (e.g. Sys-CAL vs. KD-CAL). The pattern must not bake in any process or scoring methodology; it only ensures that once a measurement exists, it’s well-formed and comparable. This avoids locking in any particular assessment method.
Solution
Adopt a minimal “one characteristic – one scale – one coordinate (value)” Standard for all measurements. In the FPF kernel, any metric must bind exactly one Characteristic to exactly one Scale, and any observation produces one Coordinate (value) on that Scale (with an optional Level name if the scale has discrete tiers). We nickname this the CSLC clause:
Exactly one Characteristic + exactly one Scale ⇒ one Coordinate (value), with an optional Level.
Concretely, the parts of this clause are defined as follows:
- Characteristic: the aspect or feature being measured (the “CG‑frame” along which comparison is made). It answers “What are we measuring?” – e.g. Distance, Temperature, Quality, Reliability.
- Scale: the organized set of possible values that the Characteristic can take, including the type of scale (ordinal, interval, or ratio), the measurement Unit (if applicable), and any bounds or structure. The Scale defines “How do we measure it?” – e.g. “meters on a linear scale from 0 up to 1000” or “ratings 1 through 5 with ordering only”.
- Coordinate: a concrete measured value that locates the subject on the chosen scale. This could be a number (for a numeric scale) or a category label (for an ordinal scale). It answers “What is the result?” – e.g. 7.4 (meters), or Expert (level).
- Level (optional): a named tier or category on the scale, used only if the scale is tiered or discretized. For example, an ordinal scale might have Levels Low, Medium, High. A Level is essentially a human-friendly label for certain coordinates or ranges. On purely continuous scales, Level is not used.
Using this CSLC structure, every measurement is unambiguous and self-contained: the Characteristic tells us the context, the Scale tells us how to interpret the value, and the Coordinate is the outcome on that scale (with a Level label if appropriate). Notably, this pattern forbids bundling multiple characteristics into one metric – each metric template is one-characteristic-per-template to keep semantics crisp. If something needs to assess multiple factors, it should be modeled as multiple CSLC metrics or a higher-level composite (see §8 below). This one-aspect-one-scale rule is what allows unambiguous comparison and prevents hidden complexity.
Finally, the solution ensures tier optionality: If a domain uses named Levels, we include them; if not, we don’t force it. For example, one can have a Bug Severity Characteristic with Levels {Minor, Major, Critical} on an ordinal scale, whereas a Length Characteristic would have a continuous scale (no predefined levels, just units). Both fit the pattern.
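To make the clause concrete, the following minimal sketch renders the CSLC structure as Python dataclasses. Every name here (ScaleKind, Polarity, MetricTemplate, and so on) is illustrative rather than normative – the kernel mandates only the bindings themselves, not any particular implementation:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional, Union

class ScaleKind(Enum):
    NOMINAL = "nominal"
    ORDINAL = "ordinal"
    INTERVAL = "interval"
    RATIO = "ratio"

class Polarity(Enum):
    HIGHER_IS_BETTER = "higher-is-better"
    LOWER_IS_BETTER = "lower-is-better"
    TARGETED_OPTIMUM = "targeted-optimum"

@dataclass(frozen=True)
class Characteristic:
    """The aspect being measured: answers 'What are we measuring?'."""
    name: str                           # e.g. "Jump Distance", "Battery Health"

@dataclass(frozen=True)
class Scale:
    """How it is measured: scale type, declared polarity, unit, optional Levels."""
    kind: ScaleKind
    polarity: Polarity
    unit: Optional[str] = None          # required for interval/ratio scales
    levels: tuple = ()                  # ordered tier names, only for tiered scales
    target: Optional[float] = None      # declared optimum for targeted polarity

@dataclass(frozen=True)
class MetricTemplate:
    """The CSLC clause: exactly one Characteristic bound to exactly one Scale."""
    characteristic: Characteristic
    scale: Scale

@dataclass(frozen=True)
class Coordinate:
    """One observed value on one template's Scale, with an optional Level name."""
    template: MetricTemplate
    value: Union[float, str]            # a number on metric scales, a label on tiered ones
    level: Optional[str] = None         # set only when the Scale defines Levels
```

Freezing the dataclasses mirrors the normative intent: once a template is declared, it is the fixed reference point for every measurement taken against it.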
Archetypal Grounding (System & Episteme Examples)
In a physical scenario (U.System): Consider an athlete’s long jump. We define a Characteristic Jump Distance with a Scale “meters (m)” ranging from 0 upward (ratio scale with meters as the unit). When the athlete jumps and lands at 7.45 m, we record a Coordinate of 7.45 m for the Jump Distance Characteristic. Here, Jump Distance is the Characteristic, the meter-scale is the declared Scale, and 7.45 m is the value (Coordinate). Because this is a cardinal measurement, we can meaningfully say one jump is 1.5 m longer than another, etc. Now consider another metric in the system: Battery Health of a device, which might be categorized qualitatively. We could define an ordinal Scale with Levels like Good, Fair, Poor for the Battery Health Characteristic. If a particular device is rated “Poor”, that is a Coordinate on the Battery Health scale (with Poor as the Level name). No arithmetic is done on these labels, but we can order devices by health (Good > Fair > Poor). Both examples illustrate the one-characteristic-one-scale rule: the jump’s distance is not combined with any other aspect; the battery’s health is evaluated on its own defined scale.
In a knowledge context (U.Episteme): Consider measuring an author’s expertise in a certain domain. We introduce a Characteristic Expertise Level for a person, with an ordinal Scale defining tiers such as Novice, Competent, Expert. Alice might be assessed at Expert level in software engineering – that’s a Coordinate on the Expertise Level scale for the Characteristic “Software Engineering Expertise”. Bob might be at Competent. We cannot average Alice’s and Bob’s levels, but we can say the scale is ordered (Expert > Competent > Novice). For a more quantitative episteme example, consider a Characteristic Hypothesis Confidence for a scientific claim, with a Scale 0–1 (or 0–100%) representing probability or confidence level (ratio scale). One hypothesis might have a confidence of 0.95, another 0.7; these are Coordinates on the Confidence scale. We can compare them numerically (0.95 is higher than 0.7, and 0.95 implies a stronger belief), and we could even combine multiple confidence values through Bayesian formulas (if justified) – but crucially, we would only do so in a way that respects their scale (probabilities combined properly, not treated as arbitrary scores). The Expertise Level and Hypothesis Confidence examples show how the CSLC pattern accommodates both an ordinal qualitative measure and a continuous quantitative measure in the knowledge domain, each with one Characteristic and one defined Scale.
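Continuing the hypothetical sketch from the Solution section, the two archetypal groundings could be instantiated roughly as follows; the point is that the ratio-scale jump and the ordinal expertise tiers fit the same template:

```python
# U.System example: Jump Distance on a ratio scale in meters, no Levels.
jump = MetricTemplate(
    Characteristic("Jump Distance"),
    Scale(ScaleKind.RATIO, Polarity.HIGHER_IS_BETTER, unit="m"),
)
best_jump = Coordinate(jump, 7.45)

# U.Episteme example: an ordinal, tiered expertise scale with no unit.
expertise = MetricTemplate(
    Characteristic("Software Engineering Expertise"),
    Scale(ScaleKind.ORDINAL, Polarity.HIGHER_IS_BETTER,
          levels=("Novice", "Competent", "Expert")),
)
alice = Coordinate(expertise, "Expert", level="Expert")
bob = Coordinate(expertise, "Competent", level="Competent")

# Ordering is legitimate on an ordinal scale; averaging is not.
rank = expertise.scale.levels.index
assert rank(alice.level) > rank(bob.level)
```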
Bias-Annotation
The CSLC-Kernel pattern is crafted to be maximally inclusive of different measurement types while imposing just enough structure to ensure consistency. It does not privilege any particular domain or modality of measurement: a subjective 5-star rating is treated with the same formal rigor as a physical length in meters. In terms of the FPF principle lenses, this pattern consciously balances the Architectural/Ontological needs (clear structure for data) with the Pragmatic/Didactic needs (flexibility and clarity for users). There is little risk of cross-domain bias here because the pattern explicitly supports both extremes (ordinal and ratio, qualitative and quantitative). By remaining method-agnostic, it avoids bias toward certain validation techniques – e.g. it doesn’t assume every measurement comes from an instrument (it could come from expert judgment just as well). One might argue the pattern enforces a somewhat formal approach to what could be informal measures (forcing definition of scale and characteristic), but this formalism is lightweight and is precisely what makes the metric interpretable. In summary, A.18 embodies neutrality: it’s a container that fits any content as long as that content is well-labeled. It reinforces P‑2 (Didactic Primacy) by making all metrics self-explanatory in terms of what and how, and respects P‑1 (Cognitive Elegance) by using a minimal, uniform scheme. No cultural or disciplinary assumptions are baked in – an anthropologist’s “Cultural Significance” scale can live alongside an engineer’s “Voltage” scale with equal status. The pattern’s requirement for declaring polarity (“higher is better” vs “lower is better” vs target range) further avoids bias in interpretation – it prevents the assumption that “more is always better,” which might be untrue in many contexts (e.g. for error rates, lower is better). All these considerations ensure that A.18 introduces no hidden skew; it merely provides a fair playing field for all metrics.
Conformance Checklist
When defining a new metric template or using measurements, practitioners SHALL verify the following:
- One characteristic, one scale: Each metric template binds exactly one Characteristic to exactly one Scale. If you find a metric trying to cover multiple things at once, split it into separate metrics.
- Polarity declared: For any ordered Scale (ordinal/interval/ratio), the polarity (“higher‑is‑better”, “lower‑is‑better”, or “targeted optimum”, symmetric or asymmetric around a declared target) SHALL be declared at the template that binds a Characteristic to a Scale. State whether higher values are better, lower values are better, or whether an optimal range/target exists. (For example: “higher is better” for a performance score, “lower is better” for error count, or “target 37 °C” for body temperature, where deviation in either direction is worse.) This ensures that anyone comparing two values knows which way is “up.”
- Unit and level clarity: If the Scale is quantitative, specify the Unit (e.g. seconds, meters, %) and make sure all values include or assume that unit. If the Scale has named Levels, list them clearly and use them consistently. Do not use the same label to mean different things on different scales, and avoid using unit terms in Characteristic names (the unit belongs with the scale).
- Scale-appropriate operations only: Only perform those comparisons or calculations that are valid for the given scale type. For a nominal scale, you can check equality but not order. For an ordinal scale, you can order or rank values but not do math like “A minus B.” For interval scales, addition/subtraction is OK (with unit conversion if needed), but ratio comparisons (A is twice B) are not meaningful without a true zero. For ratio scales, all arithmetic operations are allowed with proper attention to units. This check prevents logical errors (e.g. averaging “High” (3) and “Medium” (2) and getting 2.5 — which is meaningless). A sketch of such guards follows this checklist.
- No bare numbers: Never present a raw number or value without its context of Characteristic and Scale. If someone sees “42” in your output, they should also see or know “42 of what, measured how.” A reader who is not aware of the metric’s template should not be left guessing what a given value signifies. In practice, this means labeling reports and data with the metric name or identifier so that values can be traced back to their meaning.
- Template bridges for cross-metric comparison: If you intend to compare or aggregate measurements from different templates (different Characteristics/Scales), ensure an explicit ScoringMethod or conversion is defined. For example, if you need to combine a “usability score” (0–5 stars) with a “security score” (0–100%), you might define a new Score that maps both onto a common 0–10 scale via monotonic functions (see the bridge functions in the sketch below). Without such a bridge, do not directly mix metrics – keep them separate in analysis. This guarantees that any cross-metric reading has a well-founded basis.
- Level optionality respected: If your Characteristic doesn’t naturally have tiers, don’t force it to have Level names (you can leave the Level concept unused). Conversely, if your Characteristic is commonly described in categories, it’s fine to define Levels for clarity. The key is to use the Level field intentionally: either not at all (for truly continuous measures) or in a fixed, non-overlapping way (for discrete categories). Do not use “Level” for something that behaves like a continuous value (it would be confusing to assign a label where a number would do, or vice versa).
- Comparability test: Two Coordinates are comparable if and only if they share the same Characteristic and Scale (including unit and polarity). Otherwise, comparison is permitted at Score level only, after a declared SCP maps both onto a bounded range.
(The above serve as normative checkpoints. Many of these are automatically supported by using the standard metric templates in software: e.g. the system will enforce one Characteristic per template, require a unit for ratio scales, etc. The Lexical rules from A.17/E.10 are assumed: use canonical names and notations for all parts of the metric.)
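As a sketch of how these checkpoints can be mechanized (reusing the hypothetical dataclasses from the Solution section; the guard functions and the 0–10 common range are illustrative, not normative):

```python
def comparable(a: Coordinate, b: Coordinate) -> bool:
    """Comparability test: same Characteristic and same Scale (incl. unit and polarity)."""
    return (a.template.characteristic == b.template.characteristic
            and a.template.scale == b.template.scale)

def difference(a: Coordinate, b: Coordinate) -> float:
    """Subtraction ('A minus B') is defined only on interval and ratio scales."""
    if not comparable(a, b):
        raise ValueError("different Characteristic/Scale: define an explicit bridge first")
    if a.template.scale.kind not in (ScaleKind.INTERVAL, ScaleKind.RATIO):
        raise TypeError(f"subtraction is undefined on a {a.template.scale.kind.value} scale")
    return float(a.value) - float(b.value)

def ratio_of(a: Coordinate, b: Coordinate) -> float:
    """Ratio comparison ('A is twice B') needs a true zero, i.e. a ratio scale."""
    if not comparable(a, b):
        raise ValueError("different Characteristic/Scale: define an explicit bridge first")
    if a.template.scale.kind is not ScaleKind.RATIO:
        raise TypeError("ratio comparison requires a ratio scale")
    return float(a.value) / float(b.value)

# A template bridge of the kind the checklist requires: monotonic maps from two
# different scales onto one common, bounded 0-10 range (Score level).
def stars_to_common(stars: float) -> float:
    return stars * 2.0      # 0-5 stars -> 0-10

def percent_to_common(pct: float) -> float:
    return pct / 10.0       # 0-100 %   -> 0-10
```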
Consequences
Adopting the minimal CSLC Standard in the kernel yields a number of benefits:
- Universal interpretability: Every measurement is intrinsically self-describing. One cannot have a “mystery number” floating around; by design you must know it’s X (Coordinate) on Y Scale of Z Characteristic. This dramatically reduces miscommunication in reports and data exchange. An engineer and an analyst can share a metric knowing they interpret it the same way, because the context travels with the value. (A Level label travels with the value only when the Scale is tiered or discrete.)
- Safe comparison and aggregation: Values can only be compared when they belong to the same Characteristic and Scale (or when an authorized SCP converts them). This prevents the common error of comparing apples to oranges. When cross-comparison is needed, the pattern funnels us into creating a proper normalization, which improves the soundness of composite scores. Essentially, it’s now impossible to accidentally average an uptime percentage with a user satisfaction rating, for example, without explicitly defining how to map one to the other.
- Flexibility across domains: The pattern is transdisciplinary. It doesn’t matter if the measurement is temperature in Kelvin, length in inches, code complexity in “abstract points,” or user satisfaction on a five-level Likert scale – all are handled uniformly. This makes it easier to plug new patterns for new domains into FPF, since they don’t need special rules for their metrics; they just instantiate the CSLC template in their context.
- Ordinal and cardinal handled with equal rigor: By explicitly classifying scales, the pattern gives ordinal data the respect it deserves (no pretending it’s numeric) and gives ratio data the formal context it needs (units, zero, etc.). This balance means both qualitative assessments and quantitative measurements live side by side, each with their constraints respected. Domains that lean heavily on categorical ratings benefit from the Level concept (with no pressure to assign fake numbers), and domains that use real measurements benefit from unit enforcement and type-aware computations.
- Clarity in multi-factor scoring: The prohibition of implicit multi-characteristic measures means that any “overall” score or index has to be constructed out of known pieces. This tends to improve the transparency of complex scoring schemes. If an organization wants to create a single index from 5 different metrics, A.18 forces them to introduce a defined ScoringMethod function that combines those 5 Coordinates into one Score, with declared monotonicity and bounds (a sketch of such a ScoringMethod follows at the end of this section). The consequence is that composite metrics become auditable and debatable (you can examine the weighting or formula) rather than opaque sums.
- Methodological neutrality (and innovation): Because the kernel imposes no method for obtaining the values – only how to frame them once obtained – patterns and tool builders are free to innovate in how they measure things. The Standard just ensures that once they do, everyone else can understand and use the results correctly. This separation of concerns (what vs. how) accelerates multi-disciplinary collaboration: a social scientist’s observational scale can feed into a systems model without any confusion, as long as it’s couched in the CSLC terms.
On the downside, users must do a bit more upfront work to define their metrics. The pattern’s requirements (declare Characteristic, define Scale, etc.) mean one cannot simply say “we’ll track a risk score” without further detail. In practice, this is a desirable trade-off: the extra effort (perhaps a few minutes to set up a metric template) prevents far greater confusion down the line. Another possible trade-off is multiplicity of scales – the pattern allows the same Characteristic to have multiple scales (in different contexts or versions), which might fragment data if not managed (e.g. two teams measuring “Performance” on different scales). However, it also provides the remedy: make the difference explicit and, if needed, build a conversion ScoringMethod. This explicitness is actually beneficial, as it highlights when “Performance (0–5)” is not directly comparable to “Performance (Percentage)”. In short, any fragmentation is out in the open and can be dealt with via alignment or bridging.
Overall, A.18’s consequences are overwhelmingly positive: measurements become first-class, well-understood citizens of the model. The cost is a slight increase in definition effort and discipline, which is a small price for coherence. Once this pattern is in place, higher-level patterns (in Parts B, C, D) that reason about metrics can rely on it. For example, trust calculations (Part D) can assume that any metric they consume has a known scale and meaning, and knowledge dynamics algorithms (Part B or C) can safely combine evidence knowing the comparisons are valid. The minimal CSLC Standard is thus a foundational enabler for robust, cross-domain assurance in FPF.
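As an illustration of such an auditable composite (continuing the hypothetical sketch; the five metric names and weights below are invented for the example, not prescribed by A.18):

```python
# A hypothetical ScoringMethod: five inputs, each already normalized onto a
# common bounded 0-1 range with polarity aligned so that higher is better.
WEIGHTS = {
    "usability": 0.3, "security": 0.3, "performance": 0.2,
    "reliability": 0.1, "maintainability": 0.1,
}

def composite_score(normalized: dict) -> float:
    """Weighted sum: monotonic in every input and bounded to [0, 1] by construction."""
    if set(normalized) != set(WEIGHTS):
        raise ValueError("all five declared inputs are required - no silent omissions")
    if not all(0.0 <= v <= 1.0 for v in normalized.values()):
        raise ValueError("inputs must be pre-normalized to the declared 0-1 range")
    return sum(WEIGHTS[k] * v for k, v in normalized.items())
```

Because the weights are declared data rather than arithmetic buried in a formula, the composite can be examined and debated exactly as the pattern intends.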
Rationale
The rationale behind A.18 is to enforce semantic clarity at the data level, thereby solving a host of downstream problems. Without this pattern, one must constantly ask, “What does this number mean? Can I combine these two values?” – questions that have led to many project errors. By building the answers into the framework (“every number knows its unit, scale, and aspect”), we front-load the work and eliminate ambiguity. The solution directly addresses each force:
- Transdisciplinarity: We include both ordinal and cardinal mechanisms so that no discipline’s metrics are left out. This was informed by observing multi-disciplinary teams: e.g., in a single project, a human factors specialist might rate usability (ordinal) while an engineer measures throughput (ratio). A.18 gives them a common language and prevents one from misusing the other’s data. It embodies the idea that universal structure enables local freedom: everyone’s metric can plug in, as long as they specify it properly.
- Comparability vs. freedom: The pattern strikes a balance by tying comparability to explicit commonality. If two metrics truly measure the same thing in the same way, then of course you can compare them – they’ll share Characteristic and Scale. If they differ, the framework doesn’t stop you from defining them (freedom), but it does stop you from conflating them inadvertently. The introduction of polarity declarations is a direct response to this tension: it adds a tiny burden (must declare “higher is better” etc.) but yields big pay-off in avoiding mis-ordered interpretations and enabling safe composite scoring (monotonic ScoringMethods).
- Ordinal vs. cardinal separation: The rationale here is guided by measurement theory: we want to preserve information content. Treating ordinal data with only order operations preserves all its information; doing more (like adding them) injects false information. The pattern’s strictness on scale types forces modelers to be honest about what their data can and cannot do. This not only prevents errors but also encourages best practices (e.g. if you find you desperately want to average an ordinal score, perhaps you should refine it into an interval scale in your methodology). The outcome is a framework that respects both the qualitative and quantitative realms appropriately, aligning with FPF’s Pillar of Pragmatism – use formalism where it’s justified, but not beyond its limits.
- Optional Levels: Requiring Levels in every case would have been too rigid (not everything has named tiers), but not supporting them would fail domains that rely on them (like maturity models or grading systems). The rationale for making Level optional is to accommodate both. We saw in practice that many metrics naturally form tiers (e.g. technology readiness levels TRL 1–9), and giving them a slot in the model (instead of burying them in definitions) makes those metrics much easier to work with and integrate. Meanwhile, continuous metrics carry no baggage of unused fields. This design was checked against existing standards (like ISO 25024 for quality measures) to ensure we aren’t deviating from industry expectations: indeed, separating the concept (Characteristic) from the scheme (Scale) aligns well with standards, and including an optional categorization aligns with common practice in capability maturity models, etc.
- Method neutrality: The decision to not include any measuring procedures in A.18 (no specific formulas, no mandated evidence type) comes from the principle of separation of concerns. The kernel should provide the what (structurally), while patterns provide the how (procedurally). This keeps the kernel lean (P‑1 Cognitive Elegance) and allows domain experts to implement whatever method is appropriate, merely committing to wrap their results in the CSLC form. By doing so, we avoid any bias toward empirical vs. analytical, or manual vs. automated measurements – FPF welcomes all, as long as they conform to the schema. This was rationalized by examining case studies: e.g., some reliability metrics come from formal proofs (analysis), others from testing (empirical) – the kernel can host both results identically, requiring only that each result says what it measured and on what scale.
In essence, A.18 is the infrastructure of meaning for metrics. It may appear as a simple template, but it’s profoundly enabling. It forces clarity at creation time, so we don’t have to infer or debate meaning at usage time. The pattern’s strength lies in preventing errors that don’t have to happen. It encodes lessons from both metrology (the science of measurement) and everyday data science (where unit errors and mis-comparisons are infamous issues). The rationale is backed by these lessons: fix the interpretation rules in the design, and you eliminate entire classes of confusion and mistakes. By having this in the kernel, every mechanism – from knowledge scoring to system performance – benefits immediately, and their results become interoperable to a degree that would be impossible without a common structure.
Relations
- Extends/Uses: A.17 (CHR-NORM) – A.18 explicitly builds on the canonical terminology established in A.17. It uses the term Characteristic as defined there (and no other synonyms) and carries forward the edict that “axis/dimension” be treated as mere narrative aliases. It also leverages the Entity-vs-Relation Characteristic distinction from A.17: Section 7.4 of this pattern references tests for disambiguating relational metrics. Essentially, A.17 provides the lexical and conceptual groundwork (what a Characteristic is, and the basic vocabulary), while A.18 provides the structural and normative rules for linking Characteristics to measurements.
- Core foundation for metrics: This pattern underpins the Measurement & Metrics Characterization spec (C.MM‑CHR) – the pattern that implements metric storage and computation. In MM-CHR, every U.DHCMethodRef and U.Measure follows the CSLC format defined by A.18. By lifting CSLC rules to the kernel, we ensure all FPF patterns (like KD-CAL for knowledge dynamics, Sys-CAL for systems, or any custom CAL/CHR) share a common approach to metrics. A.18 also informs the design of CHR-CAL (Characterisation Calculus), which generalizes measurable property templates: CHR-CAL relies on the one-Characteristic-per-metric assumption and the comparability rules set here to compose higher-level characterizations.
- Enables dynamic reasoning: A.18’s insistence on well-defined Scales allows patterns like A.3.3 U.Dynamics (system dynamics models) to incorporate measurement dimensions as state variables without ambiguity. For example, a stateSpace in a dynamics model can be explicitly defined as a set of Characteristics (each with units and ranges), making simulations and traces dimensionally consistent. If A.18 were not in place, one model might treat “performance” as a 1–5 score and another as a probability – combining them would be incoherent. With A.18, such differences must be reconciled via a ScoringMethod or kept separate, preserving coherence in multi-model analyses.
- Coordinates with assurance patterns: Many patterns in Part B and D (for trust, assurance, and ethics) involve scores and metrics. For instance, B.3 (Assurance Levels) computes overall assurance from evidence scores; A.18 ensures those input scores are well-defined and comparable (e.g. all are 0–1 or all are percentages, with polarity noted). D.4 (Trust-Aware Calculus) might combine trust metrics across domains – again, A.18 provides the common ground so that a “trust score” coming from an operational metric and one coming from a social rating can be normalized and compared meaningfully. In summary, any pattern that aggregates or uses measurements is constrained (in a positive way) by A.18’s rules. They “plug into” this framework.
- Constrained by lexical rules: This pattern’s content is part of the formal lexicon governance. It works within E.10 LEX-BUNDLE, which means the terms Characteristic, Scale, Coordinate, Level, etc., are controlled vocabulary. A.18 localizes some generic requirements from A.17 (for example, A.17 mandates polarity in principle; A.18 requires it be declared per template in practice). It also aligns with external standards: by having explicit scale types and units, it dovetails with ISO/IEC measurement terminology and allows straightforward mapping to frameworks like ISO 80000 (quantities and units) and Stevens’s scale types. This relation to standards is deliberate – it eases F.9 (Alignment Bridge) construction to external ontologies by having a clean internal schema (A.18 provides that schema). In effect, A.18 is where FPF’s internal consistency meets external compatibility, ensuring our measurement semantics can relate to those outside FPF when needed.