G–Human Learning Framework

An Invariant Framework for Human Knowledge Formation

Author: U. Warring
Affiliation: Institute of Physics, University of Freiburg
Version: 0.2.2
Last updated: 2025-12-22
License: CC BY 4.0
Status: Constitutional invariant framework
Domain: Epistemic (human-bound)
Non-status: ❌ Not a handbook · ❌ Not pedagogy · ❌ Not tool-specific


Constitutional Preamble

Boundary and Relationship Statement

This document specifies an invariant framework governing the conditions under which humans may legitimately claim propositional knowledge about any phenomenon.

It does not define, derive, or limit the existence of phenomena themselves.

Human knowledge about a phenomenon and the phenomenon itself are distinct domains.

The framework applies to paradigmatic propositional knowledge — knowledge that involves not merely tracking truth but understanding why one's beliefs are warranted and how they might be wrong. It acknowledges that "knowledge" may be a family-resemblance concept encompassing both interpretive propositional knowledge and non-interpretive reliable registration.


1. Scope and Non-Scope

1.1 In Scope

The framework constrains:

  • Necessary conditions for epistemic legitimacy of human propositional claims

  • Structural constraints on interpretation, framing, and meaning

  • Invariant boundaries independent of tools, pedagogy, or institutions

  • Claims that enter the space of reasons — contexts where one may be asked "Why do you believe this?" and where "because it works" is not sufficient

1.2 Explicitly Out of Scope

The framework does not govern:

  • Teaching strategies or curricula

  • Learning efficiency optimisation

  • Machine learning or AI as epistemic agents (see §4.4)

  • Discovery procedures or workflows

  • Skill acquisition or training design

  • Tacit knowledge and procedural competence (see §4.2)

  • Reliable registration without justificatory access (see §4.1)

1.3 Invariance Conditions

This framework remains valid even if:

  • current tools disappear,

  • pedagogical models change,

  • institutions evolve.

It does not claim validity across:

  • all possible knowledge types,

  • non-human cognitive systems,

  • domains where justificatory access is unnecessary.


2. Invariant 0 — Propositional Warrant Requires Error-Sensitive Interpretation

2.1 Invariant Statement (Qualified)

Propositional knowledge that merits epistemic status — as distinct from mere information registration — paradigmatically requires integration within normative frameworks where claims can be assessed, justified, and revised.

While reliable mechanisms can produce true beliefs, paradigmatic knowledge characteristically involves placement within what Sellars called "the space of reasons" — contexts where error-recognition enables self-correction and rational revision.

This is a claim about epistemic possibility and normative structure, not about pedagogy, performance, or all possible knowledge types.

2.2 Grounding

The invariant draws support from convergent philosophical traditions:

Sellars's Myth of the Given (1956): Raw sensory data cannot serve as epistemic foundations because anything capable of justifying beliefs must already be conceptually structured. Knowing is not empirical description but placement in the logical space of reasons.

Popper's Falsificationism: Error-correction is constitutive of knowledge growth, not merely corrective. The tetradic schema — Problem → Tentative Theory → Error-Elimination → New Problem — positions error as the engine of epistemic progress.

Dewey's Pragmatism: Inquiry emerges from problematic situations where existing habits of thought fail. Knowledge is constructed through active problem-solving, not extracted passively from data.

Theory-Ladenness (Hanson, Kuhn): Observation is never theory-neutral. The same retinal stimulation yields different observations depending on theoretical commitments — what Hanson called "seeing as."

2.3 Neuroscientific Support (With Caveats)

Prediction error signalling provides partial empirical grounding:

Schultz, Dayan & Montague (1997): Dopamine neurons encode reward prediction errors — firing patterns that closely match the error term of temporal difference learning algorithms. This demonstrates that at least some neural learning mechanisms operate through error detection.
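The error-driven character of this learning rule can be made concrete. Below is a minimal sketch of a TD(0) update, the temporal difference algorithm whose error term the Schultz et al. firing patterns were compared against; the toy task, names, and values are illustrative assumptions, not taken from the cited paper.

```python
# Minimal temporal-difference (TD) learning sketch: the prediction error
# delta is the quantity Schultz et al. (1997) found mirrored in dopamine
# firing. Illustrative toy example; all names and values are hypothetical.

gamma = 0.9   # discount factor for future reward
alpha = 0.1   # learning rate
V = {"cue": 0.0, "reward_state": 0.0}   # learned value estimates

def td_update(state, next_state, reward):
    """One TD(0) step: learning is driven entirely by the error term."""
    delta = reward + gamma * V[next_state] - V[state]  # prediction error
    V[state] += alpha * delta                          # error-driven update
    return delta

# Repeated cue -> reward pairings: delta shrinks as the cue comes to
# predict the reward, matching the reported shift in dopamine responses.
for trial in range(50):
    delta = td_update("cue", "reward_state", reward=1.0)

print(f"value of cue after training: {V['cue']:.2f}, last error: {delta:.3f}")
```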

Friston's Free Energy Principle: The brain continuously generates predictions; perception updates them, and action samples the world, so as to minimise "surprise" (variational free energy). This joint process is active inference.
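For readers who want the core move spelled out, here is a minimal sketch of a single-variable predictive-coding update under a toy Gaussian model: belief revision as gradient descent on precision-weighted prediction error. The model, parameters, and names are assumptions for illustration, not Friston's own formulation.

```python
# Toy predictive-coding step under a Gaussian model: the belief mu is
# revised by gradient descent on precision-weighted prediction errors,
# the core move in free-energy / predictive-processing accounts.
# Illustrative sketch only; all parameters are hypothetical.

mu_prior, var_prior = 0.0, 1.0   # prior belief and its variance
var_obs = 0.5                    # sensory noise variance
x = 2.0                          # incoming sensory sample

mu = mu_prior                    # current belief (posterior estimate)
lr = 0.05                        # step size

for _ in range(200):
    eps_obs = (x - mu) / var_obs           # sensory prediction error
    eps_prior = (mu - mu_prior) / var_prior  # deviation from the prior
    mu += lr * (eps_obs - eps_prior)       # descend on the free energy

print(f"belief settles near the precision-weighted average: {mu:.2f}")
```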

Caveat: The predictive processing framework faces unfalsifiability objections. Critics note that it may be a mathematical truism rather than an empirical claim. The framework treats this support as suggestive rather than grounding.

2.4 Anthropically Bounded Scope

Invariant 0 applies only where humans are the meaning-making agents and only to propositional knowledge claims entering the space of reasons.

It makes no claims about:

  • non-human cognitive systems

  • hypothetical autonomous discoverers

  • tacit knowledge and procedural competence

  • reliable but non-justificatory belief formation

2.5 Invariant 0-F (Falsifiability)

Invariant 0 fails if any of the following are demonstrated:

  1. Paradigmatic propositional knowledge reliably emerges without any integration into justificatory frameworks

  2. Claims can be assessed, warranted, and revised without error-recognition capacity

  3. The space of reasons can be entered without interpretive acts

  4. Scientific knowledge accumulates without any error-correction mechanisms


3. Derived Epistemic Constraint — The DEEP Partition

3.1 DEEP as Epistemic Partition (Not a Method)

DEEP names a partition of epistemic responsibility, derived from Invariant 0.

Partition   Epistemic Role                       Delegability
Deposit     World → traces                       Delegable
Extract     Framing, relevance, representation   ❌ Non-delegable
Process     Correlation ordering                 Delegable
Elevate     Claims, non-claims, interpretation   ❌ Non-delegable

DEEP does not describe how learning happens. It constrains what counts as a legitimate propositional knowledge claim.
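For illustration only, the partition can be written down as a data structure that audit tooling might query. The encoding below is a hypothetical sketch; the framework prescribes no schema, and all names are invented here.

```python
# Hypothetical encoding of the DEEP partition as a data structure.
# Names are illustrative; the framework itself mandates no implementation.

from dataclasses import dataclass
from enum import Enum

class Delegability(Enum):
    DELEGABLE = "delegable"          # may be carried out by tools
    NON_DELEGABLE = "non-delegable"  # must remain a human act

@dataclass(frozen=True)
class Partition:
    name: str
    epistemic_role: str
    delegability: Delegability

DEEP = (
    Partition("Deposit", "world -> traces", Delegability.DELEGABLE),
    Partition("Extract", "framing, relevance, representation",
              Delegability.NON_DELEGABLE),
    Partition("Process", "correlation ordering", Delegability.DELEGABLE),
    Partition("Elevate", "claims, non-claims, interpretation",
              Delegability.NON_DELEGABLE),
)

# Query: which stages may never be handed to a tool?
human_only = [p.name for p in DEEP
              if p.delegability is Delegability.NON_DELEGABLE]
print(human_only)  # ['Extract', 'Elevate']
```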

3.2 The Non-Delegability Thesis

Extract and Elevate remain non-delegable because:

Extract determines what counts as relevant — which features of the world enter consideration. This framing decision shapes all downstream processing. Delegation would transfer epistemic responsibility without transferring justificatory access.

Elevate determines what is claimed and what is explicitly not claimed. The distinction between "the model shows X" and "X is true" cannot be delegated without collapsing the human knower's epistemic position into the tool's.

3.3 Tool–Extract Coupling (Clarification)

Tool availability may inform Extract decisions (practical constraints), but cannot replace the human act of framing.

Example: The availability of a mass spectrometer may determine which chemical analysis is feasible, but the decision that chemical composition is relevant to the question at hand remains a human framing act.

Epistemic responsibility for Extract and Elevate remains human.

3.4 Article 1-F (Falsifiability)

DEEP fails if:

  • Extract or Elevate steps are implicit or absent in legitimate knowledge claims

  • Tools function as epistemic agents with independent justificatory standing

  • Claims are legitimately elevated without explicit non-claims

  • Delegation of framing produces equivalent epistemic standing


4. Counterpositions and Boundary Cases

4.1 Reliabilism and Externalist Epistemology

Challenge: Alvin Goldman's reliabilism defines knowledge as true belief produced by reliable cognitive processes. The believer need not have access to what makes their belief justified. On Armstrong's thermometer analogy, a reliable believer registers truth the way a working thermometer registers temperature; knowledge may then amount to nothing more than truth-tracking.

Framework Response: The framework acknowledges that reliabilism identifies a genuine species of epistemic success — what might be called reliable registration. However, it distinguishes this from paradigmatic propositional knowledge that involves justificatory access and rational revisability. The DEEP partition applies to the latter, not the former.

Boundary Marker: Where reliabilist conditions suffice for a knowledge attribution, DEEP constraints do not apply. The framework governs claims entering the space of reasons, not all true beliefs.

4.2 Implicit Learning and Tacit Knowledge

Challenge: Reber's artificial grammar learning demonstrates knowledge acquisition without conscious interpretation. Participants distinguish grammatical from non-grammatical strings without articulating rules. Polanyi's tacit knowledge — "we can know more than we can tell" — identifies vast domains of procedural competence resistant to propositional articulation.

Framework Response: The framework's scope is explicitly propositional knowledge claims. Tacit knowledge and implicit learning produce genuine competence but do not generate propositional claims requiring DEEP partition. The framework does not deny their reality; it excludes them from scope.

Boundary Marker: Where knowledge cannot be propositionally articulated or does not enter justificatory discourse, DEEP constraints do not apply.

4.3 Direct Realism and Ecological Psychology

Challenge: Gibson's ecological psychology argues perception involves "direct pickup" of affordances without representational mediation. Experimental evidence sometimes supports direct perception over information-processing accounts.

Framework Response: Gibson's direct realism primarily concerns perceptual contact with the environment, not propositional knowledge claims. Even if perception is direct, the formulation of claims about what is perceived involves entry into the space of reasons. The framework is agnostic about perceptual mechanisms; it governs the epistemic status of claims derived from perception.

Boundary Marker: Perceptual directness does not entail propositional directness. DEEP applies at the claim level, not the perceptual level.

4.4 Machine Learning and Algorithmic Optimisation

Challenge: Neural networks trained by gradient descent literally respond to error — adjusting parameters to minimise cost functions. If this constitutes error-correction, ML systems satisfy Invariant 0's requirements.
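The mechanical sense of "responding to error" at issue here can be shown in a few lines. The sketch below fits a one-parameter model by gradient descent; the error signal enters the update purely arithmetically. Data and rates are hypothetical.

```python
# Gradient descent on a one-parameter model: the loop "responds to error"
# only in the arithmetic sense at issue in Position A. Illustrative toy;
# the data and learning rate are hypothetical.

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]   # generated by y = 2x; the fitter does not know this

w = 0.0                # model: y_hat = w * x
lr = 0.02

for step in range(500):
    # mean-squared-error gradient, computed from the error signals
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad     # the parameter moves against the error;
                       # nothing is "recognised" as being wrong

print(f"fitted weight: {w:.3f}")   # approaches 2.0
```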

Framework Response: The framework acknowledges genuine ambiguity here. Two positions are defensible:

Position A (Narrow reading): Gradient descent is mechanical optimisation, not error-recognition. Recognition implies normative assessment — understanding that something is an error and why. Algorithms process signals; they do not recognise errors.

Position B (Broad reading): The distinction between "genuine" error-recognition and "mere" mechanical optimisation lacks principled grounding. If so, the framework must either (a) accept ML systems as potential knowers, or (b) identify what additional conditions human error-recognition satisfies.

Current Resolution: The framework adopts Position A provisionally, noting that the human capacity for meta-cognitive reflection on error — asking "why was I wrong?" and revising frameworks accordingly — remains undemonstrated in current ML systems. This boundary may require revision as AI capabilities evolve.

Boundary Marker: Pending further clarification, the framework applies to human epistemic agents. ML systems are treated as tools within the Deposit and Process partitions.

Archivist Note: §4.4 marks a provisional boundary. Any future revision must explicitly state whether ML systems are granted access to Elevate or merely expanded Process capabilities. This boundary is subject to Council review as AI capabilities evolve.

4.5 Hebbian Learning Without Error Signals

Challenge: Hebbian learning — "neurons that fire together wire together" — extracts statistical regularities through pure correlation, without explicit error signals.
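A minimal sketch of the rule, with hypothetical values: the weight update is a product of activities, and no error term appears anywhere in the loop.

```python
# Hebbian weight update: purely correlational, no error signal anywhere.
# Contrast with the error-driven updates in the earlier sketches.
# Toy values; plain Hebb has no normalisation, so the weight simply
# grows with the degree of correlated firing.

import random
random.seed(0)

w = 0.0
lr = 0.01
for _ in range(1000):
    pre = random.choice([0.0, 1.0])                      # presynaptic activity
    post = pre if random.random() < 0.8 else 1.0 - pre   # correlated firing
    w += lr * pre * post   # "neurons that fire together wire together"

print(f"weight after correlated activity: {w:.2f}")
```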

Framework Response: Hebbian learning operates at the sub-personal level, producing neural configurations that may subsequently support propositional knowledge. The framework governs claims, not neural mechanisms. Sub-personal processes that lack error signals may underwrite personal-level knowledge that does involve error-recognition.

Boundary Marker: Sub-personal mechanisms are not subject to DEEP constraints. The framework applies at the level of propositional claims, not neural implementation.


5. Internal Tensions and Resolutions

5.1 The Bootstrapping Problem

Tension: If all meaning requires interpretation, how does the first interpretive act get off the ground without prior meaning?

Resolution: The framework does not claim interpretation creates meaning ex nihilo. It claims that propositional knowledge requires interpretive integration. Pre-propositional capacities — embodied skills, perceptual attunement, linguistic competence acquired through immersion — provide the scaffolding for interpretive acts without themselves being propositional knowledge claims.

5.2 The Self-Application Problem

Tension: If "meaning arises only through interpretation" is claimed as absolutely true, it appears self-refuting — asserting something non-interpretively about interpretation.

Resolution: The framework is itself an interpretive artefact, subject to revision through error-correction. It does not claim exemption from its own constraints. Its authority flows from use and survivability under criticism, not from privileged access to truth. This is consistent with Popper: all knowledge, including meta-epistemological knowledge, is conjectural and fallible.

5.3 The Scope Problem

Tension: Does the framework apply to logical and mathematical truths?

Resolution: Mathematical knowledge involves proof — a form of error-correction in which mistakes are identified and eliminated. However, mathematical truths, once established, do not require ongoing error-response in the way empirical claims do. The framework applies most naturally to empirical propositional knowledge. Mathematical knowledge may constitute a boundary case requiring separate treatment.

5.4 The Structure Problem

Tension: If meaning doesn't exist in structure alone, how does the framework itself convey meaning through its linguistic structure?

Resolution: The framework conveys meaning to interpreters capable of error-recognition — its readers. The structure is necessary but not sufficient; interpretation by competent readers completes the meaning. A reader who cannot recognise error in their own interpretation cannot derive meaning from this framework. This is consistent with Invariant 0: structure alone does not generate meaning; structure interpreted by error-sensitive agents does.


6. Asymmetric Architecture

6.1 One-Way Constraint Principle

This framework constrains human claims about phenomena. It does not constrain phenomena themselves.

No statement within downstream documents may:

  • derive phenomena from learning,

  • make existence contingent on discovery,

  • collapse ontology into epistemology.

6.2 Non-Merge Rule

This framework must remain distinct from:

  • discovery frameworks,

  • physical theories,

  • sociological analyses of science,

  • clock frameworks, measurement models, or standards/procedures,

  • any other Council-governed domain.

Any merger constitutes a category error.

6.3 The Resistance of Reality

The framework presupposes that reality provides normative constraint on interpretation. Error-recognition requires something against which errors can be recognised. This "resistance of reality" — even Kuhn acknowledged we cannot make reality be anything we want — provides the external constraint that prevents interpretation from becoming arbitrary.

The framework does not specify the metaphysics of this constraint. It requires only that something makes some interpretations wrong.


7. Falsifiability and Failure (Invariant Level)

7.1 Structural Failure Modes

The framework fails if:

  • Interpretation is hidden inside tools and this produces legitimate propositional knowledge

  • Meaning is reliably inferred from metrics alone without interpretive acts

  • Success narratives omit failure conditions and remain epistemically legitimate

  • Propositional claims are elevated without any capacity for error-recognition

7.2 Empirical Disconfirmation Conditions

The framework faces serious challenge if:

  • Cognitive science demonstrates genuinely theory-neutral observation at the propositional level

  • Scientists with radically different theoretical commitments consistently converge on identical propositional claims without interpretive mediation

  • AI systems produce genuine propositional knowledge with justificatory standing independent of human interpretation

  • Cumulative scientific progress becomes inexplicable under the framework's constraints

7.3 Meta-Falsification (Article M)

This framework fails if, in practice:

  1. DEEP becomes rhetorical rather than enforced

  2. Anthropomorphic language is tolerated in epistemic contexts ("the model understood")

  3. Epistemic legitimacy is replaced by performance metrics

  4. The counterpositions in §4 are ignored rather than addressed

These are observable social and institutional indicators.

7.4 Pragmatic Failure

The framework fails if:

  • It cannot account for cumulative scientific progress

  • It cannot explain how errors are recognised without contact with mind-independent reality

  • Its application systematically impedes rather than enables legitimate knowledge claims


8. Boundary Stress Test (Acid Test)

A discovery may be ontologically valid while a human claim about it is epistemically invalid.

Acceptance of discovery does not imply acceptance of claims.

Test Case: A neural network identifies a novel protein structure that is subsequently confirmed by crystallography. The structure exists (ontological validity). But the claim "we know the structure because the network identified it" is epistemically problematic unless:

  • Extract: The framing of the problem as protein structure prediction was a human interpretive act

  • Elevate: The network output was interpreted, not merely accepted; non-claims were articulated (e.g., "the network does not explain why this structure is stable")

The Acid Test: If Extract and Elevate cannot be identified, the knowledge claim fails DEEP constraints regardless of the discovery's validity.
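One way to operationalise the Acid Test, offered as a hypothetical sketch rather than a mandated schema: a claim record passes only if it names an explicit Extract framing act, an explicit Elevate interpretation, and at least one non-claim. All field names are invented for illustration.

```python
# Hypothetical audit sketch of the Acid Test: a claim record must name
# explicit Extract and Elevate acts (including non-claims) to pass.
# Field names are illustrative; the framework mandates no schema.

def passes_acid_test(claim: dict) -> bool:
    extract_named = bool(claim.get("extract_framing"))       # human framing act
    elevate_named = bool(claim.get("elevate_interpretation"))  # human reading
    non_claims = bool(claim.get("non_claims"))               # what is NOT claimed
    return extract_named and elevate_named and non_claims

claim = {
    "proposition": "Protein P folds into structure S",
    "extract_framing": "structure prediction chosen as the relevant question",
    "elevate_interpretation": "network output read as a candidate, "
                              "then confirmed by crystallography",
    "non_claims": ["the network does not explain why S is stable"],
}
print(passes_acid_test(claim))  # True only when Extract and Elevate are explicit
```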


9. Violation Mirror (Audit Function)

Any of the following invalidate a propositional knowledge claim unless explicitly corrected:

  • "The model learned / understood / discovered the physics"

  • Extract or Elevate steps omitted or implicit

  • Accuracy presented as epistemic warrant without justificatory structure

  • Bidirectional arrows between learning and existence

  • Performance metrics substituted for epistemic assessment

  • Counterpositions dismissed rather than addressed

This list is sufficient, not exhaustive.
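As a hypothetical illustration of how the first indicator might be screened for, the sketch below flags anthropomorphic phrasings by pattern matching. The patterns are invented examples; string matching can only surface candidates for the human audit this section describes.

```python
# Hypothetical text audit for the Violation Mirror: flag anthropomorphic
# phrasing that attributes epistemic acts to models. The patterns are
# illustrative; a real audit needs human review, not just string matching.

import re

VIOLATION_PATTERNS = [
    r"\bthe model (learned|understood|discovered|knows)\b",
    r"\bthe network (realised|realized|figured out)\b",
]

def flag_violations(text: str) -> list[str]:
    return [m.group(0)
            for pattern in VIOLATION_PATTERNS
            for m in re.finditer(pattern, text, flags=re.IGNORECASE)]

print(flag_violations("The model understood the physics of the system."))
# ['The model understood']
```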


10. Cross-Framework Protocol

10.1 Permitted References

  • Discovery frameworks may note that human claims are constrained here

  • This framework may reference discoveries as worked examples

  • Other Council-governed frameworks may cite this as epistemic constraint on their claims

10.2 Forbidden Couplings

  • No derivation of phenomena from learning principles

  • No learning framework defining existence conditions

  • No merger with physical theories, metrology frameworks, or ontological claims

  • No circular dependence between this framework and frameworks it constrains


11. Status of This Document

This is a map of coastlines, not a handbook for sailors.

The coastline marks where the land of legitimate propositional knowledge meets the sea of claims that fail epistemic constraints. It does not prescribe routes, cargo, or destinations.

Downstream documents (handbooks, curricula, protocols) must:

  • cite this framework,

  • remain falsifiable against it,

  • accept invalidation by it,

  • address the counterpositions when relevant.


Appendix A — Reference Anchors

Epistemology (Grounding)

  • Sellars, W. (1956). "Empiricism and the Philosophy of Mind" — myth of the given, space of reasons

  • Popper, K. (1963). Conjectures and Refutations — error and knowledge growth

  • Dewey, J. (1938). Logic: The Theory of Inquiry — inquiry as response to problems

  • Lakatos, I. (1978). The Methodology of Scientific Research Programmes — sophisticated falsificationism

  • Kuhn, T. (1962). The Structure of Scientific Revolutions — theory-ladenness, paradigms

Epistemology (Counterpositions)

  • Goldman, A. (1979). "What is Justified Belief?" — reliabilism

  • Plantinga, A. (1993). Warrant and Proper Function — proper functionalism

  • Boghossian, P. (2006). Fear of Knowledge — critique of relativism and constructivism

Neuroscience (Supporting, Not Grounding)

  • Schultz, W., Dayan, P., & Montague, P.R. (1997). "A Neural Substrate of Prediction and Reward" — prediction error

  • Friston, K. (2010). "The Free-Energy Principle: A Unified Brain Theory?" — predictive processing

  • Colombo, M. & Wright, C. (2018). "First Principles in the Life Sciences" — FEP critique

Psychology (Supporting)

  • Reber, A.S. (1967). "Implicit Learning of Artificial Grammars" — implicit learning

  • Polanyi, M. (1966). The Tacit Dimension — tacit knowledge

  • Gibson, J.J. (1979). The Ecological Approach to Visual Perception — direct perception

Philosophy of Science

  • Hanson, N.R. (1958). Patterns of Discovery — theory-ladenness of observation

  • van Fraassen, B. (1980). The Scientific Image — constructive empiricism

Archivist Note: References anchor domains where the invariant is already implicit or explicitly contested. They are not prerequisites for using the framework. Counterposition references are included to demonstrate good-faith engagement with challenges.


Appendix B — Glossary

Epistemic Legitimacy: The property of a claim that satisfies necessary conditions for counting as knowledge rather than mere opinion, guess, or information registration.

Error-Sensitive Interpretation: The capacity to recognise that a belief or claim might be wrong and to revise accordingly. Distinct from mere mechanical response to signals.

Space of Reasons: Sellars's term for the normative domain of justification — the context in which claims are assessed for warrant, coherence, and truth, distinct from the causal order of nature.

DEEP Partition: The four-part division of epistemic responsibility: Deposit (world → traces), Extract (framing), Process (correlation), Elevate (claims).

Non-Delegable: Cannot be transferred to tools or automated systems without loss of epistemic standing. Applies to Extract and Elevate.

Propositional Knowledge: Knowledge expressible in claims that can be true or false, assessed for warrant, and revised in light of evidence. Distinct from procedural knowledge and tacit competence.

Reliable Registration: True belief produced by reliable mechanisms, without requiring the believer's access to justification. Identified by reliabilism as sufficient for some knowledge attributions.


Constitutional Lock Statement

The Human Learning Framework specifies necessary conditions for epistemic legitimacy of propositional knowledge claims, not sufficient conditions for learning success or all forms of knowledge.

It acknowledges counterpositions and boundary cases where its constraints may not apply.


Revision History

Version   Date         Summary
0.1.0     2025-12-22   Initial framework
0.2.0     2025-12-22   Stress-tested revision: qualified Invariant 0, added counterpositions (§4), resolved internal tensions (§5), refined falsifiability conditions (§7), added glossary
0.2.2     2025-12-22   Precision hardenings: "paradigmatically" (§2.1), operational gloss for space of reasons (§1.1), Archivist note on ML boundary (§4.4), reader competence condition (§5.4), cross-reference fixes (§1.2), §2 title precision, metrology wording (§6.2)
