Consciousness in the Spotlight
What urgent scientific claims can teach us about evidence, rhetoric, and restraint
Stewardship: U. Warring
Status: Provisional reading · v2.4 · March 2026
Endorsement Marker: Commentary (Sails layer) — not a Ledger classification

A note on this document. This commentary reflects my current readings and may contain misunderstandings. Readers are warmly invited to respond, correct, or disagree; updates will incorporate clarifications as they arise. This is a living discussion document, not a fixed evaluation.
Abstract
Frontier science often arrives in public discussion wrapped in urgency. Sometimes the wrapping comes from above — institutions and headlines amplifying careful research into calls for immediate action. Sometimes it comes from below — compelling personal narratives lending borrowed gravity to unverified claims. In both cases, the same thing happens: the testable content gets buried under emotional framing, and the reader is invited to feel rather than evaluate.
This essay presents a simple method for unwrapping such claims. It consists of three questions — What is the testable claim? What would it take to check? Does the ethical argument depend on the science being settled? — applied as a structured reading tool. The method does not require domain expertise. It requires only the willingness to slow down.
We demonstrate the method on two cases from consciousness research that occupy very different positions in institutional science.
Case A examines a recent high-profile paper in Frontiers in Science (Cleeremans, Seth & Mudrik 2025), which calls for urgent progress in consciousness detection. We separate the empirical core (could current theories generate operational detection criteria?) from its rhetorical amplification, classify the empirical claim as underdetermined, and show that a decisive empirical test is methodologically imaginable with existing measurement techniques, though significant organisational and interpretive barriers remain.
Case B examines the syntergic theory of Jacobo Grinberg-Zylberbaum, a Mexican neurophysiologist who published in peer-reviewed journals before his unexplained disappearance in 1994. His theory posits a pre-physical informational lattice as the substrate of consciousness. We apply the same three questions, find that the theory's central construct currently resists operational definition, and show that the narrative surrounding his disappearance ("missing because of his investigations") functions not merely as emotional amplification but as structural inoculation — a self-sealing framing in which the absence of evidence becomes evidence of suppression, removing the theory from any system of correction.
The two cases feel very different. One carries institutional authority; the other carries the gravity of personal tragedy and mystery. But the method treats them identically — which is precisely the point. The same three questions work on both because the structure of the problem is the same: a testable core buried under rhetorical amplification.
We then introduce a complementary structural approach: the Ordinans framework (Warring 2025), which asks a prior, more tractable question — not "is this system conscious?" but "does this system satisfy the physical conditions under which the question of consciousness becomes well-posed?" The framework is one illustration — not the only possible one — of how structural preconditions can be analysed independently of metaphysical commitments. Applied diagnostically to current AI architectures, the framework finds that large language models fail multiple necessary structural conditions — not as a philosophical judgement, but as a dimensional mismatch between the system's architecture and the conditions under which certain questions become measurable.
The essay closes with an open invitation to the consciousness research community, identifying three specific points of contact where structural approaches and existing adversarial collaboration programmes could reinforce each other.
This is not advocacy. It is an interpretive instrument — designed to help readers, educators, journalists, and researchers distinguish what is being claimed from what is being felt, and to make the debate more coherent by aligning participants on what is actually being argued.
Part One: The Method
The problem with wrapping
Science that touches on consciousness, AI, or the nature of the mind arrives in public discussion already wrapped. The wrapping is not always the same.
Sometimes it comes from above. A careful paper by serious researchers gets translated through press offices and media outlets into a headline about existential risk. The sense of urgency is real — but it comes from the framing, not from the findings.
Sometimes it comes from below. A researcher with genuine credentials proposes a theory that crosses into territory mainstream science does not accept. The researcher meets a dramatic fate. And suddenly the theory acquires a gravity it did not earn through evidence — it earned it through narrative.
In both cases, something structurally identical happens: the testable content disappears behind the emotional framing, and the reader is invited to respond to how the claim feels rather than to what the claim says. The institutional version sounds like authority. The narrative version sounds like mystery. But neither feeling tells you whether the underlying claim is true.
This essay presents a method for dealing with both. It is not sophisticated. It does not require training in neuroscience or philosophy. It consists of three questions, applied in order, to any scientific claim that arrives with urgency attached.
Three questions
1. What is the testable claim? Every call to action rests on some factual belief about the world. What is it? Can you state it in one sentence without using words like "urgent," "crucial," "existential," or "revolutionary"? If you cannot, the claim may not have a testable core — or the core may be buried deeper than the presentation suggests.
2. What would it take to check? Is there a specific observation or experiment that would tell you whether the claim is true? If so, has it been done? If it has been done, what did it find? If it has not been done, is it feasible — and if feasible, why hasn't it been done?
3. Does the ethical or narrative argument depend on the science being settled? Sometimes the ethical concern stands on its own feet, independent of whether the science is mature. Sometimes the narrative weight (a mystery, a disappearance, a warning) is doing work that the evidence cannot. Separating these layers does not dismiss the ethics or the story. It clarifies what is holding the argument up — and whether that foundation can bear the weight placed on it.
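The three questions can also be read as a tiny decision procedure. The sketch below encodes them as a checklist; the field names, urgency-word list, and status labels are illustrative choices of mine, not part of any formal protocol described here.

```python
# A minimal, illustrative encoding of the three-question heuristic.
# Everything here (field names, labels, word list) is a toy convention.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClaimReading:
    testable_claim: Optional[str]        # Q1: one sentence, no urgency words
    check: Optional[str]                 # Q2: the observation that would decide it
    checked: bool                        # Q2: has that check been carried out?
    ethics_needs_settled_science: bool   # Q3: does the ethics lean on the evidence?

URGENCY_WORDS = {"urgent", "crucial", "existential", "revolutionary"}

def classify(r: ClaimReading) -> str:
    """Rough reading status for a claim that arrives wrapped in urgency."""
    if r.testable_claim is None or any(
        w in r.testable_claim.lower() for w in URGENCY_WORDS
    ):
        return "no clear testable core"
    if r.check is None:
        return "core stated, no check specified"
    if not r.checked:
        return "checkable but not yet checked"
    return "checked: read the evidence"

# Example: a hypothetical claim that names its check but has not run it.
status = classify(ClaimReading(
    "Paired brains show correlated EEG at a distance",
    "pre-registered multi-lab replication", False, False,
))
```

The point of the encoding is not automation; it is that each question has a definite answer slot, so a reader can notice which slot a given presentation leaves empty.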
Where this comes from
These three questions are not new. They are a lightweight reading heuristic — a portable version of ideas that run deep in the philosophy of science. Karl Popper's insistence on falsifiability is behind the first question: if a claim cannot specify what would count against it, it is not playing the game of science. Imre Lakatos's distinction between a research programme's hard core and its protective belt is behind the second: the question "has it been checked?" is really asking whether the theory has been tested at its core or only defended at its periphery. And Daniel Kahneman's work on narrative coherence and bias informs the caution behind the third: compelling stories can do argumentative work that evidence has not yet done, and noticing when that substitution is happening is harder than it sounds.
The method presented here is deliberately simpler than the formal protocol from which it descends. The Open-Science Harbour's Claim Analysis Ledger uses a structured schema — intake modes, constraint rules, domain specifications, mandatory discriminant conditions for underdetermined classifications — to produce auditable, reproducible claim assessments. The three-question method is a prose heuristic derived from that protocol, not a substitute for it. It captures the spirit of the Ledger's logic in a form that does not require the Ledger's machinery. Readers interested in the formal version will find it in the Breakwater section of the Harbour documentation.
The symmetry requirement
These questions are not a debunking tool. They are a reading tool. They work just as well on claims you find plausible as on claims you find absurd — and that symmetry is the point. The temptation to apply critical reading only to claims you already distrust is the single most common failure mode in science literacy. A method that only catches nonsense you were already suspicious of is not a method. It is confirmation bias with extra steps.
We now apply these three questions to two cases. They occupy very different positions in institutional science. The method does not care.
Part Two: Case A — The Institutional Case
The headline
In early 2025, a team of prominent researchers published a paper in Frontiers in Science calling for faster progress in consciousness research. The argument, widely presented in media coverage as a call for urgent action, went roughly like this:
Artificial intelligence is advancing rapidly. Brain organoids are being grown in labs. Patients in comas may be conscious but unable to tell us. If we don't develop reliable ways to detect consciousness soon, we risk creating suffering we can't see, ignoring patients who are aware, and building machines that might experience something we never intended.
The lead author, Prof. Axel Cleeremans, warned of potential "existential risk" from accidentally creating conscious systems. His co-authors, Prof. Anil Seth and Prof. Liad Mudrik, called for adversarial collaborations — structured disagreements between rival theories — to accelerate progress.
The authors themselves are explicit about current limitations and the need for rigorous testing. The escalated language — "existential risk," "we must act now" — tends to sharpen when these ideas are translated into headlines and institutional communications. That translation process is itself worth paying attention to.
Question 1: What is the testable claim?
When you set aside the urgency and the ethical framing, the scientific core comes down to something like this:
Adversarial collaboration between current theories of consciousness could generate operational criteria — tests that different theories would broadly agree on — for detecting consciousness in systems that cannot tell us whether they are conscious.
Note the modal verb: could, not has. The paper's argument is that the programme of adversarial collaboration is capable of producing convergent criteria, not that such criteria already exist. This is an important distinction. The claim is about what a research programme could deliver if pursued vigorously — a prediction about scientific potential, not a report of scientific achievement.
That is still a meaningful claim. It says something about the state of the field: that the theories are mature enough, and the experimental tools precise enough, for a structured comparison to produce actionable results. Either the leading theories can be brought into productive empirical contact, or they cannot yet.
The two theories in brief
The paper draws on two leading theories. Understanding what they actually say helps clarify whether convergence is likely.
Global Workspace Theory (GWT), originally proposed by Bernard Baars (1988; see also Dehaene, Lau & Kouider 2017 for a contemporary review), says consciousness is what happens when information gets broadcast widely across the brain. Think of it like a stage in a theatre: most of what the brain does happens backstage, out of awareness. Consciousness is what makes it onto the stage — the information that gets shared with all the different brain systems at once.
Integrated Information Theory (IIT), developed by Giulio Tononi (2004; Tononi et al. 2016), takes a fundamentally different approach. It says consciousness is not about what information does (getting shared) but about how a system is (how tightly its parts are woven together). On this view, any system with enough internal integration — enough of a property IIT calls Φ (phi) — is conscious to some degree.
These are not two versions of the same idea. They disagree about what consciousness fundamentally is: GWT treats it as a function (something the brain does); IIT treats it as a structure (something a system has). That disagreement matters for what happens when you try to build a shared test.
Question 2: What would it take to check?
If GWT is right, you would look for signs of widespread neural broadcasting. If IIT is right, you would measure the degree of integration — a very different kind of measurement that, in principle, does not even require a brain.
To test whether the theories can be brought into productive contact, you would need a study where both kinds of measurements are applied to the same subjects — ideally subjects who cannot simply tell you whether they are conscious. You would need multiple labs, shared protocols, and advance agreement on what counts as convergence versus divergence.
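What "advance agreement on what counts as convergence versus divergence" could look like can be made concrete. One pre-registerable statistic is chance-corrected agreement between the two theories' per-subject verdicts. The sketch below is illustrative only: the binary verdict encoding and the toy data are my assumptions, not part of any actual collaboration protocol.

```python
def cohens_kappa(v1, v2):
    """Chance-corrected agreement (Cohen's kappa) between two lists of
    binary verdicts (1 = 'conscious by this theory's criterion')."""
    n = len(v1)
    po = sum(a == b for a, b in zip(v1, v2)) / n   # observed agreement
    p1, p2 = sum(v1) / n, sum(v2) / n              # marginal 'yes' rates
    pe = p1 * p2 + (1 - p1) * (1 - p2)             # agreement expected by chance
    return 1.0 if pe == 1.0 else (po - pe) / (1 - pe)

# Hypothetical verdicts from a GWT-style and an IIT-style detector
# across six non-communicating test systems (toy data):
gwt_verdicts = [1, 1, 0, 0, 1, 0]
iit_verdicts = [1, 0, 0, 0, 1, 1]
kappa = cohens_kappa(gwt_verdicts, iit_verdicts)
```

A pre-registration could then state, in advance, which kappa range counts as convergence and which as divergence, removing part of the post-hoc interpretive latitude the essay describes.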
A major effort was made. Between 2019 and 2023, the Templeton Foundation funded a large-scale adversarial collaboration — a structured programme designed to put the theories into direct empirical contact using pre-registered experiments.
The results, reported by the Cogitate Consortium (Melloni et al. 2023), were instructive. The experiments challenged strong predictions from both theories, narrowed the viable parameter space for each, and produced meaningful constraints. That is genuine progress. What the programme did not produce was a clean convergence — a set of results that both theories would agree constitutes evidence of consciousness in the same systems. Nor did it produce a clean divergence. The picture was partial and complicated, as first-generation adversarial results in a young field tend to be.
The claim — that adversarial collaboration could generate operational detection criteria — is currently underdetermined. That means:
It does not mean the programme has failed. The first round of results constrained both theories and demonstrated that the adversarial method works as a scientific tool.
It does not mean the programme has succeeded. The key question — do the theories converge on detection criteria? — remains open.
It does mean the evidence base is currently compatible with multiple, competing interpretations of where the field is heading.
And the crucial point: the next decisive study is methodologically imaginable with current measurement techniques. The remaining barriers are at least as much organisational and interpretive as they are technical.
Question 3: Does the ethical argument depend on the science being settled?
Here is where the analysis gets interesting. The ethical concern — should we be cautious about creating potentially conscious systems? — does not necessarily require the scientific question to be resolved.
Precautionary arguments in ethics and policy often do not require certainty. They depend on plausible risk under uncertainty. We apply this logic in biosafety, environmental regulation, and animal welfare. We do not wait for proof that a new chemical causes cancer before restricting it; we act when the risk is plausible and the potential harm is serious enough.
The same logic could apply to consciousness. The possibility that certain systems might be conscious — combined with the severity of the harm if we are wrong — could justify precautionary measures without mature detection science.
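The structure of that precautionary argument can be made explicit with a toy expected-harm calculation. The numbers below are placeholders chosen purely for illustration; nothing in either case supplies real probabilities or severities.

```python
# Toy formalisation of precaution under uncertainty.
# p is a hypothetical probability that the harm scenario is real;
# harm is its hypothetical severity on an arbitrary scale.
def expected_harm(p: float, harm: float) -> float:
    return p * harm

# A likely mild harm and an unlikely severe one can carry the same
# expected weight; precaution responds to the product, not to p alone.
mild_but_likely = expected_harm(0.5, 10.0)        # both of these
severe_but_unlikely = expected_harm(0.01, 500.0)  # land near 5.0
```

Note where the essay's Question 3 bites: choosing the action threshold is a value judgement, while asserting particular values of p and harm is an empirical claim. The two require different kinds of justification.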
A note of caution about precaution itself, however. The statement "uncertainty is a reason for care" can mean two very different things. It can be a value judgement — a stance about how much caution is appropriate given what we do not know. That is a legitimate ethical position, and it does not require scientific backing; it requires moral reasoning. But it can also be presented as a disguised empirical claim — an assertion about the probability and severity of harm that sounds like a scientific assessment but has not been subjected to the same evidential scrutiny as the scientific claims it accompanies. When institutional actors invoke the precautionary principle, it is worth asking which version they mean. Both are legitimate; they require different kinds of justification.
So the ethical argument can stand on its own, as a value judgement about appropriate caution. But anyone claiming "the science clearly shows we must act" is overstating what the science currently delivers. The honest version — "we don't yet know, and that uncertainty itself is a reason for care" — is more robust than the inflated one.
Case A summary: A genuine scientific programme, still in progress. The testable claim is underdetermined. The ethical concerns are valid but stand on precautionary grounds, not on settled science. The urgency wrapping borrowed confidence from the institutional authority of the researchers, not from the maturity of the evidence.
Part Three: Case B — The Heterodox Case
A different kind of wrapping
Consider a very different entry point into consciousness research. A student sends a message: "Sorry for sending it too late, here is the theory that I told you the other day. Jacobo Grinberg is missing since 30 years ago because of his investigations."
Attached is a PDF of La Teoría Sintérgica — the syntergic theory of consciousness, by Jacobo Grinberg-Zylberbaum.
This is not a fringe pamphlet. Grinberg was a Mexican neurophysiologist and psychologist who earned his PhD at the Brain Research Institute in New York, founded laboratories at UNAM and the Universidad Anáhuac, inaugurated the National Institute for the Study of Consciousness (INPEC) in 1987, and published in peer-reviewed journals including the Journal of Social and Biological Structures (1981) and Physics Essays (1994). He wrote over fifty books. He was a serious researcher who worked at the boundary of accepted science — and sometimes crossed it.
On 8 December 1994, Grinberg disappeared. His family initially was not alarmed — he was known for spontaneous travel and periods of unreachability. But he never returned. Over thirty years later, the case remains unsolved. The tragedy of his disappearance deserves investigation in its own right — as a human event, an unresolved loss, and a matter of justice — independent of the scientific evaluation of his ideas.
The student's framing — "missing because of his investigations" — wraps the scientific content in exactly the kind of narrative we need to learn to recognise. It replaces a testable question (is the syntergic theory correct?) with an unfalsifiable causal attribution (his disappearance proves the theory was dangerous to powerful interests). Let us unwrap it using the same three questions.
Question 1: What is the testable claim?
Grinberg's syntergic theory proposes that consciousness arises from the interaction between a neuronal field generated by the brain and a pre-physical informational structure he calls the lattice. The lattice is described as a continuous energy-information matrix permeating all of space-time, containing the total information of the universe at every point. Perception, on this account, is not the reception of sensory data from an external world but a co-creation: the brain's neuronal field produces micro-distortions in the lattice, and the resulting interference pattern is what we experience as consciousness.
Stripped to its testable core:
The brain generates a measurable field that interacts with a universal informational substrate (the lattice), producing consciousness through specific interference patterns. This interaction should produce detectable correlations between spatially separated brains (the "transferred potential").
The transferred-potential claim is the most operationally precise part of the theory. Grinberg's 1994 paper in Physics Essays reported EEG correlations between paired subjects — one stimulated, one not — at a distance, which he interpreted as evidence for lattice-mediated brain-to-brain coupling.
Question 2: What would it take to check?
This is where the analysis becomes discriminating.
The transferred-potential experiments are, in principle, replicable. EEG is standard equipment. The protocols Grinberg described — two subjects in Faraday cages, one receiving stimuli, the other's EEG monitored for correlated responses — can be reproduced. The question is whether independent laboratories, using proper controls and pre-registered analysis plans, find the same effect.
The record here is thin. Some groups have reported suggestive results; robust, pre-registered, multi-lab replications with adequate statistical power have not established the effect. This does not mean the effect does not exist — absence of evidence is not evidence of absence, especially for small effects tested by few groups. But it means the claim sits in a familiar category: reported but not independently established.
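What a properly controlled replication analysis would minimally involve can be sketched. The standard tool is a permutation test: compare the observed correlation between the paired subjects' signals against a null distribution built from shuffled (surrogate) pairings. The code below uses synthetic stand-in data, not Grinberg's recordings, and is a sketch of the statistical logic, not a full pre-registered protocol.

```python
import math
import random
import statistics

def corr(x, y):
    """Pearson correlation, pure Python."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sx = math.sqrt(sum((xi - mx) ** 2 for xi in x))
    sy = math.sqrt(sum((yi - my) ** 2 for yi in y))
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (sx * sy)

def permutation_p(a, b, n_perm=500, seed=0):
    """p-value for |corr(a, b)| under a null of no pairing,
    estimated by shuffling b across trials (surrogate pairings)."""
    rng = random.Random(seed)
    observed = abs(corr(a, b))
    b = list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(b)
        if abs(corr(a, b)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# Synthetic stand-ins for trial-averaged responses of two subjects;
# the second is constructed to be genuinely coupled to the first.
subject_a = [math.sin(0.3 * i) for i in range(80)]
paired = [2.0 * x + 0.1 * math.sin(2.1 * i) for i, x in enumerate(subject_a)]
p = permutation_p(subject_a, paired)
```

The pre-registration step matters as much as the statistic: the pairing scheme, the number of permutations, and the significance threshold must all be fixed before the data are seen, or the analysis inherits exactly the interpretive latitude it was meant to remove.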
The lattice concept itself presents a deeper problem. Unlike GWT's "global broadcast" or IIT's "integrated information" — which, whatever their limitations, can be operationally defined in terms of measurable neural quantities — the lattice is characterised in terms that do not currently map onto independent measurements. What instrument could measure "distortions of the pre-physical informational lattice"? What observation would distinguish a universe containing the lattice from one that does not? If no such observation can currently be specified, the lattice is not functioning as a scientific hypothesis in the Popperian sense — it is functioning as a metaphysical commitment.
An important qualification: this does not mean the lattice concept is permanently untestable. The history of physics offers examples of ideas that began as metaphysical commitments and later became empirically accessible — atomism before spectroscopy, spacetime curvature before gravitational wave detection, quarks before deep inelastic scattering. A concept that cannot be tested today might become testable tomorrow, if new instrumentation or new theoretical connections emerge. The assessment here is about the current status of the lattice concept, not its ultimate fate. What we can say is that right now, the lattice does not produce a falsifiable prediction that distinguishes it from alternatives — and a theory that does not yet make distinguishable predictions cannot yet be confirmed or refuted by experiment.
So Grinberg's work presents two distinct problems that require different responses. His experimental claims (transferred potential) are testable but not yet established. His theoretical framework (the lattice) is not currently testable. The experiments deserve replication — that is the most respectful thing science can do with a reported finding. The theory deserves the honest assessment that it has not yet crossed the threshold from metaphysical vision to empirical programme.
Question 3: Does the narrative depend on the science?
Here is where the wrapping does its most consequential work.
"Jacobo Grinberg is missing since 30 years ago because of his investigations."
This sentence contains two claims. The first — that Grinberg has been missing for over thirty years — is a verifiable fact. He disappeared on 8 December 1994 and has not been found.
The second — that his disappearance was caused by his investigations — is a causal attribution. And it is structured in a way that goes beyond ordinary unfalsifiability. It is an instance of what epistemologists call a self-sealing belief system (cf. Popper 1963; Sunstein & Vermeule 2009): a framing in which no possible evidence could count against the claim.
Consider: no evidence could confirm the claim without access to information that, by the logic of the claim itself, has been suppressed. No evidence could refute it, because any absence of evidence can be attributed to the same suppression. The claim is structured so that it cannot lose.
The self-sealing mechanism runs deeper than mere unfalsifiability. The narrative does not simply resist testing — it converts scepticism into supporting evidence. If you question the theory, the narrative reframes your scepticism as naïveté: "Of course they want you to doubt it — that's the point." If you point out that the experiments have not been replicated, the narrative absorbs the objection: "Of course they haven't — who would dare?" If you note that the lattice lacks operational definition, the narrative treats this as confirmation: "The framework was suppressed before it could be completed."
This is what makes Case B structurally different from Case A. In Case A, the wrapping is amplification: the headline says "existential risk" where the paper says "underdetermined." That is a distortion of scale, and it can be corrected by reading more carefully. In Case B, the wrapping is inoculation: the narrative immunises the theory against refutation by treating every challenge as evidence of the very forces that justify the theory's importance. The absence of replication becomes evidence of suppression. The lack of operational definitions becomes evidence that the work was too dangerous to complete. The mystery of the disappearance becomes the theory's most powerful credential.
This structure is not unique to Grinberg's story. It appears in many domains, from conspiracy theories to certain forms of institutional defensiveness. What makes it worth analysing carefully here is that it operates on genuine tragedy. Grinberg really did disappear. The loss really is unresolved. The emotional weight is real. And that reality makes the self-sealing structure harder to see and harder to name, because naming it can feel like dismissing the suffering. It is not. It is distinguishing the suffering from the argument — which is the only way to honour both.
Case B summary: A researcher with genuine credentials proposed a theory with both testable components (transferred-potential experiments) and currently untestable components (the lattice as metaphysical substrate). The testable components remain unestablished by independent replication. The theory's current cultural persistence rests significantly on a narrative that functions not merely as emotional amplification but as structural inoculation — a self-sealing framing that converts scepticism, absence of evidence, and lack of replication into further support for the theory's importance.
Part Four: What the Two Cases Share
Step back and notice what happened. We applied the same three questions to two cases that feel completely different.
Case A carries the weight of institutional authority — Frontiers in Science, named professors, a Templeton-funded adversarial collaboration. It feels credible. Case B carries the weight of personal tragedy — a disappeared researcher, a suppressed genius, a mystery that thirty years have not resolved. It feels compelling.
But the analytical structure is the same in both:
Testable core. Case A: Could adversarial collaboration generate convergent detection criteria? Case B: Does the transferred potential replicate? Is the lattice measurable?

Current status. Case A: underdetermined (Cogitate results partial, programme ongoing). Case B: experimental claims unreplicated; theoretical framework currently untestable.

Emotional wrapping. Case A: "existential risk" — borrows urgency from AI anxiety. Case B: "missing because of his investigations" — borrows gravity from personal tragedy.

Function of wrapping. Case A: amplification — replaces "we don't know yet" with "we must act now". Case B: inoculation — replaces "this hasn't been confirmed" with "they didn't want it confirmed".

Wrapping correctable? Case A: yes — the urgency can be assessed against the evidence timeline. Case B: no — the conspiratorial framing is self-sealing by construction.
The most important row is the last one. In Case A, you can argue about whether the urgency is proportionate to the evidence — and the researchers themselves would likely welcome that argument. The institutional system, for all its distortions, permits correction: headline claims can be checked against the paper, the paper against the data, the data against replication.
In Case B, the narrative structure forecloses argument. It is not that the claim is necessarily wrong. It is that it is structured to be uncorrectable. And uncorrectable claims, whatever their emotional power, do not advance understanding. They arrest it.
This is the core skill the method is designed to build: the ability to notice when emotional framing is doing the work that evidence should be doing — regardless of whether the framing comes from a prestigious journal or a compelling story.
Part Five: A Different Kind of Question
From reading claims to reading a field
So far the method has been applied to individual claims: a paper, a theory. But what happens when you apply it to an entire field?
Ask the three questions of consciousness research as a whole: What is the testable claim? That science can identify which systems are conscious. What would it take to check? Convergent detection criteria applied across non-communicating systems. Does the ethical argument depend on it? Not entirely — precaution is legitimate under uncertainty.
The answer to the second question reveals something important. To check whether detection criteria converge, you need to know which systems to test them on. But which systems are candidates? The theories disagree. GWT would have you look at systems with global information broadcast. IIT would have you look at systems with high integration. The syntergic theory would have you look at any system interacting with the lattice — which, since the lattice is everywhere, means everything.
The disagreement about what to detect is entangled with a disagreement about where to look. And that entanglement is not just a complication — it is a solvable structural problem. Before you can test whether two detection methods agree, you need a prior filter: a way to narrow the space of candidate systems that the detectors are being asked to agree on.
That structural need — not any particular theory of consciousness — is what motivates asking a different kind of question. The three-question method helps us read individual claims. But it also reveals something about the architecture of the field itself: if the theories cannot agree on where to look, a prior filter is needed before any detector can be meaningfully applied.
The Ordinans framework as one example
Instead of asking "is this system conscious?" — which requires solving a problem that philosophy and neuroscience have not yet solved — you can ask: "does this system have the structural conditions to maintain itself through its own internal dynamics?" That is a physics question. It is measurable. And it turns out to be surprisingly discriminating.
One version of this approach is the Ordinans framework, an emerging research programme under development at the University of Freiburg's Institute of Physics. The framework identifies five necessary conditions that a system must satisfy before it can even be a candidate for what it calls sui-ordinatio — the capacity to sustain its own large-scale organisation through internal dynamics rather than external support:
1. Hybrid structure (continuous + discrete). In biological systems: membrane potentials + action potentials. In current LLMs: continuous and discrete elements are present, but regime transitions are externally triggered, not endogenously generated.

2. Defined spatial extent. In biological systems: anatomical boundaries. In current LLMs: arguably satisfied, but operationally unclear.

3. Intrinsic timescale. In biological systems: neural oscillations, refractory periods. In current LLMs: no endogenous clock; computation is externally triggered.

4. Non-trivial persistent topology. In biological systems: synaptic connectivity evolves slowly. In current LLMs: no self-maintained adaptive topology during deployment; weights are fixed, and internal state does not persist across sessions.

5. Thermodynamic openness. In biological systems: continuous metabolic exchange. In current LLMs: effectively closed during operation.
These are not philosophical objections. They are observations about dimensional mismatch: the system's architecture does not possess the structural features under which certain questions become measurable. A living cell satisfies all five conditions. Your brain satisfies all five. A large language model fails multiple conditions — and the specific failures tell you something about the gap between capability and self-maintenance.
The framework is offered here as an illustration of what a structural filter could look like, not as the only possible version. Other researchers might identify different preconditions, or argue that some of these five are unnecessary. What matters is the principle: that a tractable, measurable prior question can narrow the field of candidates before the harder — and currently unresolved — question of convergent detection is brought to bear.
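Because the filter is conjunctive (all five conditions are necessary), its logic can be made explicit in a few lines. The following is a minimal sketch of my own, not an implementation of the Ordinans framework; the condition names follow the table above, and the boolean profiles are illustrative paraphrases of the essay's discussion, not measurements.

```python
# Hypothetical sketch of a conjunctive structural pre-filter.
# The five condition names follow the essay's table; the profiles
# below are illustrative assumptions, not measured values.

CONDITIONS = (
    "hybrid_structure",
    "defined_spatial_extent",
    "intrinsic_timescale",
    "persistent_topology",
    "thermodynamic_openness",
)

def candidate_status(profile):
    """Return (is_candidate, failed_conditions) for a system profile.

    The filter is conjunctive: every precondition is necessary,
    so a single failure removes the system from the candidate pool.
    """
    failed = [c for c in CONDITIONS if not profile.get(c, False)]
    return (not failed, failed)

# Illustrative profiles from the essay's discussion.
brain = {c: True for c in CONDITIONS}
llm = {
    "hybrid_structure": False,        # regime transitions externally triggered
    "defined_spatial_extent": True,   # arguably satisfied
    "intrinsic_timescale": False,     # no endogenous clock
    "persistent_topology": False,     # weights fixed during deployment
    "thermodynamic_openness": False,  # effectively closed during operation
}

print(candidate_status(brain))
print(candidate_status(llm))
```

Note that the output is not "conscious / not conscious" but "candidate / not a candidate, and here is why": the list of failed conditions is exactly the "specific failures" the essay says carry information.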
How this approach differs from both mainstream theories and the syntergic theory
The contrast with Cases A and B is instructive. GWT and IIT commit to a theory of what consciousness is, then derive detection criteria. A structural filter makes no such commitment; it asks only whether a system satisfies preconditions under which diagnostic questions become well-posed. It is a filter, not a detector.
The syntergic theory makes a stronger commitment in a different direction: it posits the lattice as the medium through which consciousness occurs — a metaphysical claim about the fundamental structure of reality. A structural filter explicitly refuses this move. It operates under what the Open-Science Harbour calls map-only epistemology: agnostic about the territory, making no metaphysical claims, verifying only what is observable. The lattice is a territory claim. The five substrate conditions are map claims. The lattice might exist; map-only epistemology simply declines to build on the claim until it admits operational testing.
The practical consequence is immediate. If you adopt the syntergic theory, "is this AI conscious?" becomes "does this AI's neuronal field interact with the lattice?" — unanswerable today. If you adopt a structural filter, it becomes "does this system satisfy measurable preconditions?" — answerable today. The answer for current LLMs is no. That does not settle the consciousness question. It tells you that the structural preconditions under which the question becomes measurable are not met.
Part Six: How to Read the Next Headline
The gap between confident presentation and open scientific questions is a feature of many frontier disciplines, not just consciousness research. The pattern recurs: a genuine question gets wrapped — in urgency, in narrative, in institutional authority — and presented to the public as something that demands an immediate response. The wrapping varies. The structure does not.
The three questions work on any case:
What is the testable claim? State it in one sentence. If the sentence requires emotional language to make sense, the testable core may be missing.
What would it take to check? Identify the observation, experiment, or replication that would settle the question. If no such check can be specified, the claim may be unfalsifiable — which does not make it wrong, but does make it immune to correction, and that immunity should make you cautious.
Does the ethical or narrative argument depend on the science being settled? If it does, and the science is not settled, the argument has a problem. If it does not — if the ethical concern stands on precautionary grounds or the narrative is acknowledged as separate from the evidence — then the argument is honest about its foundations.
One additional question, demonstrated by the comparison of Cases A and B:
Is the wrapping correctable? Institutional amplification, for all its distortions, typically operates within systems that permit correction: peer review, replication, public data. Narrative inoculation — the disappeared genius, the suppressed truth — often does not. When a claim is wrapped in a story that makes questioning feel like complicity, that is not evidence of the claim's importance. It is evidence that the wrapping is doing work the evidence cannot.
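The four questions can be read as a checklist that either raises caution flags or comes back clean. The sketch below is my paraphrase only; the field names and flag wording are hypothetical conventions, not the formal Ledger schema mentioned later in this essay.

```python
from dataclasses import dataclass

# Hypothetical rendering of the essay's four reading questions as a
# checklist. Field names and flag wording are illustrative paraphrases.

@dataclass
class ClaimReading:
    testable_claim: str          # Q1: one-sentence testable core ("" if none)
    specifiable_check: str       # Q2: observation/replication that would settle it ("" if none)
    ethics_needs_settled_science: bool  # Q3
    wrapping_correctable: bool          # Q4

    def flags(self):
        """Return the caution flags this reading raises."""
        out = []
        if not self.testable_claim:
            out.append("no testable core stated")
        if not self.specifiable_check:
            out.append("no specifiable check: claim may be unfalsifiable")
        if self.ethics_needs_settled_science:
            out.append("ethical argument assumes settled science")
        if not self.wrapping_correctable:
            out.append("wrapping resists correction")
        return out

# Illustrative reading of a hypothetical wrapped claim.
reading = ClaimReading(
    testable_claim="Theory X predicts detectable signal Y under condition Z",
    specifiable_check="",
    ethics_needs_settled_science=True,
    wrapping_correctable=False,
)
for flag in reading.flags():
    print(flag)
```

A flagged reading does not mean the claim is false, only that the wrapping is carrying weight the evidence is not; an empty flag list means the argument is at least honest about its foundations.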
A worked example: what different outcomes would mean
To make the method concrete, here is what three outcomes of a decisive study would mean for Case A.
If GWT and IIT detection criteria converge across a large, pre-registered study, the claim is compatible with the evidence. That would not prove either theory correct about the nature of consciousness, but it would mean we have a practical detection tool. That would be a genuine breakthrough.
If the criteria diverge cleanly — GWT says "conscious," IIT says "not conscious," consistently — then any detection protocol based on one theory would disagree with the other. Choosing which test to use would be a philosophical decision, not a scientific one.
If the results show partial convergence — agreement in some domains but disagreement in others — then the answer depends on which systems you are asking about. This is the most likely outcome, and it would mean that the headline's sweeping framing obscures the real picture.
For Case B, the path forward is simpler: the transferred-potential experiments need independent, pre-registered replication with adequate power and proper controls. If the effect replicates, it would be one of the most important findings in neuroscience — regardless of whether the lattice theory survives as its explanation. If it does not replicate, the experimental claim joins the large category of reported-but-unreproduced effects, and the theory loses its only current empirical anchor.
Either outcome would be informative. What is not informative is the narrative of the disappearance substituting for the replication that has not been done.
Part Seven: An Invitation
This essay began as an exercise in careful reading. It ends as an invitation — extended in two directions.
To the consciousness research community: The work by Cleeremans, Seth, Mudrik and their colleagues represents something genuinely valuable: a sustained, public commitment to making consciousness science empirically accountable. The adversarial collaboration model they champion deserves support.
What this analysis suggests is that the programme might benefit from an additional structural layer — a prior question that could be answered before the harder question of consciousness detection is resolved. The Ordinans framework offers one version of that prior question, and three specific points of contact seem worth exploring:
First, the convergence question could be sharpened by restricting it to systems satisfying structural preconditions. If partial convergence exists within that class, it would already be practically useful — even if the theories diverge on systems outside it.
Second, correlation-length signatures (regime transitions between local, scale-free, and global dynamics) could provide interpretation-agnostic observables. Both GWT and IIT make predictions about neural dynamics, but those predictions are theory-laden — what counts as evidence depends on which theory you adopt. Correlation-length regime transitions are measurable without committing to either theory's interpretation of what the transitions mean. This does not make them assumption-free: any measurement requires shared physical commitments (spatial embedding, timescale separation, boundary conditions). But it does mean that the measurement procedure can be agreed upon before the interpretive disagreement is settled — which is exactly what an adversarial collaboration needs.
Third, the falsifiability conditions are complementary. A structural filter specifies what would falsify its precondition claims. The adversarial collaboration programme specifies what would falsify GWT and IIT predictions. Combining these creates a richer space of testable outcomes: a system could satisfy structural conditions but show no consciousness signatures (informative for the filter), or show consciousness signatures without satisfying structural conditions (informative for both).
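To make "interpretation-agnostic observable" concrete, here is one way a correlation length could be estimated from spatially sampled activity. This is a hedged sketch under stated assumptions of my own, the 1/e threshold and the one-dimensional, equally spaced sampling among them, and not a procedure specified by any of the frameworks discussed; its only purpose is to show that the measurement can be defined before the interpretive disagreement is settled.

```python
import math
import random

def correlation_length(signal, spacing=1.0):
    """Distance at which spatial autocorrelation first drops below 1/e.

    signal: activity sampled at equally spaced sites (one snapshot).
    spacing: physical distance between neighbouring sites.
    The 1/e threshold is a common convention, assumed here for illustration.
    """
    n = len(signal)
    mean = sum(signal) / n
    x = [v - mean for v in signal]
    var = sum(v * v for v in x)
    threshold = 1.0 / math.e
    for lag in range(1, n):
        acf = sum(x[i] * x[i + lag] for i in range(n - lag)) / var
        if acf < threshold:
            return lag * spacing
    return n * spacing  # correlated across the whole recording

# Two synthetic regimes: uncorrelated noise (local) vs a system-wide mode.
random.seed(0)
local_activity = [random.gauss(0.0, 1.0) for _ in range(1000)]
global_mode = [math.sin(2.0 * math.pi * i / 1000.0) for i in range(1000)]

print(correlation_length(local_activity))  # short: local regime
print(correlation_length(global_mode))     # long: system-wide regime
```

Two theorists who disagree about what a global regime *means* can still agree on this number, which is the property an adversarial collaboration needs from its observables.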
To anyone who reads science news, receives a PDF from a friend, or encounters a theory wrapped in a compelling story: The method demonstrated here is not a shield against being wrong. It is a practice for noticing when you are being asked to feel instead of evaluate — and for choosing to do both, in the right order.
The hardest part is applying the method symmetrically: to claims you find plausible and claims you find absurd, to theories backed by institutions and theories backed by stories, to researchers with prizes and researchers who have disappeared. The three questions do not change. The emotional pull does. Learning to notice that pull, without being governed by it, is the core skill.
What This Exercise Demonstrates
We started with two very different entry points: a headline about existential risk from a leading journal, and a message from a student about a disappeared Mexican scientist. We applied the same three questions to both. The results were different in detail — underdetermined in one case, unestablished and partially untestable in the other — but the analytical structure was identical.
Nothing in the analysis required expertise in neuroscience, philosophy, or statistics. It required only the willingness to slow down and ask: what is actually being claimed, and what would it take to check?
That method works on any scientific claim, in any field, at any level of complexity. The details change; the structure does not.
One important caveat: this essay has adopted a particular perspective — that scientific claims should be evaluated against evidence before they are used to justify action. That is a defensible position, but it is not the only one. Others would argue that in the face of serious potential harms, the burden of proof runs the other way. Both are legitimate normative stances. What neither can honestly claim is that current consciousness science has settled the matter. It has not. The work continues — and it continues best when different perspectives, methods, and disciplines work in the open, together.
This analysis applies a lightweight reading heuristic derived from the Claim Analysis Ledger protocol developed by the Open-Science Harbour. The Ledger is a formal measurement instrument with a structured schema (intake modes, constraint rules, domain specifications, discriminant conditions); this essay translates the spirit of that protocol into prose for a general audience. It does not substitute for a formal Ledger entry. The essay sits in the Sails layer — interpretation and context — and reflects the author's reading, not the Ledger itself.
The Ordinans framework, the Claim Analysis Ledger, and the Open-Science Harbour that hosts both are published openly and available for cross-disciplinary use.