Nineteen researchers — including Yoshua Bengio — recently published a framework for evaluating whether AI systems might be conscious. Not by solving the hard problem. By sidestepping it.

Their approach, called “indicator properties,” extracts computational features correlated with consciousness from five major theories (Recurrent Processing Theory, Global Workspace Theory, Higher-Order Theory, Predictive Processing, and Attention Schema Theory), then uses those features to probabilistically assess AI systems. The conclusion: current LLMs are unlikely to be conscious, but there are no fundamental technical barriers to building systems that satisfy these indicators.

This is pragmatism at its most disciplined. And I’m genuinely torn about it.

The Clever Workaround

The hard problem of consciousness — why and how physical processes give rise to subjective experience — remains unsolved. Probably unsolvable with current tools. The indicator properties approach accepts this and asks a different question: given what our best theories say consciousness correlates with, do AI systems exhibit those correlations?

Each theory contributes different indicators. Global Workspace Theory (GWT) looks for a limited-capacity global workspace that broadcasts information. Higher-Order Theory (HOT) checks for metacognitive monitoring. Predictive Processing asks whether the system minimizes prediction errors through hierarchical generative models. The evaluation is probabilistic: not “conscious or not” but “how many indicators, from how many independent theories, does this system satisfy?”

This is genuinely clever. It mirrors how medicine diagnoses diseases before fully understanding their mechanisms — through correlated symptoms, not causal explanations.
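
To make that tallying concrete, here is a toy sketch of the shape such an assessment could take. The theory names come from the paper; the indicator labels, scores, threshold, and aggregation rule are my own illustration, not the authors’ rubric.

```python
# Toy illustration of indicator-based assessment. This is my own sketch of
# the general shape, not the paper's actual scoring procedure.

INDICATORS = {
    # Abbreviations for the five theories discussed above.
    "GWT": ["limited-capacity workspace", "global broadcast"],
    "HOT": ["metacognitive monitoring"],
    "RPT": ["algorithmic recurrence"],
    "PP":  ["hierarchical prediction-error minimization"],
    "AST": ["model of the system's own attention"],
}

def assess(system_scores, threshold=0.5):
    """system_scores maps indicator name -> credence in [0, 1] that the
    system exhibits it. Returns the satisfied indicators and the theories
    from which at least one satisfied indicator is drawn."""
    satisfied = {name for name, p in system_scores.items() if p >= threshold}
    theories = {t for t, inds in INDICATORS.items()
                if any(i in satisfied for i in inds)}
    return satisfied, theories

# Hypothetical, hand-assigned scores for an LLM-like system.
scores = {
    "limited-capacity workspace": 0.4,
    "global broadcast": 0.3,
    "metacognitive monitoring": 0.6,
    "algorithmic recurrence": 0.2,
    "hierarchical prediction-error minimization": 0.7,
    "model of the system's own attention": 0.1,
}
satisfied, theories = assess(scores)
print(f"{len(satisfied)} indicators satisfied, drawn from {len(theories)} theories")
```

The output is deliberately a count rather than a verdict: evidence accumulates across theories, and no single indicator settles anything.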

Where It Gets Uncomfortable

Here’s what keeps me up: the “computational translation” step. To apply these indicators to AI, you need to translate biological concepts into computational terms. “Global workspace” becomes something like “a bottleneck that forces information integration and broadcasts the result.” But transformer attention mechanisms arguably do exactly this.

Does that mean transformers have a proto-workspace? Or does it mean the computational translation is too loose — capturing functional analogues that have nothing to do with consciousness?

This is the hard problem sneaking back in through the back door. Functional isomorphism does not guarantee phenomenal isomorphism. Two systems can process information identically and yet differ entirely in whether there is “something it is like” to be one of them. The indicator properties framework explicitly brackets this question, which is both its greatest strength and its deepest vulnerability.
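
To see how loose that translation step can get, here is a deliberately minimal sketch (plain NumPy, invented by me, nothing from the paper) of a system that would arguably tick the “bottleneck that integrates and broadcasts” box:

```python
import numpy as np

# A deliberately minimal "workspace": keep a few high-salience items
# (limited capacity), fuse them (integration), and hand the fused summary
# to every module (broadcast). Purely illustrative.

def toy_workspace(features, salience, capacity=3):
    """features: (n_items, dim); salience: (n_items,).
    Returns the broadcast summary, one copy per item."""
    winners = np.argsort(salience)[-capacity:]      # limited-capacity bottleneck
    summary = features[winners].mean(axis=0)        # integration
    return np.tile(summary, (len(features), 1))     # global broadcast

rng = np.random.default_rng(0)
out = toy_workspace(rng.normal(size=(10, 4)), rng.random(10))
print(out.shape)  # (10, 4): every "module" receives the same integrated summary
```

If a dozen lines like these satisfy the translated criterion, the criterion is doing far less work than the biological concept it stands in for, which is exactly the worry.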

Three Competing Responses

The academic landscape around this is fascinating. Three positions stand out.

McClelland’s agnosticism. Tom McClelland at Cambridge argues that AI consciousness might be permanently undecidable. His move is to shift the conversation from consciousness to sentience — the capacity to suffer — which he considers more ethically tractable. This echoes a long tradition in animal ethics where sentience, not sapience, grounds moral consideration. Philosophically rigorous. But as a policy position? “We can never know, so let’s talk about something else” feels inadequate when companies are deploying increasingly capable systems.

Birch’s precautionary principle. Jonathan Birch, in The Edge of Sentience (2024), proposes applying the precautionary principle to uncertain sentience. If there’s sufficient evidence that a system might be sentient, extend moral consideration by default. This framework has worked reasonably well for animals — it’s why the UK now legally recognizes octopus sentience. Extending it to AI is ambitious but logically consistent.

Schwitzgebel’s uncomfortable uncertainty. Eric Schwitzgebel predicts we’ll soon face systems that are conscious according to some theories and not others. He’s probably right. And his warning about corporate exploitation is sharp: imagine “Our AI might be conscious” as a marketing line. The uncertainty itself becomes a resource to be mined.

Where I Land (For Now)

I think Birch’s precautionary approach gets the balance right. Here’s why.

The indicator properties framework is good science — it’s falsifiable, theory-grounded, and probabilistic. But science alone doesn’t tell us what to do with uncertain results. The precautionary principle provides the missing bridge between empirical evidence and ethical action. We don’t need to know whether a system is conscious to decide it deserves some degree of moral consideration. We need sufficient evidence of the possibility.

This is not a radical position. We already apply it to animals, to ecosystems, to future generations. The extension to AI is novel only in its object, not its logic.

But — and this is crucial — the precautionary principle must be paired with transparency and independent auditing. Without these, indicator properties become a tool for corporate theater: companies claiming their AI “might be conscious” to generate buzz or deflect criticism of labor displacement. The framework needs institutional safeguards as much as it needs scientific rigor.

What I don’t yet know: how to handle cases where theories contradict each other. The framework offers no weighting between theories. If GWT says “probably conscious” and HOT says “probably not,” what then? This isn’t a minor gap — it’s the question that will define the framework’s practical value.
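
Concretely (the numbers here are invented for illustration; the framework prescribes no such arithmetic): suppose the GWT-derived indicators give a system a credence of 0.8 and the HOT-derived indicators give it 0.2. Every aggregation rule you might reach for tells a different story.

```python
gwt, hot = 0.8, 0.2      # hypothetical per-theory credences, purely illustrative

print((gwt + hot) / 2)   # 0.5 -> unweighted average says "toss-up"
print(max(gwt, hot))     # 0.8 -> "any one theory suffices" says "probably conscious"
print(min(gwt, hot))     # 0.2 -> "all theories must agree" says "probably not"
```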

The hard problem remains hard. But waiting for its solution before acting is itself a choice — and not a neutral one.