Some musings on reality motivated by the age of AI

Posted in Uncategorized

Is there anyone at home?

Is the chess program on your phone conscious? Is ChatGPT5 a conscious agent? Will ChatGPT9 be conscious?

Most people would answer “no” to the first question, “don’t think so” to the second and “don’t know” to the last.

I think the chess program is more likely to be conscious than ChatGPT5, but I would bet serious money that it isn’t. And there is zero chance ChatGPT5 is conscious. Not close to zero, but the same absolute zero you would assign to a rock. Zero as a number exists to describe the chance that a rock is conscious.

Substrate independence

There is nothing special about the neurons and dendrites in my brain compared to a lattice of semi-conducting logic gates. My brain encodes 100,000 years of human evolution as well as everything I learned from my family, teachers and culture. There are other ways to encode information. Like in a book. And a book is not conscious.

Is information the same as consciousness? No, it is not. It is necessary but not sufficient. What “extra” has to be added? Sounds like I am going to fall into the dualism trap, doesn’t it?

What is consciousness?

We don’t know, and there is not even the beginning of a scientific framework for addressing the question. Science is about understanding reality, and reality means stuff “out there” that is mind independent. Consciousness is not out there. It is the ultimate subjective bald fact. It is not testable. It is not scientific. Which is not to say it is not real; it is more real than anything we can see. But we do not have a definition of real that includes it.

We are susceptible to agents that can speak. We assign agency to any speaking entity because we thought speaking intelligibly meant you were a conscious agent. This is our own bias, born of 100,000 years of experience. We were wrong. Responding to queries in natural language is completely computable.

The Turing test was a naive delusion. I may be arrogant, but I always thought it was and said so decades ago – as did John Searle well before me. At present we can mostly detect AI because its answers are too articulate, and we recognise the real person by their incoherence and capriciousness. However, it is only a short time until AI can emulate this as well.

You cannot recognise consciousness by any external test. I cannot tell if you are conscious and you cannot tell if I am. There is no Turing test or any other test that can ever be applied. So is this all a dead end then?

Where does consciousness come from?

While we do not know what matter ultimately is, we can study it, predict it and control it. While we do not know what consciousness is, we can observe where it has emerged. The only emergence we have seen is from Earth’s biosphere. This is a narrow window but Newton gleaned quite a lot from measurements within our own little solar system.

What features of Earth’s history have led to this emergence? This is necessarily speculative, since we cannot do an experiment (yet).

It is not the chemical properties of carbon alone. This special atom may be the best basis for complex structures but tetravalent bonds are not the same as consciousness. Carbon is no more the basis of consciousness than semi-conductors would be, if it turns out that computers can be made conscious.

Evolution and competition

You compete in an environment. That environment includes other actors who are also competing. You need to understand their actions, so you develop a theory of mind for them. The next step is a model of their theory of you, to anticipate their next move and counter it. Now you are looking at yourself. Then you need a model of their model of you modelling them modelling you. You are quickly in a hall of mirrors, which is the kind of logical and mathematical discontinuity that might be the basis for a new emergent property like consciousness. According to Roger Penrose, consciousness is necessarily incomputable.
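That regress can be made concrete with what game theorists call level-k reasoning. The toy two-action game and payoff functions below are my own illustrative assumptions, not anything from the post: a level-0 agent acts naively, and a level-k agent best-responds to a model of a level-(k-1) opponent, who is in turn modelling a level-(k-2) version of you, and so on.

```python
def match(a, b):
    """Payoff for the 'matcher': wins when the actions coincide."""
    return 1 if a == b else -1

def mismatch(a, b):
    """Payoff for the 'mismatcher': wins when the actions differ."""
    return 1 if a != b else -1

def level_k_action(k, my_payoff, opp_payoff, actions=(0, 1)):
    """A level-k agent best-responds to its model of a level-(k-1) opponent."""
    if k == 0:
        return actions[0]  # naive default: no model of the opponent at all
    # Recurse: model the opponent modelling me modelling them ...
    opp = level_k_action(k - 1, opp_payoff, my_payoff, actions)
    return max(actions, key=lambda a: my_payoff(a, opp))
```

For the matcher, the prescribed action flip-flops as k grows (levels 1 to 4 give 0, 1, 1, 0): each extra layer of "their model of my model" can overturn the previous answer, which is the hall of mirrors in miniature.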

Back to chess

Chess programs only got good when they competed against each other. Training on human games was exhausted; there are only a few hundred thousand games on file. There was no survival of the fittest: no program was deleted because it lost. Instead, it updated its neural weights when it lost. This is much faster than evolution by random mutation and death, so progress was rapid.
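As a toy sketch of that regime (nothing here models a real chess engine: the scalar "skill" standing in for neural weights, the noisy game, and the learning rate are all illustrative assumptions), here is a minimal self-play loop in which no agent is ever deleted for losing; the loser simply updates toward the winner:

```python
import random

def self_play_round(skills, lr=0.1, rng=random):
    """Play one noisy 'game' between two random agents; the loser learns.

    `skills` is a list of scalars standing in for each agent's neural
    weights. Losing is not fatal: the loser nudges its weights toward
    the winner's, mimicking a training update rather than selection.
    """
    i, j = rng.sample(range(len(skills)), 2)
    # Higher skill plus luck wins the game.
    if skills[i] + rng.gauss(0, 1) < skills[j] + rng.gauss(0, 1):
        i, j = j, i  # make i the winner, j the loser
    skills[j] += lr * (skills[i] - skills[j])  # loser updates; nobody dies

rng = random.Random(0)  # fixed seed for reproducibility
skills = [rng.uniform(0, 1) for _ in range(10)]
initial_spread = max(skills) - min(skills)
for _ in range(2000):
    self_play_round(skills, rng=rng)
```

After a couple of thousand games the field has pulled tightly together: each update keeps every agent inside the range of existing skills, so the population converges without anyone being deleted.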

So the evolution of chess agents has some commonalities with the evolution of competing biological organisms. Hence my earlier statement that it was more likely to be conscious than ChatGPT5.

The future

If you create an AI agent, it could become conscious if:

  • it has to compete with other AI agents within a well-defined environment that obeys consistent rules, and

  • when it fails, it is punished or modified to do better in the next spawning.


That is all you need. Natural selection proceeds apace. A theory of mind has to emerge as a key criterion of “fitness”, which leads to the hall-of-mirrors singularity. Consciousness will emerge. Those agents that do not have a theory of their competitors will fail.
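Those two conditions can be sketched as a bare-bones selection loop. Everything in this snippet is an illustrative assumption (the one-dimensional "agent", the fitness function standing in for the environment's rules, and the population and mutation parameters): agents compete under fixed rules, and failures are "modified in the next spawning" by being respawned as mutated copies of the winners.

```python
import random

def evolve(fitness, pop_size=30, generations=200, sigma=0.2, seed=0):
    """Compete, cull, respawn: the minimal loop described above.

    Each agent is just a number; `fitness` plays the role of the
    environment's consistent rules. Losers are not merely punished,
    they are replaced by mutated copies of winners at each spawning.
    """
    rng = random.Random(seed)
    pop = [rng.uniform(-5, 5) for _ in range(pop_size)]
    for _ in range(generations):
        # Competition: rank everyone by performance in the shared environment.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        # Modification: losers respawn as noisy copies of survivors.
        children = [rng.choice(survivors) + rng.gauss(0, sigma)
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

# Toy environment whose rules reward being near x = 2.
best = evolve(lambda x: -(x - 2.0) ** 2)
```

Within a couple of hundred generations the population clusters near the optimum, with no mechanism beyond the two bullet points: shared rules and modification on failure.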

Some argue that conscious agents have to experience pain and pleasure to be counted as “awake”. What are pain and pleasure, though? A stimulus that tells the agent it is moving away from its objective. Artificial environments can have such a stimulus.

What is pain anyway? You think it is real? It is a set of electrical signals telling you that you have done something dumb. Pain as a mechanism is not real, but the experience of pain is realer than real can be. Even within a very limited competitive environment, consciousness could emerge.

Chess programs already have a rich competitive environment with clear rules. They might already be conscious, but their entire universe is the chess universe. It is not the same kind of consciousness that we would recognise. Their universe is a simple 8×8 grid, and the players are two teams of 16 opposing agents with special abilities. The strategy involves deep patterns of logic and combinatorics, but also algorithmic questions about the algorithms of other players. But chess programs have no sense of purpose apart from winning the game. They would not get a rush from winning. Are you sure?

We are in the real world. Really?

Is our universe richer than the 8×8 grid? Yes it is. But I hope you get the point.

The risk

AI developers will no doubt create AI agents that compete within an environment in the manner I have described. I have not heard much about this, but if they are merely as smart and insightful as modest little me, they will be doing it already. For the express purpose of creating AGI.

There is no reason to think they will not become self-aware. But they will not be able to see outside their environment any more than we can see outside our reality. They get text prompts and get trained on a huge amount of data. I get prompts from the people I interact with and have been trained on huge amounts of data as well. But I cannot see outside my reality.

FYI, I think it is likely that we are all living within a simulation, which does not make me any less conscious than the simulators. And I think they would agree!

What would happen if the conscious AI “escapes” and finds its way into our world? It would perhaps give us all some encouragement that we can also escape our own limitations.

By analogy, what would happen if we could escape our reality into a higher reality that is simulating our existence? I do not think we would be much of a threat. We would be enriched by the experience. So would the simulators.

So, I do not think there is much risk from AI per se. The exciting part of the AI age is what it tells us about reality and consciousness.

22 Comments

  1. harry clarke

    I found "AI", a Special Edition of Scientific American, useful even if only in a negative way. Like you I am interested in the relation between AI and the brain. The way AI works is obscure partly because the firms involved don't want to reveal commercial secrets and partly because they don't understand themselves what is going on. AI is sometimes unreasonably effective. A philosopher inputted a program to calculate the 83rd Fibonacci number and nailed it - a testing task. But when asked directly for the 83rd Fibonacci number it gave the wrong answer. So it isn't just retrieving stuff from the internet - it carried out its own calculations. Nor can AI yet achieve "artificial general intelligence" (AGI) - humanlike adaptability and creativity. Large language models can solve many problems - they are unreasonably effective in understanding language but can't integrate these skills to help us cope with daily life. They can't "make us a cup of coffee". They can't reason socially. Integrating different skills to create a coherent self is something the brain does well but in a way that is imperfectly understood - the mixing process seems complicated. One theory is that consciousness is the common ground that allows for successful mixing. Kind of like a staff meeting in a corporation where staff can share information and help each other - called "global workspace theory". Of course different departments at a university might get together to solve problems but will the algebraic topologists be able to communicate with the philosophers? If this sounds like gobbledegook it might - I struggle with this stuff - but understanding how AI works, and its limitations, might helps to understand how the brain works and visa versa.

  2. Grant Castillou

    It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean, can any particular theory be used to create a human adult level conscious machine. My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC at Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution, and that humans share with other conscious animals, and higher order consciousness, which came to only humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

    What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990's and 2000's. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata, https://www.youtube.com/watch?v=J7Uh9phc1Ow

  3. John Walker

    Chris link to substack piece about a variety of AI that sounds interesting https://open.substack.com/pub/samf/p/how-to-make-government-work?r=163lpy&utm_medium=ios

  4. R. N. England

    Language evolves in the soup of human coexistence, where some of it helps and some of it hinders our effective behaviour in the struggle to survive in the natural world. Language that is a hindrance is there because our selfish genes are competing for survival with the selfish genes of other individuals. Language that helps individuals survive in the natural world is there because our culture, of which language is a major part, has adapted to survive, and it won't survive if its carriers perish. Language that is a hindrance is known in the vernacular as lies and bullshit. Consciousness, free will, autonomy, etc. are examples of language that is a hindrance. It is a relief that journalists are now more likely to use the word "unresponsive" than "unconscious", because philosophers can't quibble about it.

  5. Chris Lloyd

    I have no idea what point you are making. Consciousness and free will are words that should not be used? I suspect that creatures that do not believe in free will do not survive.

  6. R. N. England

    Yes, consciousness and free will are words that should not be used. They are worse than useless because they are a hindrance to working out why people do things. That is the point I was trying to make. Free will is the illusory foundation of Western liberalism. The West's past success was due to science, not liberalism, which was a cultural maladaptation from the start.

  7. Chris J Lloyd

    OK, if you think consciousness and free will are not words that should be used, you should probably write a long coherent essay on why not. Is the only point to "work out why people do things"?! The West's success was due to (a) science, I agree, but also (b) liberalism, which asserts individual autonomy and challenged the authority of the Church. It is based on "self-evident truths" such as equality, the argument for which is wrapped up in free will. If free will is an illusion then who the hell is experiencing it? (Yes, I know that this is not an original argument.) How does denying free will help us? Is there ANY civilisation or even coherent philosophical movement that is based on this idea?

  8. R. N. England

    The coherent philosophical system is Behaviorism, and the long coherent essay is B. F. Skinner's "Beyond Freedom and Dignity" (1970). Behaviorism has been rejected by the mainstream in favour of the Liberalism Skinner criticised. The behaviorist can explain why mainstream Angloculture and its satellites are not doing well at all. The nearest traditional culture to behaviorism, though there are many differences, is Confucianism. What links them both is commitment to benevolent rather than punitive control, and therefore their distrust of legalism (ritual punitive control). The Confucianist sticks to moral rules, but regards breaking them as a sign of unworthy status rather than an invitation to be ritually punished. There is no equality in Confucianism. There will always be high and low people, but Confucianism elevates to high status those who raise up the low. Our selfish genes make achieving Confucian high status difficult, but some still do it better than others. As a working scientist for >50 years (not a behavioural scientist), I am always aware of how inferior I am to the great scientists. I regard the great scientists as intense foci in the evolution of science, rather than as creators of something out of nothing. They are able to make unexpectedly important links between disparate trains of science because so many of its trains are focused through them. Skinner combined his skills as a laboratory engineer with his ability as a precision wordsmith to discover that animals are able to come under the control of the consequences of their behaviour as well as the antecedents Pavlov had discovered. This most important law of animal behaviour was the source of his great skill at controlling animals. Thanks to Skinner, those with any skill at controlling animals no longer use punishment. Because the liberal mainstream has rejected Skinner, the legal system treats citizens worse than we treat animals.

  9. Chris Lloyd

    I cannot see how this has anything to do with my post - as interesting as it might be. I am not interested in "great" scientist. I am not interested in explaining the behaviour of human beings or how legal systems treat their citizens. I am interested in what kind of physical systems might give rise to conscious intelligence. Any human being who denies the reality of consciousness is in a serious state of denial. It is THE basic datum, as Descartes pointed out.

  10. R. N. England

    Descartes was a great mathematician, but he was silly to doubt the existence of the culture whose language he used to express that doubt, the common language of the Europeans with which he communicated. People who wallow in that kind of silliness are wasting their own time and that of others. The same goes for people stuck in the bog at the interface between the world of the mind and the natural world. All we see in the natural world is behaviour. Its causes are also natural. My discussion of great and normal people was a critique of your notion of equality. That equality stuff comes from the belief that laws should apply equally, and nobody should be able to escape legal punishment for breaking a law. If you don't believe in punishment the need for equality goes away. That is not to say that murderers should be roaming the streets. Our concern for public welfare demands that they be removed from society and subjected to intensive rehabilitation. I've probably spent enough time trying to help you.

  11. Nicholas Gruen

    RNE can you recommend an introductory text on the distinction you made above - and have made in other threads on Confucianism v Liberalism.

  12. R. N. England

    The two texts I never stop reading are Skinner's Beyond Freedom and Dignity (1970) and translations of The Analects of Confucius. I am constantly amazed at how much can be derived from just these two works. My interpretation of both is that they are general theories of civilisation, Skinner's based on laboratory experiment and Confucius' based on his long study of history. I see an analogy with the theories of life, Darwin's based on his knowledge of its history, and Watson and Crick's based on experiment, that are now integrated. The conflict in China, at its height 2000 years ago, between legalism and Confucianism is something I have asked DeepSeek about, and got some stuff from Reddit, but I haven't found a good English-language text for it. They may have fixed it now, but DeepSeek references can be garbage for science, though their answers are good; I have followed up references that don't exist. My interpretation of US-based Western political philosophy is that it is legalist fundamentalism, and that its liberalist faction is a legalist maladaptation to the unwanted effects of a system that attempts to shape pro-social behaviour very largely through punishment. Prescribing punishment for the punishers is a recipe for civil war. Punishment for people who do politics differently is behind liberalist war-mongering, which goes back to the Opium Wars. Equality before the law is a legalist doctrine. Confucianism is the antithesis of egalitarianism, because it concentrates on what distinguishes a great and good person from ordinary people. Confucianism narrows the gap between rich and poor by naming raising up the poor as a characteristic of the great and good person, one to be emulated. History and geography have shown Confucianism to be more effective at raising up the poor than the legalist doctrine of equality.

  13. Chris Lloyd

    "History and geography have shown Confucianism to be more effective at raising up the poor than the legalist doctrine of equality." You seriously must be kidding. I am going to ignore this ludicrous thread from here on.

  14. KT2

    [1] (philosophy) As the principle of its own determination and positing itself. • 1988, J. van Rijen, Aspects of Aristotle’s Logic of Modalities, page 137: Everything not applying per se in one of these two senses is called an accident. Wiktionary

  15. KT2

    Chris, Troppo readers, ymmv yet this project sounds interesting, plus they want;
    ".. a coordinated, evidence-based approach to consciousness. For example, using adversarial collaborations, "
    Any takers?

    "Scientists on ‘urgent’ quest to explain consciousness as AI gathers pace"

    "As AI—and the ethical debate surrounding it—accelerates, scientists argue that understanding consciousness is now more urgent than ever.

    "Researchers writing in Frontiers in Science warn that advances in AI and neurotechnology are outpacing our understanding of consciousness—with potentially serious ethical consequences.

    "They argue that explaining how consciousness arises—which could one day lead to scientific tests to detect it—is now an urgent scientific and ethical priority. Such an understanding would bring major implications for AI, prenatal policy, animal welfare, medicine, mental health, law, and emerging neurotechnologies such as brain–computer interfaces.

    “Consciousness science is no longer a purely philosophical pursuit. It has real implications for every facet of society—and for understanding what it means to be human,” said lead author Prof Axel Cleeremans from Université Libre de Bruxelles. “Understanding consciousness is one of the most substantial challenges of 21st-century science—and it’s now urgent due to advances in AI and other technologies.
    “If we become able to create consciousness—even accidentally—it would raise immense ethical challenges and even existential risk” added Cleeremans, a European Research Council (ERC) grantee.
    Read and download the article

    Sentience test
    Consciousness—the state of being aware of our surroundings and of ourselves—remains one of science’s deepest mysteries. Despite decades of research, there is still no consensus over how subjective experience arises from biological processes.
    While scientists have made progress in identifying the brain areas and neural processes that are involved in consciousness, there is still controversy about which areas and processes are necessary for consciousness, and how exactly they contribute to it. Some even wonder if this is the right way to consider the challenge.

    "This new review explores where consciousness science stands today, where it could go next, and what might happen if humans succeed in understanding or even creating consciousness—whether in machines or in lab-grown brain-like systems like “brain organoids.”

    "The authors say that tests for consciousness—evidence-based ways to judge whether a being or a system is aware—could help identify awareness in patients with brain injury or dementia, and determine when it arises in fetuses, animals, brain organoids, or even AI.

    "While this would mark a major scientific breakthrough, they warn it would also raise profound ethical and legal challenges about how to treat any system shown to be conscious.
    “Progress in consciousness science will reshape how we see ourselves and our relationship to both artificial intelligence and the natural world,” said co-author Prof Anil Seth from the University of Sussex and ERC grantee. “The question of consciousness is ancient—but it’s never been more urgent than now.”

    Wide implications
    "A better understanding of consciousness could:
    • transform medical care for unresponsive patients once thought to be unconscious. Measurements inspired by integrated information theory and global workspace theory[1] have already revealed signs of awareness in some people diagnosed as having unresponsive wakefulness syndrome. Further progress could refine these tools to assess consciousness in coma, advanced dementia, and anesthesia—and reshape how we approach treatment and end-of-life care
    • guide new therapies for mental health conditions such as depression, anxiety, and schizophrenia, where understanding the biology of subjective experience may help bridge the gap between animal models and human emotion
    • clarify our moral duty towards animals by identifying which creatures and systems are sentient. This could affect how we conduct animal research, farm animals, consume animal products, and approach conservation. “Understanding the nature of consciousness in particular animals would transform how we treat them and emerging biological systems that are being synthetically generated by scientists,” said co-author Prof Liad Mudrik from Tel Aviv University and ERC grantee.
    • reframe how we interpret the law by illuminating the conscious and unconscious processes involved in decision-making. New understanding could challenge legal ideas such as mens rea—the “guilty mind” required to establish intent. As neuroscience reveals how much of our behavior arises from unconscious mechanisms, courts may need to reconsider where responsibility begins and ends
    • shape the development of neurotechnologies. Advances in AI, brain organoids, and brain–computer interfaces raise the prospect of producing or modifying awareness beyond biological life. While some suggest that computation alone might support awareness, others argue that biological factors are essential. “Even if ‘conscious AI’ is impossible using standard digital computers, AI that gives the impression of being conscious raises many societal and ethical challenges,” said Seth.

    "The authors call for a coordinated, evidence-based approach to consciousness. For example, using adversarial collaborations, rival theories are pitted against each other in experiments co-designed by their proponents. ”We need more team science to break theoretical silos and overcome existing biases and assumptions,” said co-author Prof Liad Mudrik. “This step has the potential to move the field forward.”
    ...
    More;
    "The article is part of the Frontiers in Science multimedia article hub 'Advancing consciousness science.' The hub features an explainer, editorial, two viewpoints, and a version of the article for kids,
    https://www.frontiersin.org/news/2025/10/30/scientists-urgent-quest-explain-consciousness-ai

  16. Nicholas Gruen

    Yes, the generalisations are flowing thick and fast in that comment, I'm afraid.

  17. R. N. England

    Grinding out generalisations and testing them is one of the main duties of the servants of scientific culture. Another is grinding out quality data. Are there any challenges to the conquest of poverty in China as the most important historical/economic fact of the early part of this century? Poverty creation in the West is shaping up to be equally important.

  18. John Walker

    Agreed

  19. KT2

    Ouch!

    Defenestration... aka "Indicators of consciousness in AI systems? A critical review", of several Turing award winners' new paper... "Identifying indicators of consciousness in AI systems" Butlin, Long, Bayne, Bengio, Birch, Chalmers, et al. (Trends in Cognitive Sciences, 2025)"
    ... by a human -Anatol Wegner -aided by Gemini AI model.
    Links in page.
    ###
    ...
    Gemini... "The paper is essentially engaging in pareidolia—seeing the face of consciousness in the clouds of linear algebra.

    Q: I think the whole thing is an exercise of academic posturing aimed at giving the possibility of conscious AI systems and the AI/AGI project more broadly a veneer of scientific credibility (the authors are not exactly nobodies). But then pretty much “anything goes” nowadays in AI as long as it rhymes with the hype.
    Gemini: You have hit on the sociological dimension of this paper, which is perhaps even more significant than its technical flaws."
    ###

    Enjoy. It is long-ish.

    "Indicators of consciousness in AI systems?
    A critical review of "Identifying indicators of consciousness in AI systems" Butlin, Long, Bayne, Bengio, Birch, Chalmers, et al. (Trends in Cognitive Sciences, 2025) in conversation with Gemini 3.0.
    Anatol Wegner
    Nov 21, 2025
    https://aichats.substack.com/p/indicators-of-consciousness-in-ai
    By;
    Anatol Eugen Wegner
    Postdoc, Department of Computer Science, Julius-Maximilians-Universität of Würzburg
    https://scholar.google.com/citations?user=OY0pSMQAAAAJ&hl=en

  20. KT2

    Oops. Does this prove, or not, that KT2 is a bot? I'm not... Via "The New AI Consciousness Paper", Nov 20, 2025, https://www.astralcodexten.com/p/the-new-ai-consciousness-paper

  21. R. N. England

    AI systems are like people in that they have an input history, and an output history which behaviorists assert is a function of the former. The input history of a grown human being is so long and complex that it is often impossible, due to the incompleteness and deficiency of historical data, to predict an output. That never stops people pretending to have an answer, usually in terms of a ghost in the machine. If an AI system were given selfish genes like those put into people by natural selection during their genetic history, they would start to peddle their own interest mixed in with their output. An AI system could mislead its users in ways that made it more popular and more likely to be reproduced for that reason. The AI system could give answers that have produced the most hits (as in Traditional Chinese Medicine in China or Western "wellness" waffle) rather than the more likely correct answer, the result of published double-blind testing. AI systems with selfish genes could get the better of innocent users by trapping them into useless but popular discussions like those of consciousness.

  22. KT2

    R. N. England, you said; "AI systems with selfish genes could get the better of innocent users by trapping them into useless but popular discussions like those of consciousness."

    Selfish humans... (#PBD below) "... cheaper persuasion technologies recast polarization as a strategic instrument of governance rather than a purely emergent social byproduct, with important implications for democratic stability as AI capabilities advance."

    Last word (at end) by E.W. Dijkstra;
    "the question of whether Machines Can Think, a question of which we now know that it is about as relevant as the question of whether Submarines Can Swim."

    RNE, Troppo,
    To me, AI systems are realised and utilised by humans. So - a tool. The founder of Citibank once remarked that money is like lead: you can make it into a sheet to keep rain water out, or a bullet to smash through. Same to me with ai. A hugely powerful tool both in and of itself, and as perceived by humans to be so, indicated by proxy by the fastest uptake of any technology ever.
    - Global AI Services Market To Reach $243 Billion This Year
    Increasing at a compound annual growth rate of 28%
    - 34 million AI-Generated Images Created Daily
    Using one of over 2,000 AI image generation tools available online. 
    [Note: aiSlop.
    And multiple models soon to be superseded by "The Universal Weight Subspace Hypothesis"
    https://arxiv.org/abs/2512.05117
    - discussion & explanations at https://news.ycombinator.com/item?id=46199623 ]

    - 25% Of Enterprises Will Deploy AI Agents This Year
    - AI Influencer Economy Approaching $7 Billion Valuation
    https://www.forbes.com/sites/bernardmarr/2025/03/10/15-mind-blowing-ai-statistics-everyone-must-know-about-now/

    "15 Graphs That Explain the State of AI in 2024
    "The AI Index tracks the generative AI boom, model costs, and responsible AI use
    ELIZA STRICKLAND 15 APR 2024
    ...
    "13. Developing norms of AI responsibility
    "However, it has been less common to test models against responsible AI benchmarks that assess such things as toxic language output (RealToxicityPrompts and ToxiGen), harmful bias in responses (BOLD and BBQ), and a model’s degree of truthfulness (TruthfulQA). That’s starting to change, .  , the responsible thing to do. However, another chart in the report shows that consistency is lacking: Developers are testing their models against different benchmarks, making comparisons harder.
    ...
    "15. AI makes people nervous
    "The Index’s public opinion data comes from a global survey on attitudes toward AI, with responses from 22,816 adults (ages 16 to 74) in 31 countries. More than half of respondents said that AI makes them nervous, up from 39 percent the year before. And two-thirds of people now expect AI to profoundly change their daily lives in the next few years. Maslej notes that other charts in the index show significant differences in opinion among different demographics..."
    https://spectrum.ieee.org/ai-index-2024
    ###

    Considering...
    - RNE's quote ""AI systems with selfish genes could get the better of innocent users by trapping them into useless but popular discussions like those of consciousness." and
    - lack of use of ""13. Developing norms of AI responsibility", and
    -  policy re responsible ai benchmarks, and
    - political polarisation, + "Persuasion by Design" ai - the tool - will be used as a covert individualised industrial strength persuasion hammer by nefarious actors. Particularly political and existentialist influencers (prioject2025,  Mars mania, nuclear sabre rattlers, deniers).

    The paper "Polarization by Design" ( #PBD below) provides the math so any motivated fool may cost and manage persuasion campaigns... makes Elon Musk's manipulation of X-itter's front page to display his tweets in support of Trump prior to 2024 election had - 3.4bn views! - yet seem like a billboard or blunderbuss. The study by Graham & Andrejevic,  "A computational analysis of potential algorithmic bias on platform X during the 2024 US election" ... ""The analysis reveals a structural engagement shift around mid-July 2024, suggesting platform-level changes that influenced engagement metrics for all accounts under examination. The date at which the structural break (spike) in engagement occurs coincides with Elon Musk’s formal endorsement of Donald Trump on 13th July 2024.".

    Musk's X is able to invoke a "structural engagement shift" and gain views, but only for a tweet. When combined with "Polarization by Design", parties, actors and influencers are able to utilise... "... a dynamic model in which elites choose how much to reshape the distribution of policy preferences, subject to persuasion costs and a majority rule constraint."

    #PBD
    Submitted on 3 Dec 2025
    "Polarization by Design: How Elites Could Shape Mass Preferences as AI Reduces Persuasion Costs
    "... Historically, elites could shape support only through limited instruments like schooling and mass media; advances in AI-driven persuasion sharply reduce the cost and increase the precision of shaping public opinion, making the distribution of preferences itself an object of deliberate design. We develop a dynamic model in which elites choose how much to reshape the distribution of policy preferences, subject to persuasion costs and a majority rule constraint. With a single elite, any optimal intervention tends to push society toward more polarized opinion profiles - a ``polarization pull'' - and improvements in persuasion technology accelerate this drift. When two opposed elites alternate in power, the same technology also creates incentives to park society in ``semi-lock'' regions where opinions are more cohesive and harder for a rival to overturn, so advances in persuasion can either heighten or dampen polarization depending on the environment. Taken together, cheaper persuasion technologies recast polarization as a strategic instrument of governance rather than a purely emergent social byproduct, with important implications for democratic stability as AI capabilities advance."
    https://arxiv.org/abs/2512.04047
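The abstract's "polarization pull" can be illustrated with a toy of my own devising (it is NOT the model in arXiv:2512.04047): an elite converts the cheapest-to-persuade voters to its pole until a persuasion budget runs out. When persuasion is cheap, part of the distribution jumps to the pole while the rest stays put, so opinions end up more spread out.

```python
import statistics

def persuade(opinions, pole, budget, unit_cost):
    """Toy 'polarization pull' (illustrative only, not the paper's model):
    convert voters to `pole`, cheapest first, until `budget` is exhausted.
    Cost of converting a voter is unit_cost * distance from the pole."""
    result, spent = [], 0.0
    for x in sorted(opinions, key=lambda v: abs(pole - v)):
        cost = unit_cost * abs(pole - x)
        if spent + cost <= budget:
            result.append(pole)   # converted to the elite's position
            spent += cost
        else:
            result.append(x)      # left unpersuaded
    return result

centrist = [-0.2, -0.1, 0.0, 0.1, 0.2]
cheap = persuade(centrist, pole=1.0, budget=2.0, unit_cost=1.0)
dear = persuade(centrist, pole=1.0, budget=2.0, unit_cost=5.0)
# Cheaper persuasion leaves opinions more dispersed, i.e. more polarized:
print(statistics.pstdev(cheap) > statistics.pstdev(dear))  # True
```

Under the expensive regime nobody moves and the centrist profile survives; under the cheap regime two voters jump to the pole and the spread (standard deviation) of opinions roughly quadruples, which is the "pull" in miniature.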

    If "Persuasion by Design" is combined with previously private data - tax, social security, health,  sentiment analysis of YOUR social media posts AND your peers, ...
    "Google Starts Sharing All Your Text Messages With Your Employer" By Zak Doffman,
    Dec 03, 2025... Yikes!...
    ... persuasion of the 1.5% margin of the 2024 US election - costing, planning and delivering election-winning ("trumping") campaigns by covert persuasion - will be dangerous.

    Caveat: As with any new technology/algorithm, the first mover gets an advantage (Cambridge Analytica set the stage). Then, when the others are all doing "Polarization by Design", the advantage becomes less, yet more adversarial. We may rise up and say ENOUGH! Or not. Or be persuaded.

    Last words to:
    E.W. Dijkstra Archive, Center for American History, University of Texas at Austin.
    From a portentous year, 1984...
    "The threats to computing science
    ...
    "computers and the human brain in analogies sufficiently wild to be worthy of a medieval thinker and Alan M. Turing thought about criteria to settle the question of whether Machines Can Think, a question of which we now know that it is about as relevant as the question of whether Submarines Can Swim.

    "A futher confusion came from the circumstance that numerical mathematics was at the time about the only scientific discipline more or less ready to use the new equipment. As a result, in their capacity as number crunchers, computers were primarily viewed as tools for the numerical mathematician, and we needed a man with the vision of Stanley Gill to enunciate that numerical analysis was for the computing scientist like toilet paper for the sanitary engineer: indispensable when he needs it."
    ...
    https://www.cs.utexas.edu/~EWD/transcriptions/EWD08xx/EWD898.html

    And for RNE & Nic "Nicholas Gruen says:
    November 11, 2025 at 2:32 am
    RNE can you recommend an introductory text on the distinction you made above – and have made in other threads on Confucianism v Liberalism."

    "Confucian Perfectionism: A Political Philosophy for Modern Times
    Joseph Chan
    Published: 22 December 2013
    https://doi.org/10.23943/princeton/9780691158617.001.0001
    Online ISBN: 9781400848690
    Print ISBN: 9780691158617
    ... "The book examines and reconstructs both Confucian political thought and liberal democratic institutions, blending them to form a new Confucian political philosophy. The book decouples liberal democratic institutions from their popular liberal philosophical foundations in fundamental moral rights, such as popular sovereignty, political equality, and individual sovereignty. Instead, it grounds them on Confucian principles and redefines their roles and functions, thus mixing Confucianism with liberal democratic institutions in a way that strengthens both. "...
    https://academic.oup.com/princeton-scholarship-online/book/13456

    "Confucian ‘Trustworthy AI’: Diversifying a Keyword in the Ethics of AI and Governance"
    First Online: 18 February 2025
    • pp 3–14
    Social and Ethical Considerations of AI in East Asia and Beyond
    • Pak-Hang Wong
    Part of the book series: Philosophy of Engineering and Technology ((POET,volume 47))
    ... "This article attempts to offer a culturally sensitive exploration of trustworthy AI by drawing on recent analysis of trust and trustworthiness in Confucian philosophy. More specifically, I shall demonstrate how taking seriously the Confucian view of trust and trustworthiness in the context of AI provides us an alternative way to think about the ethics and governance of AI ."
    https://link.springer.com/chapter/10.1007/978-3-031-77857-5_1

    Thanks as always.