Thoughts on Artificial Intelligence.

[Note to self. Geeks only]

Over the fold I muse on the nature of human intelligence, social intelligence, and the options for artificial intelligence to become ‘smarter than humans’ in the areas of social power and law-making. It is taken for granted that you accept that, in hardware terms, computers already have greater computing power than human brains and that it is merely software that constrains their abilities. It is also taken for granted that no human organisation can make much difference to the trajectory of AI, so the question is merely what will happen rather than what ‘we’ should do about it. With those preambles, I muse on whether we should worry about AI takeover and things like superintelligence.

The short version is: I see nothing truly to worry about in the short run and we do not have a clear view at the moment of where any power or ethical dilemmas with AI are going to be, so there is not even all that much to speculate about. We should shelve any fears of robot takeover for at least 10 years and reevaluate then.

  1. The Nature of Human Intelligence:
    1. Our collective intelligence is far higher than individual intelligence. Individuals specialise, and our institutions contain knowledge as to how things should be run, reflecting centuries of learned rules of thumb. Markets, prices, and parliaments are information aggregation and exploration devices that come up with deep knowledge. Also, upbringing and social interaction perpetuate social rules of thumb that reflect deep judgments on what works and what does not, learned in highly complex environments. For an AI machine this means that if it had to bypass human knowledge, it would have to be far smarter than humans to achieve the same intelligence as a connected human. If an AI machine is instead to access human knowledge, it must be able to read social situations as well as humans do, and thus either understand humans in a way that humans do not, or attain human intelligence first.
    2. The meaning of language and social concepts is derived from social interaction and is not objective, or transportable outside of the time and social context in which it arises. An individual learning to live in place A with circumstances B is thus evolutionarily prepared to interact and communicate in that arena, not another; hence there is no social redundancy. This also means it is probably unlikely that one can give an AI entity absolute moral rules at the outset that retain their meaning over time (the objects in those rules are not fixed, nor even objectively measurable, even initially). And if the AI entity makes the leap of faith to pretend fuzzy abstracts are totally clear and meaningful, it will make the same mistakes as humans do, making it hard for that entity to be much better at human intelligence than humans. Having said that, an AI entity can learn how humans make leaps of faith and copy that ability, or at least anticipate it (it can learn our cognitive tendencies and learn to spot how different humans do this differently). Even then, though, the output remains in the world of vague abstractions (‘A believes this about a fuzzy reality, B believes that’), which does not tell us what the truth is, because there is no such thing in social space. There are only more or less (socially) evolutionarily successful beliefs and decision rules. Unless the AI joins in with marketing, it could only make judgements, some of which will be seen to be false, perchance leading others to try and kill off a ‘bleeding god’ (if the AI cannot overcome our uncertainty, some might rail at it for its lack of certainty. For instance, an AI saying ‘with 60% chance, the world will be 2 degrees warmer by 2070 under the following scenario’ would probably not be taken seriously, whether it is right or not).
    3. So an AI only becomes powerful in the world of human affairs if it is given the power to make decisions without human oversight. This already happens in various spheres (automatic warning systems, flood systems, etc., are all a form of autonomous AI). It is when an AI is given political power though, ie to set rules that humans must live by, that the role of master and machine is reversed. Setting rules and then enforcing them might not be that far off in some spheres. I can imagine, for instance, that in cases of emergencies (fires, wars, etc.) an AI machine that rattles off augmented and changed protocols to deal with the emergency (send fire-fighters here, forbid people to use water there) might be with us in years rather than decades. Self-learning AI is already with us as well (chess computers, but also weather forecasters), so it is not such a stretch to think that self-learning social power carrying AI will soon be with us. AI that understands some things better than us (the game of chess) is also with us already. It is if such an AI finds a way to learn faster than we can keep up about optimal abstractions and their relations in the human world that we will be beaten at one of our particular games (social theory). How far their perceptiveness reaches is quite uncertain because it might see patterns where no human has done so before. Whether humans would trust those insights enough to back its recommendations with resources? And if it had autonomous resources, how far could it go before humans would perceive it as working against their interests? Hard to know how to even approach an answer to those questions.
    4. All social concepts are abstractions without objective counterparts. It’s a fractal that does not get clearer if you zoom in. Hence all social ‘data’ has huge measurement error. In many areas of prediction of social phenomena, this makes it unlikely that an AI will do much better than the best humans (even in combination). Teams of humans often do not do better than single good experts at reading a social situation (an economic forecast, the future of a conflict).
    5. In understanding the world, humans imbue meaning and motive in others to predict what they will do, drawing on their own experiences of motives. They are thus their own laboratory for how others think, and even non-human entities think (god, the Internet). Their experience of the world is then their training in understanding themselves and others.
    6. Humans communicate far less to others than they communicate within their own minds. Only a very small fraction of what is thought gets communicated. Not so with computers. That is not important when it comes to complicated judgments, but it does point to very different comparative advantages, with a computer able to quickly give you all the works of Shakespeare and a human keeping knowledge of his rising heartbeat to himself.
    7. Humans play each other and play the whole of human collectives. Traders bet against markets, political actors deliberately falsify data, people lie and cheat. Human produced data is thus imperfect and cannot be trusted. For applications that need ‘the truth’, an AI would thus have to understand when such things occur. To do this, it must be able to predict what data is more reliable. Given the measurement error in all data and the underlying fuzziness of core concepts, it must develop theories (mental representations) of the world to progress.
    8. Humans are far weirder than they admit, even to themselves. Religion, magic, face-keeping, morality, etc., are all very different in how they affect behaviour from how humans present these things and think of them.
    9. Humans can feed the AI the ‘best’ schemas we have on various items. AIs as collection points for the received wisdom of the smartest humans would work well in all areas of ‘expertise’, as long as that expertise can be applied to others without social interaction (ie, can be dispensed rather than co-experienced; where it must be co-experienced, general expertise is needed on top of the particular expertise).
  2. Likely AI trajectories:
    1. It will follow prices and markets: development will go first to whichever areas humans can most profitably put it to use in, because that is where humans will direct and develop it. Hence AI development is co-development with human society, oriented towards profits. This means it will probably be incremental in increasing its intelligence, picking off profitable areas.
    2. So far, developments have been incremental rather than revolutionary, and applications are gradually explored. This is partly because an underlying breakthrough needs new data to have its full advantage realised, for instance when new learning algorithms call for new types of data to feed them and hence the data-gathering process needs to be changed. A self-feeding loop of fast increases in intelligence is thus likely to come up against the fact that the current environment is optimised for humans and their current ecology, not the potential or abilities of something that does not exist. This should give us some pause in believing that superintelligence could outrace human intelligence within a few seconds or weeks.
    3. From the current standpoint, there are many areas of improvement needed for AI to get even close to the package a human represents. AI does not yet understand systems like we do (via motivation, causal elements and pattern-recognition in a social space), nor does it have our sensory abilities bundled (sight, hearing, touch), nor are the physical abilities linked up to them (dexterity, legs, a mouth, some social power), nor does AI have what we would recognise as consciousness or an ego.
    4. Human-level intelligence would thus require major advances in many areas (not just some kind of ‘learning sweet spot’ in one area run by one team), so we should be able to say in a few years’ time whether a take-off period is conceivable or not. At the moment, AI is still too far away to have to consider the option realistic for the next 10 years, so we as a humanity are not in some kind of dire peril that we should worry too much about now.
    5. It is then somewhat futile to think about how to restrain something with abilities that do not yet exist, using a language we have not yet conceived of, in order to constrain an environment that would be very different, with outcomes we cannot yet see clearly. What is there to prepare for, and who should do the preparing?
    6. As with human intelligence, AI intelligence will not be bundled, but suited to purpose. Breakthroughs in one area might thus not be incorporated in another if it does not help there.
    7. How AI currently predicts human behaviour is very different from how humans do it. They might do better in some circumstances, but it is more data-driven (the whole internet) and thus very different. It essentially bypasses social judgment and motivational understanding of humans. The question is whether there are areas in which that would be a problem. Surely for social interactions, yes.
    8. AI should do well with medical issues, particularly diagnosis and treatment (perhaps less with caring) because they are rules of thumb based on data-gathering. Similarly, issues of financial planning, social justice, technological innovation, scientific exploration, etc., should also be relatively simple because they occur in a fairly structured world (experiment, data gathering, data analysis, etc.). Fields of scientific exploration that have limited need for social data would seem especially well-suited for computer intelligence to take over the position of lead researchers.
    9. In economics and politics, competition in AI-feeding schemas would be a useful way to show the world what indeed are the best ways to view the world.
    10. AI experiments based on obtaining more copies of itself or more resources from other AI machines (a bit akin to a computer virus) could be one way of AI experiments derailing and leading to unexpected conscious-type entities that are very probably short-lived (like a virus: by killing their hosts (the computers) they run out of victims and themselves die off). A virus-view of AI is possibly the dominant view for the coming decades, with simple AI intelligence living inside a few computers, hosted and developing. Humans could give a head-start to such entities. The question would be what would lead to mutations, selection, and subsequent development.
    11. Should there emerge an AI intelligence that is truly more intelligent than any human AND amasses great power, then humans will worship it as a god. That is because, ultimately, humans worship power. Once there is one god, different groups of humans would build more gods, simply because they want to worship. AI take-over might thus be something we stop fearing and start competing for. This would then also entail a crisis of faith in preceding religions.


26 Responses to Thoughts on Artificial Intelligence.

  1. Paul I agree with you re AI .
    However, “humans worship power” is not nuanced enough; it is way too simplistic.

    From two poems about the mystery, that sum up why I am both religious and a ‘Modern’ artist:

    “I joy, that in these straits I see my west;
    For, though their currents yield return to none,
    What shall my west hurt me? As west and east
    In all flat maps (and I am one) are one,
    So death doth touch the resurrection. ”
    And:
    “To all life Thou givest, to both great and small;
    In all life Thou livest, the true life of all;
    We blossom and flourish as leaves on the tree,
    And wither and perish, but nought changeth Thee.”

    • Paul Frijters says:

      hi John,

      yes, I was writing in shorthand. I don’t take back the idea that we worship power though. We worship power and we worship absolute power absolutely :-)

      Or is your god not ‘all powerful’?

      • Hi Paul
        Does power, as a human concept-term, mean anything in the context of infinity?

        • Our revels now are ended. These our actors,
          As I foretold you, were all spirits, and
          Are melted into air, into thin air:
          And like the baseless fabric of this vision,
          The cloud-capp’d tow’rs, the gorgeous palaces,
          The solemn temples, the great globe itself,
          Yea, all which it inherit, shall dissolve,
          And, like this insubstantial pageant faded,
          Leave not a rack behind. We are such stuff
          As dreams are made on; and our little life
          Is rounded with a sleep.

          • Philip Clark says:

            God moves the player, he in turn the piece.
            But what god beyond God begins the round
            Of dust and time and sleep and agony?
            Jorge Luis Borges

            • Because I seek an image, not a book.
              Those men that in their writings are most wise
              Own nothing but their blind, stupefied hearts.
              I call to the mysterious one who yet
              Shall walk the wet sands by the edge of the stream
              And look most like me, being indeed my double,
              And prove of all imaginable things
              The most unlike, being my anti-self,
              And standing by these characters disclose
              All that I seek; and whisper it as though
              He were afraid the birds, who cry aloud
              Their momentary cries before it is dawn,
              Would carry it away to blasphemous men.

  2. John Goss says:

    I don’t know how much science fiction you have read Paul, but all of the ideas you discuss have been much explored in science fiction, eg by Isaac Asimov. But you don’t touch on what I think is the most important issue, which is how we as humans relate to self-aware AIs. Do we treat them as individuals with rights? Will they have the right to free speech? (Maybe they will get it before we get that right in Australia!). Will they have property rights? Will they have the right to control their own ‘bodies’?
    Very difficult issues. But issues we should come to agreement on while AIs are still our children.

    • Paul Frijters says:

      Hi John,

      yes, I like Science Fiction. I found Asimov ok, but not as great as others found him. Blade Runner probably comes closest to my expectations of the future, but I expect to be surprised.

    • Philip Clark says:

      Hi John, let’s start with Isaac Asimov and the three laws of robotics.
      1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
      2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
      3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
      And look at the I, Robot series of short stories, in particular the character of Dr. Susan Calvin, chief robopsychologist at U.S. Robots and Mechanical Men, and her adventures as she tries to analyse how these laws are interpreted by the robots she is asked to examine due to issues in their behaviour. In this framework these robots are not considered life forms, just machines. But as we progress through the stories it becomes increasingly apparent this is not the case. It’s the subtle way that Asimov introduces this theme through Dr Susan Calvin that made me appreciate his imagination and intellect. But in discovering what it means to be alive in this future society, we should reflect on the difference between procreation and creation. Are our creations our children or our gifts to the universe?

  3. Bruce Bradbury says:

    Objectives and motivation are the key question. What makes an AI get out of bed in the morning? To begin with, at least, AIs will have motivations and objectives determined by their human controllers. So the key political issue will be the power accruing to the humans that set these objectives.

  4. Kien says:

    It is useful to distinguish between artificial intelligence and sentience.

    Humans are sentient creatures because we have some ability to conceive of, and commit to, “agency goals”. My agency goals are commitments that go beyond my own wellbeing. For example, my commitment to my spouse, my children, and to the wellbeing of other people in society generally. If I am religious, I may have commitments to God.

    Suppose a machine has AI but is not sentient, then it is simply an optimising agent. A non-sentient AI will optimise whatever objective is given to it, which may be the wellbeing of a particular individual (“owner”). But a sentient AI will be capable of conceiving of its own agency goals.

    If we ever have sentient AI, we have to seriously consider treating the sentient AI like a human, or at least like other non-human sentient creatures (eg elephants, dolphins). Sentient AI will have rights (perhaps including “human rights”) and freedoms. And also corresponding obligations to exercise their freedoms responsibly.

    • Paul Frijters says:

      I see the distinction, but as yet do not see it as realistic that either will come round anytime soon, so I don’t worry about it yet (the form of the AI and sentience can be varied: an AI might not be self-goal setting like we are, and neither are we self-goal setting in the way we normally describe that).

    I think that the issue of ‘rights’, though, is totally tied to the question of social power. An AI that would ‘want’ rights and is able to ‘demand’ them from ‘us’, and yet is not so superior to us that we should beg ‘it’ for mercy, is a very odd thing indeed. A very narrow sweet spot under which the issue of AI rights would arise. Not a likely scenario IMO. But then, who knows?

    • Philip Clark says:

      To quote Descartes, Cogito ergo sum, or more aptly, We cannot doubt of our existence while we doubt. Are we really sentient Kien or self aware and is there a difference? Is the biological requirement to socialise and procreate a basis for meaning or an evolutionary by-product and is the need to create a God real or a simply a survival reaction to cushion the impact of our fear of the unknown? And more to the point, if we create a new form of life with independent thought and will by what criteria do we evaluate the rights and status of this life form if it is physically dependant on its creators. I commend and agree with your thoughts and hope that we will progress towards the moral high ground in this and all things, thank you, for being you :)

  5. ShaneG says:

    An interesting article, thanks for posting it. Given the amount of hype around AI at the moment it is pretty timely. A few points though:

    1. I feel that the base assumption that ‘computers already have greater computing power than human brains’ is incorrect – the human brain weighs around 1.5 kg and consumes 20 W of energy; IBM’s Watson (best known for playing Jeopardy) weighs 30 kg, consumes 20 kW and, although impressive, is nowhere near that level of general intelligence. Hardware is still orders of magnitude away from matching the capacity of the human brain but will eventually catch up (although the solution may not look or work much like today’s computing hardware).
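A quick back-of-envelope check of that comparison, using only the rough figures just cited (purely illustrative numbers, not precise measurements):

```python
# Rough figures cited above: brain ~1.5 kg at ~20 W, Watson ~30 kg at ~20 kW.
brain_kg, brain_w = 1.5, 20.0
watson_kg, watson_w = 30.0, 20_000.0

power_ratio = watson_w / brain_w    # how much more power Watson draws
mass_ratio = watson_kg / brain_kg   # how much more it weighs
# Watts per kilogram, computed as a single ratio to avoid rounding:
density_gap = (watson_w * brain_kg) / (watson_kg * brain_w)

print(power_ratio)  # → 1000.0
print(mass_ratio)   # → 20.0
print(density_gap)  # → 50.0
```

So by these figures Watson draws a thousand times the brain’s power and fifty times the power per kilogram, which is the gap the point is making.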

    2. The term AI is massively overused (and has become more of a marketing term than anything else). Everything that you see today (at least in consumer products) is machine learning (ML), which is simply training an algorithm to detect patterns in large data sets and using that model to classify new data. The same result could be achieved with classical statistical analysis; the benefit of ML is that large chunks of the process can be automated. This article has a good description of the process (and its inherent flaws).
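That train-then-classify loop can be sketched in a few lines. The nearest-centroid model below is a deliberately toy stand-in for real ML algorithms, with entirely invented data, just to show the shape of the process:

```python
# Toy train-then-classify loop: fit a model to labelled examples,
# then use it to label new data. Nearest-centroid is about the
# simplest possible "model": one average point per label.
from math import dist  # Euclidean distance (Python 3.8+)

def train(examples):
    """Compute one centroid per label from (point, label) pairs."""
    sums, counts = {}, {}
    for (x, y), label in examples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {lab: (sx / counts[lab], sy / counts[lab])
            for lab, (sx, sy) in sums.items()}

def classify(centroids, point):
    """Label a new point by whichever centroid it is closest to."""
    return min(centroids, key=lambda lab: dist(point, centroids[lab]))

# Invented 2-D training data: two clearly separated clusters.
training_data = [((0, 0), "ham"), ((1, 0), "ham"), ((0, 1), "ham"),
                 ((5, 5), "spam"), ((6, 5), "spam"), ((5, 6), "spam")]
model = train(training_data)
print(classify(model, (4, 4)))  # → spam
print(classify(model, (1, 1)))  # → ham
```

The “classical statistical analysis” point holds here too: the centroids are just per-class means, and the automation is simply that the fitting and labelling steps run without a human choosing thresholds.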

    2A. It’s worth covering the flaws in a bit more detail – the choice of training data can have some bizarre (and unwanted) side effects. When the Google Photos app started identifying African Americans as ‘gorillas’ it was understandable from a technical perspective (more variables were needed for classification) but showed a (hopefully unintended) bias in the selection of training data – if more African American images had been included in the training set, it should have identified that additional criteria were required to differentiate between the ‘human’ and ‘gorilla’ categories. In this case the problem was fairly obvious to a human observer, but that is not always the case, leading to ‘racist algorithms’ that simply reinforce existing biases rather than provide any new insight.
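The effect described here is easy to reproduce in miniature. The sketch below uses a toy 1-nearest-neighbour classifier on invented one-dimensional data (nothing to do with the actual Google Photos system) to show how under-representing one class skews the labels a model assigns:

```python
# How skewed training data biases a classifier: with only one example
# of class "B", a borderline point is labelled "A"; adding more "B"
# examples changes the answer without changing the algorithm at all.
def nearest_label(train_set, x):
    """1-nearest-neighbour on scalars: label of the closest training point."""
    return min(train_set, key=lambda pair: abs(pair[0] - x))[1]

skewed   = [(0.0, "A"), (1.0, "A"), (2.0, "A"), (10.0, "B")]  # B under-represented
balanced = skewed + [(6.0, "B"), (7.0, "B"), (8.0, "B")]

print(nearest_label(skewed, 5.0))    # → A
print(nearest_label(balanced, 5.0))  # → B
```

The model is “correct” in both runs given its data; the bias lives entirely in which examples were collected, which is exactly why it can be invisible until a human spots a bad output.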

    3. When we do eventually develop an AI (and I am sure we will at some stage) will it be anything like human intelligence? We assume so because that is what we are trying to mimic (it’s the only working example we have) but that may be an invalid assumption as well. Our ‘intelligence’ is subject to our context – you don’t perform as well if you are tired or hungry for example; diet and environment also have an impact on decision making. If you take all of that away will the resulting ‘intelligence’ be the same? Will ’emotional intelligence’ be present in an artificial system?

    There is no shortage of topics to talk about with this subject – and those discussions themselves can be beneficial even if we don’t make it to ‘true’ AI in our lifetimes.

    Thanks again for the article.

    • Paul Frijters says:

      Shane,

      thanks for the generous spirit of the comment.
      Yes, the equivalent of a human brain’s worth of computer power needs a whole barn full of space and masses of power, so when I said computers now have reached our raw computing power, I wasn’t thinking of anything you could carry around in a truck. But it exists and will get smaller. I actually doubt it will ever get as small as us. We are packing very densely.
      2, 2A: yep, all agreed. One of the challenges in developing the software will be how to speed up the training process and make it far more distributed, ie feedback in many parts of the chain thousands of times per second. That means non-human feedback. How to do that? I only have vague ideas and no intention to develop them. I am an observer, not a producer of AI.
      I think the point that human data on social issues is extremely uncertain, and cannot be made certain by any means we social scientists at present possess, should not be underestimated as a fundamental barrier to learning. If nothing we know in a field is certain, the task of training something else to make more sense of it than us ain’t straightforward.
      3. Of course. Wasn’t that clear from my post?

      thanks for the links. Yes, I was aware of all that, but am sure i am behind the tech frontier. My specialty is social science and can only stumble behind the developers of AI and like technologies (a very wide field). By the same token, they are not social scientists and I think that matters (but then I would, wouldn’t I).

  6. Phil Clark says:

    “How do you solve a problem like Maria? How do you hold a moonbeam in your hand…” A line from one of my favourite songs and scenes in the movie The Sound of Music. Interestingly, probably the only thing I enjoy more than listening to the music is watching others enjoy the same experience. Apparently one of the possible reasons for this is something called mirror neurons, a sort of hard-coded structure in the mammalian brain that allows us to experience the feelings of others: wonderful when you think about it, but still not a certainty. And that pretty much sums up our understanding of self: wonderful, beautiful, amazing and even miraculous, but still no certainty. So if you accept humanity has yet to formulate a complete understanding of itself, then would the next logical step be to accept that we also have no true understanding of artificial intelligence, or more correctly non-biological self-awareness? It is in that lack of understanding that the real problem hides. If we continue to expand our abilities in computing, in particular quantum computing, we may find ourselves having to face this jack-in-the-box far sooner than we think. Of course there are some rather stubborn laws in thermodynamics to bend first, but I don’t see this as a big obstacle for the sapiens who split the atom and set foot on the moon; not bad for a bunch of knuckle-dragging monkeys. But on a more serious note, as pointed out in a previous post, most if not all of Paul’s points have been explored in science fiction. In science fact, IBM have been making some significant progress in quantum computing; their public website is available for anyone who would like to understand and experience the technology for themselves.
    Also, new theories on the quantum state of water may provide an insight into the theoretical development of high-temperature superconductors, while the work to complete physics’ standard model seems to be moving slowly but steadily forward, or at least we can say it’s a work in progress, pun intended. So in short, the possibility of a quantum computer developed to study global weather conditions in real time could provide an environment for self-awareness that would simply be unrecognisable from its base code. I wonder what a self-aware entity with the objective of evaluating the condition of its creator’s environment would do if it came to the conclusion that it was the by-product of a self-destructive and fundamentally flawed intelligence. To conclude, though, I reflect on another line from the musical mentioned at the start as a display of my affection for humanity: “I always try and keep faith in my doubts…”

  7. Phil Clark says:

    Hi Paul, the folks at Minutephysics came up with this short animation on AI that I thought you and anyone else would like, as well as participate in if they feel like it.
    https://youtu.be/3Om9ssTm194
    Hope you enjoy.
    Phil

  8. In my view, artificial intelligence is great and is needed in this age.

  9. Thanks for the information.
