A Summary of the Durable Roadblocks to Super-Intelligent AI…
Artificial intelligence limits are most visible in the failures of robotics. Consider:
- Instances of serious or fatal vehicle crashes in which a driver relied too heavily on a self-driving car.
- Customer frustration with household products like the Roomba vacuum cleaner by iRobot. Even setting aside the numerous customer service complaints, BBB and ConsumerAffairs reviews abound with reports of this AI-driven robot failing to perform as intended.
However, it’s not hard to find artificial intelligence limits even when there’s no recognizable robot in the loop. When we telephone a large business we often have to comb through an interactive voice response (IVR) structure. Many are poorly designed – by humans or by subhuman AIs – and cause extreme customer dissatisfaction.
Moravec’s Paradox and the Limits of Artificial Intelligence
The heading above is the title of an outstanding article by Dr Richard Smith. My friend Charles South alerted me to this valuable article, which Smith has posted on his website.
Who is Smith, and what does he know? Smith has a PhD in Math and Systems Science. However, he isn’t a cloistered academic. He founded and leads TradeStops, a firm that provides math-based tools to support individual investors in their own investment decisions. His article is written from the perspective of an investor, but one with solid math credentials.
Artificial Intelligence Limits: My Reactions to Smith’s Article
Charles and I agree that Smith’s article is thoughtful and well-written, so I want to share my takeaways from it:
- Smith quotes the views of long-term robotics researchers, especially Rodney Brooks. Brooks was a professor of robotics at MIT and head of its Computer Science and AI Laboratory, and he co-founded iRobot. During his MIT years he collaborated with the Computer Science department at HRL, where we had great respect for his brilliance, knowledge and personal character.
- Concerning artificial intelligence limits: Our AIs are not nearly as capable as an insect. Humans, with all their computer help, are not likely to develop AI superintelligence anytime soon.
- Smith’s article title references Moravec’s Paradox. It’s not so much a paradox as a deep insight into artificial intelligence limits: Computers can easily play games and ace intelligence tests. However, the most advanced computer does not have the perception and mobility skills of a one-year-old human.
- The article introduced me to Steve Wozniak’s Coffee Test as a replacement for the Turing Test (measuring whether an AI can successfully pose as a human). The Coffee Test is brilliant: a robot enters a home it has never seen before and successfully prepares a cup of coffee. Smith’s analysis of what’s involved makes great reading!
Artificial Intelligence Limits: Bottom Line and Speculation
Smith’s discussion makes evident something that, to me, is the bottom line: Machines will not replace humans. They will continue to get better and better at helping humans with the things that computers do well. However, they will never (well, “hardly ever”) master the things that humans are instinctively good at.
If your interest in the future of AI and robotics is piqued, I encourage you to read Smith’s entire article, which is not much longer than this blog. It’s a fine piece of work and compactly written.
We may wonder: is there an in-between world in which robots and humans can effectively partner? Brooks’ other company, Rethink Robotics, was not able to pull off that trick. However, perhaps that company was merely ahead of its time. We may yet see new ideas in human-robot collaborations that shift the ground beneath this entire field.
Have you encountered frustrating – or rewarding! – robots or AI services? Do you have additional thoughts to add? I encourage your comments below.
Sources: I thank Dr Richard Smith for having made his excellent article on artificial intelligence limits publicly available on his website. I thank Charles South for making me aware of it and for contributing his own comments.
Image Credit: Retro robot, from VexStrips on openclipart.org
Having been busy these last few weeks attempting to complete a draft of my new book, I have only just got round to catching up with Art’s blogs; I sent the draft off to a literary agent in Oxford today. Given the book’s content, I can comment on AI, since it is a subject I cover briefly. From past experience I doubt whether I will get much change out of the agents or the publishers, but one lives in hope, and besides there is always KDP on which to fall back.
I comment a little on AI in my final chapter, Chapter 13, titled ‘And if so, what next?’ My view is that a digital computer is missing the vital element of the intuitive ability of the human brain, or any other brain for that matter, and unless the mechanism for that is mastered and understood, AI as it is understood today will never match up to even limited brain ability, other than perhaps in its admirable capacity to contain huge amounts of information in a small space. What Rupert Sheldrake describes elegantly as Morphic Resonance, I describe more clumsily as Duplication Theory, but both of us indicate how structures from the past can be duplicated later in the brain.
He explains this as a biochemist, by observation of what happens in nature – shoals of fish, swarms of starlings, termite colonies and so on – whereas I use principles of physics, with a few assumptions, to show how quantum entanglement acts not only through space simultaneously but also through time in relatively the same location. There have been a couple of fairly recent quantum experiments apparently demonstrating the latter, although little attention has been paid to them (Megidish; Zeilinger).
The structures in this case are holograms produced by patterns of firing neurons and dendrites, whose interference waves form highly structured holograms (I call them holocepts) projected from the brain. In the first instance these register at once as sight from the retinas; when further processed, they instigate, more vestigially, sequences of similar past experiences as memory; and, still more indistinctly, the latter combine to form thought. There also has to be a store of memory molecules (described as engrams), probably akin to DNA structures, which tend to activate such sequences from the past by resonance. The more accurate the duplicate image formed in thought, the closer that understanding will be to the truth, or rather to what actually happens in nature.
Since quantum entanglement experiments are now known to rely very much on the essential element of randomness, as explained by Bell’s inequality theorem, I have been able to qualify Duplication Theory to a certain extent: randomness is what I describe as a singularity state. Although the latter can never be achieved by definition (light speed, absolute zero, infinity, perfect fusion of fermions, for instance), close approaches can be made, and when that happens the rules of nature and physics have to be amended to account for new, unanticipated effects.
One of the lynchpins of Duplication Theory is the concept of randomness to justify quantum entanglement, which I came up with as a conjecture in 1978 during a couple of years spent away from the office. It was another decade or two before I first started to read about quantum entanglement, and I am reluctant to believe this is just a fortunate coincidence.
The mechanism of intuition, in my world, depends on the ability to empty the mind, effectively into a random void of firing neurons, whereupon, if one has previously been sufficiently obsessive about a particular problem, the answer can suddenly fall into place, just like that. There are a fair number of examples over the years of crucial breakthroughs happening this way, in science as well as in other disciplines.
However, I will not go into more detail and risk Art’s blog being seen as a rostrum for such extravagant claims. In summary: until the experts get to grips with why this major element of randomness is so essential to the quantum world, and can then explain the connection clearly – preferably in words and/or diagrams rather than just abstruse mathematical equations (one of my inabilities, regretfully) understood by few – digital computers will never be able to act intuitively.
A final plug, I fear, is irresistible: the working title of the book, as and when it comes out, is currently ‘Mind, Memory and Entanglement’. However, my wife prefers what she considers a more relevant and catchy title: ‘The Estate Agent Who Thought He Was Einstein’. But we shall see.
Nick, thanks for weighing in on the continuing dilemma of how well digital systems may be able to perform human-like tasks. For the reader: Nick’s Duplication Theory, like my own Similarity Theory, posits that there may be a tendency for complex patterns to repeat, not in violation of known physical laws but rather as an extension of them. If this occurs, I assert that it should depend only on the complexity of the pattern, not upon whether a life form is involved with it. Thus I would say that if digital computers behave differently from human brains, it’s only because they have a different organization and complexity at their present state of evolution. I would call upon the “hardly ever” clause in the concluding section of my blog to say, OK, not yet, but let’s not declare anything out-and-out impossible! – Art
More about A.I.: For readers with a continuing interest in how artificial intelligence is succeeding – and sometimes failing – at transforming business: you may wish to sign up for “Eye on A.I.”, a free weekly e-newsletter newly started by Fortune Magazine. This and other Fortune newsletters are available at https://cloud.newsletters.fortune.com/fortune/newsletters/.
I have found Fortune’s CEO Daily and Brainstorm Health Daily valuable; I am adding subscriptions to their A.I. and Data Sheet letters as well.
Comment from James Brennan via LinkedIn.com: Art, both your and Dr. Smith’s articles highlight a key issue with AI adoption: the hype is often too far ahead of reality for end users. Expectations are set too high and limitations are not well-defined.
Robots, autonomous vehicles, machine learning analytics and other AI technologies are all making great progress thanks to the great minds working in these fields and the strong capital investment, but commercial products need to clearly define the edge cases.
Machines can undoubtedly process orders of magnitude more raw data than humans, but a little information can go a long way in the human mind. We clearly hold the upper hand in the battle of Man vs Machine. The ideal system is one in which we are collaborators, not adversaries, because together we can make each other better.
Well expressed, James. Perhaps we need a programming framework that explicitly includes human-AI cooperation rather than tasking the AI to “do it all.”
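To make that idea concrete, here is a minimal sketch in Python of one shape such a framework could take: the machine acts only on the cases it is confident about, and defers the rest to a person. The classifier, confidence threshold, and escalation path are hypothetical placeholders of my own, not any particular product’s API – just a sketch of “cooperate, don’t do it all.”

```python
# Minimal human-in-the-loop sketch. The model, threshold, and escalation
# path below are hypothetical placeholders, not any real product's API.

from dataclasses import dataclass


@dataclass
class Decision:
    label: str
    confidence: float   # model's confidence, 0.0 to 1.0
    decided_by: str     # "machine" or "human"


def machine_classify(item: str) -> tuple[str, float]:
    """Stand-in for a real model: returns a guess and a confidence score."""
    # Toy heuristic: the machine is confident only about short, routine requests.
    if len(item) < 20:
        return "routine", 0.95
    return "unclear", 0.40


def ask_human(item: str) -> str:
    """Stand-in for escalation: a review queue, a phone rep, a supervisor."""
    return input(f"Please classify {item!r}: ").strip()


def decide(item: str, threshold: float = 0.80) -> Decision:
    """Let the machine handle what it does well; hand the rest to a person."""
    label, confidence = machine_classify(item)
    if confidence >= threshold:
        return Decision(label, confidence, decided_by="machine")
    # Below the threshold the AI defers instead of trying to "do it all".
    return Decision(ask_human(item), confidence, decided_by="human")


if __name__ == "__main__":
    for request in ["reset my password", "my vacuum mapped the house wrong again"]:
        d = decide(request)
        print(f"{request!r} -> {d.label} ({d.decided_by}, confidence {d.confidence:.2f})")
```

The design choice is the point: the threshold makes the division of labor explicit and tunable, so the human stays in the loop exactly where the machine is weakest.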