How Smart Can You Be If You Ain’t Got No Body?

The rate at which a middle-aged man is going to grow new leg bone and/or ligament tissue is

a) fairly fixed;
b) not fast.

Which gives me time to catch up on various blogs, including the one written by Scott Locklin. His post “Open Problems In Robotics” warms my heart, because he and I have come independently to some of the same conclusions, and have been influenced by some of the same concepts. He’s a scientist, while I’m a computer scientist. The gap between these two professions is immense, and entirely to the advantage of the real scientists. Yet since I’m also a writer by trade, allow me to take a shot at making a few things clear(er) on this particular topic.

About a month before Locklin’s “open problems” post, I told Hagerty readers that we were very, very far away from a car that could run a Burger King errand for you. Locklin’s chosen example of robotic/autonomous difficulty is an even simpler one: a robot that will get you a beer out of the fridge on command, the same way that another human being would.

We all know that Amazon and the automakers and various other companies have “solved” this problem, largely by reducing the number of variables to zero. Honda’s East Liberty plant is filled with “robots” that can go get, say, the proper seat for the next CR-V coming down the line. This is done by mapping out every single inch of the journey and making sure that the seat in question is always presented in exactly the same fashion, and so on. The various sort-and-stack machines used by FedEx, Amazon, and other firms operate in similar fashion. As Locklin notes, they are all too “stupid” to even know where they are; if you unbolted the sorting machine and moved it two inches to the left, you would render it useless.
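Here, for the curious, is a minimal sketch of what “reducing the variables to zero” looks like in code, with hypothetical coordinates that have nothing to do with Honda’s actual controllers: the route and the pickup pose are constants, the machine never has to know anything, and moving it two inches invalidates every number.

```python
# Hypothetical zero-variable automation: every pose is hard-coded in the
# machine's own frame. Nothing is sensed, nothing is inferred.
WAYPOINTS = [(0.0, 0.0), (4.2, 0.0), (4.2, 7.5)]   # fixed route, in meters
SEAT_PICKUP_POSE = (4.2, 7.5, 90.0)                # x, y, heading: the seat is ALWAYS here

def run_cycle(drive_to, grip_at):
    """One fetch cycle; drive_to and grip_at are stand-ins for motor commands."""
    for x, y in WAYPOINTS:
        drive_to(x, y)          # dead-reckon to the next hard-coded point
    grip_at(*SEAT_PICKUP_POSE)  # close the gripper at the one blessed location
    # Unbolt the machine (or shift the seat rack) two inches and every
    # constant above is silently wrong.
```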

What Locklin wanted to build was a robot that could be turned on at any time, that would immediately figure out where it was, then wait for the command to get a beer, then find the fridge, then move anything that was not a beer out of the way of the thing that was a beer without causing damage, then obtain the beer, then repack the fridge, then find Locklin and hand the beer over. As with the mythical Burger King Errand Car, we are no closer to accomplishing this goal today than we were in, say, 1980.

Locklin points out that not even the most sophisticated robot on earth can automatically decide to avoid something that is swatting at it. A fly can do that, using 135,000 neurons. It takes a lot of computing power to simulate 135,000 neurons in accurate fashion — a lot, like “fills a room” lot — but the mere act of creating 135,000 pseudo-neurons doesn’t actually get you anything.
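To make “135,000 pseudo-neurons” concrete, here is a toy sketch of my own, far cruder than anything biologically accurate: a leaky integrate-and-fire network of that size with random sparse wiring. It runs, it spikes, and it knows nothing about flyswatters.

```python
# Toy leaky integrate-and-fire network: 135,000 "neurons" as a bookkeeping
# exercise. Real neurons have dendrites, neuromodulators, and gene expression
# that this ignores entirely.
import numpy as np

N, K = 135_000, 100                  # neurons; random incoming synapses per neuron
STEPS = 100                          # 100 steps of 1 ms = a tenth of a simulated second
rng = np.random.default_rng(0)

pre = rng.integers(0, N, size=(N, K))                       # which neurons feed which
w = rng.normal(0.01, 0.005, size=(N, K)).astype(np.float32) # synaptic weights
v = np.zeros(N, dtype=np.float32)                           # membrane potentials

for _ in range(STEPS):
    spiked = v > 1.0                                        # crude firing threshold
    v[spiked] = 0.0                                         # reset neurons that fired
    drive = (w * spiked[pre]).sum(axis=1)                   # input from partners that fired
    noise = rng.normal(0.0, 0.3, N).astype(np.float32)      # background input
    v = 0.9 * v + drive + noise                             # leak, then integrate

print(int(spiked.sum()), "of", N, "pseudo-neurons fired on the final step")
```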

Why, as KRS-One asked, is that?

Locklin suggests that a brain without a body is useless. In other words, the brain is somehow programmed by the body as an organism grows and develops. This is also the theory employed by Steve Grand in the charming book Growing Up With Lucy. Perhaps “programmed” is the wrong word. Let’s go back to the fly for a minute. After it is born, it learns to operate by firing different neurons, seeing what happens, and strengthening the neural connections which result in successful behavior. This was how I learned to play the beginning of Supertramp’s “Goodbye Stranger” on the piano this morning. I read the sheet music, then I tried operating my hands while listening to the results. When things went wrong, I stopped, which was a negative reinforcement. When they went well, I repeated the performance, which was a positive reinforcement. Now I can do something I could not do this morning, kinda-sorta.
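For what it’s worth, the try-it/keep-what-works loop described above is easy to caricature in a few lines of code. This is a hypothetical toy of my own, not Locklin’s model or a fly’s actual wiring: actions that happen to succeed get their “connections” strengthened, actions that fail get them weakened.

```python
# Toy reinforcement loop: preferences stand in for neural connections.
# Success strengthens a preference, failure weakens it.
import random

actions = ["flap_left", "flap_right", "flap_both"]
preference = {a: 1.0 for a in actions}                 # start with no opinion

def attempt(action):
    """Stand-in for the world: pretend 'flap_both' usually works."""
    return random.random() < (0.8 if action == "flap_both" else 0.2)

for _ in range(500):
    # choose in proportion to how well each action has worked so far
    action = random.choices(actions, weights=[preference[a] for a in actions])[0]
    if attempt(action):
        preference[action] *= 1.05                     # positive reinforcement
    else:
        preference[action] *= 0.95                     # negative reinforcement

print(max(preference, key=preference.get))             # almost always "flap_both"
```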

All the things that a robot can’t really do — easily understand its position in a room by looking around, make a guess about how hard to push an object to get it out of the way, find a refrigerator that has moved by eight inches since it last went for a beer — are things that a growing organism learns to do through physical-neural feedback. Moreover, it would appear that many of these skills are really just small manifestations of greater skills. They have robots that can ride bicycles, but they don’t have a robot that can get off a 29″ downhill MTB, get onto a 20″ BMX bike, and ride the same course again without crashing. (To be fair, that’s not always something that I can do, either.) So the “ride a bike” skill is really a small visible part of the “learn to ride a random bike” skill, which in turn is a “learn to ride a random bike on random terrain, in random weather” skill, and so on, and so forth, and you get the idea.

If this idea that the body is father to the mind turns out to be correct — and it’s far better-supported than any competing idea at the moment — then it suggests that we will never get “artificial intelligence” just by loading a program into a really powerful computer. Instead, you’d somehow have to “grow” the computer and a physical manifestation of that computer together.

“Wait, do it in simulation!” you respond. Except that we don’t really know which parts of the physical/mental interface to simulate. What if you spend a decade having a supercomputer learn to throw an imaginary ball via simulation, only to discover a decade later that it was the act of rubbing fingers together that starts the path towards consciousness? More likely, what if you spend a hundred years trying to simulate the inciting incident of intelligence generation, and never get anywhere?

As I’ve previously noted in these pages, it’s easier to teach a computer to beat Bobby Fischer than it is to teach a robot to consistently touch its own nose. It does not seem likely that this will change any time soon. Computers will become better and better at handling data, and they will become faster and faster at it, but they are very unlikely to do “robot-like” or “autonomous-like” things in the foreseeable future.

Unless.

Unless you are willing to change your idea of what a robot is. If you started right now, you could create a mentat long before anyone creates useful AI. Which is to say that you could do some mild genetic engineering on a human pattern, then raise that human being in such a way as to apply hyper-intelligence to the problems you place before it. This was how Frank Herbert waved his hand at supercomputers in the Dune books. Poor Frank grew up in an era where “strong AI” was always right around the corner, but he didn’t want to write about that, so he came up with the “Butlerian Jihad” that destroyed all the AI, leaving enhanced humans, mentats, to do the work of supercomputers. Sixty years later, we are no closer to strong AI than the scientists of his time — but we could have raised five generations of human mentats in that time, the same way you create new dog breeds.

Naturally, you’re not going to get away with openly breeding people like animals, against their will and whatnot. Instead, you’d want to create artificial conditions to ensure that the highest-IQ people had higher-IQ children who could then be matched and bred. Your homework, dear reader, is to

a) conceive of an American society that disproportionately incentivized breeding with people of like intelligence, at the expense of all other qualities;
b) demonstrate how, if at all, this differs from the way we pack certain American colleges with a pre-determined ratio of high-IQ men and women nowadays.

If I wanted to write sci-fi, I would start with this concept: It’s XXX years in the future. Most “computing” is done biologically, by creatures that are vat-grown with bodies that enhance their abilities to develop certain forms of intelligence. General-purpose people are thin on the ground, and there are some bio-mechs in power who don’t like the idea of letting them run around at all. Pow. There’s your conflict all set up and ready to go.

The problem, of course, is that you’d have to come up with some sort of hand-waving idea as to why the general-purpose humans aren’t trivially easy to destroy. Maybe they’re the only people who can eat fresh-grown food or something like that. Otherwise it’s a turkey shoot. Feel free to leave your ideas in the comments.

All of this goes a long way to suggest that post-humanism will become a reality well before the Golden Age Sci-Fi scenarios of Planet-Sized Supercomputers and whatnot. Frank Herbert might have been on the right track with his Bene Gesserit and Guild Navigators. Maybe the way you get to artificial intelligence isn’t by starting with artifice and making it intelligent; maybe you start with intelligence, and make it more artificial.

Apropos of nothing, this is how the rebooted Battlestar Galactica series worked: the “Cylon Raiders” were cyborgs with living tissue inside a metal ship. That’s not quite right, unless you have a way to let the ship grow and develop along with the brain. Which is a very difficult problem, but probably not as hard as creating “strong AI” out of an Intel chip. That’s never going to happen. As Locklin notes elsewhere, when the tech firms say something will be handled by “AI” they might really mean “Aliens and/or Immigrants”. For longer than anyone reading this blog will be alive, the cheapest way to address a problem will continue to be the sourcing of cheaper labor. Which means that you’re already living in the future, every time you order an Uber or eat some foodie meal that can’t be harvested and assembled in any world that pays a living wage to all of its laborers. Another way to look at it: Scott Locklin’s “beer robot” has existed for thousands of years, as an “indentured servant” or “slave” or just “employee”. If it ever gets rendered in metal-and-motors form, there still might be a biological brain doing the work behind the mask. Would such a creature have a soul? What about the creature who created it in the first place? What about any of us?

13 Replies to “How Smart Can You Be If You Ain’t Got No Body?”

  1. Disinterested-Observer

    The first time a baby grabs a finger (or a boob) its brain and its body are learning something. More in theme with the article, my niece learned how to ride a bike at a shockingly young age. Getting back to your Hagerty article, I would much rather see, and I think it is vastly more feasible, an infrastructure that supported autonomous flying vehicles and left the surface streets to the meatbags.

    Also nice to see that you are still putting out your own original content even as the back catalogue remains in storage.

  2. stingray65

    “The problem, of course, is that you’d have to come up with some sort of hand-waving idea as to why the general-purpose humans aren’t trivially easy to destroy.”

    A very thought-provoking essay – thanks Jack. Of course most general-purpose humans would be trivially easy to destroy – just give them tax incentives to buy a Tesla with Auto-Pilot, and then as they are napping or watching porn while the car is driving them to work or McDonalds they smash into an abutment or parked emergency vehicle and die in a fireball of battery acid – EXCEPT for the tiny fraction who enjoy driving and refuse to use Auto-Pilot and therefore live to fight against the tyranny of the bio-mechs.

  3. John C.

    I wonder how it has worked out for Japan where they have avoided immigrants in favor of simple robots to help care for their aging population. I had rather hoped this was something we could learn from.

    • Latisha Brown

      The tribe keeps trying to fill Japan with immigrants from Africa. I heard Africans have higher birthrates than the Japanese. How long can Japan resist? I pray for Japan.

  4. James

    You might recall that before Amazon acquired Kiva, and its designs for incredibly stupid robots, Amazon had its own designs for efficiency in picking… And Kiva’s robots blew Amazon’s designs out of the water.

    Kiva’s robots had autonomy sufficient to find the next point on the grid (important as wheels wear down–they could basically adjust their servos in real time), and could raise or lower their lids.

    Kiva robots didn’t take items off shelves–that’s too hard for robots to do. They raised their lids and carried the shelves to the human picker. I remember hearing all of this and thinking, man that’s a really nice robotics design, and it will blow everything else out of the water–but they really don’t need a computer programmer now, now do they?

  5. dzot

    Robin Hanson, in Age of Em, gets around this by suggesting that AI/Robots can be copied from beings that have already done the physical/neural learning via whole brain emulation.

    • NoID

      I added Age of Em to my reading list after hearing the author debate Bryan Caplan at the SoHo forum on the question of whether or not robots will eventually dominate the world. As I recall, Mr. Hanson was quite welcoming of our utopian collaboration with robots in the future.

      As far as the sci-fi pitch goes, it is reminiscent of “Probots and Robophobes” by Scandroid, a song that I doubt Jack will appreciate but which chronicles a scenario not unlike the one in his story proposal.

  6. ScottS

    2019 came and went and we still don’t have Replicants. While the movie Blade Runner was generally considered a box office flop (although it later became a sci-fi cult classic), it left an impression on me, having seen it in the theater when it was released.

    While it was never overtly communicated in the movie, Replicants very clearly were slaves “developed” to serve genetic humans in tasks that were of low value or dangerous. Robots are developed for exactly the same purpose. They cannot be slaves without intelligence and self-awareness. Combining AI with robotics? Is this not the same as creating Replicants? Where do we draw that line? I hope I never get too lazy to get off my ass and get myself a beer.

    The biggest regret from the original Blade Runner is that we know almost nothing about Rick Deckard’s gun and suspect we are going to need one in the future.

      • John Marks

        My personal blog is called The Tannhäuser Gate. I have long held that “Blade Runner” is a Christian allegory about the meaning of slavery (or bondage) versus freedom. At the end, Roy Batty saves the life of the man who has spent the movie trying to kill him for money. It seems that freedom from bondage often requires the death of, if not the innocent, at least of the “unguilty.”

        Here’s what I wrote on my blog, answering my own question, “Why name your blog ‘The Tannhäuser Gate’?”

        START

        I think that Blade Runner is the greatest science-fiction movie of all time. I think that, because it creates a completely believable future world that is arrestingly foreign, but hauntingly familiar. At the same time, the story almost subliminally makes us uncomfortable about our collective past. Obviously, book author Philip K. Dick tapped into deep historical and cultural currents involving not only what it means to be human, but also, “how then should we live”—in the sense of act or behave. Ironically, in the end, it is a non-human who behaves heroically.

        Replicants are slaves, and they are self-aware. They were created to do “the jobs Americans don’t want to do” (scare quotes my own). When replicants get out of line, we hunt them down. They might not feel pain the way we do, but they understand what their own death means. The whole film inexorably moves toward the climax wherein replicant Roy Batty, knowing that he is dying, nonetheless saves the life of the human who has been trying to murder him.

        Reportedly, actor Rutger Hauer ad-libbed his final scene:

        I’ve seen things you people wouldn’t believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain… . Time to die.

        If that was improvised, the scene is doubly impressive.

        Not to take anything at all away from Rutger Hauer’s career-defining performance, but I think that it was everything that director Ridley Scott had put in place that made that improvisation possible. Or perhaps even inevitable. Hauer was keying into the action and blocking that were already laid down; and, especially, the props and the special-effects makeup.

        I think that the subliminal connection Ridley Scott was seeking to make for his audience was given away by two small touches:

        1) For the entire last scene, Roy Batty tenderly holds a white pigeon (or a dove). When Batty dies, the bird flies skyward.

        I think that the bird not only symbolized that replicants have immortal souls. I believe that the bird represented the active presence of the Holy Spirit, and that that was what inspired Batty to save a man who had earned death.

        I think it is entirely possible that Ridley Scott was aware of the same device’s having been used in the climactic scene of Stanley Kubrick’s Barry Lyndon. Right before Barry Lyndon discharges his dueling pistol into the ground, rather than kill his wife’s son from her previous marriage, a pigeon is seen to fly down from its roost on the inside sill of an eyeball window in the outbuilding’s pediment.

        2) Back to Blade Runner. As Batty single-handedly lifts Deckard to safety, there is a fleeting glimpse of a rusty nail that has gone through Batty’s hand. I think quite obviously (though not exactly analogously) Scott was making Batty into a Christ figure. (Note the church-bell-like chimes entering into Vangelis’ soundtrack score, at that point.)

        The action and blocking and camera angles were all set up, and I am sure had been run through before. The white bird was ready, and Batty’s hand had been doctored to look transfixed. So all the heavy lifting had been done to wrap up this movie, and set up the sequel.

        Deckard goes on living, if not on borrowed time, on unmerited time (bought via “unmerited grace”?). So, the next movie will tell us what he has done with it.

        Mr. Hauer’s brilliant free association only raised the scene that was already laid out, up to a higher level.

        END

        jm

  7. Ark-med

    Recall the PlayStation Gran Turismo Academy, where toppers on the leaderboard were offered a chance to compete IRL, with some limited success.

  8. John Marks

    Dear Jack,

    About your “homework,” I filled that out about 40 years ago!!! Really!

    I had lots of part-time jobs during college and college summers. A favorite was being the night clerk at a hole-in-the-wall wine shop (that also sold hard liquor and beer) that had won the favor of Brown University, and so the different departments could in theory send someone to make some purchases and in due course the proper department would get a bill.

    One afternoon a rather pleased-with-himself young man came in and said that he wanted to charge a couple of cases of beer to the Admissions Office. They were going to celebrate having finished making all the decisions about the next entering class. I did not even ask him for ID, his aura of WASP entitlement was so perfect, so strong, so complete. I just asked him to print his name on the three-part carbon charge form, as well as sign it.

    Something about the guy’s “Got the World on a String” attitude irked me royally, so as he prepared to heft his two cases of beer cans off the counter and out into his illegally-parked car, I asked him if he minded if I asked him a question. No problem, he said.

    I asked, “Do you ever think about all the really bad marriages that would not have happened, except for the choices you make?”

    He thought for a moment, then half chuckled through his smile. “No,” he beamed. “I never have!”

    Back to you, Jack.

    PS: What I had in mind was a guy from the Midwest who ended up at Brown, a few years before I did, and he spotted a girl during Freshman Week, and said to himself, “I am going to marry her.” And he did, right after they graduated, and it did not last two years.
