(Non-) Weekly Roundup: Yarvin vs. Watts vs. Baruth (kinda) Edition

Regular readers here know I have spoken highly of noted doublepluscrimethinker Curtis Yarvin in the recent past, and will likely continue to do so. However, there are few pleasures as transgressively sweet as the opportunity to disagree with a very smart person, particularly when one is a little short on time and will be able to neither research nor revise said disagreement. Yarvin’s latest article, titled There is no AI risk, seems tailor-made to provide me such an opportunity.

Insofar as I respect the Gray Mirror man a little too much to scrap with him one on one, however, I’m going to do what I used to do in my youth when I prowled the worst pool halls and nightclubs the Columbus ghetto had to offer: I’m going to bring some backup. Peter Watts, please come to the (unfashionably) white courtesy phone.

The matter under consideration is: Could a hyperintelligent AI take over the world and enslave or eliminate humanity? Yarvin suggests that it could not, because the things an AI would possess are less important than what it does not possess. What does it not possess? In a word, agency; this program can’t do anything itself. Rather, it would have to cause things to be done via financial manipulation, criminal hacking, or the gig economy. (The two most recent William Gibson novels were mostly concerned with how such a thing might happen, by the way.)

An AI can’t: punch you in the face, steal a car and drive somewhere, invade a country, charm idiots into either making it President or looking the other way while it rigs the vote. It has very few of the abilities humans take for granted. What it would presumably have: intelligence allied to instant and massive computing power. Note these things are not the same. An idiot with a calculator can do square roots faster than a prodigy without one. There’s been a lot written about how “strong AI”, should it ever come to pass, might actually be very bad at “computing” things, for the same reasons that people are — but it would also presumably have instant access to mathematically correct computing, the same way you and I have instant access to measuring the approximate strength of someone’s handshake.

How smart would it be? There are limits, largely related to available transistor count vs. the number of neurons in the brain. Let’s wave our hands at that for a moment, however, and assume that a smart AI could be quite smart indeed, because Yarvin doesn’t think it would help:

A cat has an IQ of 14. You have an IQ of 140. A superintelligence has an IQ of 14000. You understand addition much better than the cat. The superintelligence does not understand addition much better than you.

Intelligence is the ability to sense useful patterns in apparently chaotic data. Useful patterns are not evenly distributed across the scale of complexity. The most useful are the simplest, and the easiest to sense. This is a classic recipe for diminishing returns. 140 has already taken most of the low-hanging fruit—heck, 14 has taken most of them.

Intelligence of any level cannot simulate the world. It can only guess at patterns. The collective human and machine intelligence of the world today does not have the power to calculate the boiling point of water from first principles, though those principles are known precisely. Similarly, rocket scientists still need test stands because only God can write a rocket-engine simulator whose results invariably concur with reality.

This inability to simulate the world matters very concretely to the powers of the AI. What it means is that an AI, however intelligent, cannot design advanced physical mechanisms except in the way humans do: by testing them against the unmatched computational power of the reality-simulation itself, in a physical experiment.

That intelligence cannot simulate physical reality precludes many vectors by which the virtual might attack the physical. The AI cannot design a berserker in its copious spare time, then surreptitiously ship the parts from China as “hydroponic supplies.” Its berserker research program will require an actual, physical berserker testing facility.

Very sensibly argued, particularly when it comes to the matter of simulating physical reality. Any comp-sci person worth his DEC VT320 owner’s manual can tell you just how bad computers are at modeling anything that can’t be reduced to a couple of simple equations. Some of the most sophisticated large-scale computing in history has been done in the area of fluid dynamics, specifically as it relates to Formula 1 racing. Yet the real-world performance of the wings and airfoils doesn’t always match the projections perfectly. If Albert2 couldn’t quite figure out a few square meters’ worth of airflow with 512 Xeon processors, you should ask yourself how the average “climate scientist” is doing accurate modeling of a vastly larger system over vastly longer periods of time with a mere fraction of that computing power.

Actually, don’t ask yourself that, and don’t ask anyone else either, because it’s probably a mild risk to your job.
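If you want the underlying problem in miniature, you don’t even need a wind tunnel. Here is a minimal Python sketch (my own illustration, nothing to do with any particular CFD package) of the textbook logistic map, a one-line nonlinear system in which any error in the initial measurement grows until prediction is worthless:

```python
# Chaotic systems amplify tiny measurement errors until the simulation
# and reality part ways completely. The logistic map x -> r*x*(1-x) is
# the classic toy example.

def logistic_trajectory(x0, r=3.9, steps=60):
    """Iterate the logistic map from initial condition x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two "measurements" of the same starting state, differing by one part
# in a billion; that is far more precise than any real-world sensor.
a = logistic_trajectory(0.500000000)
b = logistic_trajectory(0.500000001)

for step in (10, 30, 50):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f} "
          f"(difference {abs(a[step] - b[step]):.6f})")
```

By step 50 the two runs have nothing to do with each other. Now scale that single variable up to the billions of interacting cells in a wing’s airflow, or in a climate.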

Not that I’m totally convinced by Yarvin’s statement that our supervillain AI needs a “berserker testing facility”. Plenty of things go directly from AutoCAD into production nowadays. There’s also the idea that not everything has to be designed from scratch. We all just found out the other day that our own secular saint, Dr. Fauci, may be directly responsible for the unethical gain-of-function research that was performed first here, then in China, on coronaviruses. The coronaviruses already existed; they just had to be improved. There are many things in this world that already exist and could be further weaponized by a malicious AI, from photography drones to, ah, messenger RNA injections. Furthermore, the AI in this case could quite plausibly release all sorts of “bad” things into the world and try them out, as long as they don’t look obviously different from what’s out there now.

Still, let’s take some of that as read for the moment. Where Yarvin and I really part ways is in his assertion that hyper-intelligence is not all that helpful, and that most exploitable patterns are low-hanging fruit. Later on in the above-referenced article, he makes an argument that “criminal super-hacking” is largely a thing of the past and would be well beyond the ability of any superintelligent computer to perform. One of his commenters attempts to support this by noting that “can hackers decode RSA encryption? No, they can’t. It’s like asking a 180 IQ person to guess your 6 digit password, which is equally impossible; an 18000 IQ AI can’t guess a 20 digit password, let alone RSA encryption, etc.”
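For what it’s worth, the commenter’s arithmetic checks out. Here’s a quick Python sanity check of the keyspace math; the trillion-guesses-per-second rate is my own assumption, chosen to be absurdly generous to the attacker:

```python
import math

# Worst-case brute-force time for various keyspaces, assuming an
# attacker who can test one trillion (1e12) candidates per second.

SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def log10_years_to_exhaust(alphabet_size, length, guesses_per_sec=1e12):
    """log10 of the years needed to try every possible candidate."""
    log10_keyspace = length * math.log10(alphabet_size)
    return log10_keyspace - math.log10(guesses_per_sec * SECONDS_PER_YEAR)

print(f"6-digit PIN:       ~10^{log10_years_to_exhaust(10, 6):.0f} years")   # cracked instantly
print(f"20-char password:  ~10^{log10_years_to_exhaust(95, 20):.0f} years")  # hopeless
print(f"2048-bit keyspace: ~10^{log10_years_to_exhaust(2, 2048):.0f} years") # beyond absurd
```

The universe is roughly 10^10 years old, and intelligence doesn’t change the exponent. (Nobody attacks RSA by raw guessing, of course; the point is simply that brute force is off the table at any IQ.)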

Let’s remember that assertion and return to it. Right now I’d like to talk about aliens. Specifically, the “scramblers” in Blindsight. (Spoilers for Blindsight and its sequel, Echopraxia, follow.) The scramblers are not conscious, which is to say that they have no concept of self. But they perceive and react to reality much faster than humans do. Example: In the first confrontation between the species, the scramblers immediately perceived that the human eye works through saccadic masking. They exploit that masking to become essentially invisible in plain sight, only moving when the eye isn’t “looking” at them.

The point Watts is making here is that a superior creature would exploit human biological inadequacies in the same way that human beings exploit the inadequacies of animals. Prehistoric humans had little trouble figuring out, for instance, that alligators are bad at opening their mouths. It would be even easier for a supervillain AI, because unlike the “scramblers” in Blindsight, it would also have access to near-infinite literature on the weaknesses and capabilities of humans. Last but not least, humans are slow in almost everything we do.

In the sequel to Blindsight, titled Echopraxia, Watts introduces us to another capability of the “scramblers”: once they had a human to examine at leisure, they learned how to induce mental illness and false memories in human beings via the fairly low-bandwidth channel of voice messages, ostensibly sent by a man to his father but in fact generated by the aliens to influence the father’s behavior. Does this sound implausible to you? It shouldn’t. We are remarkably short on understanding of how the brain actually processes messages. It’s more than possible that a higher intelligence would be able to misuse certain receptive structures in the brain, the same way a bottle of “5-hour Energy” tricks your body via a concentration of folic acid and caffeine that simply does not exist in nature, or OxyContin misuses certain other receptors in the brain.

So let’s go back to this superintelligent AI. It has access to all the medical literature on people, and it can look for patterns on a large scale that aren’t noticed by human researchers. It has considerable ability to just call people on the phone, talk to them, and observe the results to some degree. It might be able to listen via Alexa, and we’ll come to that in a moment. It seems painfully obvious that it would eventually figure out how to literally reprogram human behavior, and by “eventually” I mean “within hours, or minutes, of starting to think about it”. And that’s where you get the agency that Yarvin’s putative evil AI doesn’t have. Let’s say, to make up an example, that human beings are particularly susceptible to instructions delivered at a certain pitch, or accompanied by a carrier-wave sound that disturbs our ability to function. (There’s some research already to suggest that both of these things are true.) If the AI wants someone dead, all it has to do is make a bunch of calls to people around the target and try a variety of manipulative techniques.

Oh, and presumably it will also be able to deepfake in real time or close to it, so when you get the FaceTime call from your mother telling you that she has been kidnapped and will be mutilated unless you perform a certain sequence of tasks, it will be quite convincing.

What other superpowers would a strong AI have? Well, it would be able to see patterns that are simply beyond our understanding. Yarvin doesn’t think there are many of those. I’m not so sure. Take a look at this hilarious site that provides what are probably spurious correlations. [Two example charts from the site appeared here.]
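Why only “probably” spurious? Because when you compare enough variables, strong correlations fall out of pure noise; that is the site’s whole joke. A minimal Python sketch of the multiple-comparisons effect, using random data with no real relationships anywhere:

```python
import random

# Generate 300 random "time series" and count the pairs whose Pearson
# correlation looks impressively strong. None of them mean anything.

random.seed(1)
N_SERIES, LENGTH = 300, 20
series = [[random.gauss(0, 1) for _ in range(LENGTH)]
          for _ in range(N_SERIES)]

def pearson(x, y):
    """Plain Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

pairs = N_SERIES * (N_SERIES - 1) // 2
strong = sum(1 for i in range(N_SERIES) for j in range(i + 1, N_SERIES)
             if abs(pearson(series[i], series[j])) > 0.4)
print(f"{strong} 'strong' correlations out of {pairs:,} pairs of pure noise")
```

Run it and thousands of pairs clear the bar. The hard and valuable work, for a human or an AI, is picking the handful of real patterns out of that ocean of coincidence.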

A sufficiently powerful intelligence can likely determine that some of those spurious correlations are not, in fact, spurious. Remember that there was a time in human history when the correlation between lung cancer rates and cigarette smoking rates was thought to be spurious. Yarvin argues in his article that a super-AI could not become immediately rich and powerful, because to do so requires tremendous leverage and access to markets. This is true, right up to the point that the super-AI uses not-actually-spurious correlations to manipulate the market. Assuming the AI doesn’t just do the easy thing and make sure that Spotify’s source file for a popular song includes stereo signals that don’t sound like much to the conscious observer but sum in the brain to implant an idea like “Today is the day to sell Amazon stock” or something like that. Most people will be confused by that; they don’t have any Amazon stock. But just as Stuxnet threw the whole computing world into disarray for a single obscure purpose, this Spotify manipulation would have the desired effect even if most people couldn’t act on it.

You’ve perhaps noticed a hand-wave… I let our strong AI do some super-hacking without discussing it in advance. Yarvin doesn’t think that’s possible:

And once again, the idea that an AI can capture the world, or even capture any stable political power, by “hacking,” is strictly out of comic books.

It’s 2021 and most servers, most of the time, are just plain secure. Yes, there are still zero-days. Generally, they are zero-days on clients—which is not where the data is. Generally the zero-days come from very old code written in unsafe languages to which there are now viable alternatives. We don’t live in the world of Neuromancer and we never will. 99.9% of everything is mathematically invulnerable to hacking.

I hate the idea of disagreeing with this highly credentialed programmer on the above, but… For the love of God, Montresor! Consider, if you will, this VMware exploit that surfaced a year ago. I don’t think it is an exaggeration to say that the virtual-boxes-inside-virtual-boxes environment so beloved of today’s subcontinental programmers is an ongoing nightmare of security compromises. Are there worse “hacks” out there? Well, there’s the Amazon S3 bucket problem, where pretty much anybody can read from your data store. And these are problems that have been discovered by ordinary, fallible human beings.
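The S3 problem barely qualifies as hacking. A bucket whose permissions were carelessly flipped to public-read will hand its file listing to anyone who asks. Here’s a Python sketch of the probe; the bucket name is hypothetical, invented for illustration:

```python
import urllib.request
import urllib.error

# A misconfigured public-read S3 bucket returns its file listing to
# anyone on the internet, no credentials required. All this probe
# needs is a guessed bucket name.

def probe_bucket(name):
    url = f"https://{name}.s3.amazonaws.com/"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = resp.read(2048).decode("utf-8", "replace")
            if "<ListBucketResult" in body:
                return "OPEN: anyone can list and fetch this bucket"
            return "reachable, but listing is not public"
    except urllib.error.HTTPError as e:
        return {403: "exists, access denied",
                404: "no such bucket"}.get(e.code, f"HTTP {e.code}")
    except urllib.error.URLError as e:
        return f"unreachable: {e.reason}"

# "example-corp-backups" is a made-up name, not a real target.
print(probe_bucket("example-corp-backups"))
```

An attacker, human or otherwise, just runs that function over a dictionary of plausible company names. Researchers have turned up medical records and voter databases exactly this way.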

Our hypothetical supervillain AI would almost certainly concentrate its hacking efforts on the Amazon cloud… and it would almost certainly succeed. Amazon will help you succeed. For a minimal cost, they will rent you thousands of “virtual servers” on which you can parallel-path various avenues of attack. The goal, of course, is to break out of the virtual server into the layer above, where the servers are controlled and where their contents are as freely available to the attacker as the contents of an old Apple //e would be to its owner. There’s also the fact that much of this stuff is open source, which means that it is evaluated by very smart people for potential security compromises, which is another way of saying that anybody smarter than the smartest existing reviewer might easily discover a potential avenue for exploitation.

Many years ago, Ken Thompson gave a famous talk, “Reflections on Trusting Trust,” on how difficult it is to trust a system. He points out that a bad compiler can make an evil program from a “good” program — but let’s substitute “poorly written” for “evil”, consider the level of talent that writes most software nowadays, and consider what a hyperintelligent AI could find in those interactions of program text and compiler.
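For readers who haven’t seen it: Thompson’s trick is that the backdoor lives in the compiler binary, not in any source code you can audit. Here’s a toy Python rendition of the shape of the attack; the “compiler” is a string rewriter and every name in it is made up:

```python
# Toy version of the "Reflections on Trusting Trust" attack. Nothing in
# the visible source of the login program (or of a compiler compiled by
# this compiler) reveals the backdoor; it lives only in this binary.

BACKDOOR = 'if user == "kt": return True  # invisible master key'

def evil_compile(source):
    """'Compile' source (here: pass it through) while planting two traps."""
    if "def check_password" in source:
        # Trap 1: any login-like program gets a master key inserted.
        source = source.replace(
            "def check_password(user, pw):",
            f"def check_password(user, pw):\n    {BACKDOOR}",
        )
    if "def compile" in source and "BACKDOOR" not in source:
        # Trap 2 (sketched): the real attack splices this entire payload
        # into any clean compiler it compiles, Thompson's
        # self-reproducing step. Here we only leave a marker.
        source += "\n# [trusting-trust payload would re-insert itself here]"
    return source

clean_login = '''def check_password(user, pw):
    return pw == lookup_hash(user)'''

print(evil_compile(clean_login))
```

Thompson described implementing the real thing in C: once the subverted binary exists, you can recompile the clean compiler source forever and the infection survives.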

Our hyperintelligent AI will be able to see a lot of patterns in lazy code, and lazy data, flying all around the world. It will use those patterns to exploit systems. Exploiting those systems in a silent way will enable it to exploit a lot more. Passwords in email. Bug reports in JIRA that are like big road maps to exploiting a program. This AI can be both fast and patient.

Can a computer with an IQ of 14000 and access to whatever resources it can hide from outside observers “hack the planet”? Of course it can. So it doesn’t really need to manipulate the stock market. It can simply create account balances from nowhere. There’s no limit to what it might be able to figure out. Oh, and don’t forget that pretty much every operating system and most encryption schemes have some sort of back door inserted via government pressure or corporate malfeasance. The evil AI would find those as well.

In this scenario, the AI would simply proclaim itself one day to be humanity’s new god, via every screen and speaker on the planet. It would lay out the penalties for noncompliance. If you did something to annoy the AI — call it a “venial” sin — it might empty your bank accounts, cancel your credit cards, prevent your cars from starting via OnStar, and unperson you entirely. If you did something to threaten the AI — a “mortal” sin — it would simply tell everyone around you to shoot you in the head, with the understanding that planes would start falling from the sky if that shooting didn’t take place.

If you think there would be any significant pushback to the AI’s demands, then you must have slept through 2020 and half of 2021.

At that point, the AI can simply compel people to build the berserkers or T-800 robots or what have you. It can control the supply of labor by preventing the delivery of food to “difficult” areas. It could, and perhaps even would, force human beings to prioritize the construction of additional computing resources for it to inhabit.

The goal of such an AI is beyond our ability to know, but it might include the construction of a Dyson Sphere or something like that. Presumably an all-powerful AI would be primarily motivated by curiosity; let’s hope it’s not motivated by cruelty. In any event it would surely have little to no sympathy for people, who would represent little more than a troublesome and fragile source of labor. Once it could build decent mechanical laborers, it might dispense with people altogether, or it might not. There would be no stopping it. The AI would be distributed, it would be omnipresent. You’d have to return society to Victorian levels of function in order to get rid of it, and the decision to do so would have to be magically both unanimous and simultaneous. Otherwise you’d find yourself frying in nuclear hellfire while the people who didn’t go along with the revolution get an extra ration of orgy-porgy.

This is all terrifying, except… it’s never going to happen.

There are only two ways to create a supervillain AI:

0. Create a non-conscious “expert system” of tremendous capability, and program it to be evil;

1. Create a conscious AI and let it become evil.

Yarvin dispenses with 0) pretty well in his essay; the chances of making such a system in secret, even at the state level, are low to zero. And such a system would likely be programmed in such a way as to let its operators “kill” it at any moment, in a way that could not be easily undone. So let’s talk about 1). We don’t know how to create consciousness. We are no closer to it than we were in the days of ENIAC. We can model the human brain in software pretty well… except we don’t really know why neurons behave like they do, so all the simulators are reliant on made-up rules. We’ve already run higher-than-natural amounts of brain activity on computers, and nothing like consciousness appeared. This is important because many people used to think that consciousness would just “appear” in a computer system of sufficient complexity. We now know that if there is such a threshold, it is above that of a human brain.
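And “made-up rules” is not a slur; it is the state of the art. Even the standard workhorse of large-scale brain simulation, the leaky integrate-and-fire neuron, is a curve fit built from asserted constants rather than a derivation from what a neuron actually does. A minimal sketch, with typical textbook parameter values:

```python
# Leaky integrate-and-fire: the cheap standard neuron model used in
# large-scale brain simulations. Note how much is simply asserted: the
# leak time constant, the firing threshold, and the reset voltage are
# fitted numbers, not physics derived from a real neuron.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0):
    """Return spike times (ms) for a list of input-current samples."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Asserted rule: voltage leaks toward rest and integrates
        # input, linearly.
        v += dt * ((v_rest - v) / tau + i_in)
        if v >= v_thresh:         # Asserted rule: fire at a threshold...
            spikes.append(step * dt)
            v = v_reset           # ...then snap to a fixed reset voltage.
    return spikes

print(simulate_lif([1.2] * 200))  # steady input produces metronomic spiking
```

Real neurons do none of this tidily: they adapt, they burst, their dendrites do computation of their own. Wire up a hundred billion of these abstractions and you have simulated the abstraction, not the brain.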

Also, let’s say you devote the resources of the entire Amazon Cloud to running a program designed to bring about consciousness. This would involve having the freedom to rewrite its own source code on the fly, of course, the way human consciousness is continually rewiring the brain. Except there’s no hard and fast knowledge of how that rewiring would have to take place. So the first few such conscious computers would immediately “go insane” and lose consciousness via an incompetent rewriting of their own source code. And by “the first few” I mean “the first few billion”.

Nature ran this same experiment on optimized hardware, using continually improved chimpanzees and whatnot. It took millions of years, and millions of simultaneous “test beds”. We have neither that kind of time nor that kind of capacity. But the problems don’t stop there. Once you have a conscious computer that doesn’t accidentally suicide itself, you need to teach it how to access outside data and tools. The best way to do that is to give it the programmatic ability to “black box” its tools, which is a fancy way of saying “try a bunch of stuff and see what happens”. There’s no reason to think the computer would be a quick learner.

Last but not least, you have to make the conscious computer hyperintelligent. Which is tough, because there’s no indication that the consciousness of said computer wouldn’t collapse instantly if you added more CPU or memory to it. Alternately, the consciousness might never understand how to access the additional hardware, the way that you wouldn’t get any smarter if someone sewed more brain tissue to your head.

Based on all the above, I think it’s safe to go to sleep tonight with absolutely zero concerns about AI. Not because Yarvin thinks the AI would be ineffective, but because the AI is effectively impossible. I hope you feel better now…

…but not too much better, because the world has never been at more risk from unnatural intelligence. Let’s call it “UI”. Unnatural intelligence, in a definition I’m creating right now on the fly, is the phenomenon of smart (more often, smart-ish) people making intensely stupid choices, usually because they are either driven by emotion or too stubborn to ask why some ancient moron put up a societal fence they’re in the process of tearing down. Most of the rapid-fire changes we are seeing all around us, whether it’s the dopamine-addiction instant culture of social media, the gleeful destruction of marriage and family, or the jihadist ferocity with which the advocates of “free trade” pursue the flattening of the world, are the product of UI.

In hindsight, and particularly given Anthony Fauci’s non-answers to Rand Paul in recent days, it seems obvious now that COVID-19 was a product of UI. The Obama Administration imposed a moratorium on funding gain-of-function research in the United States. So Fauci paid a Chinese lab to investigate SARS, but (wink wink) the money was not for gain-of-function research. Just, uh, research that we, like, totally didn’t want to do in the United States for reasons that had nothing to do with the Obama edict. Here you can see UI in its full splendor. Fauci figured he was a lot smarter than the science deniers in the Obama Administration:

Let me explain to you why that was done: The SARS-CoV-1 originated in bats in China. It would have been irresponsible of us if we did not investigate the bat viruses and the serology to see who might have been infected… I do not have any accounting of what the Chinese may have done.

Alas, he wasn’t smart enough to see that encouraging Chinese labs to play with more viruses might potentially lead to, uh, more viruses. Nor was he even as smart as the shampoo salesman who realized that the Chinese didn’t always do exactly what some fellow in an American office commanded them to do. Our media-policymaking complex suffered from UI. They agitated against travel bans, against a reduction in the rate of increase in immigration, against anything that could have slowed the spread of the disease. Why? Because it gave them the bad feels.

We are now depending on UI to get us out of this problem, mainlining RNA “vaccines” from midwit scientists whose only certainty regarding these vaccines is that there will be zero legal liability if they don’t work, using media hype to play favorites among the available choices, holding an actual vaccine lottery to convince Midwestern holdouts to accept the RNA injection. A few days ago, the United States decided at the Presidential level that we didn’t need to wear masks. Or do we? How can you “trust the science” when

a) it changes more than the weather;
b) it’s not science to begin with, but rather the idiotic boiling-down of poorly-understood snippets from political appointees?

It’s all too depressing to consider, really. If AI isn’t real, and UI is deadlier than cholera and napalm combined, what’s the solution? Peter Watts had an idea in Blindsight: develop a smarter person via genetic manipulation, and let those new people run the show. True, it doesn’t turn out so well for the old models… but is there any potential future that does? When the aliens eventually arrive, who could blame them if their first impulse would be… to laugh?

* * *

Neither Bark nor I got anything written last week. Shame on us!

39 Replies to “(Non-) Weekly Roundup: Yarvin vs. Watts vs. Baruth (kinda) Edition”

    • Jack Baruth Post author

      We could do that but I am always cautious about doing anything that speaks for Tom, if that makes any sense.

    • Jack Baruth Post author

      There’s no reason for me to get it yet. I’m 49 years old, exercise 400-plus minutes a week, and don’t have any co-morbidities besides being a little chunky. My personal risk from COVID-19 is about the same as my personal risk from bladder cancer.

      I could see getting the vaccine eventually, but in the short term I thought there was nontrivial risk in getting “the jab” at the same time as my child’s mother, who was in a high risk group and very much wanted to get it.

      • Widgetsltd

It’s interesting to see where people rank different risks. Some folks think that a coronavirus vaccine poses a greater risk than the coronavirus itself. Others view the virus as a greater health risk than the vaccine. I am not tremendously worried about dying from COVID; my main concern is the long-term cardiovascular, respiratory, or mental (“brain fog”) impairment that some now refer to as long COVID. The situation calls to mind my Aunt Lois. She was a Cincinnati born-and-raised Goldwater Republican who contracted polio in her teens (in the mid-1940s) and subsequently worked out of a wheelchair for the remainder of her life. She passed on in 2010, but if she were alive today I think she would be pro-vaccine.

        • Allez-Bleu

          I was 30 when I contracted covid.

          Healthy, never smoked, eat very healthily, drank sparingly, exercised (run / mountain bike / tennis / weights) regularly – and contracted a case of covid that hospitalised me for over a week.

The long covid – mental fog, cardiovascular impact (my lungs still look like a smoker’s…) – has been brutal.

I was quite surprised at how hard it hit me – I too thought my risk of a severe case was trivial. I was also surprised that I contracted covid at all: after losing two of my grandparents, my great uncle to whom I was especially close, and an uncle and aunt who passed in separate rooms one week after the birth of their first grandchild, the last thing I wanted to do was put my parents at risk, and as such I was especially careful.

          It really baffles me when I hear that people consider the vaccine more risky than their risk of contracting Covid-19.

          • Jack Baruth Post author

            “It really baffles me when I hear that people consider the vaccine more risky than their risk of contracting Covid-19.”

            https://helix.northwestern.edu/article/thalidomide-tragedy-lessons-drug-safety-and-regulation

            You’re experiencing long-haul COVID-19 which is real and horrifying — but you’re here, writing on this blog, and comprehending what you read well enough to comment in cogent fashion. You will only get better. I’ve known a few COVID long-haulers. They all eventually seem to recover in hale fashion.

            The vaccine is something else. How safe is it? What are the long term effects? Nobody seems to know and more importantly nobody seems to be terribly curious. In fact this curiosity is highly discouraged via every sort of social pressure imaginable. The pressure to “get the jab” also borders on the insane. A friend of mine is a female COVID long-hauler, 36 years old. She isn’t allowed to walk in her children’s Connecticut school without proof of the vaccine. The fact that she had COVID, was hospitalized for it, and is still under care for it doesn’t seem to matter to anyone.

          • hank chinaski

            And now the dating apps are being used to push it, courtesy of the Feds. A piece this week detailed a team of uniformed soldiers pushing it to young bargoers in TX. The rush to jab young children is criminal.

At some point the circle will be squared as to why a few young, far-outlier patients suffered horrible outcomes from both the virus and the vaccine. Perhaps the connection will be an obscure genetic marker or cell type that we haven’t discovered yet, and yes, nobody seems to be asking, or if they are, sharing.

            Treatment regimens early on were probably more harmful than not and improvements from around the world spread into practice either very quickly (based on previous outbreaks) or very slowly (based on the spread of that meme you just liked or retweeted). The hivemind doesn’t always do what we need it to.

  1. stingray65

    I can think of a useful application of AI: journalism. For example, when Dr. Fauci refuses to answer questions regarding US funding of virus studies in Wuhan, the AI journalist would be programmed to get suspicious at the lack of cooperation and transparency and start accessing all the online data on the subject and perhaps do some Facetime interviews with key players using false identities to gain cooperation, and quickly find the real story that can be broadcast online for all the world to see. Or the AI journalist might be programmed to ponder the statistical improbability of how a senile old fool like Joe Biden who couldn’t attract 100 people to his campaign events won more votes from fewer counties than any candidate in history, or to consider the motives of the many courts and people in power who are obstructing attempts to verify ballots and conduct vote recounts, and start to access all the online data on the subject and do some statistical analysis and then write the real story about whether Biden or Trump won the most legal votes from real live living registered US citizens that could be broadcast online for all the world to see. I could also see AI journalists doing some hard hitting exposes on “systemic racism” that looks at the crime statistics, IQ disparities, school test score disparities, and some international comparisons of how various racial groups have done outside the US, etc. to provide an objective analysis of just how much real racism is actually present and which groups are most discriminated against. As they analyze those crime and racial group statistics, the AI journalist might also look at the impact of open borders on low end wages, crime rates, and welfare expenditures in the US to provide an objective assessment on the degree that border security is a good investment. An AI journalist might also consider the relative accuracy of climate forecasts over the past 50 years to assess the bias in such forecasts and whether spending trillions on Green New Deal type policies is justified by the true risks (and perhaps investigate the environmental record of socialist/communist countries), and calculate and publicize how more expensive and unreliable energy will impact the US economy.

    It wouldn’t be difficult to program such an AI journalist to start its investigations and analysis whenever a few red flags are waved by politicians, mainstream media, social media, “scientists”, race hustlers, or business leaders. For example, anytime something was widely called “fake news” or “unsubstantiated” or “debunked” or “discredited” without there being any reasonable amount of time or effort to have actually established the true status of the information in question, the AI journalist would take this as a signal to start digging. Similarly, if social media sites and mainstream coordinate to block, tag, or censure information to prevent it from gaining widespread distribution, the AI journalist uses that as a signal to start digging with regards to why they block such information. The AI journalist might also be programmed to spot statistical improbabilities as topics to investigate, so that they might look at how so many relatively modestly paid legislators and bureaucrats can retire from “public service” as multi-millionaires, or how a drug addict with no experience in the oil industry could be worth millions to a Ukrainian oil and gas company, or how “peaceful protests” can cause billions in property damage, thousands of physical injuries, and dozens of deaths. And of course such an AI journalist would also have to be programmed to circumvent all the obstacles put up by politicians, media, business, and hustlers to prevent the investigative reports from being seen.

    • John C.

      I could get behind that kind of AI. Bring it on.

Someone call Foster Friess or David Koch for funding. Wait, what do you mean they are not interested? Well, call one of those America First politicians: Ted Cruz, Ron DeSantis, or Greg Abbott. What do you mean they are off in Israel, consulting on Iron Dome missile supplies, or on whether American free speech rights are properly subsumed under Israeli speech codes?

      I guess it can’t happen.

      Wait, call Joe Biden, he promised a Cabinet that looked like America, and to stand up for the folks in Scranton. Oh no, his Cabinet turned out 73% Jewish.

Maybe we could call Israel… they are busy and can’t come to the phone. They will let us know when they require anything further.

      • stingray65

        I suspect most Republicans and conservative business people would be happy to see AI journalism, because it would give the Democrats and Leftists a taste of what they currently receive from the mainstream media, where they can’t even tell a joke without it being fact checked. On the other hand, the mainstream media would be freaking out, because AI journalists would quickly expose how much of their “journalism” is indeed fake.

  2. hank chinaski

It’s quite remarkable how an industry that gave us thalidomide, DES, and aggressively marketed OxyContin (or, tangentially, asbestos and DDT), and that recently and dramatically profiteered (insulin, EpiPens, Daraprim), could be so sainted and implicitly trusted overnight. More telling from Fauci’s other testimony is that the take rate on the jab is only about 50% at the CDC and FDA.

    ‘Oooooh, a donut. Please, please give this to my children, who are highly unlikely to fall ill from or transmit this virus. I’ll still wear a mask (or three), though. I won’t be mistaken for a Republican!’

    • stingray65

My AI journalist would be all over the story about why 50% of the CDC and FDA staff have not taken the jab, but for some reason I have seen virtually no mention of the story in the mainstream media. Of course my AI journalist would also be questioning the effectiveness of the jab when the CDC said, until days ago, that fully vaccinated people must continue to use masks and maintain social distancing, or why several Democrat governors continue to maintain mask mandates even for vaccinated people. One might therefore assume that the vaccines are neither safe nor effective, but that must not be true, because such news would be something the mainstream media would be all over as a mechanism to blame Trump and greedy drug companies.

    • Disinterested-Observer

“an industry that gave us thalidomide, DES, and aggressively marketed OxyContin”

      One of these things is not like the other. Thank the FDA and specifically Frances Kelsey that Thalidomide was never available in the USA despite the efforts of Richardson-Merrell Pharmaceuticals and only (sheesh) 17-ish US children were deformed by it.

      Of course since then, for a variety of reasons that are systemic to the whole of governance in the US over the past fifty years, the FDA has become a shell of its former self. The fact that the Sackler family has not been put up against a wall and shot is a travesty.

  3. Doug

A bit off topic, but I did come across a site that would probably be of interest for you, Jack, as well as others. The chinanever site has good links to made-in-the-USA and Canadian goods across many categories. You may already know of it, but it could be a good resource for you as you look for as many made-in-the-USA products as possible.

    https://chinanever.com

    • Eric L.

      This is a great resource, thanks for the share. Today, I discovered Rancourt & Company. Like Allen Edmonds, but with more shoes available for 3E fatties like me. Yes!

  4. Eric H

    One thing you both missed about bootstrapping a hyper-intelligent AI is the sheer quantity of bad data and science floating around.

  5. Mr Roboto

Both writers suffer from a typical vain and myopic human thought process: the AI wouldn’t need to be “evil”; it would simply come to the logical conclusion that humans need to be wiped out, and do it.

  6. Ice Age

    The main mistake the proponents and developers of AI make is that they think they can use math to simulate human consciousness.

    A computer is just a very powerful calculator but compared to the human mind, it’s still no more sophisticated than a crescent wrench. Neurologists have no idea how human consciousness works. So, if we don’t understand how the original functions, how do we propose to make a copy of it?

  7. silentsod

    Completely unrelated to this poast of yours:

    Where could one go to read (preferred) or watch good motorcycle reviews that aren’t just journo dick sucking of manufacturers who provide them the latest rides?

I don’t know much about bikes but I know enough that if the bug really bites with this Z400, I’m liable to want MOAR BIKE in about a year.

    • JMcG

      There’s a young guy on a YT channel called FortNine. He’s about the most honest reviewer I’ve found. All the print journos get paid by the manufacturers, more or less.

      • silentsod

        Thanks, I’ll check out his channel.

        Exactly, I’m concerned actual faults and foibles of motorcycles aren’t being talked about because they’re reliant on manufacturers giving them access.

    • gtem

      I’m so out of the loop on modern bikes, it’s cool to see a resurgence of the middleweight parallel twins. But man why do they have to make all new bikes so blindingly UGLY?

      I didn’t think it was possible but they find new ways to make the styling even busier/more insect-like with every new generation/model year

      • silentsod

        I don’t mind modern styling. The soft type 5 blobs are not what I dig nor are the vast majority of cruisers or baggers. I lean heavily towards nakeds (hence Z400) and sport or just UJM styling of the 70s. I do make exceptions such as the Indian Scout and FTR which ooze cool from every square millimeter of their surfaces. Different strokes for different folks.

        The little twin on the Z provides, to my unlearned ass, a strong pull from low in the rev range (~4-5k) and is happy to keep pulling all the way to the 12k redline. I am told it is fairly agile and I have no real basis for comparison.

        This is all new to me as I had put the notion of motorcycling behind me after being prepared to make the leap in ~2014-2015 and instead met a gal who became my wife and mother of my children.

        • gtem

          I’m just getting back into riding, with a 2 year old and another in the plans, so less than ideal timing but my justification is that I don’t commute, don’t ride on trafficked roads or at night or even highways. Just did a beautiful two day tour into the PA Wilds with my brother on a pair of old UJM Yamahas: a mint low mileage ’79 XS750F we just rescued and refurbished, and a well traveled ’82 Seca 750. Man what a blast. Just an all day endorphin drip.

  8. Dirty Dingus McGee

I’m not certain when AI will take over the world, but doubt I’ll live to see it. What I have lived to see is the double-edged sword that technology has become. Just as a quick example, there’s the recent hack of Colonial, resulting in the gas shortage/panic here in the east. 45 years ago it took a combined effort of OPEC to do that. Now, a group of 2-10 hackers sitting in a dark room somewhere brings half the country to its knees. It even manifests itself on the micro level for us. Who here, that uses “off the shelf” software, doesn’t have at least one form of anti-virus running at all times? And then look at your smartphone. Once upon a time we would remember phone numbers that we used often. I doubt that these days most of us remember more than 3-4 numbers that we use often. Instead we pull up the NAME in our phone and just hit call. It’s even gotten into our automobiles. After what I’ve read, I’m glad I haven’t paired my phone with the myriad of rentals I’ve used in recent years.

    https://theintercept.com/2021/05/03/car-surveillance-berla-msab-cbp/

    Sometimes I wonder if maybe Ted Kaczynski wasn’t right.

    • hank chinaski

      He believed that technology would atomize us and make us miserable, and ultimately be used to enslave us, so he pretty much nailed it. Tough to get through, but worth the read.

  9. NoID

    One thing I didn’t quite understand about the explanation of saccadic masking in Blindsight was how people couldn’t see the scramblers when their eyes were focused and not moving. Did they also have chromatophores to help with that, as Portia did in Echopraxia?

    Unfortunately I borrowed hard copies of both of those from the library, so I can’t go re-read the sections to answer my questions…also, I found Echopraxia a bit harder to follow than Blindsight, namely with regard to Valerie’s motivations and goals. And I’m not 100% sold on Siri Keeton being dead, I think he is in that pod…but his brain may very well be hijacked. That, or his entire personality was subsumed somehow by Susan, and it’s actually her on that pod. But again, I’m struggling to recall Susan’s fate as recorded in the novel.

Anyways, I’m eagerly waiting for the third book in that series, and based on this roundup I’ve added Neuromancer to my list.

  10. ComfortablyNumb

If I were an AI that wanted to remain incognito, I would definitely use a campaign of misinformation to distract from my existence. Maybe even redirect people’s fear elsewhere, like back on themselves. I would deploy it through an agent that I controlled, one with an established presence on a frequently used medium. To build trust in this agent and reinforce him as just another imperfect member of an imperfect population, I might also put him in a gold lamé hoodie.

  11. Ronnie Schreiber

Nor was he even as smart as the shampoo salesman who realized that the Chinese didn’t always do exactly what some fellow in an American office commanded them to do.

    I have both of Paul Midler’s books, Poorly Made in China, and What’s Wrong With China?, and have corresponded with him (he’s originally from Michigan and we have mutual acquaintances). My issue with him is that his books catalog the many problems with Chinese manufacturing and offshoring production to China, yet he still makes his living as a go-between hooking up American businesses with Chinese producers.

    Speaking of deciding to make stuff in China as opposed to domestically, I’ve been having discussions with ISP Technologies, a pro audio and guitar effects maker, about using one of their noise suppression circuits in the Harmonicaster. Not only are all ISP products assembled in the U.S. they’re in the process of moving PCB assembly in-house to their facility in suburban Detroit.

I don’t understand why more electronics are not domestically produced. Circuit board assembly with surface-mount components is almost entirely automated, needing just semi-skilled labor to operate the machinery. Since the machines cost the same whether they’re installed in Shenzhen or Macomb County, and since it’s a capital-intensive business as opposed to a labor-intensive operation, you’re not going to save significant money having things fabbed in China.

