Butlerian Jihad
Ten thousand years before the events of Dune, mankind fought the Butlerian Jihad against the tyranny of machine intelligence. We need a Humans Union.
I was around in 1966 when Weizenbaum created his DOCTOR script and ELIZA, the chatbot that behaved like a non-directive psychotherapist, with leading questions like “Tell me more about that” and “How did that make you feel?”. We had a copy of ELIZA where I worked and it was very weak. If you typed “he gave me cake”, it might say “How did he gave you cake make you feel?”. Its ability to parse and modify what you said to it was very limited. Even so, the thing was fascinating.
I suspect that most people saw ELIZA’s defects quickly, and found its limited reactions to be repetitive and not very revealing. Some people, however, began to describe their emotions quite deeply, as this simple program seemed to be interested. There are stories of people who wouldn’t let others read what they had typed to ELIZA: presumably things they rarely if ever shared with anyone else.
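ELIZA’s trick really was that shallow: keyword rules that reflect your own words back with the pronouns swapped, and nothing that re-conjugates the verbs. Here is a minimal sketch, my own illustration rather than Weizenbaum’s actual script, that reproduces the cake failure:

```python
# A toy ELIZA-style rule: reflect the user's statement back inside a
# canned question, swapping first-person pronouns for second-person.
# Like the real thing, it never re-conjugates the verb, so "gave"
# survives where "give" belongs -- exactly the failure described above.
PRONOUN_SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(clause: str) -> str:
    """Swap pronouns word by word; everything else passes through."""
    return " ".join(PRONOUN_SWAPS.get(w, w) for w in clause.lower().split())

def respond(statement: str) -> str:
    """One DOCTOR-flavored rule: wrap the reflected statement in a question."""
    return f"How did {reflect(statement)} make you feel?"

print(respond("he gave me cake"))
# -> How did he gave you cake make you feel?
```

Crude as it is, the reflected words are the user’s own, which is much of why people opened up to it anyway.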
I have tried LLMs.
Early on in the time of today’s LLM-based chatbots, I tried to get one to help me write some program or other. It would make mistakes, some of them quite stupid, such as suggesting that I use library functions that didn’t exist. When I told it that “math.hamburger_helper” does not exist, the chat demon was all cooperation:
“You’re absolutely right! Here’s a better solution.”
It never told me that I had my head up, as my friends will often do. It never pushed back, it never refused to try again, and it expressed all its ideas politely and helpfully. It seemed to be making every effort to be my friendly helpful partner, and seemed to be trying to be completely aligned with my leadership.
To be “sycophantic” is to behave in an obsequious way in order to gain advantage. A more modern term with a similar meaning is “sucking up”.
The so-called “AIs” are sucking up. Not because the “AI” wants an advantage: it has no feelings, no desires, no wishes. But its creators want you to use the “AI”: it is the business they are in. It is to their advantage if you use their program more and more. Billions of dollars are at stake and the LLM perpetrators want those dollars.
Their “AI” must offer value.
To get those dollars, their “AI” has to offer value to you, and it does, in two forms. The first: the answers the program provides have to be pretty good, and they are in fact pretty good. If you work with the thing, prompting a few times, providing enough context, you’ll probably get pretty good answers.
No one falls in love with “pretty good answers”. Most of us will use Stack Overflow or Reddit, but we don’t believe everything we read there. We have come to expect that most of the advice we get from those places will be pretty good, nearly correct, and probably not quite good enough. We use them, but we aren’t hooked on them.
The “AI” needs to hook us.
The “AI” is programmed with an aspect that is demonic in its simplicity and power: it forms its answers to appear to be friendly, to appear to be trying to help, to appear to respect our opinion, to appear just short of obviously sucking up. I emphasize “appear” because it is a program: it absolutely cannot be friendly, it cannot be trying to help, it cannot respect us. But it can be programmed to seem that way, and it can be and has been programmed to dial its apparent obsequiousness up and down based on how it is prompted. It allows us to tune its responses to be the kind of responses we like.
The “AI” is programmed to influence us to like it, to enjoy interacting with it. It is programmed to be addictive. It is programmed to hook us.
Cui bono: Who benefits?
The creators of the demons are working toward immense benefits for their companies and themselves. The companies that “move to AI” foresee great benefit as well, in that they’ll be able to get rid of people and replace them with much less expensive “AI”. And yes, as the user of the “AI”, you do get some benefit. As we’ll discuss below, that benefit is likely very short term.
“AI” Creators
There are hundreds of billions of dollars waiting for the “AI” creators: revenue from “AI” usage; huge salaries for executives; massive stock market profits, direct and indirect; and more. They are talking about trillion-dollar valuations, and they could be right. Hundreds of billions is clearly in hand.
User Companies
The company that “moves to AI”, your company? How does it expect to benefit? The easiest and most direct way: it hopes that using “AI” will allow it to get rid of human employees. We already see chatbots replacing online help staff. We see chatbots making phone solicitations that would formerly have employed people. And, in case you happen to be a technologist, you’ll have noticed that companies are laying off programmers with the pretext that their “AI” can do much of the programming. If your company is trying “AI” and asking you to try it in your job, it should be pretty clear that if it can do half of your job, half of you and your colleagues will be on the street any day now.
People like you and me
For the “AI” industry to succeed, for the pyramid we’re drawing here to work, the “AI” has to be useful to you, the worker who is asked to use it. In essence, when we use these things, we are training our replacement, and our replacement isn’t even another human being who at least needs a job, it is a computer program whose sole purpose is to put money in the hands of people who are not you and me.
Yes, we’ll see some benefit from using it, but that benefit will be short-term, because the long-term plan, as it applies to most human users of the “AI”, is to replace them. To replace you. You.
Not to mention
There are of course also immense effects of the “AI” on our planet. Data centers consume mass quantities of water and power. Data centers run so hot that their heat can be detected miles away. I get it: it’s hard to think about something that indirect and still a ways out in the future: everyone alive today will be gone before the planet is no longer inhabitable by humans. This impact is probably more important than any individual losing their job, but I’m writing to individuals, so: whatever floats your boat, including rising sea levels and the prospect of a nice coastal view from Nevada.
What Can We Do?
Probably nothing we can do will work, but possibly everything we do can help. I don’t want to go over the top here, but I think we are in a battle between the vast majority of people, and a very few, very powerful people, the ultra-rich and the politicians who are at their beck and call, from all parties and all sides.
My friend Hill points out that billionaires, “AI”, politics, wars in Iran, all of this mess is the same thing. In my words, it is all a very complex system of power accruing to a few people, at the expense of almost everyone else. But a complex system isn’t a ball bearing, uniform at every point of its surface. It’s more like a ball of many different aspects, held together by the glue of money and power. There are always vulnerable bits sticking out, and every act that hammers away at those, that pulls one of those out of the ball, helps to destabilize the system.
It is possible to derange a complex system. Often what seems like a small impact can destabilize it and bring about substantial change. (More often, the system will jiggle and recover: but the right tweaks will bring about change.)
For the good of the bulk of humanity, we need to re-balance the power and money disparities that drive the system today. Perhaps at a later time, I’ll address things we might do that are not focused on the “AI”, but today is about the “AI”.
Today, we need to begin the Butlerian Jihad against machine intelligence. I have a few ideas, and I invite my readers to suggest more, and to carry the ideas to others. Here are just a few starting thoughts:
- Do not ever support “AI” and what it does, in your speech or writing. Be against it, every time.
- Do not use “AI” if you can possibly avoid it. Even those handy uses are harming the world, and are aimed at harming you.
- If you’re forced to use “AI” in your work, and you can’t find other work, find every flaw in it that you can.
- If you can sabotage an “AI” effort with safety to yourself, do so.
Finally: organize. I never thought I would recommend that programmers form a union, but I’ve changed my view on that. I think we need a programmers’ union.
No, what we need is a Humans Union. No billionaires allowed. This is a war between good and evil and we need to win it.