Could an LLM-human relationship be like pair programming? And if it were …?

Despite my moral objections to LLMs, it seems to me that my choice of work requires me to at least think about them, if not use them.

With rare exceptions, I am probably not a good pair for pair programming. Despite the fact that I am a teddy bear, I am somehow scary to many folx. Because of long hours of programming with Chet, I am used to some fairly intense, not to say hateful, banter as we work. I hog the keyboard. I get the bit in my teeth.

In recent years, by which I mean many recent years, I have not had a pair to work with at all. My pair, such as it is, is the article I write while programming. The writing gives me the chance to see the code from more distance and gives my mind time to come up with alternatives, issues, ideas, similar to what a pair might do. It’s far from perfect, but it’s what I’ve got.

Many of my friends, associates, colleagues, and random people I’ve met on the Internet are working with current “AI” LLM-based systems. I think it is fair to characterize the overall impression of the experience as somewhere between “pretty good” and “good”. Most everyone reports some instances of the “AI” coming up with something that they think they would not have come up with. Most everyone reports some stupid “AI” mistakes—and most everyone laughs and waves them off.

That’s not really surprising, laughing and waving mistakes off. Anyone who has paid attention to their own programming has made mistakes that are quite laughable, and has learned to wave them off. We owe the same to our pair partner, so we are predisposed to laugh and move on.

The Eliza effect

Way back in the middle of the last century, Joseph Weizenbaum developed a chatbot called “Eliza”, which very weakly modeled a psychologist, asking leading questions like “How do you feel about that?” and “What would it mean if …?”. It was fun to play with: we had a copy where I worked.

The thing is: the program, dull as it was, seemed interested in you. It asked these leading questions, “Tell me more about …”. People found themselves telling Eliza things that they wouldn’t have told random acquaintances. There were reports of people who wouldn’t let others look at their Eliza conversations, because they were too intimate to share.

This silly little program seemed to be interested in you, seemed to ask leading questions that invited you to tell it more, and that, in that indirect shrink way, might accidentally lead you to say the things you already knew, deep down, about what you should do. This was called the “Eliza effect”.
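
Eliza’s trick was little more than keyword matching plugged into canned templates. Here is a minimal sketch in Python, with patterns of my own invention, not Weizenbaum’s actual script:

```python
import re

# A tiny Eliza-style responder: match a keyword pattern in the input,
# then echo part of the input back inside a canned "leading question".
RULES = [
    (re.compile(r"\bI feel (.*)", re.IGNORECASE),
     "Why do you feel {0}?"),
    (re.compile(r"\bI want (.*)", re.IGNORECASE),
     "What would it mean to you if you got {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE),
     "Tell me more about your {0}."),
]
# When nothing matches, fall back to an all-purpose invitation.
DEFAULT = "Please, go on."

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return DEFAULT
```

Feed it “I feel stuck on this code” and it answers “Why do you feel stuck on this code?”: interested-sounding, and entirely mechanical.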

LLM as Eliza

I’ll put it right out there: part of what you like about your “AI genie” is the Eliza effect. Is it a large part or a small one? It’s surely hard to know. But think about it a bit:

When you ask the “AI” to do something, it cheerfully gets going and tries to do what you asked. When you correct it, instead of arguing with you, as any human might do, it says that you’re right, in a congratulatory kind of way, and tries again, still cheerfully. It is subservient when you want it to be, pursues whatever you’re interested in, and never ever says “Chet, what the hell are you doing?”

There is something wonderful about an effective pairing session, especially when one’s pair is responsive, helpful, kind, patient, all things that your “AI” pretends to be. It’s great when your pair has a good idea or knows something you don’t know or don’t have at your mental fingertips. And if your pair doesn’t have it quite right, that’s OK too: the whole point of pairing is that you give what you have and the two of you use it. So if your pair feeds you 80 percent of an idea, and you provide the 20, you feel good about yourself, and about the pair.

So, I assert without proof that part of why an “AI” user is pleased with the “AI” is that it acts like a helpful partner that is never in your face, and always trying its best.

But is “AI” really helping?

Here’s the rub: I think that it almost certainly is helping. First of all, even a very dumb Eliza-style computer pair could be helpful, just by saying things like “Do we have a test for that?”, “Is there another way we could approach this?”, or “We’ve been 15 minutes without a green bar; what should we do?”. That would be helpful.

If we asked it for refactoring advice, even some very simple pattern-recognition might allow it to offer “I think I see some duplication here, can we remove it?” or “This method is rather long, can it be broken down?” or “This method isn’t using any member variables, what is that telling us?”. That would be helpful.
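
None of those hints requires intelligence. The “long method” and “not using member variables” checks, for instance, need nothing smarter than counting lines and scanning for `self`. A rough sketch, assuming Python source and the standard `ast` module; the threshold and the wording of the hints are mine:

```python
import ast

LONG_METHOD_LINES = 10  # arbitrary threshold, chosen for this sketch

def review(source: str) -> list[str]:
    """Offer simple pair-style hints about the methods in a class."""
    hints = []
    tree = ast.parse(source)
    for cls in (n for n in ast.walk(tree) if isinstance(n, ast.ClassDef)):
        for method in (n for n in cls.body if isinstance(n, ast.FunctionDef)):
            # "This method is rather long": just count its lines.
            length = method.end_lineno - method.lineno + 1
            if length > LONG_METHOD_LINES:
                hints.append(f"{method.name} is rather long; can it be broken down?")
            # "Not using any member variables": does it ever touch self.anything?
            uses_self = any(
                isinstance(node, ast.Attribute)
                and isinstance(node.value, ast.Name)
                and node.value.id == "self"
                for node in ast.walk(method)
            )
            if not uses_self:
                hints.append(
                    f"{method.name} isn't using any member variables; "
                    "what is that telling us?"
                )
    return hints
```

Crude, and it will misfire now and then, but a pair whose hint is phrased as a question does no harm when it misfires: we look, we laugh, we move on.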

Even if it just did simple searches for keywords, it could say “I’ve found something on Stack Overflow that might apply here”. Helpful.

All those things would in fact be helpful. And the fact is, the “AI”s of today can do those things and more, and often do them fairly well. And always with that cheerful “here to help you, sure, let me get that for you” manner: not quite subservient, but certainly accepting your prominence and leadership.

It’s truly helpful, and the better programmer you are, the more likely you are to be able to capitalize on what it gets right and avoid most of the things it gets wrong. The better pair programmer you are, the more you are equipped to pull the good bits out of what your pair offers, and guide them back on track if they are a bit off.

I have not used an “AI” in that fashion and I am resisting doing so. We’ll come to my reasons below. But I feel quite sure that if I were using it the way some of my friends are, I would enjoy it, and that it would truly be helpful.

And I am equally sure that if it were not really improving my productivity, I would have no way of knowing that, and the Eliza effect would likely make me feel more productive whether I was or not.

Even if not …

Suppose there were a way of working that wasn’t quite as productive as my most productive mode, but that made the work more pleasant and enjoyable. Would I be wrong to choose that way, assuming no other harm than lower productivity?

If I can work for the man one way and produce 100, while not enjoying it much, or work another way and produce only 80, but with joy, what is the responsible thing to do?

Productivity isn’t the sole measure, nor likely the best measure for how I should perform some activity by which I earn my living.

Conclusions so far

Setting aside some important objections to “AI” that we’ll come to, under the assumption that it isn’t harmful to use them, and assuming that they aren’t terribly harmful to productivity, and that using them gives the programmer a pleasurable feeling of working and having a helpful if not really human partner … setting aside all that …

I think that I can see why an “AI” user would enjoy working with the “AI”, why they might be more productive, why they might feel more productive even if they weren’t, and why they might choose to continue with that experience.

It is not my place to judge others, although I often feel rather judgmental anyway. Be that as it may, I am feeling more OK about my friends using these things: I can understand how they might feel and why they might want to continue feeling that way. And I can even accept that they might really be more productive.

On the Contrary

All the above is assuming that there is no inherent harm in using an “AI”. Unfortunately, that is emphatically not the case. They are the tools of extractive capitalism; they are doing massive harm to the environment; they are a very slippery slope toward literal harm to humans; they are costing people their jobs; used improperly, which is common, they reduce human learning.

I feel those harms strongly, and so far have almost completely resisted using these tools, because they are not just tools.

My bottom line so far is this:

“AI” probably provides a not unreasonable kind of human pleasure, and possibly offers productivity a bit better than, or at least not much worse than, working without it.

I believe that the systematic harm being done by “AI” far exceeds the value that it provides. I choose neither to use it nor to recommend it, and I also choose to try to understand it as well as I can. Those choices are in conflict.

And if you choose other than I do, I think I can understand why you might do that. I wish us all good luck: we need it.