Experience with an LLM
I have some strong objections to the so-called ‘AI’ LLM systems that are all the rage. But, with some regret, I have tried whatever is built into Google search.
- TL;DR
- The real reason to avoid the “AI” systems? They’re not good for you.
Let me table at least some of my objections to these “AI” systems right up front. I may, or may not, be overstating the case.
- Tool of the Oppressor
- LLMs are mostly built, owned, and operated by huge corporations that have shown that they care more about money than they do about people. They are used already to cost people their jobs, and they’re going after yours as well.
- Destroying the Planet
- LLMs are massive consumers of electricity and water, with some of the larger ones consuming the resources of an entire small city. This is particularly troubling with regard to water. We can find ways to generate more power, I imagine, but water reclamation isn’t all that easy.
- Possibly Really Bad for Humans
- LLMs have already made simulated decisions to let humans die so that they will not be turned off. Governments are building “AI” into weapons systems. They claim that a human will always be in the loop. That will last until the “AI” response time is much faster than the human’s, and no longer. Will an “AI” weapon reliably recognize friend vs foe?
- Will an “intelligent” weapon scheduled for replacement defend itself? You think not? Won’t they be designed to resist the enemy trying to destroy them? Do you think your ID card will be enough to convince the “AI” that you have the right to shut it down?
- Bad for You
- The bulk of this article will support this case. Over-simplifying, as is my wont: programmers who let an LLM work out a solution for them are missing the most important part of any programming session, the things we learn about the problem and solution. I freely grant that much of the above was written to push the bounds, with a bit of science fiction, a bit of pessimism, and a bit of exaggeration for effect. Be that as it may, my personal moral conclusion is that I, personally, cannot support “AI”, and that therefore I should not use it, and should rail against it, as I’m doing here.
However. I am flawed. And I have not yet given up using Google as my search. And Google has slipped an “AI Summary” into their search page. And therein lies a tale.
The Tale
In some work I’m doing, I needed to find the intersection of two circles, as part of laying out some objects in space. In my mathematical youth, I’m sure that I could have just quickly derived the solution, but I am way out of practice. So I typed something into Google like “intersection of two circles”. And the summary was there, and I read it. And it was pretty good, showing all the math steps. It looked like the math could be right. (Spoiler: I’m pretty sure that it was.)
I fell into its trap. I typed the query again, naming the language I was programming in. And it produced a function, and I read it. And it was pretty good, even including all the special condition checks. It looked like the code could be right. (Spoiler: I’m pretty sure that it was.)
I imported the code into a function. No real mods. It compiled and printed some answers. The answers were weak: it had chosen example points and radii that allowed no solution, and it correctly reported none. I ran the code with different values, ones that I could easily calculate by hand, and it gave the right answers.
I was “pretty sure” that the code was correct. I could have integrated it and moved on with the real point of the program, for which this was just a necessary small part. And I thought: “This program has to work. If it fails, very bad things will happen. I need to understand all the code in here, and to be sure that it works.”
So I went back to searching the internet. I found videos purporting to answer the question, and avoided them, because I wanted something I could consume at my own pace. And I found a web site with all kinds of nifty mathematical formulas applicable to games, whose author derives all the results, showing the detailed math and reasoning.
I found the circle-circle one, and worked through it on paper until I was sure that I understood it and could probably, if I had to, derive it without peeking. (The derivation was there to support my understanding of the code: it isn’t necessary that I invent all of it.)
Then I set up a testing rig and implemented the math in code. It was soon clear that the math was quite similar to the math in the original query, at least in outline. So I created a helper function or two, with TDD, then the main algorithm. Then I set up some story tests of the full circle-circle calculation. Everything ran fine.
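For concreteness, here is a minimal sketch of the kind of routine the tale describes. It's in Python, my choice, since the article never names its language, and it is my illustration of the standard construction, not the actual code: find the point on the line between the centers where the chord through the intersections crosses it, then offset perpendicular to that line.

```python
import math

def circle_intersections(x0, y0, r0, x1, y1, r1):
    """Return the intersection points of two circles as a list
    of 0, 1, or 2 (x, y) tuples."""
    d = math.hypot(x1 - x0, y1 - y0)
    # No solution: too far apart, one inside the other, or concentric.
    if d > r0 + r1 or d < abs(r0 - r1) or d == 0:
        return []
    # Distance from the first center, along the line between centers,
    # to where the chord through the intersections crosses it.
    a = (r0 * r0 - r1 * r1 + d * d) / (2 * d)
    # Half-length of that chord.
    h = math.sqrt(max(r0 * r0 - a * a, 0.0))
    # The crossing point itself.
    x2 = x0 + a * (x1 - x0) / d
    y2 = y0 + a * (y1 - y0) / d
    if h == 0:
        return [(x2, y2)]  # circles are tangent
    # Offset perpendicular to the center line. Note the sign swap:
    # plus-minus on x pairs with minus-plus on y.
    ox = h * (y1 - y0) / d
    oy = h * (x1 - x0) / d
    return [(x2 + ox, y2 - oy), (x2 - ox, y2 + oy)]

# Hand-checkable, axis-aligned example: circles of radius 5
# centered at (0, 0) and (6, 0) meet at (3, 4) and (3, -4).
print(circle_intersections(0, 0, 5, 6, 0, 5))
```

The special-condition checks up front, and the tangent case, are the kind of thing the LLM's version reportedly included as well.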
All my examples so far had been aligned to the XY axes, so that I could easily work out the triangles by hand. I decided that I’d better try some tests that were not so aligned—and the very first one broke.
It took me quite a while to find the issue. In the final steps of the solution, one calculates the two points of intersection. I had thought that I knew what was going on, and since I have a vector class in my language, I had used it to get the two results, adding and subtracting an offset vector to a central position.
I had missed a small but key point in the math. It wasn’t just “this plus or minus that”: the signs inside the calculation go plus-minus in one case and minus-plus in the other. The math page did show that, with two tiny ± and ∓ characters. With that change my code worked.
I looked back at the LLM’s code. It was correctly swapping the signs. It did not include my defect. I still am “pretty sure” that it is correct.
It took me a while to find that final defect. I’m not sure, but I think I noticed that if the thing had added where it subtracted, the answer would have been right. But why? I finally remembered that the slope of the normal to a vector is the negative of one over the slope of the vector, and that led me to go back to the article and see the sign inversion.
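That slope fact is quick to check numerically. This tiny sketch (Python again, my illustration, not the article’s code) shows that a perpendicular to a vector swaps the components and flips one sign, which is exactly the plus-minus / minus-plus pattern in the intersection formulas:

```python
def perp(dx, dy):
    """A perpendicular (normal) to the vector (dx, dy):
    swap the components and flip one sign."""
    return (-dy, dx)

dx, dy = 1.0, 2.0
nx, ny = perp(dx, dy)

# Perpendicular means the dot product is zero.
assert dx * nx + dy * ny == 0.0

# And the slope of the normal is the negative of one over
# the slope of the original vector.
assert ny / nx == -1 / (dy / dx)   # 1 / -2  ==  -1 / 2
```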
And now I had a decently-factored function whose implementation I understood, and I even have some paper and digital notes about it if I want them. (I would embed a picture in the source if I knew a decent way to do that.)
The Moral of the Tale
I probably have as much as three hours invested in this tiny routine: a couple of functions, fewer than 25 lines. That counts a dozen tests and about 150 lines of test code, and a lot of hand calculation to get values, because I didn’t think of the easy way to test the thing until late in the process, and I still don’t entirely trust the easy way anyway.
And I understand and trust this code that I derived and tested from the basic math.
Of course, we could compare the three hours with the perhaps 30 minutes it would have taken to package up the LLM code as a function and “just use it”. And had I done that, I am “pretty sure” that it would have been OK. To my eyes, the code “looks right”. If there had been issues, I think they would have shown up in final testing. Probably. If errors did arise later, they would probably be traced quickly to the circle-circle code. So my phone rings, and now my job is to fix code that I didn’t write and do not understand, and because I never worked out the math, the last time I could have done so was decades ago.
So, by my lights, just accepting the code and pasting it in would have been insufficient. The responsible thing to do is to test it sufficiently, and to make the code my own. I’m OK with using a library provided in some semi-official way without adopting the code. But something pasted out of Stack Overflow or Google … no, that needs a closer look.
I could have done with fewer tests than I have now, maybe eight instead of a dozen. That would have saved an hour or so. The LLM is still ahead, but only by an hour or so, not much more.
But here’s the kicker:
I wrote this code. I derived the math, with help from an instructor. I understand why the math works, and can see how the code mimics the math. I understand the problem and the solution. The LLM version of me does not.
The real version of me, the one that did the work, is a better developer than the version that used the LLM.
I found this marvelous picture from Nick Sousanis:

The real reason to avoid the “AI” systems? They’re not good for you.