At FGNO last night, a member told us of their experience with an “AI” and showed us the code. I wax Jesuitical, I hope in the best possible way.

Last night at FGNO[1] we were talking about a member’s experience using an “agentic AI” to build an app. Our friend would have had to re-learn Python and figure out a lot of APIs to write the app, and estimated that it would take them a week or more to do what they accomplished with the “AI” in a day or so.

Mostly what our friend had working was straightforward access to an on-line database containing masses of information, far from the full app they have in mind. The code that I saw was absolutely typical of the kind of boilerplate I would have found with a conventional web search. This is, of course, no surprise, since that’s pretty much what the “AI” did to get what it “knows”[2].

They had used prompts that demanded tests as well as code, and while there was nothing super wonderful about the code we saw, it was certainly typical of the examples one would find with a conventional search of the Web.

We tried to set aside the horrific impact on resources that these things cause, and their likely use to extract more labor out of fewer and fewer people, and all the other deep moral concerns that some of us have about the things, and just think about it much as we would think about a “smart” but not “AI” programming IDE like the marvelous ones from JetBrains, such as PyCharm, which I use (with the “AI” not enabled).

If I were to undertake what my friend did without using an “AI” helper, I’d have used conventional search and surely would have found the on-line database that they found. I’d probably have delved into whatever site supported that, looking for info on how to access it. Maybe I’d have found code there. Maybe I’d have found that it was some “standard” kind of web API, and then searched for examples of that in Python. I would surely have found code like what the “AI” found.

At that point, I tend to do one of two things. Sometimes I just paste in the example from the web and edit it minimally to connect to the database I have in mind. Other times, I might use the code I found to build up similar code bit by bit. I do the latter thing when I want to understand what I’m doing, and the former when I just want an answer.
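To make that concrete, here is a minimal sketch of the kind of boilerplate one tends to find and paste. The endpoint URL, the query parameters, and the JSON shape are all invented for illustration; they are not taken from our friend’s actual database.

```python
# A hypothetical sketch of web-search boilerplate: query an imaginary
# JSON web API. Everything named here is made up for illustration.
import json
from urllib.parse import urlencode

BASE_URL = "https://example.org/api/records"  # hypothetical endpoint


def build_query_url(search_term, limit=10):
    """Assemble the request URL, much as a pasted example would."""
    params = urlencode({"q": search_term, "limit": limit})
    return f"{BASE_URL}?{params}"


def extract_titles(response_text):
    """Pull the interesting field out of the (assumed) JSON payload."""
    payload = json.loads(response_text)
    return [record["title"] for record in payload.get("results", [])]


# In real use one would fetch build_query_url(...) over the network;
# here we just exercise the parsing on canned data.
sample = '{"results": [{"title": "First"}, {"title": "Second"}]}'
print(extract_titles(sample))  # → ['First', 'Second']
```

Editing a pasted example “minimally to connect to the database I have in mind” usually amounts to swapping in the real URL and field names and little more.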

Our friend just wants the answer, and, as I understand it, pretty much just let the “AI” do the work. I do that as well, often, when the code I’ve found on the web is just something I want to use to get on to the part of the problem I care about.

Our friend described the process of using the “AI” as producing a prompt, then walking away for a coffee, coming back later to see what was done. If I were the kind of person to have apprentices, and had a programming apprentice, I could imagine giving them the assignment to gin up access to this database, with instructions much like our friend gave to the “AI”, which included descriptions of tests, descriptions of the desired output, and instructions as to the refactoring that was desired.

And I could imagine going off to read my book or drink my chai, or work on something else while the apprentice worked.

When the apprentice was done, I would owe them a fair and deep assessment of their work, with keen expert observations, pithy commentary, elegant examples, and, of course, judicious application of the lash of sarcasm that makes up the life of a typical fictional apprentice. Since I am a person of impeccable decency, at least sometimes, I would work to give my apprentice everything I could.

Working with an “AI”, however, is different. While the thing might remember locally what we’re talking about, as far as I know, it does not build up new concepts back at AI Headquarters based on what the individual instances do. So probably one teaches an “AI” differently from how one might teach a human apprentice.

Mostly Grunt Work

I think that even the most creative programming someone might do includes a very high percentage of work that is not particularly creative, what I want to call “grunt work”. One needs workman-like code, solid code, well-tested code, but there is no magic to it: it’s just the day-to-day work of a competent professional.

I have no idea what the ratio of grunt work to creative work is, but my own experience programming all kinds of things makes me think that more than half, perhaps much more than half, is just plain old programming. That could be because I am a fairly competent programmer but not particularly creative, but even if so, that would make my experience more like that of the typical programmer, not less.

We do need creativity in our technology. I suspect that most creativity today, in programming at least, comes from someone of great experience who has a facility with putting ideas together in unique ways, and who recognizes interesting ideas and tries them.

If a human is to be truly creative in any endeavor, they need a great deal of experience, combined, no doubt, with other elements such as curiosity, insight, free time, access to mind-altering substances, I don’t know, probably all kinds of things.

But most of the work in bringing that creative idea to the world will be standard grunt work, or so it seems to me.

Somewhat the Same

I hope you can see how similar the use of an “AI” is to what we might do with simple web searching, finding resources and examples to learn from, or just to copy, paste, and hammer.

I hope you can see how easy it is to make the use of an “AI” seem to be much the same thing as using the help of a smart apprentice who doesn’t mind doing grunt work.

So, if we close our eyes, just for the sake of the discussion, to the costs of the things, it is, I think, easy to think something like this:

It’s just a very smart web search, with the ability to find code and fit it into my app. It can even write fairly decent tests. Given decent instructions, which I can clearly create, it’s really a lot like an intelligent junior helper.

Very Similar to What I Do

I have looked up APIs for online databases and similar resources many times. I typically just use them, whether I paste them in and make them work, or put them in incrementally. And when they work, I move on to the real work of whatever I was trying to accomplish, the meat of the application.

I don’t program with a dumb text editor if I have a choice. I don’t use my rather smart Sublime Text editor for programming if I have a choice. I use a powerful IDE, with search tools, testing libraries, refactoring tools, even, if all else fails, debuggers. My preference is JetBrains’ IDEs, which are, frankly, just great.

So why don’t I turn on the “AI”? Well, at some point, I might. We’ll perhaps consider that below, or if not, at some future time. But I mostly don’t turn it on, because I enjoy doing the thinking that goes with programming, and, without the “AI” turned on, the IDE does the grunt editing to extract a variable or method, allowing me to think of the operation, get it done, assess whether I prefer the result, and put it back if I don’t, all with minimal typing and great precision.

When it comes to thinking that I do not enjoy, I can see the desirability of turning that work over to a helper of some kind. In a game, I enjoy figuring out how to find a path from where we are to the Pit of Despond, but I do not enjoy drawing all the walls and floors along the way, and I do not enjoy placing all the monsters and scaling their power to the power of the player, and all the innumerable details that make up an actually successful game. I would love to have helpers to do those things.

However, I am, for all practical purposes, a programming hobbyist who likes to write about his experience in hopes of offering useful ideas to those who follow his maundering. Or at least horrible examples. If I were programming for a living, it is quite likely that I would be condemned to write vast swaths of grunt code. And in that case, if I could get some help, I’d really want it.

So I Kind Of Get It

The “AI” isn’t my friend, and I don’t think it is a person, but it does seem to be a very powerful web searcher that parses human language and correlates what we say rather well with what it finds, so that what it finds has a good chance of being much like what we were looking for.

And while its probabilistic behavior has a non-zero chance of being wrong, it has, on the face of it, about as good a chance of being right as a conventional web search. I have to be similarly careful in either case. (However, Stack Overflow comments are often very useful in pointing out errors in answers provided. Does the “AI” look at those? Very doubtful.)

And it can paste in the code and make it fit with the existing code pretty well. The mistakes it makes are usually easy to spot. It can write tests; they are simple, but with decent prompts from me they can be effective.
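As a hedged illustration, this is the sort of simple-but-useful test an “AI” might produce when the prompt demands tests. The function and its expected behavior are invented for this example, not taken from our friend’s app.

```python
# A hypothetical example: the sort of small function an "AI" might
# generate, together with the simple tests it might write for it.
def normalize_name(raw):
    """Trim surrounding whitespace and title-case a name field."""
    return raw.strip().title()


def test_strips_and_titlecases():
    assert normalize_name("  ada lovelace ") == "Ada Lovelace"


def test_empty_string_stays_empty():
    assert normalize_name("") == ""


def test_already_clean_name_unchanged():
    assert normalize_name("Grace Hopper") == "Grace Hopper"


# Run the tests directly; a real project would use pytest or unittest.
test_strips_and_titlecases()
test_empty_string_stays_empty()
test_already_clean_name_unchanged()
print("all tests pass")
```

Simple, but a decent safety net: exactly the kind of grunt-work testing one is glad not to type by hand.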

I can dig it: the [DELETED] thing is actually helpful, to a discernible degree.

On the Contrary

There are some very serious reasons not to accept these things, including but not limited to:

Environment
The “AI” tools consume vast quantities of power and water, and create pollution because that power must be generated somehow. They consume the power and water of small cities.

They are not good for the world.

Tool of the Oppressor
They will be used to replace human workers with machine workers, as soon as they seem to offer an economic advantage to the capitalist machine that controls so much of our world. It will become harder and harder for a human to find work, especially work that is in any way fulfilling or lucrative.

They are not good for people.

Personal Learning
Typing a prompt and wandering off for a coffee means that you will not learn how to do the work that the “AI” does. You might at least review the work, but if the “AI” has a really good shot at being correct, even that review will seem not to be worthwhile. You won’t learn how to do your own job.

They are not good for you.

AI Answers That

Environment
The data centers exist no matter what you do. Your individual query uses a few drops of water and a few watt-hours of energy. You use more water and power taking your weekly shower.

Tool of the Oppressor
Your best defense against the “AI” replacing you might be to become truly adept in its use. You might even get a promotion out of that expertise.

Personal Learning
The tool will change what people need to know vis-à-vis what the computer deals with. You can’t learn about programming with “AI” unless you program with “AI”.

What’s To Be Done?

We all have to decide for ourselves what to do about these things, and all the things that make up our very complicated world. Your situation is not like mine.

How should we move through the world? Should we move with minimum harmful impact? What would that mean? Move to a warm climate, and live nude in an area where we can subsist without garments, eating the fruits that fall on the ground or dining on the carcasses of squirrels and rabbits that one finds lying about? Even then, we’d be depriving the soil of nutrients, snacking on things the buzzards would have eaten, and depriving beetles and worms of their best livelihoods.

I think that what we might try to do is to recognize that our existence imposes certain costs on the world, and that we should try to pay back those costs by doing some kind of good for … for the world, I guess. I am somewhat human-centric, so I would like to do good for people, although I am also inclined to do good for nature, the planet, cats, and NPR.

So, if using “AI” lets you do a greater good than the cost of using it, I could accept that it was a good use of the “AI”.

And, as it happens, I know what our friend from FGNO wants to do with their use of “AI”, and it is a good use that has a small but real chance of making a real difference in the world.

So, I cannot condemn that use, not at all, while I remain deeply concerned about the overall costs of the technology in the world.

Pace[3], Everyone.

Yes. I was educated by Jesuits. Casuistry would be my middle name, if my middle name were not what it actually is.



  1. Friday Geek’s Night Out, a Zoom ensemble held, as you might imagine, every Tuesday evening for the past few years, among about a handful and a half of like-minded folx. 

  2. pace[3] Hill, and pace me and pace Emily M Bender and pace all the rest of us who know that the LLM doesn’t “know” anything, other than the probabilistic relationships between words and phrases. I’ll use casual terms here. Note that the so-called “AI” does not think, does not really form concepts[4] the way we perhaps do, and so on. 

  3. Latin pace, “PAH-cheh” in Church Latin: “peace”, implying respectful attention to individuals holding well-founded views alternate to the one being expressed. 

  4. To my knowledge, we do not know how our brain forms concepts, but it’s typically imagined to be related to clusters of connected synapses that tend to trigger at the same time under some stimulus. It wouldn’t be hard to make up a credible analogy between that brain science and the LLM science of probabilistic triggering of the next words or phrases. I remain open to the possibility that an LLM is a very decent model of how our brain actually works.