I know it’s a sin, but I’ve been trying a chatbot in conjunction with programming. Here’s a brief report.

I tried the thing out of curiosity, and because I feel that I’ll be asked about these chatbots and should have a better response than ‘dunno’.

We’ll finish up with the morality. Let’s start with what I’ve found out.

I’m using ChatGPT-3, for no particular reason. I’ve been using it in conjunction with a Blender / Python thing I’m working on. The Blender Python API consists almost entirely of a million calls, loosely classified, documented in convenient alphabetic order. There are almost no working examples (that I’ve been able to find) in the Blender documentation. You are invited to study the Python add-on files that come bundled with Blender, conveniently located somewhere inside Blender’s executable file, which, if you hold your mouth just right, you can open as if it were a file folder, since basically it is a file folder.

The examples are not particularly good code, and are not self-explanatory, nor were they ever intended as tutorial material. Swell.

Of course, stackexchange is your friend, and there and elsewhere on the Internet, you can find a lot of material on Blender and Python, at the usual level of quality, and primarily about five to eight years old and two major version numbers out of date. Marvelous.

I was doing OK. This level of inconvenience is nothing new, but one evening I was watching something about “AI” and loaded ChatGPT onto my iPad. With no other questions in mind, I asked it something about Blender and Python and it gave an answer that looked pretty good, based on what I understood by then.

“Looks pretty good” should be the primary advertising banner for these things¹. What a Large Language Model (LLM) does, in essence, is compute sentences that are very similar to the sentences upon which it was trained. And they have been trained on books, articles and quite likely the entire internet. For an over-simplified yet really quite on point example of how they work, look up Markov Text Generator and maybe even play with one.

I found this online one, and fed it text from its own page. Here’s what it produced:

MARKOV CHAINS ALLOW THE NUMBER OF TWO CONSECUTIVE WORDS IT THEN

CHAINS TO OBTAIN A SENTENCES GENERATOR HOW DOES MARKOV CHAIN CAN

ALLOW THE NUMBER OF MARKOV CHAIN WORKS FOR TEXT THE USED
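
If you want to see how little machinery it takes, here is a tiny word-level generator in Python. It is my own toy, not the one from that page: record which words follow which, then wander the table at random.

```python
# A tiny word-level Markov generator of my own, just to show how little is going on.
import random
from collections import defaultdict

def build_chain(text):
    # Record, for each word, every word that followed it in the training text.
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, length=20):
    # Start anywhere, then repeatedly pick a random recorded follower.
    word = random.choice(list(chain.keys()))
    output = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

sample = "markov chains allow the generator to pick each next word using only the word before it"
print(generate(build_chain(sample)))
```

Everything it “knows” is in that table of followers: no meaning, just adjacency.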

When you play with these things, as I did at length in my youth, they can generate some nearly reasonable-seeming sentences. Long ago, I wrote what I called a poetry generator on the IBM 1620, which generated text that almost seemed like blank verse, if you didn’t know much about poetry.

It’s not intelligence, artificial or otherwise.

Today’s LLMs are basically Markov text generators writ large. Very large. Internally, they do not sort through the text they are fed creating meaning. They sort through it creating probabilities of words and phrases appearing in conjunction with each other. (And, I would guess, they have some kind of basic language syntax rules built in, because they do generate real sentences, unlike the examples above.) But they have no model of the world, and they have no model of truth, and they do not understand what they are saying. As Emily Bender, a well-known AI researcher, puts it, they are stochastic parrots. They say random things that they do not understand.

But let’s get down to cases, mine in particular.

When I have needed to write a Blender class or add-on component, I’ve found the usual internet searches to be, as usual, mostly noise. If you read down to the third or fourth answer on stackexchange you may get a clue. So it is tempting to ask ChatGPT, because it gives you an answer that seems quite reasonable.

Here’s a recent case. Blender Python add-ons consist, typically, of small classes that can do a thing, and a menu addition that adds your thing to an existing Blender panel. It’s really almost good. You tell the panel what class to call, not by naming the class, but by using a string ID, bl_idname. You give each of your classes a unique bl_idname, and you refer to that string in the panel. The strings I was given in the thing I’m working on were all lower case, not camel or snake case: dothisthing, not doThisThing or do_this_thing. I thought camel case would make more sense. (I was probably wrong to choose camel; snake would be more pythonic.) So I changed a bunch of them, should be no problem. But the add-on wouldn’t load any more, and the message said that the name doThisThing was invalid. So I asked ChatGPT, and this is what it said:

Me
can bl_idname include upper and lower case letters?
ChatGPT
Yes, bl_idname can include both upper and lower case letters. While Blender’s Python API conventionally uses uppercase letters for bl_idname values, there is no strict requirement for them to be uppercase.

However, it’s important to note that bl_idname values are case-sensitive, meaning that “MY_OPERATOR” and “my_operator” would be considered as different identifiers. Therefore, consistency in naming convention (whether using all uppercase, all lowercase, or a combination) is crucial for readability and avoiding potential issues with identifier collisions or confusion.

In summary, while bl_idname can include both upper and lower case letters, it’s advisable to choose a consistent naming convention and stick to it throughout your Blender add-on or script to maintain readability and consistency.

Very clear, quite credible … and entirely wrong. The API uses lower case, no upper. The names may or may not be case-sensitive, we’ll never know, because they have to be all lower case. In summary, no ChatGPT, bl_idname cannot include both upper and lower case letters.
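
For concreteness, here is roughly the shape of the add-on pieces I described above, as I understand it. The class name and menu choice are my own invented example, so treat this as a sketch from memory rather than gospel.

```python
# A sketch, from memory, of the add-on shape described above.
# Only runs inside Blender, since it needs the bpy module.
import bpy

class DoThisThing(bpy.types.Operator):
    # The string ID the menu uses to find this class.
    # For operators it has to be all lower case, in "category.name" form;
    # something like "object.doThisThing" is exactly what gets rejected as invalid.
    bl_idname = "object.do_this_thing"
    bl_label = "Do This Thing"

    def execute(self, context):
        self.report({'INFO'}, "did the thing")
        return {'FINISHED'}

def menu_func(self, context):
    # The menu refers to the class by its bl_idname string, not by its class name.
    self.layout.operator(DoThisThing.bl_idname)

def register():
    bpy.utils.register_class(DoThisThing)
    bpy.types.VIEW3D_MT_object.append(menu_func)

def unregister():
    bpy.types.VIEW3D_MT_object.remove(menu_func)
    bpy.utils.unregister_class(DoThisThing)
```

The point is that the menu never names the class; it only knows the lower-case string.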

By now, of course, I knew what had broken my code, so I just fixed it. However I asked ChatGPT about the error message:

Me
invalid bl_idname
ChatGPT
If you encounter an “invalid bl_idname” error in Blender, it means that the bl_idname you have provided does not conform to the required format or rules. Here are some common reasons why you might encounter this error:

Incorrect Format: The bl_idname must follow a specific format, typically starting with a prefix that indicates the category of the element (e.g., OP_ for operators, PANEL_ for panels). Make sure your bl_idname follows the correct format and conventions.

Special Characters: Avoid using special characters, spaces, or punctuation marks in bl_idname. Stick to alphanumeric characters and underscores.

Reserved Keywords: Ensure that your bl_idname does not conflict with Python reserved keywords or built-in names.

Case Sensitivity: bl_idname values are case-sensitive. Make sure you are using consistent casing throughout your bl_idname.

Uniqueness: Each bl_idname must be unique within its category. Ensure that the bl_idname you are using is not already in use by another operator, panel, or element within Blender.

Length Limit: There may be a limit on the length of bl_idname. While Blender’s Python API does not specify a specific length limit, excessively long bl_idname values may be rejected.

To resolve the “invalid bl_idname” error, review your bl_idname and ensure that it adheres to the required format, follows the conventions, and is unique within its category. If necessary, try adjusting the bl_idname to comply with these requirements.

If you called Tech Support and got such a clear answer, you’d probably fly over to visit the company and buy whoever gave you such a clear answer a drink. Unless, of course, you tried to follow their advice, because the advice is wrong.

But worse yet, it’s not always wrong. It is often, perhaps usually, nearly right.

When I’m pairing or mobbing, and we have a problem, we often offer and try out half-baked ideas. “Maybe it wants a dot, not a colon.” “Could it be something about using a float there?” “Try the conventional spelling of that word.” Often we don’t have the answer but we see a possible step in the direction of the answer, and we try something and when we get a different result, we follow our nose.

So if it’s nearly right, that can still be quite useful. I haven’t been keeping score, but I would say that at least half of ChatGPT’s answers have been on point, or close enough to get me there. And it’s far better with plain Python than with Blender, which does not surprise me because the Blender info on the web is sparse and weak. And I’ve read one and a half out of the three books on the subject, and they are … well, one of them is terrible and the other is fairly good. So how could ChatGPT know any better? There’s no good info out there.

But if you were to ask me about Blender Python, I’d kvetch about the quality of the documents, and I’d sandwich all my remarks with “as I understand it” and “or something like that”, and you would know not to take what I say at face value but as the remarks of someone who has been in the woods a bit but doesn’t know them all that well.

Not ChatGPT, no sir. Right or wrong, it’s confident and articulate and always speaks as if it is presenting facts.

And that is dangerous.

In an area where I know rather a lot, I can spot a lot of its mistakes, and I can ask followup questions. And it’s very good about taking feedback: you can say “that didn’t work” and it will compute another guesswork answer.

But in an area where I only know a bit (Blender Python), it is harder to spot the mistakes. And in an area where I know almost nothing—I’ll think of one in a moment—I won’t be able to catch those issues, and I might be inclined to take them as truth. Well, no, I wouldn’t, if I knew I was talking to a chatbot, because I know you can’t trust a chatbot. But if I were just some random person, I might be taken in.

I might even be being taken in now, because I’m probably not going to throw the thing away. In spite of its horrendous flaws, it’s actually kind of useful, and, though I hate to say it, the way it seems to be actually talking back and answering your questions feels a bit more human than reading the dead pages of the web.

Speaking of Human

Years ago, back in the 60s, Joseph Weizenbaum wrote a trivial program called Eliza. It was a very poor simulation of a conversationalist. A quite popular script for it tried to answer like one of those classical psychotherapists who just ask open questions. “How do you feel about that?” “What if you could do that?”

It was really pretty weak, but if you chatted with it, you could almost get the feeling that it understood you and cared about you, mostly because it kind of seemed to be listening. “My brother used to make fun of me.” “Tell me more about your brother.” The same will be true with chatbots, many times over. It would be “easy” to come up with a life coach kind of chatbot, that would listen to you and offer advice and feedback. And it would sometimes, often, give you bad advice. But it would seem like your friend, your companion, your helper.
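
To give a sense of how little was behind that, here is a toy sketch in the Eliza spirit, my own few rules rather than Weizenbaum’s actual script: match a keyword, reflect a fragment back, otherwise fall through to an open question.

```python
# A toy Eliza-style responder: a few keyword rules and a canned fallback.
# My own sketch of the general trick, not Weizenbaum's actual script.
import random
import re

RULES = [
    (r"\bmy (\w+)\b", "Tell me more about your {0}."),
    (r"\bi feel (.+)", "Why do you feel {0}?"),
    (r"\bi want (.+)", "What would it mean to you if you got {0}?"),
]
FALLBACKS = [
    "How do you feel about that?",
    "Please go on.",
    "What does that suggest to you?",
]

def respond(line):
    # Reflect the first matching fragment back; otherwise ask an open question.
    for pattern, template in RULES:
        match = re.search(pattern, line, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return random.choice(FALLBACKS)

print(respond("My brother used to make fun of me."))
# -> Tell me more about your brother.
```

It is embarrassingly thin, and yet a few exchanges with something like it can feel oddly like being listened to.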

Insidious. Danger, Will Robinson.

On a recent program about AI, perhaps on Nova, they demonstrated a number of interesting deep fakes, including putting someone else’s face into Terminator, Jordan Peele doing his Obama impression while Obama’s face seemed to be saying his words, and the narrator of the program viewing a video of himself saying things that he never said.

We are well into an era where nothing we see or hear on the television or the internet can be fully relied on. And, unless I miss my guess, it’s going to get worse before it gets better … if it ever gets better.

And the Energy!

According to one article that I found, the energy consumption involved in creating and using these programs is immense. It is estimated that training ChatGPT-3 consumed as much energy as running 130 homes for a year.

A Moral Issue?

They consume massive amounts of energy. They have been and will be used to create misleading, harmful misinformation. They will be used to displace workers from their jobs. It doesn’t really matter if they go for Skynet and try to kill us all: there are already serious moral concerns about these programs.

I know at least one person, whom I respect greatly, who sees even what little use I’ve made as frankly immoral, and they may be right.

These things are among us, and they will continue to become more capable, and they will continue to surprise us. Some of the things they can do are clearly good. Some of them are clearly not good.

In the end it comes down to what people will do, and people do not have a universal record of making good choices.

Interesting times. That’s my report. Contact welcome.



  1. I will generalize here about all LLMs, without direct experience, but with rather a lot of background in education and reading. If I impugn your favorite chatbot, do please toot me up and I’ll be happy to be further educated.