Questions about "AI"
Some rambling thoughts on “AI” and LLMs. Things I think about.
Main Concerns
I believe that the current LLM craze is fundamentally wrong, in a moral sense, on several dimensions, including destruction of the environment, putting humans out of work, and getting in the way of human learning by providing ready-made answers.
More thoughts along these lines can be found in this linked article. I think “AI” is not good for us.
That doesn’t mean that I’m not interested in the topic. I am very interested, because it is important. I’m not satisfied with “AI IS EVIL”: I want to understand this technology, to understand the enemy, and to see what good there is to salvage, if there is any.
Topics that interest me include:
Modeling Thinking
I wonder how much of what we think of as “thinking” might be rather well-modeled by a Markov-like, LLM-like probabilistic stream generation. Presumably not all, because we are usually working toward some point. But I do observe in my own stream-of-consciousness writing that what I have just written often changes where I think I’m going: suddenly I’m working toward a different point.
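The “Markov-like, LLM-like probabilistic stream generation” idea can be made concrete with a toy sketch. This is my own minimal illustration, not anything from the article above: each next word is chosen at random from the words that have followed the current word before. A real LLM conditions on vastly more context, but the flavor is similar.

```python
import random

def build_chain(text):
    """Map each word to the list of words that have followed it."""
    words = text.split()
    chain = {}
    for current, following in zip(words, words[1:]):
        chain.setdefault(current, []).append(following)
    return chain

def generate(chain, start, length=8, seed=0):
    """Walk the chain, picking each next word at random.

    Like stream-of-consciousness writing, where we end up
    depends on the word we just produced, not on a planned point.
    """
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = chain.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

text = "the cat sat on the mat and the cat ran to the mat"
chain = build_chain(text)
print(generate(chain, "the"))
```

Even this tiny example shows the interesting property: the stream is locally coherent while having no destination, which is exactly the question about how much of our “thinking” works that way.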
I wish I had a few sharp folks to kick this around with, in a sort of salon or chat group or reading club.
Consciousness
How do we know, other than by analogy, that another person is conscious? The Peter Watts novel, Blindsight, touches on that topic and related ones rather nicely. In this context, how would we know that an LLM was conscious? We can certainly say — or think we can — that the code (we think) they have inside cannot possibly generate consciousness. But how would we know? If we knew, or believed, that some program was conscious, then what?
I wish I had a few sharp folks to kick this around with, in a sort of salon or chat group or reading club.
Human Reactions
People are more important than programs, and people’s reactions to these systems are strong, scarily strong. They are sure the LLM is helping them. Some become sure that the LLM is somehow conscious or alive. Some come to rely on it emotionally. I’d like to explore that topic.
Joseph Weizenbaum reported that his secretary, I think it was, would not let him read over her shoulder what she typed into ELIZA, the world’s first and worst on-line therapist.
Weizenbaum’s secretary, we imagine, must have been baring her soul, at least somewhat, to ELIZA. We might jump to the conclusion that Joe didn’t know she had that need, and that he should have known. We might be sad or angry that he did not or could not provide for it, that she had no other possible listener. We write a whole story about Joe and his secretary.
Would we have the same kind of reaction had we heard that she kept a diary which was very personal, in which she wrote her secret thoughts and worked with them, and would not let him read it? I suspect not. If not, what is really behind our reaction about her interacting with ELIZA?
What are the key important differences between introspection, introspection via writing in one’s journal, introspection via writing a blog entry every damn day, introspection with a therapist, introspection via talking with today’s much larger ELIZA? Are there special risks in finding one’s thoughts and feelings via ELIZA? Via ChatGPT? Are there benefits? Might it actually be “therapeutic” to chat with ELIZA or an LLM? Is that better than just holding it in, not expressing it?
I wish I had a few sharp folks to kick this around with, in a sort of salon or chat group or reading club.
Summary
I think it was E. M. Forster who said “How do I know what I think until I see what I say?” That is certainly true of me: I learn best in conversation with other thinkers, ideally wiser than I am. I make do here on my web site, in conversation with you, with myself asking the questions you might ask, giving the answers you might give. Or at least the best versions I can produce. And quite often, I do discover what I think by writing about what I think.
But conversation is better, and that’s why I wish I had a few sharp folks to kick this around with, in a sort of salon or chat group or reading club.