Sudoku: Is This Programming?
What does all this have to do with real programming? We’ll begin with last night’s FGNO and go wild from there. We mention Iceland, and the age of the universe. Onions. Easter eggs.
FGNO¹
At the FGNO last night, we reviewed the work that another member, whom I’ll refer to as “Bryan” without loss of generality, has done working toward Sudoku. “Bryan”, as I’ll call him, wanted to start early on with a validator that would determine whether a given array of numbers amounts to a solved Sudoku. That immediately led “Bryan” (q.v.) to the problems of finding the rows, columns, and sub-grids of the game.
Curiously enough, B, as I’ll call “Bryan” (sic) for short, came up with the same mysterious formulation that I found for finding the leading element in a row:
(x // 3) * 3
B used the parentheses. I did not. B’s way might be better, since at least some of us might wonder if x//3*3 might equal x//9. I confess that I wondered myself, but the fact that my tests ran told me that it was OK. Perhaps a bad decision: explicit is usually better than implicit.
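As it happens, the worry is unfounded: in Python, `//` and `*` have the same precedence and associate left to right, so `x // 3 * 3` is evaluated as `(x // 3) * 3`, never as `x // (3 * 3)`. A quick check at the console confirms it:

```python
# // and * share precedence and bind left to right,
# so x // 3 * 3 rounds x down to the nearest multiple of 3.
for x in range(30):
    assert x // 3 * 3 == (x // 3) * 3   # parentheses change nothing

print([x // 3 * 3 for x in range(9)])   # [0, 0, 0, 3, 3, 3, 6, 6, 6]
print([x // 9 for x in range(9)])       # [0, 0, 0, 0, 0, 0, 0, 0, 0]
```

So the tests passing was no accident, though B's parentheses still do the reader a kindness.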
In the course of the evening, we found a defect in B’s code. We devised a test to show the defect, and it did not fail. Although the defect was there, an earlier check was making our test pass before execution ever reached the defective code; had it gotten there, the test would have failed.

It’s rare that that happens to us, and what’s most interesting is that B had pointed to the issue almost first thing. The method under test was shaped like this:
```python
def check_something(...):
    for x in one_thing:
        if some_check(x):
            return False
    for y in another_thing:
        if some_check(y):
            return False
    for z in yet_another_thing:
        if mistake(z):
            return False
    return True
```
B actually pointed out that another FGNO member, to whom we’ll refer as “C” (not their real name), would have demanded that those three paragraphs should be factored out into three separate methods. We all agreed, but didn’t do it.
If they had been separate, it would have made sense to test them independently, and if they had not already been so tested, finding the defect before last night, it would have been easier to write a failing test to show the defect.
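For the record, the extraction might have looked something like this. The names and the per-item check are invented for illustration, not B’s actual code:

```python
# Hypothetical sketch: each loop from check_something becomes its own
# small function, so a test can reach each check directly.

def bad(value):
    """Stand-in for the real per-item check; here, 'bad' means negative."""
    return value < 0

def rows_ok(rows):
    return not any(bad(r) for r in rows)

def columns_ok(columns):
    return not any(bad(c) for c in columns)

def sub_grids_ok(sub_grids):
    return not any(bad(g) for g in sub_grids)

def check_something(rows, columns, sub_grids):
    return rows_ok(rows) and columns_ok(columns) and sub_grids_ok(sub_grids)

# A test can now exercise sub_grids_ok alone, so a defect there
# fails directly instead of hiding behind an earlier check.
assert sub_grids_ok([1, 2, 3])
assert not sub_grids_ok([1, -2, 3])
```

Note that the scattered `return False` statements collapse into a single `and` expression, which is part of why the extraction pays for itself.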
Reflection

- One real advantage of very small methods is that it is both easy and natural to test them independently, and the result is that fewer subtle defects are likely to arise.
- One disadvantage is that, very often, we are thinking of a larger chunk of the solution. In the code we had last night, B was probably thinking something like “OK, we just check all three cases and we’re done”, and wrote that. Having thought that way and written that way, it seems like extra work to do the extraction. And, in the case in hand, the extraction is difficult, because of the `return` statements interspersed through the `check_something` method. So we skip it, and, sometimes, a defect creeps in.
- Another very real disadvantage is that sooner or later, there get to be so many tiny methods that even we tiny-method fanatics find ourselves in a twisty maze of tiny methods, all slightly different. That’s usually a sign that there’s a missing level of classification, perhaps a few missing classes, but really, even a fanatic sometimes gets tired of making things smaller and then arranging them.
Is This Even Programming?
A question came to mind this morning while I was making my morning iced chai latte, using some seriously rigid ice from our new refrigerator, the old one having decided that 42 was the answer to the temperature for the lettuce, the ice, and the ice cream, and everything:
Is this even programming?
If a programmer today needed to produce a Sudoku solver, they’d probably search out a micro-service that was out there and write some bizarre linkage, spinning up some servers of their own and wiring them together, then hacking out some HTML and CSS, with a little JavaScript in there, somehow ultimately solving the Sudoku and using an LLM to write a few paragraphs about the solution and the history of Sudoku.
They might need to use DALL-E—or possibly Midjourney—to ensure that each user got their own lovely picture of a strangely unreal-looking but unquestionably beautiful girl sitting at the table solving a puzzle, writing with her seven-fingered extra right hand. They’d accomplish this at the cost of approximately the Thursday power consumption of Iceland for every solution, having written perhaps four lines of actual code.
Like it or not, that’s rather like much of today’s programming, and it’s often nothing like what I tend to do here.
Should we deplore this situation, extol its virtues, or what? Honestly, I do not know. When I learned programming, the first computer I used, which I was not even allowed to visit, filled a room buried deep underground, as did the second computer I ever used, which I was actually allowed to visit and even touch, as by then I had attained a security clearance above Top Secret. That one had a special double memory of 65536 36-bit words, giving it almost 300 kilobytes of main memory.
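The arithmetic, for the curious, checks out:

```python
# 65,536 words of 36 bits each, converted to bytes and then
# (decimal) kilobytes.
words = 65_536
bits_per_word = 36
total_bytes = words * bits_per_word // 8
print(total_bytes)          # 294912
print(total_bytes / 1000)   # 294.912 -- just shy of 300 kilobytes
```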
A laptop today might include:
- 16 or 32 gigabytes of memory;
- 8 or more cores running at perhaps 3000 megahertz;
- A GPU capable of 2 trillion floating point operations per second;
- Neural network hardware capable of 11 trillion operations per second.
A fascinating article, *A Quadrillion Mainframes on Your Lap*, says that a week of processing on your laptop might require a 7090 to work longer than the age of the universe. That does include the time for people to swap tapes in the drives, however. Very slow cycle time for that.
But seriously
Someone is out there programming in JavaScript, Java, Kotlin, Python, Ruby, and the more modern languages such as Go, Haskell, and so on. Lots of someones. And the code written in those languages does need to be read far more often than it is written, does need to be maintained over long periods of time.
I think it’s fair to say that many of the ideas that my fellow grey-beards and I use to organize our Sudoku programs and D&D and Asteroids programs are still quite desirable today. And our ability to proceed in small steps, with confidence that each step actually does what we think it does … that’s a big part of our ability to quickly build programs that do what’s intended, and that can grow and be modified over long periods of time.
Every day as I write these little programs, I learn and re-learn and re-learn lessons that would have served me well when, in my distant and rapidly-receding past, I was working with teams writing operating systems, compilers, database management systems, and financial applications. I do not wish to repeat my past: if I could stay at least as healthy as I am today, I would much prefer a very long future. I suspect that neither option is in the cards. But I digress.
If I had known then what I know now, we could have delivered more value, sooner, and at a pace more acceptable to the business side people, providing them with a better basis on which to make decisions about what to do next, and we could have maintained those programs more readily than we were then able to.²
I’m sure I’d have screwed up in some other fascinating ways, but if I were to be condemned to programming again with teams, I would hope to take with me the things I’ve learned in the last two decades, two years, and two days.
What about you? You do you. I think there’s value here, but mostly I do this to be doing what I do. I hope that you are at least entertained.
See you soon!
1. Friday Geeks Night Out, held each Tuesday evening, also referred to in these writings as “Zoom Ensemble”, where a few of us get together and, as we used to say in the old days, “chat” about whatever code or non-code things come up. ↩
2. Contrary to rumor, we did not, at that time, tie onions to our belts. It was not the fashion at the time. That came later. ↩