Driven by some Slack chatter, I’ve been thinking about feedback. How does that fit into life, and into this little program?

On the Slack, we were talking about feedback, particularly negative feedback, particularly negative feedback that someone thinks is “necessary”. My plan here is to start with some general thoughts on feedback between people, and then to think about what kind of feedback I’d like (and need) to have in my programming, and how this program, and programs like it, could help with that.

On the Slack, I found myself saying this:

Norm Kerth said
“Regardless of what we discover, we understand and truly believe that everyone did the best job they could, given what they knew at the time, their skills and abilities, the resources available, and the situation at hand.”1

I have come to understand the truth of this. In the moment, in every moment, we make the best choice that we can. Not the best there could ever be, not the best we even wish we’d make, but the best we can, right then and there.

Therefore, what we need is support to make better choices. We do not need punishment. Punishment, at best, only stops us; it never starts us. Yes, if we don’t even know we did poorly, maybe someone needs to tell us. That’s quite rare.

We don’t need negative “feedback”. We don’t need focus on weakness. We need help keeping our eye on the ball.

We do the best we can

I suspect that most of us have experience of having some very attainable goal, some habit that we want to create, or one that we want to eliminate … and then finding that we just don’t get there. We don’t reach the goal, we don’t exercise consistently, we keep eating entire bags of chips … whatever it is we set out to do, we don’t get there.

And then someone, quite often we ourselves, beats us up over this “failure”. We then feel badly because we couldn’t even do this simple thing.

Norm asks us to remember that we have done the best we were able to do, in that time and place, with those resources, with that amount of energy, will power, and wisdom. That’s not to say that we cannot do better: quite likely, we can. If we keep trying, quite likely we will. But there and then, Norm says, and I agree, we did the best we could.

When the book came out, I took the Prime Directive to be a pose, an attitude we tried to keep in mind as we looked back on the past. A sort of “forgiving” attitude. But I didn’t believe the Directive to be literally true. Today, I think it is literally true. We are always doing the best we can. It’s just that our best isn’t as good as we can hope or imagine it to be.

You can buy that, or not. I commend the idea to your thinking and feeling. We’re here to talk about feedback, however, and the light of “the best we could” is an important light to shine on past behavior.

Negative feedback and power

In a business situation, I want to suggest that negative feedback only flows one way. It flows from someone of higher power down to someone of lower power. The boss tells us we’re doing a bad job; we don’t get to tell him. We tell our employees they’re doing a bad job; they don’t get to tell us.

Even when we try to open up to feedback from lower-powered people, it doesn’t work well. We may try to anonymize it, to make the power imbalance less visible.

Even when it is “peer to peer” feedback, if you pay attention to what’s going on, I think you’ll find that there’s almost always an implied power gradient in play. Joe is smarter than Ann, and his voice is louder. He can tell Ann what’s wrong with her code. She can’t tell him.

I don’t want to push this too far, but negative feedback often feels like an attack to the recipient. It feels like an assertion of power over them. A big reason why it feels that way is that, quite often, it is an assertion of power over.

Negative feedback extinguishes behavior

Psychologically, we can often cause someone to stop doing something bad by using negative feedback. What is almost impossible is to get someone to start doing something good using negative feedback. We turn away from the pain, but we don’t necessarily turn in the direction that’s desired.

And, quite often, the feedback doesn’t even get us to stop, it gets us to hide. The boy doesn’t stop smoking, he stops smoking at home. (He might start using breath mints, if you think that’s a good thing.)

To create positive behavior requires what psychologists call “reinforcement”. When we do something good, we get a food pellet or a pat on the head. We become more likely to do the good thing. So if Bill didn’t run the tests before pushing his code, what can we do? A food pellet might work, but we can be almost completely sure that shouting at him about running the friggin’ tests will not cause him to address testing in a thoughtful and creative way.2

Don’t Criticize, Condemn, or Complain

The above is one of the Dale Carnegie principles from the famous, ancient book, *How to Win Friends and Influence People*. The rule immediately following that one is “Give honest, sincere appreciation”.

When it comes to working with people, that old, arguably out of date advice seems to me to still have a lot of mileage left.

I have no specific advice for dealing with people, but I’ll try to be responsive to tweets and emails on the people side of the feedback subject.

We’re really here to talk about this “Codea Stats” program, and, more generally, the kind of feedback we’d like to have about our programs.

The Making App

GeePaw Hill has written about the “making app” as opposed to the “shipping app”. The “making app” is the toolset we have for working on whatever the shipping app is. Most of that toolset is probably pretty standard, the language and libraries and IDE that we use. Parts of the making app may be more directly aimed at the specific app. Hill is working on a gerrymandering app and has written a specialized tool that he can use to observe the program in action, step by step.

Today I want to try to think about the notion of the “making app” for Codea Lua, in the light of CodeaUnit, and this nascent Codea Stats program. I started out writing it because of some simple notion that it might be interesting to know some things about my Codea program. (There was also a desire to work on anything but the dungeon program for a while, but we’ll discount that.)

In the initial article, I listed these things we might want to know about: the number of, and details about, each of the following. (A rough counting sketch appears just after the list.)

  • tabs
  • classes
  • methods (per class?)
  • non-method functions
  • test suites
  • tests
  • test “expect” calls

Let’s step back and try to think about what goes wrong when I’m working with Codea, and how the stats app, or some other code, might help me with that.

Steps Too Large
Often I come to some large chasm of missing function, and, not seeing a small step toward it, I take a large leap of faith and just go for it. Almost always, I succeed. The other 75 percent of the time, I wind up reverting, or engaged in a long debugging session, in which I am mostly confused, angry, and afraid. Sometimes I even wish I wasn’t honor-bound to tell the truth in these articles.
No Tests
Often I proceed without tests. I know better, but I’ve developed some good rationalizations about when it “just doesn’t make sense” to test. One of my favorites is in drawing and screen layout, where it’s “easy” to see if it looks good and “hard” to test what it’s going to look like. That may even be true, but the result of that thinking seems to be that I often skip tests when they’d be perfectly feasible and quite helpful.
Few Commits
Correlated with large steps and few tests, we find relatively long periods of time without code committed to the repo. On the one hand, a long period between commits often means a long period where the program is unstable and I can’t instantly fix it. And on the other hand, the long period means that when I make a particularly bad step, I can’t readily revert it out. My work would be much smoother if there were a save point before every attempted step forward, so that I could unwind neatly and try a new path.
Dot vs Colon
Some percentage of the time, I write x.y() when I should have written x:y(). Both are legal. The former is rarely what I want, given my programming style.
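For anyone who hasn’t been bitten by this one: the colon form passes self implicitly, the dot form doesn’t, so the dot call typically blows up trying to index a nil self. A made-up example:

```lua
Greeter = class()

function Greeter:init(name)
    self.name = name
end

function Greeter:hello()
    return "hello, " .. self.name
end

local g = Greeter("ron")
print(g:hello())   -- hello, ron
print(g.hello())   -- error: self is nil inside hello
```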
Missing return
Fairly commonly, when I write a new function, I forget to return the result. This is particularly common when the function includes an accumulating internal variable: I often forget to return the accumulated value.
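The shape of the slip is nearly always the same, something like this made-up example, where the last line is the one I leave out:

```lua
function countTests(lines)
    local count = 0
    for _, line in ipairs(lines) do
        if line:find("_:test(", 1, true) then
            count = count + 1
        end
    end
    return count   -- the line I routinely forget, leaving callers with nil
end
```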
Accidental Globals
Variables default to global in Lua. You have to declare them as local if you mean them not to be global. Often I forget to do that, particularly when writing tests. This is usually not a big problem, because I tend to name “real” globals with upper-case initial characters, but it’s tacky and can lead to problems.
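In a test, the slip looks something like this sketch, where count leaks into the global table while expected stays properly local:

```lua
_:test("accidental global", function()
    count = 2 + 2          -- oops: no "local", so count becomes a global
    local expected = 4     -- what I meant to write for count as well
    _:expect(count).is(expected)
end)
```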
Test Reporting
Tests in CodeaUnit seem always to provide too much information or too little. Right after a test is created, I’d like to see it indicated as OK or to show its error in the console. And whenever a test fails, I’d like to see the error report in the console. After a test is working, and I’ve moved on to another one, I don’t really want to see it printing OK: it fills up the console and makes it harder to spot the occasional test failure.

This is exacerbated by the fact that the basic CodeaUnit that I use with new programs isn’t quite up to date with the ones that I use in larger apps. In particular, I’ve tweaked the code that displays results, and that tries to color the screen red when tests fail, and the like.

Curiously, none of these problems seems to be addressed by a program for enumerating classes, methods, and functions. Those are interesting facts, and I am almost sure that they’ll tell me useful things about the program, but they don’t seem to be in the top ten.

Hmm. Looking at this list of issues, I don’t see anything that’s really helped much by a program that can list classes and methods or even functions.

It might be useful to see some kind of general progress information. Imagine a graph showing number of classes, methods, functions, lines, tests, expects … over time. We’d hope to see tests and code growing more or less proportionately.

Interesting, But Not Useful

This is distressing. I’m glad I started with all that thinking about negative feedback, because I’m kind of thinking that this stats program has been interesting to write, but it isn’t focused on what I need. Well, I have done my best, or at least the best I had on those days. What now?

Well, one thing that is quite good is that I’ve built up a lot of expertise in parsing the code and scarfing out information. That’ll surely be useful someday, perhaps even someday soon.

But let’s see what would be useful. And, ideally, possible.

CodeaUnit “Update”

It’s probably time to do a revised version of CodeaUnit. We’d want to make a list of all the things that might be good, but I think what would be quite fine would be to have it better integrated with itself. Today, each occurrence of a _:test creates its own output, and you get a separate summary:

(Screenshot: CodeaUnit console output, showing a single green summary.)

There are issues here. Not least, there are more tests than this shows. After a tweak, I get this:

(Screenshot: CodeaUnit console output after the tweak, showing five features.)

Now we see that there are actually five features. We also see that one of them has a failing test and yet the display is still green. That bug is fixed in the version that runs in D2, I think, but it has not been put into the default version that I use. We also see that two features have the same name. That’s a copy-paste issue. The underlying reason is that creating a CodeaUnit fixture is a bit tricky, so I like to copy an old one, and, often, I forget to change its description.

As it works now, CodeaUnit treats each test function as separate, and it has essentially no useful global state other than the Console contents. I think that the original author expected that you’d write one big test function and then use separate _:describe sections within it to separate out your tests. My own usage doesn’t quite parallel that: I like to keep tests and code together, so I wind up with multiple test functions, each containing one or more _:describe sections.
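For concreteness, here’s roughly the shape of a fixture as I use it; the feature name and the trivial tests are just illustration:

```lua
function testSampleFeature()
    CodeaUnit.detailed = false

    _:describe("Sample feature", function()

        _:test("arithmetic still works", function()
            _:expect(2 + 2).is(4)
        end)

        _:test("strings concatenate", function()
            _:expect("co".."dea").is("codea")
        end)

    end)
end
```

Copy one of those, forget to change the string in the _:describe, and you get two features with the same name, as above.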

test && commit || revert

Kent Beck has this wild idea of programming such that whenever your tests run, the code is committed, and whenever they fail to run, your code is reverted.

Read the article; in it he says:

As part of Limbo on the Cheap, we invented a new programming workflow. I introduced “test && commit”, where every time the tests run correctly the code is committed. Oddmund Strømme, the first programmer I’ve found as obsessed with symmetry as I am, suggested that if the tests failed the code should be reverted. I hated the idea so I had to try it.

I hate the idea, and my toolkit won’t let me try it, but if negative feedback extinguishes behavior, TCR would certainly extinguish something, perhaps my love of programming.

Now, if you’re like me, your mind immediately leaps to the idea that if you wrote only one test, asserting that 2 equals 2, you’d never get a revert. On the other hand, even then you’d commit a lot of garbage to the repo. So that hack won’t quite work.

Anyway, I can’t quite do this trick today. However, I believe that WorkingCopy can be driven from iOS shortcuts using a feature called “x-callback-url”. So it might be possible to add a feature to CodeaUnit that would trigger a commit or revert automatically. That said, my experience with revert is that Codea doesn’t always respond to the revert unless you exit from the editor and return.
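I haven’t tried this, so what follows is no more than a guess at the shape: Codea’s openURL can hand a URL to iOS, and Working Copy’s x-callback-url scheme, as I understand it, accepts commands along the lines of commit. The command name and parameters below are assumptions to be checked against Working Copy’s documentation, not a recipe:

```lua
-- Untested sketch: ask Working Copy to commit when the tests pass.
-- The URL command and parameters are assumptions, not verified.
function commitIfGreen(passed, repoName, message)
    if not passed then return end
    local url = "working-copy://x-callback-url/commit"
        .. "?repo=" .. repoName
        .. "&message=" .. message:gsub(" ", "%%20")
    openURL(url)   -- Codea's openURL hands the request off to iOS
end
```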

Still, it’s an interesting possibility, and we can surely do better with the general output issues, if not with auto commit.

Timers and Nagging

It is surely possible for CodeaUnit (or some other part of our MakingApp) to know how long it has been since the last green tests, and it could remind us or take other action. Codea has the ability to save and read key-value pairs in local data, project data, and global data areas. We would probably use the local data, which is local to the specific machine, and could record how long it has been since the tests were green.
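A minimal sketch of that idea, using Codea’s saveLocalData and readLocalData; the fifteen-minute threshold and the function names are mine, purely for illustration:

```lua
local NAG_AFTER_SECONDS = 15 * 60

-- Call this whenever CodeaUnit reports all tests green.
function recordGreen()
    saveLocalData("lastGreenTime", os.time())
end

-- Call this from setup(), say, to nag when it has been too long.
function nagIfStale()
    local last = readLocalData("lastGreenTime", 0)
    local elapsed = os.time() - last
    if elapsed > NAG_AFTER_SECONDS then
        print("Tests last green "..(elapsed // 60).." minutes ago. Time to get to green and commit?")
    end
end
```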

Of course, if you never try to run the program, it might be difficult … unless it were possible to run more than one Codea program at a time … or if we wrote some globally accessible info that a shortcut could watch. I’m not at all sure that the iPad will let that happen.

There might be other ways of inserting reminders and triggers to encourage more frequent commits and other good things.

Smarter Parsers

Things like missing returns are pretty difficult to spot in Codea, because Lua’s keyword-delimited format, where everything closes with end, is harder to parse than nested brackets would be. In a nested-bracket language, we could pattern match to the final bracket in a function and look for a return. In Lua we’d have to skip over if-end, for-end, while-end, do-end, function-end, and probably some others that I’ve forgotten. Difficult without a full language parse.

Still, our pattern recognizers are fairly powerful and we might be able to spot a few issues.
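Here’s the sort of shallow nudge I have in mind, entirely illustrative: flag lines that look like x.y() calls on something that isn’t a known module. It will produce false positives; it’s a hint, not a parser:

```lua
local KNOWN_MODULES = { math=true, string=true, table=true, os=true, io=true }

-- Flag probable x.y() calls that perhaps should have been x:y().
function suspiciousDotCalls(source)
    local suspects = {}
    local lineNumber = 0
    for line in (source.."\n"):gmatch("(.-)\n") do
        lineNumber = lineNumber + 1
        local receiver, method = line:match("([%a_][%w_]*)%.([%a_][%w_]*)%s*%(")
        if receiver and not KNOWN_MODULES[receiver] then
            table.insert(suspects, lineNumber..": "..receiver.."."..method.."(...)")
        end
    end
    return suspects
end
```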

Bottom Line (For Today)

Looks like we need to take a step back on the notion of the “Making App” for Codea, probably with a primary focus on CodeaUnit. And we have some interesting experiments that we could do as well, with WorkingCopy’s callbacks.

A question that needs to be asked, of course, is whether this is a kind of gold-plating or following one’s technical nose into a rathole. We have a Dungeon program, or whatever our next program is, that, if we were a business, should be our primary focus. And, yes, in that world, we’d want to be pretty careful not to spend a lot of time on things like useless statistics. We might do well to do a quick-and-dirty scan to get the info we need, but to avoid spending multiple days working on a general statistics finder.

We have a luxury here chez Ron, which is that we’re here to learn, to think about programming and how best to do it, so that we can turn our focus to learning about the Making App, not with an eye to writing the perfect one, but to opening our minds to the possibilities, especially the possibilities that can be implemented inexpensively.

What have we learned today? Well, quite possibly that we’re a bit down the old rathole here and that we should perhaps back out and take another angle. Then again, we could always just keep digging …

Stop by next time to find out what I do. I’m kind of curious myself.


  1. Norm, for those who may not know, was the author of *Project Retrospectives*, and he really brought focus to the retrospective practice. The quote above is what he called “The Prime Directive”. 

  2. Quite often, with something like this, we need to change the system, not the person. If code can’t be pushed unless all the tests are green, neither Bill, nor anyone else, will push untested code. No hassle, no shouting, everything goes smoother.