Today I have no one to pair with, and I don’t plan to do any work on the iPad project. (That could change; I reserve the right to change my mind at any time.) I do have two things to think about.

First, I might document the expected flow and objects a bit, just because I can’t stop thinking about things.

Second, I’ve noticed a significant difference between how Tozier and I pair, compared to my experience with Chet and others. I’d like to explore that.

Third¹, I’ve been thinking about politics in the USA and the world, and I might write about that. If I do, I’ll probably spare you and put that writing somewhere else.

The flow of the thing

Roughly, this little application is supposed to work like this:

  1. Application is set up to run at intervals, with cron.
  2. There’s a designated communication folder in Dropbox, call it Designated.
  3. When I write an article, I put its folder, say iPad-4, into Dropbox/Designated.
  4. When I’m ready to publish it, I put a file named, say, gogo in Dropbox/Designated.
  5. When the cron ticks, it runs our app.
  6. App looks for gogo. If it doesn’t find it, it exits.
  7. App checks to see if it is already doing a build. If it is, it exits.
  8. App marks that it is doing a build.
  9. App copies all the top-level folders from Designated into the site source folder. (Position to be decided.)
  10. App triggers Jekyll build.
  11. If build fails, put a message back into Designated and exit.
  12. Copy all the folders in _site, corresponding to the ones in Designated, up to the server.
  13. Copy all the index folders in _site up to the server.
  14. Copy the home index file in _site up to the server.
  15. Mark that we’re not doing a build.
  16. Put a message back into Designated and exit.

There are some interesting tricky bits here. We want the cron to run fairly frequently, so that when we kick off a build it proceeds soon. But a build takes a few minutes at least, so it’s likely that the cron will fire again. So we need some kind of semaphore to make sure we don’t start a build on top of a build. Relatedly, the user (me) would like to know that something’s going on.

We need to handle errors from Jekyll, exiting the process and putting some information where the iPad can see it.

There are some fixed files that must be moved, such as the index pages and the home page. We’ve probably forgotten at least one.

Bottom line, there’s a fair amount to think about. And …

How we work

Tuesday, I noticed a difference in how Tozier and I seem to think and approach building this thing. I chatted with Chet about it Wednesday, and with Tozier yesterday. I’ll babble about it now.

As we build something, we can’t keep everything in mind. We use various strategies and tactics to decide what’s on the worktable and what isn’t. When Chet and I are pairing, we usually agree quite closely on what’s on the table. Now partly that’s because we’ve been pairing for over twenty years, so we can pretty much finish each other’s sentences. But generally when I’m pairing with other folks, it’s the same: we quickly agree about what we have on the table, and what’s deferred.

There are at least two kinds of “deferred”. In one case, something comes up, and with a sentence or two we dismiss it to deferral. In another case, we say a thing or two, make a note so we don’t forget, and then defer it. (Of course some things come up and stay on the table.)

Now I very rarely even write a note. I’m quick to decide that we don’t have to think about it now, and I’m confident that if the idea is important, it’ll come up again, so we don’t need a note. If it doesn’t come up, we didn’t need it. But why can I be so quick to decide we don’t have to think about it now?

I have an example. It’s typical if not exactly accurate, and goes something like this:

Tozier and I are talking about whether, after FTPing something up, and checking that the file name is present on the remote, we should check the contents. I said that I would generally not do so. I’m checking that we correctly asked for all the files we wanted moved, and I don’t check to see if they are OK. I think we used to call this thinking “Rule 38: Trust your I/O.”

When I’m working at the level of making a call to some system object, in this case the Net::FTP object, I don’t check its work. I choose to trust that it works. I might check to be sure I called it correctly, but I’m not going to check byte for byte that it moved the files correctly.
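In Net::FTP terms, the Rule 38 check looks something like this. The host, credentials, and directory are placeholders, and the helper names are my invention; `putbinaryfile` and `nlst` are the real Net::FTP calls:

```ruby
require "net/ftp"

# The pure part of the check: which names we sent are missing
# from a remote directory listing? An empty array means all arrived.
def missing_uploads(local_files, remote_listing)
  local_files.map { |f| File.basename(f) } - remote_listing
end

# Upload the files, then check names only -- not contents.
def upload_and_check(host, user, password, files, remote_dir)
  Net::FTP.open(host) do |ftp|
    ftp.login(user, password)
    ftp.chdir(remote_dir)
    files.each { |f| ftp.putbinaryfile(f) }   # send each file up
    missing_uploads(files, ftp.nlst)          # [] means every name is there
  end
end
```

The point is where the checking stops: we verify that we asked for the right files and that their names showed up, and we trust the FTP object for the bytes.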

Sometimes, of course, this backfires, and FTP has not moved the files correctly. I can even think of ways that could happen. But I’m confident enough of two things that I don’t generally check. First, I’m sure that if the system object fails, I’ll find out soon enough: the page won’t be there, or it will be malformed. Second, once I’ve checked a little (the file is there), I consider the odds good enough that I don’t need to check further.

Obviously, Rule 38 thinking can fail. And when it does, one raises the threat level on the software and system in question, and tests a little more thoroughly.

Bill’s reflexes on things like this are different from mine. He thinks about things, gets concerned about things, brings up things that seem to me to be obviously off the table.

Now there’s no particular reason to think that two individuals would always match on what’s on the table. But it feels different to me from pairing with others. We talked a bit about why that might be.

First, Bill’s experience is from a different culture, less software, more biology, more social, more complexity. Second, and related, one main thrust of his work is in genetic programming. When programs are created by genetic evolution, they often get the right answers (I’m assured) but reading them, it’s very difficult, sometimes almost impossible, to figure out how they’re doing it.

These programs often do truly horrible things. Suppose the program is trying to work out how many weeks there are in a provided number of days. Well, easy enough, divide by seven, isn’t it? So when you look at the best-evolved program in the batch, you see that it has messed around at length in some generally obscure fashion and then suddenly there is a 21 on top of the stack and it divides that into another obscure number that it got by adding the input together three times, except that it took some square and cube roots along the way, while also appending the input number to itself a few times and then stripping out some characters in the middle of the string. Meanwhile, there are a hundred other calculations going on that have nothing whatsoever to do with days or weeks, but somehow all come together to ensure that that 21 is on the top of the stack at the right moment.

From what Bill has told me, often, it’s not even that clear what’s going on.

These programs aren’t modular at all, in any sense a human could understand. They don’t try to “express all their ideas”: they don’t seem even to have ideas. They don’t remove duplication: often they seem to work by adding more duplication. They don’t minimize entities: they seem to proliferate entities until somehow some of the entities all get together and divide three times the input by twenty-one.

It seems plausible to me that if you lived in a world like that, you might be more inclined to think that any issue could come up at any time, whereas I can set it aside, sure that it only comes up here or there but not both. You might consider far more possible orders of doing things than I would, because I pick an order that I consider logical, and am not likely to change it unless I get a good reason to do so.

I think in terms of objects, functions, modules, transformations. Genetic programs don’t seem to think in those terms at all. They’re like ants exploring the forest. They don’t have a plan. They just mostly go where other ants have gone, which tends to lead them to good places to be.

I’m considering the possibility that Tozier is actually just a large pile of ants in shorts and a t-shirt.

  1. I lied about the “two”.