Robot 15
Interesting thoughts interwoven with vague mumblings.

Let’s start by talking about design. Software design. I definitely don’t know anything about any other kinds.
I have a principle[1], one I’ll call a design principle, that I like to follow: I try to design my objects so that the information they need is in a form that is convenient to them. In particular, for the things they do frequently, they should find what they need “at their fingertips”, and in just the form and quantity they would prefer, if they were sentient[2].
Following this principle, I tend to get code that is short, quick, and easy to understand. When the code is difficult to write, or hard to read, it’s often a sign that the code isn’t dealing with the right information in the right form.
When the code is hard to test, it may be a sign that the program’s information isn’t in the right form, but often, it means that the information is in a good form for computers, but not for expressing tests. That, in turn, may be a sign that we need some tool work, some improvement to the Making side of things.
Let’s put this in the context of the Robot World program. It seems clear to me that there are two principal[3] objects, the Robot and the World[4]. So, I want to focus on giving them the information they need, in the form they need it.
Now, there’s a special thing happening. One team (me) is programming both the Robot and the World. So it’s only natural, in that case, to develop objects that serve both sides. In a more real-world situation, there might be two separate teams, or even one World team and many Robot teams, and in that situation we wouldn’t be surprised to see each component designed differently. There are many good ways to do the same thing.
“An example would be handy right about now.”[5]
When scanning for information, it is the nature of reality that the information we get is relative to ourselves. Things are not usually geotagged with world coordinates for our convenience. In our robot world, the result of this is that something two steps in front of us has relative coordinates (0,2). Its real location might be (327,53). We don’t know, although if we happen to know that we are located at (327,51), then we can work it out if we need to.
Early on, I came up with the idea for Knowledge, a collection of Facts, which have content, x, and y. In the case of the World, x and y are the real coordinates of the content in the world. In the case of the Robot, it’s not so clear, because, at least for now, the Knowledge is relative to the robot’s starting position, and it doesn’t know its real world coordinates at all.
It turns out that the information is available; I didn’t know that, and so far I’m not providing or using it.
So while it is important that a world or a robot has the big picture, which brings all the known world contents together relative to each other, scanning works best relative to our current location. One way to handle that would have been to add and subtract our current location from the various x’s and y’s. It seemed to me that there was a better way, the object called Lens, which fits in between the user and its Knowledge, and translates between real and relative coordinates transparently.
If I recall correctly, I discovered the need for this object. I didn’t design it in from the beginning. As I started to work with scanning, I started needing to write code like
x + offset.x
y + offset.y
I quickly became bored with this, and worse, once I started thinking that way, I quickly became confused as to when I needed to apply the offsets and whether I needed to add them or subtract them. The code was telling me that the information it had, and the information it needed, were not the same. It was telling me to do something.
In the end, that need caused me to come up with the Lens idea.
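To make the idea concrete, here’s roughly what a Lens amounts to. Treat this as a minimal sketch, not the actual class: the method names addFactAt and factAt, and the way I’m storing the offset, are assumptions for illustration.

-- minimal sketch of the Lens idea, for illustration only:
-- it wraps a Knowledge plus an offset, translating between
-- relative and real coordinates on the way through
Lens = class()

function Lens:init(knowledge, offsetX, offsetY)
    self._knowledge = knowledge
    self._x = offsetX
    self._y = offsetY
end

function Lens:addFactAt(content, relX, relY)
    -- caller speaks relative coordinates; Knowledge stores real ones
    self._knowledge:addFactAt(content, relX + self._x, relY + self._y)
end

function Lens:factAt(relX, relY)
    -- the same translation, on the way back out
    return self._knowledge:factAt(relX + self._x, relY + self._y)
end

With something like that in between, callers just add and fetch facts in their own terms, and never see the offsets at all.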
Now the neat thing about objects is that they can appear to be just some thing from one viewpoint, but under the covers they’re doing all kinds of things. And there’s more than one way to do those things, sometimes with code, and sometimes by changing the form of the information.
You may recall the refactoring that went on to give us this code:
function World:scanNSEWinto(packets, lens)
    local direction = {
        east= {mx= 1,my= 0},
        west= {mx=-1,my= 0},
        north={mx= 0,my= 1},
        south={mx= 0,my=-1}
    }
    self:scanInDirection(packets,lens, direction.north)
    self:scanInDirection(packets,lens, direction.south)
    self:scanInDirection(packets,lens, direction.east)
    self:scanInDirection(packets,lens, direction.west)
end

function World:scanInDirection(packets,lens, direction)
    for xy = 1,5 do
        self:scanFactInto(packets,lens, direction.mx*xy, direction.my*xy)
    end
end
Earlier on in the development of scanning, there were separate functions for the various directions, varying just slightly in how they did the scanning.
function World:scanEast(accumulatedKnowledge, lens)
    for x = 1,5 do
        self:scanFactInto(accumulatedKnowledge,lens, x,0)
    end
end

function World:scanWest(accumulatedKnowledge, lens)
    for x = -1,-5,-1 do
        self:scanFactInto(accumulatedKnowledge,lens, x, 0)
    end
end

function World:scanNorth(accumulatedKnowledge, lens)
    for y = 1,5 do
        self:scanFactInto(accumulatedKnowledge,lens, 0,y)
    end
end

function World:scanSouth(accumulatedKnowledge, lens)
    for y = -1,-5,-1 do
        self:scanFactInto(accumulatedKnowledge,lens, 0,y)
    end
end
The full refactoring is written up in Robot 6 and I am proud of how I went step by step, discovering and removing duplication. It went about as smoothly as programming ever can.
I took information that was encoded in code, as the parameters to function calls, and put those values into the table:
local direction = {
    east= {mx= 1,my= 0},
    west= {mx=-1,my= 0},
    north={mx= 0,my= 1},
    south={mx= 0,my=-1}
}
Then I just used the table values in the calls:
self:scanInDirection(packets,lens, direction.north)
self:scanInDirection(packets,lens, direction.south)
self:scanInDirection(packets,lens, direction.east)
self:scanInDirection(packets,lens, direction.west)
Essentially, procedure was converted to data.
As another example, here’s some code from one of our newest objects, LookPacket:
function LookPacket:asFactDetails()
    local convert = { N={0,1}, S={0,-1}, E={1,0}, W={-1,0} }
    local mul = convert[self._direction]
    if not mul then return nil,nil end
    return self._type, self._distance*mul[1], self._distance*mul[2]
end
Earlier on, there was this method:
function Robot:convertToXY(dir,steps)
    if dir == "N" then
        return 0,steps
    elseif dir == "S" then
        return 0,-steps
    elseif dir == "E" then
        return steps,0
    else
        return -steps,0
    end
end
We refactored from the earlier one to the current one in a few steps, if I recall, but you can see that the early one uses procedure to come up with some values, and the later one uses a table, plus some procedure, to create the same values.
By changing the information from a series of if statements into a table, we improved the code.
And your point is … ?
My point is that I like to arrange the information that my objects work with so that the code becomes simpler. Changing the information changes the balance between code and data, and we wind up with code that is easier to understand, quicker, and generally better, by our own standards, than it was before.
We’ll bring this around to a key matter for this game in a moment but first, I noticed something.
function World:scanNSEWinto(packets, lens)
    local direction = {
        east= {mx= 1,my= 0},
        west= {mx=-1,my= 0},
        north={mx= 0,my= 1},
        south={mx= 0,my=-1}
    }
    self:scanInDirection(packets,lens, direction.north)
    self:scanInDirection(packets,lens, direction.south)
    self:scanInDirection(packets,lens, direction.east)
    self:scanInDirection(packets,lens, direction.west)
end
A Possibly Interesting Digression
Why do we have the rather obvious duplication there? No deep reason, it just got that way. Can we do better? Let’s take a couple of steps:
function World:scanNSEWinto(packets, lens)
    local direction = {
        east= {mx= 1,my= 0},
        west= {mx=-1,my= 0},
        north={mx= 0,my= 1},
        south={mx= 0,my=-1}
    }
    for dir,_ in pairs(direction) do
        self:scanInDirection(packets, lens, direction[dir])
    end
end
This is rather obviously correct: pairs promises no particular iteration order, but every direction gets scanned exactly once, and the scans are independent, so the order doesn’t matter. And the tests run.
We can of course improve it a bit more:
function World:scanNSEWinto(packets, lens)
    local direction = {
        east= {mx= 1,my= 0},
        west= {mx=-1,my= 0},
        north={mx= 0,my= 1},
        south={mx= 0,my=-1}
    }
    for _dir,mxmy in pairs(direction) do
        self:scanInDirection(packets, lens, mxmy)
    end
end
Is that better? Is either of those better? Or does the original better express what is going on? The final code here is smaller … and slower, by a tiny margin. Which should we prefer? I prefer the longhand, but I freely grant that another programmer, or even I on a different day, might do differently. Our job is to decide.
Today, I decide to leave it the way it was. Revert.
Now, where was I?
The Socket Issue
Avid readers will recall that the specs for this application, aimed at the learners who will be building it as part of their learning process, require that World and Robot communicate across sockets, so that the game world can reside on one computer somewhere across the Internet, and the various robot players can connect in from wherever they are and play the game. Said readers will also recall that I am, so far, almost completely ignoring that requirement.
There are at least two reasons for that.
First, I program and write what interests me, in the attempt to show how I approach things, so that interested readers may observe and think about their own process. Whether I serve as a good example or a bad one, well, I don’t know. So far, I don’t want to do the socket thing, and I believe that Codea has little support for it, and, well, I just don’t want to. My house, my rules.
Second, however, even if I were to do the socket part, I would still want my Robot and my World to be uncontaminated by any socket thinking, to the maximum extent possible. I want my Robot to think that it has the ability to call methods on World, and I want World to think that it gets calls from Robots that it knows about, and that it returns results to them. I want them to believe that they are objects in the same programming space, to the maximum extent possible. So, even if I were to work on the socket stuff, I would want my Robot and my World to be unaware that that was going on.
A big question in our minds should be whether this design can be made to work if and when we start to break it up across machine boundaries. If it can’t be done readily, then we’ll have wasted a lot of work. If it is too difficult, we might not be able to complete the product. We might get a bad grade in the class.
That would be bad.
Should We Be Worried?
Personally, I’m not worried. I reckon I can write about something else, or write an article about how badly I screwed up this implementation. If all else fails, one can always serve as a bad example not to be emulated.
But someone else on the team[6] might be more concerned. The first few times they bring up the issue, we might just explain how we’ll handle it. If their concern—or their rank—is high enough, we might have to do some work to show that our scheme would work. And, of course, at some point we actually have to do it.
Now in my case, I know where there’s an example of Codea doing very rudimentary messaging between two iPads, using sockets. And I know where there’s a coroutine example of Lua sockets, allowing for a kind of asynchronous, almost multi-threaded behavior if we need it. And I can imagine a scheme where the World puts incoming messages into a queue and unwinds them, sending replies back as needed, although I really think we won’t need it. And I certainly know how to build a pair of Façade objects that let Robot think it has a World to talk to, and World think it has a Robot to talk to.
And yet, for all practical purposes, I’ve never done very much of anything like that. So I can accept that someone might not be comfortable, thinking “Hey, the team doesn’t seem to be doing anything about the network multiprocessing internet socket whatzis that we all know we need”.
What am I going to do about this? Well, right now, almost nothing. I admit that I’m getting tempted to work on the socket part, and I’m very aware that GeePawHill actually started there, and he knows the people and problem better than I do. So I might start working in that direction, and I can totally understand someone getting worried. And this is article 15. It represents about a week’s programming for one person. Surely I get a semester or something to do this. Surely I don’t need to work on this yet?
You spoke of a walking skeleton?
There’s the rub. A walking skeleton needs to really be end to end, and in the case of this game, that probably means it should be running in two separate processes, ideally on two separate computers. I grant that. And I’m working on an iPad, and they don’t do multi-processing, and my other iPads are either on a shelf somewhere or in the other room, so, honest, I just can’t.
No, honestly, I just don’t want to. At least not yet.
How About a Spike?
Damn, you’re good at arguing with me, picking up my own ideas and slapping me in the face with them.
Yes, OK, it might well make sense to at least do a spike, showing that we can get a message across between two Codea programs on two different iPads. Or phones: maybe I could run it on my phone … it’s more powerful than my other iPad anyway …
OK. I’ll give it some thought, do some research, try an experiment, see if I can gin up something credible. If I can’t, I’ll see what lesson we might draw from the situation.
But the point was a design principle, and this brings me back to it.
The Larger Design
When (if) this thing runs across the network, we’ll have some kind of interface objects that look like a Robot to the World, or a World to a Robot. Those interface objects will provide the same interface as the thing they represent. Internally, they’ll take messages from their Robot (or World), translate them into the specified JSON form, and transmit that information over a socket. At the other side, another object will receive the JSON, convert it back to what its associated World (or Robot) expects to see, and present it.
As our design stands, the Robot calls methods on the World and gets returns back. The World receives calls and returns information. The World never calls the Robot.
Our interface objects will emulate that behavior. We could start writing them now, if we wanted to. Maybe we will, but I don’t think it’ll be very interesting.
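Just to make the shape visible, the Robot’s side might amount to something like this. It’s a sketch under assumptions: the transport and its sendAndReceive method stand in for whatever socket machinery we eventually pick, the JSON functions are covered in a moment, and none of these names exist in the program today.

-- sketch only: a proxy that looks like a World to a Robot;
-- transport and sendAndReceive are stand-ins for socket
-- machinery we haven't chosen yet
WorldProxy = class()

function WorldProxy:init(transport)
    self._transport = transport
end

function WorldProxy:scan(robotName)
    local request = json.encode({op="scan", name=robotName})
    local reply = self._transport:sendAndReceive(request)
    -- decode gives back plain tables; we'd rehydrate them into
    -- LookPackets before handing them to the Robot
    return json.decode(reply)
end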
Some good news: a quick look at the Codea documentation tells me that it has json.encode and json.decode. Some experimentation with those should get us to the point where the only serious unsolved issue will be getting a string sent back and forth between processes.
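For example, a round trip along these lines ought to work, if I’m reading the documentation right:

-- encode a packet-shaped table to a string and back again
local packet = { item="R", direction="N", distance=3 }
local wire = json.encode(packet)  -- a string like '{"direction":"N",...}'
local back = json.decode(wire)
assert(back.direction == "N" and back.distance == 3)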
Should I work on the socket things? Reader input will influence me … somewhat.
Closer to Home
I started thinking about this with a cat in my face while still waking up. I think we have a small violation of my principle, and I wanted to look at it. Here it is:
function Robot:scan()
    local packets = self._world:scan(self._name)
    for i,packet in ipairs(packets) do
        self:addLook(packet)
    end
end

function Robot:addLook(lookPacket)
    local item,x,y = lookPacket:asFactDetails()
    self:addFactAt(item,x,y)
end

function Robot:addFactAt(item,x,y)
    if not x or not y then return end
    self.knowledge:addFactAt(item,x,y)
end

function LookPacket:asFactDetails()
    local convert = { N={0,1}, S={0,-1}, E={1,0}, W={-1,0} }
    local mul = convert[self._direction]
    if not mul then return nil,nil end
    return self._type, self._distance*mul[1], self._distance*mul[2]
end
The LookPacket was invented as an object that is little more than a plain dictionary. We gave it the asFactDetails method, mostly because I didn’t feel that it should know how to create a Fact … indeed no one should know that but Knowledge.
But it should be clear from the code above that the Robot wishes it had been given an array of something other than LookPackets. It would be happier with an array of “fact details” in some form. The code to do the conversion from N S E W to x and y does have to exist somewhere, but here we have saddled the Robot with knowing how to unwrap and recast the LookPacket. It’s not bad … but it’s not quite aligned with the idea that the Robot should deal with things on its own terms.
I think that’s just a tiny signal amid lots of noise … but it seems to me to be a bit of a rough edge in the system. We’ll want to keep an eye on this, in view of our design principle of giving our main objects their information in the form most natural to them.
Robot doesn’t care about this!
Wait. I’m glad we talked about this. Why does Robot have any interest in this at all? Why doesn’t he do a scan, get back whatever comes back, and give it to his current Lens or Knowledge, saying “Here, eat this”?
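In code, “Here, eat this” could be as small as this sketch, where absorb is a name I just made up:

-- sketch: Robot passes the scan result straight to its Knowledge;
-- the absorb method is hypothetical and doesn't exist yet
function Robot:scan()
    self.knowledge:absorb(self._world:scan(self._name))
end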
This is why we think about these things, and when we have people to talk with, why we talk about them. We want to ask questions until we see a good enough way … and then keep asking, because often we’ll later find a better way.
Why isn’t Knowledge more helpful?
An even deeper question is whether we want the Robot to ask for the scan and pass the result to the Knowledge, or whether we want it to ask the Knowledge to scan on the Robot’s behalf. The latter has some appeal.
Is there data here, not just procedure?
Or … thinking further … maybe there should be such a thing as a Scan object. Robot creates a Scan object and gives it to the Knowledge, saying “Here, update yourself according to this Scan”, and the Knowledge says to the Scan, OK, give me your array, and the Scan says Hold on a moment there Bunkie, and sends a message to the World saying “Fill in this Scan” and after a while World gets back to us and the Scan resumes and forms the array and returns it to Knowledge, which fills in its, um, knowledge and returns to the Robot.
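Spelled out very roughly, that conversation might look like the sketch below. Every name here is hypothetical; none of this code exists yet.

-- rough sketch of the Scan-object idea; all names hypothetical
Scan = class()

function Scan:init(world, robotName)
    self._world = world
    self._name = robotName
end

function Scan:facts()
    -- ask the World to fill us in, then form the array
    local facts = {}
    for _, packet in ipairs(self._world:scan(self._name)) do
        local item, x, y = packet:asFactDetails()
        if x and y then
            table.insert(facts, {item=item, x=x, y=y})
        end
    end
    return facts
end

-- Robot hands the Scan to its Knowledge: "update yourself from this"
function Robot:scan()
    self.knowledge:update(Scan(self._world, self._name))
end

function Knowledge:update(scan)
    for _, fact in ipairs(scan:facts()) do
        self:addFactAt(fact.item, fact.x, fact.y)
    end
end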
In a real threaded situation, those things might actually happen fully asynchronously.
We’ll need to turn some of this into code. Perhaps next time. Today, you’ve got enough to read, and I’ve written enough for you.
See you next time!
1. Those are my principles, and if you don’t like them … well, I have others. – Marx
2. We’re programming a robot. Surely it’s got a good shot at being sentient, at least compared to a large pile of quotations, even if the quotations are more eloquent.
3. See what I did there?
4. The clue is in the name … ROBOT … WORLD.
5. – Brian Marick
6. “We’re in big trouble, guys!” – Ed Anderi