Choices, Plans, Estimates
We consider things to do, pick what seems most urgent, deal with it. We have a decent discussion of estimation, planning, how to know if you’re slowing down, and what to do if you are.
As we catch our breath from (finally) getting the WorldEntity in place, firming up the separation between World and client, some needed work comes to mind:
- Start supporting more world objects than just Bots. Blocks, for example.
- We want different flavors, scents, pheromones, something like that, for grouping purposes.
- We want the Bots to start grouping objects, without adding much behavior at all.
- I think I noticed what may be an issue.
Naturally, we are all now entirely focused on this “issue”. What was it? Well, I’ll tell you.
As I watched the game run yesterday, in preparation for taking the screen shot, it seemed to me that the Bots would sometimes run into a block or the wall and kind of stop for a bit before turning away. It is natural that they would stop for one clock tick, since the rule is “step, notice you haven’t moved, change direction”, but it seemed to me that they were pausing longer than that. I could be wrong. I want to be wrong.
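To be clear about what a one-tick pause should look like, here is a minimal sketch of the rule as I understand it. The names are illustrative, not the real Bot code:
# Illustrative sketch only, not the actual Bot implementation.
def one_tick(bot, world):
    if bot.location == bot.old_location:  # last tick's step was blocked
        bot.change_direction()            # ...so this tick we turn
    bot.old_location = bot.location
    world.step(bot)                       # and try to step again
So a stuck Bot should lose exactly one tick against a wall or block, not several.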
I think, though, that I’d like to be more certain, and I propose to patch in a few print statements to see what happens. Roughly like this:
If the World determines that a Bot hasn’t moved after a step, it will print something to that effect. And when the Bot notices that it hasn’t moved and needs to change direction, it will print something. And, possibly, we might print something from DirectConnection.
Now you might well ask, don’t we have a test for this? And we might, and let’s see if we can find one.
def test_change_direction_if_stuck(self):
world = World(10, 10)
client_bot = world.add_bot(9, 5)
client_bot.direction_change_chance = 0.0
client_bot.do_something(DirectConnection(world))
assert client_bot.location == Location(10, 5)
assert client_bot.direction == Direction.EAST
client_bot.do_something(DirectConnection(world))
assert client_bot.location == Location(10, 5)
client_bot.do_something(DirectConnection(world))
world_bot = world.map.at_id(client_bot.id)
assert world_bot.direction != Direction.EAST
assert client_bot.direction != Direction.EAST
def test_stop_at_edge(self):
world = World(10, 10)
client_bot = world.add_bot(9, 5)
client_bot.direction_change_chance = 0
client_bot.do_something(DirectConnection(world))
world.update_client_for_test(client_bot)
assert client_bot.location == Location(10, 5)
client_bot.do_something(DirectConnection(world))
world.update_client_for_test(client_bot)
assert client_bot.location == Location(10, 5)
OK, these look good to me. In the first test, we start at x = 9, with no chance of a random direction change. We do something, which will include a step, taking us to x = 10. We do something again, and observe that we are still at x=10. We do something a third time (because we have to allow for the update to be processed at the beginning of the next turn) and find that we are no longer facing east, the default initial direction. We do not check our location. Perhaps we should. I add the check and yes, we are not at 10,5 any more. This is reassuring.
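For the record, the added check is just one more assertion at the end of that first test, something like this, using the same names as the test above:
assert client_bot.location != Location(10, 5)  # after the third do_something we have turned and stepped away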
How about the second test? OK, first of all, it shouldn’t need those calls to update. I modify it, removing those unneeded updates and adding a check for a move after a final do_something:
def test_stop_at_edge(self):
world = World(10, 10)
client_bot = world.add_bot(9, 5)
client_bot.direction_change_chance = 0
client_bot.do_something(DirectConnection(world))
assert client_bot.location == Location(10, 5)
client_bot.do_something(DirectConnection(world))
assert client_bot.location == Location(10, 5)
client_bot.do_something(DirectConnection(world))
assert client_bot.location != Location(10, 5)
This, too, passes. I’m becoming more certain that it works as advertised, but am still tempted to toss in the prints. Let’s do one more thing: we’ll look at the code we punched into game yesterday.
def run_one_bot_cycle(self):
robots = []
for entity in self.world.map:
if entity.name == 'R':
robots.append(entity)
for bot in robots:
client_bot = Bot(bot.x, bot.y, bot.direction)
result_dict = self.world.fetch(bot.id)
client_bot._knowledge.update(result_dict)
connection = DirectConnection(self.world)
client_bot.do_something(connection)
That’s a bit of a crock: since we have not created a permanent set of Bots, we have to create and update a new instance each time through. Let me improve that code a bit with some better names.
def run_one_bot_cycle(self):
world_robots = []
for world_entity in self.world.map:
if world_entity.name == 'R':
world_robots.append(world_entity)
for world_bot in world_robots:
client_bot = Bot(world_bot.x, world_bot.y, world_bot.direction)
result_dict = self.world.fetch(world_bot.id)
client_bot._knowledge.update(result_dict)
connection = DirectConnection(self.world)
client_bot.do_something(connection)
We should improve the game, so that it stores client bots instead of world ones, but it really does need to see the world entities because it is drawing everything, blocks and whatever else may one day appear.
Commit: tidying.
I am really pretty sure now that everything is OK, but I still want to take a couple of minutes and punch in those prints.
class World:
def step(self, bot: Bot):
old_location = bot.location
self.map.attempt_move(bot.id, bot.forward_location()) # changes world version
if bot.location == old_location:
print(f'{bot.id} did not move')
self.set_bot_vision(bot)
self.set_bot_scent(bot)
class Bot:
def update_for_state_machine(self):
self._knowledge.gain_energy()
if self.location == self._old_location:
print(f'{self.id} changed direction')
new_direction = self.change_direction()
return new_direction
else:
return []
Now I’ll just run the game and watch the prints. And I am doubly surprised. Maybe more than doubly.
First of all, it printed ‘# did not move’ innumerable times, with various id numbers. It never printed the matching message that a Bot changed direction. I quickly think about the == and check Location to see if it implements __eq__, and it does:
class Location:
def __eq__(self, other):
return isinstance(other, Location) and self.x == other.x and self.y == other.y
Note: I could have reasoned to the conclusion that == worked, because the tests work. But this check came to mind, and looking at the actual code is a more direct way to be sure. You pays yer money and you takes yer choice.
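For what it’s worth, the check matters because without __eq__ Python compares objects by identity, so two separate Locations with the same coordinates would never be “equal”. A tiny illustration, with a made-up class:
class NaiveLocation:
    def __init__(self, x, y):
        self.x, self.y = x, y

# No __eq__, so equality falls back to identity: these are different objects.
assert NaiveLocation(10, 5) != NaiveLocation(10, 5)
# With the __eq__ shown above, Location(10, 5) == Location(10, 5) is True.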
I add a print for a random direction change, and change the game to create just one bot, to make it easier to watch. I observe that the bot runs into things and stands there: the world keeps reporting that it did not move, and the bot does not notice until it finally changes direction on the random chance.
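I won’t show the real method, but that extra print went in wherever the random chance is decided. Roughly this shape, assuming the Bot checks against direction_change_chance; this is a sketch, not the actual code:
import random

# Sketch only: the shape of the temporary print, not the real Bot method.
def maybe_change_direction(bot):
    if random.random() < bot.direction_change_chance:
        print(f'{bot.id} random direction change')
        bot.change_direction()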
Since I believe the tests, I think this has to be caused by our hack to make the game work. Let’s see if we can do it more nearly correctly.
I’m just going to change game. It has no tests, it’s just a demo, and as such, well, whoever wrote it didn’t do any tests and I’m not going to start now.
class Game:
def __init__(self, world):
self.world = world
self._running = True
self._display_surf = None
self.scale = 20
self.size = (self.width, self.height) = ((world.width + 1) * self.scale, (world.height + 1) * self.scale)
self.timer_event = None
self.clock = 0
self.client_bots = []
def add_bot(self, client_bot):
self.client_bots.append(client_bot)
def run_one_bot_cycle(self):
for client_bot in self.client_bots:
connection = DirectConnection(self.world)
client_bot.do_something(connection)
if __name__ == "__main__":
world = World(40, 40)
build_random_blocks()
game = Game(world)
for _ in range(20):
client_bot = world.add_bot(10, 20)
game.add_bot(client_bot)
game.on_execute()
And the world and client prints match, although in a fashion that surprised me for a moment. Since the fact that we didn’t move is only noticed on the Bot’s next turn, the matching message about direction changing comes 20 lines after the message from the world, because we have to cycle through all the bots until this one gets its next turn. But it’s clearly working now.
I remove the prints, check tests, and commit: Fixed bug in game demo code causing bots to stall occasionally.
Let’s sum up: we have 200 lines in the article, and we’re at a lovely stopping point.
Summary
We’re almost exactly one hour into the morning’s work. We have considered an issue that didn’t look right in the demo. We checked some tests and improved them. Despite the tests running correctly, a quick couple of prints seemed in order and sure enough, they showed a problem. The problem itself was due to a quick and dirty fix to the demo yesterday, which caused the client bots never to see that they had not moved. In essence that was because we created a new client bot every time through the update loop. The fix was easy: cache the client bots when we create them, and use them in the update. The game code is better and the main loop is simpler.
But we had four things on our plate, and the other three remain: causing WorldEntity to support more than just bots, giving blocks a color or scent or flavor or aroma or different hair color or something, and causing Bots to group blocks by that attribute instead of all together. Should we feel good, or bad, right now?
Well, we consciously chose to chase what looked like a bug, and which really was a defect. We improved our tests along the way, and thus gained more confidence in them. And we fixed the problem that we had spotted. What’s not to like?
It’s just that somewhere in our fantasies we thought we’d do something like create an enum
for entity type and have it include at least bot and block, and then use that thing for the Blocks, instead of the Block class, which is pretty useless anyway, and then maybe we’d put in an aroma
enum and get started on turning scent
into a collection or aroma intensities and thus be a bit further along on grouping, which is, after all, the big story that’s up next.
Well, I’m sorry, but I can’t do two things at once, and you sat right here and watched me choose to chase what really was a defect, so if you really wanted me to let that defect slide, what happened to your famous commitment to quality anyway, you should have mentioned it at the time.
Is there a point?
In fact there is a point. No matter how carefully we plan and estimate, our plans will immediately change when we face the code. So we cannot assume, nor should we demand, that we “meet” our estimates. There’s only one good way to ensure that you always meet or exceed your estimates and that’s to estimate Way The Hell High. And nobody wants that.
But gold-plating??
One might legitimately be concerned about gold plating. Here chez Ron, I surely do more code refinement and tweaking than would be reasonable in a real product environment where new capabilities mattered. That’s because these articles are about changing code to make it better, as well as more capable, so we take many examples, some of which one might do in a real situation and some of which one might not.
In a product environment, I have never seen significant “gold plating” going on. There’s too much pressure to get the things done that management wants. What I’ve seen instead was that too little code improvement was taking place, and too many corners cut. I’ve seen weak tests that were allowing problems to slip through. I’ve seen code that was a thicket of brambles and that was slowing down everyone who worked with it. But gold plating? I’ve not seen that.
Then how can we track and predict?
I’m glad you asked. Keep your stories about the same size, and count how many you’re getting done per unit time. Observe whether the curve is improving or declining. If it’s declining, take a serious look at why that’s happening. The most likely possibilities are:
- We’re having to fix too many problems;
- The coding is going more and more slowly.
The first situation probably calls for a bit more testing, and perhaps back-filling tests for existing code. The second calls for refactoring, which itself probably calls for more and better tests.
And neither of those situations can possibly be improved by calling for more features faster. It will not help to whip the ponies harder. In fact, it will slow things down even further.
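If you want something concrete to watch, the arithmetic is as simple as it sounds. A tiny sketch, with made-up numbers:
# Stories completed per iteration, oldest first (made-up numbers).
completed = [7, 8, 6, 6, 5, 4]

def throughput_trend(counts, window=3):
    """Average of the last `window` iterations minus the average of the window before."""
    recent = sum(counts[-window:]) / window
    earlier = sum(counts[-2 * window:-window]) / window
    return recent - earlier

print(throughput_trend(completed))  # negative here: we're slowing down, time to ask why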
“Slow is smooth and smooth is fast.”1
Bottom Line
We set out to see if there was a problem. There was. We improved tests, providing confidence that the core code was working as intended. We checked our demo code, found it wanting, and improved it as well. A good morning. Time for an iced chai latte, or the beverage of your choice. See you next time!
1. This mantra is thought to have originated with the US Navy SEALs. While I am strongly opposed to war and do not idolize violence, this notion applies far beyond the mission of these brave members of our military. Slow is smooth and smooth is fast in our line of work as well.