BOTS State
I opened my pairing zoom at 0640, hoping Bryan was planning to be up. Perhaps he’ll turn up. Meanwhile I’ll try to spike something useful.
Our current do_something looks like this:
class Bot:
    def do_something(self):
        self.gather()
        self.move()

    def gather(self):
        if self.tired > 0:
            self.tired -= 1
            return
        if not self.inventory:
            self.take()
            if self.inventory:
                self.tired = 10
        else:
            # Check to see if there's a block here
            # Drop the block if there's one
            self.world.drop(self, self.inventory[0])
            self.inventory = []
            self.tired = 10

    def take(self):
        self.world.take(self)

    def move(self):
        old_location = self.location
        if random.random() < self.direction_change_chance:
            self.change_direction()
        self.step()
        if self.location == old_location:
            self.change_direction()

    def step(self):
        self.world.step(self, self.direction)
Computer wants a reboot. Hold on … OK, better.
As I mentioned yesterday, both take and drop are quite rudimentary:
class World:
    def take(self, bot: Bot):
        entity = self.find_entity(bot.location)
        if entity:
            self.map.remove(entity.id)
            bot.receive(entity)

    def drop(self, bot, entity):
        entity.location = bot.location
        self.add(entity)

    def find_entity(self, bot_location):
        directions = Direction.ALL
        for direction in directions:
            search_location = bot_location + direction
            entity = self.map.entity_at(search_location.x, search_location.y)
            if entity:
                return entity
        return None
The take just looks at the four cells adjacent to the bot, and as soon as it finds an entity, it removes it from the world and gives it to the bot. I think we should probably require the bot to choose the direction in which to take. The drop currently drops right where the bot is, as if it were dropped out the bottom of the robot, so that when the robot moves away, the dropped item will be right there. Maybe that’s OK?
Maybe take should work the same way, taking only by moving on top of a thing, like a chess piece? My current opinion is that we should drop in a direction as well as take, but dropping right where we are is a lot more convenient.
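If we do decide on direction-first operations, the World side might change roughly like this. This is only a sketch, assuming the same Location arithmetic that find_entity uses above; nothing here is settled:
    def take(self, bot, direction):
        # Look only in the cell the bot pointed at, not all four neighbors.
        target = bot.location + direction
        entity = self.map.entity_at(target.x, target.y)
        if entity:
            self.map.remove(entity.id)
            bot.receive(entity)

    def drop(self, bot, entity, direction):
        # Place the entity in the adjacent cell the bot chose,
        # rather than on the bot's own square.
        entity.location = bot.location + direction
        self.add(entity)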
A concern is this. We want our early behavior, which we’re working up to, to be that when the bot finds itself adjacent to a block, it drops its block beside it. Suppose we define “adjacent” to mean in one of the four squares north, east, south, or west of the bot’s current location. So what we really want is to drop the thing on the square we are currently on. If we require drop in a direction, dropping a block next to a block would require backing away and then dropping, at least a two-part operation.
Or … or … we could have our bot recognize sooner that it is near a block, perhaps like this:
B__
_R_
___
If we spotted that situation, we could drop our block immediately.
We have a kind of dilemma here:
- We are trying to make a bot and game programming situation where the bots offer some programming challenges;
- We are trying to build up our version of the game as simply as possible.
I think we should resolve these forces thus:
- The final take and drop will require a direction, N E S W, in which to take or drop.
- We’ll sneak up on that implementation with steps that are as simple as we can manage.
Somewhat complex story, solved incrementally and as simply as we can manage. Kind of what we do all the time.
OK … I think I’ll spike, on the bot side, figuring out whether there is a block nearby.
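Here’s roughly the shape I have in mind for the spike. This is only a sketch, and it assumes the Bot can ask for the entity at a coordinate through something like a World.entity_at(x, y) pass-through to the map, which does not exist yet:
    def near_block(self):
        # Sketch: scan the eight cells around us for any entity.
        # Assumes a hypothetical self.world.entity_at(x, y) query.
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx == 0 and dy == 0:
                    continue  # skip our own square
                x = self.location.x + dx
                y = self.location.y + dy
                if self.world.entity_at(x, y):
                    return True
        return False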
Saved by the bell
And … again, Bryan arrived just in time, followed soon by GeePaw. With GeePaw kibitzing and Bryan driving, we tried three or four times to improve the do_something above. On the final try, we had this:
    def do_something(self):
        if self.state == "walking":
            if self.tired <= 0:
                self.state = "looking"
        elif self.state == "looking":
            if self.beside_block():
                self.take()
                if self.inventory:
                    self.state = "laden"
                    self.tired = 5
        elif self.state == "laden":
            if self.tired <= 0:
                self.world.drop(self, self.inventory[0])
                self.inventory = []
                self.tired = 5
                self.state = "walking"
        self.move()

    def beside_block(self):
        return True

    def take(self):
        self.world.take(self)

    def move(self):
        old_location = self.location
        if random.random() < self.direction_change_chance:
            self.change_direction()
        self.step()
        if self.location == old_location:
            self.change_direction()

    def step(self):
        self.world.step(self, self.direction)
        self.tired -= 1
We were pretty well fried by the time we had that working and the tests all rearranged and passing, including a neat little intermittent failure that first confused us because we missed it failing, and then confused us because it was intermittent. So despite some pretty obvious opportunities for refactoring, we called it a morning as shown above.
The basic scheme is that gathering blocks has three states: walking, looking, and laden. We walk until we are not tired. Then we are looking (for a block), which is meant to end when we find ourselves beside a block and take it. At that point we are “laden”, thanks, Hill, and we drop the block as soon as we are no longer tired. (Tired seems to be the wrong word here, doesn’t it?)
The effect of that in the game is the same as shown yesterday, the robot wanders around picking up blocks and dropping them after a while.
Enhancements will include actually knowing whether we are near another block, both for attempted pickup and drop-off. And, of course, we’ll make the code look more and more like a state machine as we go forward. I’m wondering whether it may turn out to be some kind of nested state machine, if that’s a thing. It has been so long since I studied them that I no longer remember all the ins and outs.
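For what it’s worth, one shape that “more like a state machine” could take is a little dispatch from state name to a per-state method, something like the sketch below. The per-state method names are invented for illustration; this is a possible direction, not a decision:
    def do_something(self):
        # Sketch: dispatch to a method per state, then move as we do now.
        handlers = {
            "walking": self.when_walking,
            "looking": self.when_looking,
            "laden": self.when_laden,
        }
        handlers[self.state]()
        self.move()

    def when_walking(self):
        if self.tired <= 0:
            self.state = "looking"

    def when_looking(self):
        if self.beside_block():
            self.take()
            if self.inventory:
                self.state = "laden"
                self.tired = 5

    def when_laden(self):
        if self.tired <= 0:
            self.world.drop(self, self.inventory[0])
            self.inventory = []
            self.tired = 5
            self.state = "walking"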
It was a somewhat frustrating morning but we managed to stay calm and came through it all right. I suspect that some days are just like that. Whether we were a bit off our game, or just didn’t see the right spot to start peeling up the label, I’m not sure. It wasn’t bad … but it wasn’t a high-flying accomplishment either. I think we managed to get a little joy out of the morning, especially since the end result is a decent step on the way to a better design.
And that’s what progress is, decent steps forward. See you next time!