Robot 40
My Customer (me) wants the radar screen rotated to forward = up. The developers (me) are happy to oblige. P.S. I came back later for the thrilling conclusion.
At some point a while back, we changed the screen to stay with up = north. Test users (me) have found operating the tank difficult that way, since even when you are facing west you have to press the “forward” key to go the way the robot arrow icon is facing. If the World’s spec were different, we could just make WASD work as they do in our famous Dung program, but here we are required to do a turn and then send forward messages.
So our product design folks and our decision-making customer—all me—have decided that the thing to do is to go back to up meaning forward.
There is an alternative. We could program the controls so that, say, the A key, if you’re facing north, turns you to the west, and then hitting A again moves you. And so on. This idea has not gained currency with the team, so we’re going to rotate the screen.
We have tried several approaches to this notion, and had it working in some previous version. The Robot part of the game has the notion of the Lens, which is already in use to adjust the results of incoming information. The issue is that when you do a “look”, the information that comes back is in terms relative to the robot’s position. So if you’re at 10,10, and there is an obstacle at 9,10, right beside you, when you do a look, you get an indication that there is an obstacle to your west at distance 1.
Knowledge and Lenses
At the bottom, the Robot has an instance of Knowledge, which is nothing more than a collection of x,y,thing. It’s intended that x and y be absolute coordinates … but to be honest, all that stuff is hidden inside Knowledge and Lens, so I’m not really sure whether knowledge is relative to 0,0 or to the robot’s starting x and y. We’ll suppose that it’s absolute but it doesn’t matter.
Knowledge
At the bottom, Knowledge has a pretty weak implementation that relies on there being no more than one thing at any given x,y location. We may someday have to improve it. Our reasoning has been that the details are isolated, so they can be improved as needed without hassling any users.
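To make the “no more than one thing per location” limitation concrete, here is a plain-Lua sketch of the idea. The real class is built with Codea’s class() machinery; the names and the keying scheme here are my assumptions, not the actual implementation.

```lua
-- Sketch: Knowledge as a collection of (x, y, thing), keyed by "x,y".
-- Because the key is the location, a second fact at the same spot
-- silently replaces the first -- exactly the weakness described above.
Knowledge = {}
Knowledge.__index = Knowledge

function Knowledge.new()
    return setmetatable({ facts = {} }, Knowledge)
end

function Knowledge:addFactAt(content, x, y)
    self.facts[x..","..y] = content
end

function Knowledge:factAt(x, y)
    return self.facts[x..","..y]
end
```

Because the details are hidden behind addFactAt and factAt, this representation could be upgraded later without disturbing callers.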
Lens
The Lens has the same protocol as Knowledge. It contains x and y offsets, and adjusts incoming coordinates by its offsets. So if you are at 10,10, you have a lens that says 10,10 and you look for the fact at 1,0 relative to you, the lens gives you the fact at 11,10, which is just the fact you want.
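The offset arithmetic is nothing more than addition. A standalone sketch of just that step (the function name here is assumed for illustration):

```lua
-- The lens turns robot-relative coordinates into absolute ones
-- by adding its stored offsets.
function lensAdjust(lensX, lensY, x, y)
    return lensX + x, lensY + y
end

-- Robot at 10,10 asks for the fact at 1,0 relative to itself:
print(lensAdjust(10, 10, 1, 0))  --> 11   10
```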
Rotating Lens
We implemented, once upon a time, a class RotatingLens that adjusts the incoming coordinates to effect one of the four possible rotations we can encounter. That object has been TDD’d, but is not in use.
Whenever we get a response back from the World, our x and y coordinates may have changed, and in our standard response we do this:
function Robot:standardResponse(response)
    self:setResponse(response)
    self.knowledge = self.knowledge:newLensAt(self:x(), self:y())
end
We set the lens according to our current coordinates. Knowledge returns a lens this way:
function Knowledge:newLensAt(x,y)
    return Lens(self,x,y)
end
The upshot is that our knowledge may be an instance of Knowledge, but is more likely an instance of Lens. And Lens works like this:
function Lens:newLensAt(x,y)
    return Lens(self._knowledge, x,y)
end
So we repeatedly replace the Lens with a better one. We don’t mind creating and destroying these tiny objects. We trust our garbage collector, and anyway it doesn’t happen often.
We could, if it became an issue, morph the existing Lens. We think immutable objects are better. And, as I say, we trust our garbage collector.
Let’s see how our RotatingLens works. It was intended to be an experiment, although we have in fact used it briefly.
Matrix (!)
Oh yes. Before I even show you this, let me get you to brace yourself. In the current version of RotatingLens we use a class Matrix that handles the rotation. Let’s explore that first.
What we have is a limited implementation of a 3x3 matrix, which is the approved scheme for rotating and translating 2D vectors. We have implemented, longhand, multiplication of matrix×matrix and matrix×vector:
function Matrix:__mul(mOrV)
    local m1 = self.m
    if self:isMatrix(mOrV) then
        local m2 = mOrV.m
        local res = {}
        for i = 1,#m1 do
            res[i] = {}
            for j = 1,#m2[1] do
                res[i][j] = 0
                for k = 1,#m2 do
                    res[i][j] = res[i][j] + m1[i][k]*m2[k][j]
                end
            end
        end
        return Matrix(res)
    else -- must be vector
        --self:require3x3()
        local vmatrix = Matrix{ {mOrV.x}, {mOrV.y}, {1} }
        local w = self*vmatrix
        local ww = w.m
        return vec2(ww[1][1],ww[2][1])
    end
end
Don’t read that, just feel that you’ve been slapped in the face. We might just take it as read that we have matrix multiplication on matrices and vectors such that we can use them in RotatingLens.
RotatingLens
The RotatingLens has a matrix, which starts out as the identity matrix I, such that I*v = v for all vectors v. In other words, no rotation.
Then, when you tell the lens you want the fact at x,y, it does this:
function RotatingLens:factAt(x,y)
    local xa,ya = self:adjust(x,y)
    --print("looking in ", xa,ya)
    return self.knowledge:factAt(xa,ya)
end

function RotatingLens:adjust(x,y)
    local v = self.matrix*vec2(x,y)
    return v.x,v.y
end
And, if you do a turn, you can tell the RotatingLens about it:
function RotatingLens:turn(direction)
    -- when robot turns right, world rotates left and vice versa
    if direction=="right" then
        self.matrix = Matrix:rotateLeft()*self.matrix
    elseif direction=="left" then
        self.matrix = Matrix:rotateRight()*self.matrix
    end
end
That code just takes the matrix needed, one that rotates you 90° left or right, multiplies it by the current matrix, and uses that as the new matrix.
Let me lean back and assess this a bit.
Assessment
Yeah, well, this is too much, but it’s interesting. The biggest problem is that the Matrix stuff is focused on turning right or left, but the information we really get from a response is our direction, N, E, W, or S. We could resolve that by providing four pre-calculated matrices, N, E, W, and S, and simply using whichever is appropriate.
Second, I don’t think we really want two kinds of lenses, one that translates and one that rotates. Now, our Matrices could handle translation, I think. I believe what you do is put dx and dy into the top two rows in the last column. Something like that.
I’m torn between knowing that matrices are the official way to handle translations and rotations, and the fact that our rotations are only 90 degrees, and we don’t really know the action, we know the result.
Let’s think about what happens to coordinates after rotation.
Suppose we’re looking north, and we see something at 3,2, three to the east and two north. If we then turn to face south, the same thing will appear to be 3 to the west and 2 south, relative to us, or -3,-2 relative to us. Similarly if we face east, it’ll be at -2,3, and west it’ll be at 2,-3.
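Those numbers can be checked with a tiny standalone function. This is a sketch to verify the paragraph above, not part of the game; the function name is made up for illustration.

```lua
-- Where does a thing seen at (x,y) while facing north appear
-- once we turn to face dir?
function asSeenFacing(dir, x, y)
    if dir == "N" then return x, y
    elseif dir == "S" then return -x, -y
    elseif dir == "E" then return -y, x
    elseif dir == "W" then return y, -x
    end
end

print(asSeenFacing("S", 3, 2))  --> -3   -2
print(asSeenFacing("E", 3, 2))  --> -2   3
print(asSeenFacing("W", 3, 2))  --> 2    -3
```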
It seems to me that we’ve already “solved” this problem here, in the World’s look code.
No. We really haven’t, because World only looks at right angles to the Robot, so it only deals with the situation where one of x or y, relative to the robot, is zero.
A plan is coming into view …
A “Plan” …
My tentative plan is to use the Matrix capability, trimmed down. The matrix, given x,y relative to the origin, can return x’,y’, the coordinates if you’re facing a different direction. We’ll only need four matrices. There’s a bit more there than we need …
No. It is too much. Let me try something here, printing some matrices.
_:test("Directional Matrices", function()
    local m = Matrix:unit()
    print("N")
    print(m)
    m = m*Matrix:rotateRight()
    print("E")
    print(m)
    m = m*Matrix:rotateRight()
    print("S")
    print(m)
    m = m*Matrix:rotateRight()
    print("W")
    print(m)
end)
That prints:
N
1 0 0
0 1 0
0 0 1
E
0 -1 0
1 0 0
0 0 1
S
-1 0 0
0 -1 0
0 0 1
W
0 1 0
-1 0 0
0 0 1
Only the upper-left two rows and columns vary. And what they tell us is this: if those two rows and columns are
a b
c d
And we have the raw coordinates x,y, then rotated by that matrix, those coordinates become
a*x+b*y, c*x+d*y
Trust me, I’m a mathematician and this is possibly even correct.
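We can check the formula against the E matrix printed above. Its upper-left 2x2 part is a=0, b=-1, c=1, d=0, and rotating (3,2) should give the (-2,3) we worked out earlier for facing east. A throwaway sketch:

```lua
-- a,b,c,d taken from the printed E matrix; (3,2) is our test point.
local a, b, c, d = 0, -1, 1, 0
local x, y = 3, 2
rotX, rotY = a*x + b*y, c*x + d*y
print(rotX, rotY)  --> -2   3
```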
OK, now I have a tentative plan.
Nearly a Plan
We’ll extend the existing Lens to include the robot’s direction, which we have at the time we create the Lens. We’ll use the direction to look up a,b,c,d in a little table, and we’ll do that calculation shown above on x and y to get our new x and y. And, of course, we’ll also have to adjust by the offsets that we already have.
Let’s do some new Lens tests.
function test_Lens()
    local knowledge
    _:describe("Lens", function()
        CodeaUnit_Detailed = false

        _:before(function()
            knowledge = Knowledge()
            knowledge:addFactAt("3,4", 3,4)
            knowledge:addFactAt("-2,3", -2,3)
        end)

        _:after(function()
        end)

        _:test("Creation includes direction, default N", function()
            local lens = knowledge:newLensAt(0,0)
            _:expect(lens:factAt(3,4)).is("3,4")
            _:expect(lens:factAt(-2,3)).is("-2,3")
            _:expect(lens:direction()).is("N")
        end)
    end)
end
This should fail looking for direction:
1: Creation includes direction, default N -- TestKnowledge:93: attempt to call a nil value (method 'direction')
And …
function Lens:init(knowledge, x, y, direction)
    self._knowledge = knowledge
    self._x = x
    self._y = y
    self._direction = direction or "N"
end

function Lens:direction()
    return self._direction
end
Now to extend the test a bit …
local lens = knowledge:newLensAt(0,0,"E")
_:expect(lens:direction()).is("E")
Should fail with N.
1: Creation includes direction, default N --
Actual: N,
Expected: E
Implement:
function Knowledge:newLensAt(x,y, direction)
    return Lens(self,x,y, direction)
end

function Lens:newLensAt(x,y, direction)
    return Lens(self._knowledge, x,y, direction)
end
Expect green. Green. Commit: Lens stores direction.
Let’s get the matrices.
_:test("Lenses return correct matrices", function()
    local lens
    lens = knowledge:newLensAt(0,0,"N")
    local x1, y1, x2,y2 = lens:matrix()
    _:expect(x1).is(1)
    _:expect(y1).is(0)
    _:expect(x2).is(0)
    _:expect(y2).is(1)
end)
Will fail looking for matrix method.
2: Lenses return correct matrices -- TestKnowledge:101: attempt to call a nil value (method 'matrix')
Implement:
function Lens:matrix(direction)
    local t = {
        N={1,0,0,1}
    }
    return table.unpack(t[direction] or t.N)
end
Repeat for the others, carefully.
Discover that that code passes because of the default. Perhaps we shouldn’t have that. The correct code is:
function Lens:matrix()
    local t = {
        N={1,0,0,1},
        S={-1,0,0,-1},
        E={0,-1,1,0},
        W={0,1,-1,0}
    }
    return table.unpack(t[self._direction] or t.N)
end
We need to refer to our existing direction: it’s not a parameter to the method. This test now runs:
_:test("Lenses return correct matrices", function()
    local lens
    lens = knowledge:newLensAt(0,0,"N")
    local x1, y1, x2,y2 = lens:matrix()
    _:expect(x1).is(1)
    _:expect(y1).is(0)
    _:expect(x2).is(0)
    _:expect(y2).is(1)
    lens = knowledge:newLensAt(0,0,"S")
    local x1, y1, x2,y2 = lens:matrix()
    _:expect(x1,"S").is(-1)
    _:expect(y1,"S").is(0)
    _:expect(x2,"S").is(0)
    _:expect(y2,"S").is(-1)
    lens = knowledge:newLensAt(0,0,"E")
    local x1, y1, x2,y2 = lens:matrix()
    _:expect(x1,"E").is(0)
    _:expect(y1,"E").is(-1)
    _:expect(x2,"E").is(1)
    _:expect(y2,"E").is(0)
    lens = knowledge:newLensAt(0,0,"W")
    local x1, y1, x2,y2 = lens:matrix()
    _:expect(x1,"W").is(0)
    _:expect(y1,"W").is(1)
    _:expect(x2,"W").is(-1)
    _:expect(y2,"W").is(0)
end)
Now let’s transform the coordinates.
_:test("Transform coordinates", function()
    local lens, x,y
    lens = knowledge:newLensAt(0,0,"E")
    x,y = lens:rotate(3,2)
    _:expect(x).is(-2)
    _:expect(y).is(3)
end)
Fails looking for rotate.
3: Transform coordinates -- TestKnowledge:129: attempt to call a nil value (method 'rotate')
Implement:
function Lens:rotate(x,y)
    local x1,y1,x2,y2 = self:matrix()
    return x*x1+y*y1, x*x2+y*y2
end
A few more quick tests:
_:test("Transform coordinates", function()
    local lens, x,y
    lens = knowledge:newLensAt(0,0,"E")
    x,y = lens:rotate(3,2)
    _:expect(x).is(-2)
    _:expect(y).is(3)
    lens = knowledge:newLensAt(0,0,"S")
    x,y = lens:rotate(3,2)
    _:expect(x).is(-3)
    _:expect(y).is(-2)
    lens = knowledge:newLensAt(0,0,"W")
    x,y = lens:rotate(3,2)
    _:expect(x).is(2)
    _:expect(y).is(-3)
    lens = knowledge:newLensAt(0,0,"N")
    x,y = lens:rotate(3,2)
    _:expect(x).is(3)
    _:expect(y).is(2)
end)
Now we’re just a tiny way away from using this baby. Let’s reflect.
Reflection: Where Are We?
We’ve extended Lens to know its direction. We’ve added a matrix that reflects that direction. We’ve added a rotate function that, given x and y, returns the rotated x and y.
It seems that all we have to do now is:
- Pass in direction in the code that sets the lens in Robot;
- Apply the rotation function before offsetting.
Could it be this easy? Let’s first commit: Lens has matrices and can rotate coordinates.
Hacking?
Now I’m on a clean code base. Let’s first put in the direction, which should be harmless, as we aren’t really using it yet.
function Robot:standardResponse(response)
    self:setResponse(response)
    self.knowledge = self.knowledge:newLensAt(self:x(), self:y())
end
This just needs direction added:
function Robot:standardResponse(response)
    self:setResponse(response)
    self.knowledge = self.knowledge:newLensAt(self:x(), self:y(), self:direction())
end
That should be harmless. Test. Green. Commit: record direction in Robot lenses.
Now, probably I should do this in the tests, but I can’t resist putting this code into Lens:
function Lens:addFactAt(content,x,y)
    local xr,yr = self:rotate(x,y)
    return self._knowledge:addFactAt(content, self._x+xr, self._y+yr)
end

function Lens:factAt(x,y)
    local xr,yr = self:rotate(x,y)
    return self._knowledge:factAt(self._x+xr, self._y+yr)
end
I gotta run this and see what it does. It does one good thing: it rotates the view almost exactly as I’d wish.
But it does some bad things. The rotation is backward. The pics above are N, E, S, W in order, and facing E I should have the black pits at the top, not the green obstacles.
And when I store new items, they appear to the left and right of the screen, no matter which way I’m facing.
I think we have too much code dealing with rotation now.
Let’s first look at how we put information away. I think we’re rotating it by hand and we should stop doing that.
function LookPacket:asFactDetails()
    local convert = { N={0,1}, S={0,-1}, E={1,0}, W={-1,0} }
    local mul = convert[self._direction]
    if not mul then return nil,nil end
    return self._type, self._distance*mul[1], self._distance*mul[2]
end
This is supposed to be returning type, x, and y. And it looks correct … if an object is North at d, its coordinates are (0,d), and that’s what we return.
I add a print:
function Lens:addFactAt(content,x,y)
    local xr,yr = x,y
    xr,yr = self:rotate(xr,yr)
    print(x,y,xr,yr, content)
    return self._knowledge:addFactAt(content, self._x+xr, self._y+yr)
end
This tells me that the world is sending me the same coordinates for a look, no matter what my orientation is. Let’s check the spec.
As far as I can tell, the look values are in compass coordinates, that is, they’re accurate. Then we shouldn’t rotate on addFactAt.
function Lens:addFactAt(content,x,y)
    -- do not rotate on input, we get compass coordinates
    return self._knowledge:addFactAt(content, self._x+x, self._y+y)
end
This gives me screen behavior that I think is correct. Here’s a movie.
Can I commit this? Let’s think about it.
Reflection II
Well, on the one hand, it’s clearly working. I confess I’m not entirely clear on what I changed. Working Copy will tell me.
In Main, I forced the Robot icon lookup to RN, robot north, because the icon should no longer rotate: the screen is rotating.
In Lens, I do not rotate x and y on input, just on fetch, because we are provided offsets in compass directions and we fetch relative to our rotation.
And in Lens, I reversed the E and W rotations, because when we turn right (to face east) we rotate the screen left.
I could surely commit this and it would be working. But is it the right thing to do? I think not. I need at least one test to cover the question of when we rotate and when we don’t. The existing tests need to change to deal with the changed E and W values.
I’ll keep my “branch” open, and break for the day, unless I mysteriously get energy enough to code more later on today. I will change the existing tests to be green.
That done, I think I’ll just remove the change to Lens that rotates:
function Lens:factAt(x,y)
    local xr,yr = x,y
    --xr,yr = self:rotate(x,y) -- not yet
    return self._knowledge:factAt(self._x+xr, self._y+yr)
end
And remove the icon patch:
function drawRobot()
    dir = CurrentRobot:direction()
    fact = "R"..dir
    --fact = "RN" -- not yet
    gd = GridDisplay:display(fact)
    pushMatrix()
    scale(0.8, 0.8)
    sprite(gd.asset, 0, 0)
    popMatrix()
end
Now we’re green and gameplay is not changed. I can commit with honor. Commit: new Lens rotation ready to be installed, needs tests especially for in-out.
Let’s sum up.
Summary
I just couldn’t resist trying the new code out, and in doing so I learned some key things, like that I had the rotations backward. I also learned that the World always provides look info in real compass coordinates, so that I have to store facts unrotated and return them rotated.
So the experiment was well worth trying.
And I think it’s pretty natural to try the obvious fixes, which were trivial: don’t rotate on saving facts, reverse the E and W data to make it turn the other way.
The temptation is then, well, it works, commit it … and we leave just a little bit of a gap in the tests. The code is working, but is it right? Should it be more obvious why we don’t rotate on adding but do rotate on fetching?
And of course the code isn’t very robust. You could call it with a direction of “foo” and it wouldn’t give a diagnostic. (It also wouldn’t break: it would act as if you had said “N”.)
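If we wanted that diagnostic, one way would be to assert on the lookup instead of defaulting to N. This is a sketch with a stand-in plain-Lua Lens, not committed code:

```lua
-- Stand-in Lens (the real one is a Codea class); matrix() now fails fast
-- on an unknown direction instead of silently acting like "N".
Lens = {}
Lens.__index = Lens

function Lens:matrix()
    local t = {
        N={1,0,0,1}, S={-1,0,0,-1}, E={0,-1,1,0}, W={0,1,-1,0}
    }
    local row = t[self._direction]
    assert(row, "Lens: unknown direction "..tostring(self._direction))
    return table.unpack(row)
end

local bad = setmetatable({ _direction = "foo" }, Lens)
print(pcall(bad.matrix, bad))  -- false, plus the error message
```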
I feel righteous for not committing the new feature, and even more righteous for adjusting the tests and taking out my other changes, saving them for next time.
I could finish up today, but I’d have to work overtime, and I don’t like to do that.
We’ll do it tomorrow. See you then!
P.S.
I had a few moments:
_:test("Put away unrotated, get back out rotated", function()
    -- it's that way because World returns compass positions, unrotated
    lens = knowledge:newLensAt(0,0,"S")
    lens:addFactAt("2,2", 2, 2)
    _:expect(lens:factAt(-2,-2)).is("2,2")
end)
And in the code:
function Lens:factAt(x,y)
    local xr,yr = x,y
    xr,yr = self:rotate(x,y)
    return self._knowledge:factAt(self._x+xr, self._y+yr)
end
And in the display:
function drawRobot()
    gd = GridDisplay:display("R")
    pushMatrix()
    scale(0.8, 0.8)
    sprite(gd.asset, 0, 0)
    popMatrix()
end
And in GridDisplay:
local assets = {
    R  = asset.builtin.UI.Yellow_Slider_Up,
    -- used if screen doesn't rotate
    RN = asset.builtin.UI.Yellow_Slider_Up,
    RE = asset.builtin.UI.Yellow_Slider_Right,
    RS = asset.builtin.UI.Yellow_Slider_Down,
    RW = asset.builtin.UI.Yellow_Slider_Left,
}
I decided to leave the handling of rotation as a bit of history, and because there are some tests for it.
Everything works as intended. Commit: Screen now rotates, tank always centered and pointing up.
P.Summary
An added test, a few lines of change, and we’ve got the rotating screen working, and with decent objects doing the work.
I could imagine that we could have addFactAt and addFactAtRotated, if we ever need the latter.
For now, good enough!