As small as this program is, it shows us examples of issues that arise in larger applications, and suggests similar changes to those we’d make in the wild.
I did find an egregious defect yesterday, where the game was not playing the large vs small saucer sounds correctly. It appears that I accepted a suggested name that was the wrong one. The correct code is this:
```python
def update(self, delta_time, fleets):
    if self._size == 2:
        player.play("saucer_big", self._location, False)
    else:
        player.play("saucer_small", self._location, False)
    self.fire_if_possible(delta_time, fleets)
    self.check_zigzag(delta_time)
    self._move(delta_time, fleets)
```
And that gives us something to think about.
Someone even more fanatical than I am would suggest that `self._size == 2` is kind of magical and that the saucer should have some better way of doing that. Which does make me wonder: what use do we make of that value? We do have this:
```python
@property
def always_target(self):
    return self._size == 1
```
And there’s this, which actually uses size:
```python
def create_surface_class_members(self):
    raw_dimensions = Vector2(10, 6)
    saucer_scale = 4 * self._size
    Saucer.offset = raw_dimensions * saucer_scale / 2
    saucer_size = raw_dimensions * saucer_scale
    Saucer.saucer_surface = SurfaceMaker.saucer_surface(saucer_size)
```
And this, which is quite obscure:
```python
def score_for_hitting(self, attacker):
    return attacker.scores_for_hitting_saucer()[self._size - 1]
```
There are two scores for hitting the saucer, one for the large and one for the small. We look up which one to use with that somewhat cryptic code.
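To make that convention concrete, here is a minimal, self-contained sketch of the lookup described above. The simplified `Missile` and `Saucer` classes and the score values are illustrative stand-ins, not the game's actual code:

```python
# Hypothetical sketch of the scoring convention: the attacker returns
# a two-element list, with the small-saucer score at index 0 and the
# large-saucer score at index 1. Values here are for illustration only.
class Missile:
    def scores_for_hitting_saucer(self):
        return [1000, 200]  # [small, large]

class Saucer:
    def __init__(self, size=2):  # 2 = large (the default), 1 = small
        self._size = size

    def score_for_hitting(self, attacker):
        # size 1 -> index 0 (small), size 2 -> index 1 (large)
        return attacker.scores_for_hitting_saucer()[self._size - 1]

missile = Missile()
print(Saucer(size=1).score_for_hitting(missile))  # 1000: small saucer
print(Saucer(size=2).score_for_hitting(missile))  # 200: large saucer
```

The `- 1` adjustment is exactly the cryptic part: nothing in the code says why size 1 maps to slot 0.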
We create the saucer with a single constructor, passing the size, which defaults to 2 (large). And there are two tests that check to see that we create the right size saucer.
Now in a program of this size, maintained by a team of one person and one cat, this is arguably good enough. But in a large program, maintained by many people over many months or years, this sort of thing can cause confusion and mistakes. One name for this problem is “Magic Number.” Here, the numbers 1 and 2 are used to mean that the saucer is small or large. They are used, with an adjustment, to index into a score table that is not even owned by this class.
Look at this code again!
```python
class Saucer(Flyer):
    def score_for_hitting(self, attacker):
        return attacker.scores_for_hitting_saucer()[self._size - 1]
```
The attacker here might be a missile, an asteroid, or a ship. And we are going to send that object `scores_for_hitting_saucer`, and then index into whatever it gives us back, which had better have the score for hitting a small saucer at 0 and the score for a large saucer at 1.
Folx, that is coupling, and there are issues with it:
First, suppose we implement some other object that can collide with a Saucer: perhaps a cosmic ray laser, or a space bird (I don’t know, maybe there are birds in space; they’d be small, we might just not have seen them). That object would need to implement `scores_for_hitting_saucer`, and its result would have to contain two elements, one for large and one for small.
How would the bird implementor even know that they had to do that?
In our current design, we don’t even have an interface that requires that method, so the implementor of class Bird would not get any early warning. And since birds are rare in space, we might not trigger an error until the game was burned into ROM and shipped all over the world. And that would be bad.
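For contrast, here is a minimal sketch, not the game’s actual hierarchy, of the early warning an abstract base class would give. A Bird implementor who forgets the method fails the moment anyone tries to create a Bird, not years later when one finally collides with a saucer:

```python
from abc import ABC, abstractmethod

# Hypothetical sketch: a protocol class that *requires* the scoring method.
class ScoringFlyer(ABC):
    @abstractmethod
    def scores_for_hitting_saucer(self):
        ...

class Bird(ScoringFlyer):
    pass  # oops: forgot to implement scores_for_hitting_saucer

try:
    Bird()  # fails at creation time, long before any collision
except TypeError as error:
    print(error)  # names the missing abstract method
```

That is the early warning our current design does not provide.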
What might we do about these concerns?
The linkage between “attackers” and the saucer should be sorted out somehow. Possibly there should be a score table somewhere, indexed by target and attacker, maintained in a central place. There should probably be a defined interface for Flyers, and it should probably require them to implement any scoring methods that are needed. It seems probable that there are other similar methods, and yes, there is `scores_for_hitting_asteroid`, which people are also expected to implement.
We should probably have named constructors, such as `Saucer.large`, covering whatever internal values we use to deal with saucer differences.
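A hedged sketch of what such factory methods might look like. The internal `_size` values and the `is_small` property are assumptions for illustration; the point is that only the factories are public, so callers never see the magic numbers 1 and 2:

```python
class Saucer:
    def __init__(self, size):
        self._size = size  # internal detail, hidden behind the factories

    @classmethod
    def large(cls):
        return cls(size=2)

    @classmethod
    def small(cls):
        return cls(size=1)

    @property
    def is_small(self):
        # callers ask a question instead of comparing magic numbers
        return self._size == 1

print(Saucer.small().is_small)  # True
print(Saucer.large().is_small)  # False
```

If the internal representation ever changes, only the Saucer class needs to know.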
We might consider two Saucer subclasses, large and small, although my guess is that that would not be a good idea.
We have spoken before about whether there should be two main abstract subclasses of Flyer, one for colliding flyers and one for those that cannot collide. One key issue is that all Flyer subclasses interact with other objects, and the fundamental rule of the game is that each object of class Foo will send `interact_with_foo` in response to the base-level interaction message.
The present convention is that the “true collider” methods are abstract and must be implemented, and others are not abstract, so that you can implement them if you care to. The abstract ones are these:
```python
class Flyer:
    @abstractmethod
    def interact_with_asteroid(self, asteroid, fleets):
        pass

    @abstractmethod
    def interact_with_fragment(self, fragment, fleets):
        pass

    @abstractmethod
    def interact_with_missile(self, missile, fleets):
        pass

    @abstractmethod
    def interact_with_saucer(self, saucer, fleets):
        pass

    @abstractmethod
    def interact_with_ship(self, ship, fleets):
        pass
```
Does anyone actually implement the `fragment` one? No. There are twelve implementors in the game, and two in the tests, and they all just `pass`. That one really shouldn’t be abstract. In fact, we don’t even want fragments to be able to interact with anything: they are really just a visual effect. We could imagine that some hidden object might wait for them to subside or something, but no visible object interacts with fragments, nor should it.
And if we changed our mind … any object that wants to do so can implement `interact_with_fragment` and start getting the messages.
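One way to clean that up is sketched below: the base class supplies a no-op default, so subclasses implement `interact_with_fragment` only when they actually care. The classes here are simplified stand-ins, not the game’s real ones:

```python
class Flyer:
    # Default: fragments are a visual effect, so do nothing.
    # Concrete subclasses may override this if they change their minds.
    def interact_with_fragment(self, fragment, fleets):
        pass

class Ship(Flyer):
    pass  # no empty override needed any more

class CuriousObject(Flyer):
    """Hypothetical object that opts in to fragment messages."""
    def __init__(self):
        self.fragments_seen = 0

    def interact_with_fragment(self, fragment, fleets):
        self.fragments_seen += 1

ship = Ship()
ship.interact_with_fragment(None, None)  # harmless no-op via the default

watcher = CuriousObject()
watcher.interact_with_fragment(None, None)
print(watcher.fragments_seen)  # 1
```

Twelve empty `pass` methods disappear, and the opt-in path still works.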
I’ve made a Jira sticky to clean that up. Even in this tiny program there are a dozen implementations cluttering up our understanding.
But back to saucer. In a larger application, would we even make similar “mistakes” to these?
I believe that we would make similar mistakes at scale, for a number of reasons. First, it would be rare to have a team so seasoned that they would not make mistakes like these. I stand as an example of that. Second, most any team would be implementing their application incrementally, and ideas that hold water when the system is small often begin to leak as things grow. Third, most any team is going to be under pressure to deliver, and pressure also creates shortcuts like these, and pressure creates leaks. And finally, over the long course of time in a real application, stuff happens.
OK, then what might we do here that would be similar to a larger situation?
I can think[^1] of at least three things we might do: create more and better interfaces for our objects to inherit; create factory methods for objects like Saucer; devise some new classes providing general approaches to things like scoring.
- Interfaces
- In a dynamic language like Python, interfaces are not as necessary as they seem to be in languages like Java, C#, or Kotlin, but in terms of expressing what protocol an object should implement, they are available and valuable. We’ve used the abstract class Flyer to our advantage after realizing that we were forgetting to implement key methods.
- Factory Methods
- Even if “size” as an integer is a useful value for a Saucer to have internally—which is not obvious—there’s no good reason why non-saucer objects should know about those magic numbers. Providing factory methods would allow the Saucer implementor to set up whatever useful values she wanted, and Saucer users could be given useful, meaningful properties and methods as needed.
- Additional Classes
- There are “ideas” in many programs that are not very well expressed in the code. In our current Asteroids™, scoring is an example.
Scoring is spread all over the code, and it consists of nothing but magic numbers. Some of the scoring values are isolated in our universal constants module, but others are cryptically stored as short arrays containing zero, for things that should not score. Furthermore, it is quite likely that the score values will be tuned as we play the game, and it is even likely that “they” will ask us to provide a way—DIP switches were big back then—to set up the game’s scores locally.
In object-oriented programming, we generally express ideas using classes, and it seems likely that scoring is the sort of idea that should be isolated and better expressed.
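As a speculative sketch, not a finished design, scoring could live in one small class whose table might later be loaded from configuration (the DIP-switch idea). Every key and value below is invented for illustration:

```python
# Hypothetical ScoreKeeper: one central table, keyed by (attacker, target).
# Pairs that should not score simply aren't in the table and yield zero,
# replacing those cryptic short arrays full of zeros.
class ScoreKeeper:
    def __init__(self, table=None):
        # A real game might load this dict from a config file or DIP switches.
        self._table = table or {
            ("missile", "small_saucer"): 1000,
            ("missile", "large_saucer"): 200,
            ("missile", "small_asteroid"): 100,
        }

    def score(self, attacker, target):
        return self._table.get((attacker, target), 0)

scores = ScoreKeeper()
print(scores.score("missile", "small_saucer"))  # 1000
print(scores.score("asteroid", "ship"))         # 0: that pair never scores
```

Tuning the game then means editing one table, not hunting magic numbers through a dozen classes.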
In my many years of programming, on more computers[^2] and in more languages than many of you could name, I have seen the same issues arising again and again. A cruel person would say that that’s because apparently I never learn, but I think the facts are that most of us only sort of learn, and the creation of software is so difficult, goes on so long, under so much pressure, that as much as we learn, we humans[^3] still make the same mistakes and leave the same issues over and over.
For me, that’s part of the value of a small but fairly long-term project like my repeated implementations of Asteroids. We get to see things in the small that we’ll encounter in the large, and our solutions in the small will show us ways of addressing the concerns in the large.
Today’s tools will help. Even in a duck-typing language like Python, PyCharm does an amazing job of refactoring without making mistakes. Testing frameworks allow us to write tests that support ongoing change, and that help us see when we’ve done something oddly, because tests tend to exercise our code as other users will, which is often different from what we thought would be needed. Linters and other warning generators will help us find problems and potential problems. PyCharm even hassles me when my code lines are too long.
Because we can address common code concerns in the small, we gain confidence that we can address them in the large. Not every problem will show up here, and not every improvement here is applicable to larger scale, but many or most of the problems do show up and many or most of our solutions do apply.
Of course, when it comes down to it, I just love programming, thinking about programming, and sharing my thoughts with my readers, both of you. All three. Four. Whatever.[^4]
See you next time!
[^1]: A bold and unsupported claim!

[^2]: Summer of 1961. IBM 704, 7090, 1401, 1620, 360; SDS/Xerox 940, Sigma 7; DEC PDP-1; Burroughs E-101; 6502 et al. APL, Assembler (many), BAL, BASIC, C, C++, C#, COBOL, Commercial Translator, Delphi, Forth, FORTRAN, Java, IPL-V, Kotlin, Lisp, Logo, Lua, MAD, MASM, Pascal, Pastel, Perl, PHP, Prolog, Python, Ruby, Self, SIMSCRIPT, SLIP, Smalltalk, SNOBOL, Squeak, and probably some that I’ve forgotten.

[^3]: I’m not likely to try any of the ChatGPT or similar AI things, although you never know what I might do: heck, I never know what I might do. But so far, it seems that the AI programs may or may not make the same mistakes that we do. Since they work by copying code they’ve read on the Internet, I suspect that they do. The more important issue is that the AI programs do not write the program you ask for: they show you what the program might look like. Often, they make up library calls that don’t even exist. “This is not an answer to your question: it is what an answer might look like.”

[^4]: Maybe five. I don’t know.