All right, already! We break down and draw a visible picture, mostly because I wanted Chet to share in the discovery and I knew I couldn't hold off over the holidays. It looks cool. Does it tell us anything?

Audience Input Changes the Show

Kelly Anderson continues to torture me mercilessly about drawing some pictures and looking at them. Now John Donaldson chimes in, asking whether we’ve demonstrated anything to our customer. He points out that Chet “knows” what has been accomplished, and so needs less care and feeding than a real customer would.

I found myself making a similar point to Chet the other day, that he is being too easy a grader in terms of wanting verification that what we’ve done is actually working. We’ll try to bear down on that as we go forward. We do have some FitNesse fixtures that haven’t been written about yet.

At this writing, however, we have about ten days of sessions (twenty hours) plus whatever we’ve done on our own, and though we have demonstrated our ability to read in a blast and find its center, we haven’t really completed any stories.

We were going to try to decide some stories this morning, the Friday before the holiday. However, with Kelly shouting “draw pictures” in my ear all the time, I know I’m not going to be able to resist doing some work on that over the holidays. So when Chet gets here (Amer’s, Brighton), I’m going to ask him to work on a bit of output. I think it’s important that he pair on this one, for two reasons. One is that I have more graphics experience than he has, and I don’t want to race ahead. The other is that he has more patience with stupid things that don’t work than I do, and I want him around to help with whatever frustrations arise.

We’re at Amer’s because of the free Internet: I figure we’ll need to surf on this topic. And I surfed something up early this morning, and copied in this little sample program from the Java Developers Almanac:

import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.awt.image.RenderedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class CreateImage {

    public static void main(String[] args) {
        CreateImage image = new CreateImage();
        image.doit();
    }

    public void doit() {

        // Create an image to save
        RenderedImage rendImage = myCreateImage();

        // Write generated image to a file
        try {
            // Save as PNG
            File file = new File("newimage.png");
            ImageIO.write(rendImage, "png", file);

            // Save as JPEG
            file = new File("newimage.jpg");
            ImageIO.write(rendImage, "jpg", file);
        } catch (IOException e) {
        }
    }

    // Returns a generated image.
    public RenderedImage myCreateImage() {
        int width = 100;
        int height = 100;

        // Create a buffered image in which to draw
        BufferedImage bufferedImage 
          = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);

        // Create a graphics contents on the buffered image
        Graphics2D g2d = bufferedImage.createGraphics();

        // Draw graphics
        g2d.setColor(Color.white);
        g2d.fillRect(0, 0, width, height);
        g2d.setColor(Color.black);
        g2d.fillOval(0, 0, width, height);

        // Graphics context no longer needed so dispose it
        g2d.dispose();

        return bufferedImage;
    }
}

You can tell I copied this: it has comments in it. Notice how helpful the comments are. I particularly like this bit:

        // Graphics context no longer needed so dispose it
        g2d.dispose();

Helpful, isn’t it? Never would have guessed that, would you? Note that this code is commented for publication! Chet remarked that it’s no wonder the comments we encounter in real life, written by some poor overworked programmer, are so bad.

The program produces a JPG and a PNG version of a black circle on a white background. They look like this:

[image: the black circle, as PNG and as JPEG]

Not much, but the code does show how to draw something. The rest will be details. We may also want to pop up a window with the picture in it. I have code lying around somewhere that does that, and will dig it out if that’s the direction we go. For our purposes right now, we’re probably perfectly OK with writing files and viewing them in our various viewers.
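For the record, here’s roughly what that window-popping code might look like: a minimal Swing sketch, not the code I have lying around. The class name ImageWindow is just for illustration; all it does is wrap the image in a label inside a frame.

```java
import java.awt.GraphicsEnvironment;
import java.awt.image.BufferedImage;
import javax.swing.ImageIcon;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.SwingUtilities;

public class ImageWindow {

    // Pop up a simple window showing the given image at full size.
    public static void show(final BufferedImage image, final String title) {
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                JFrame frame = new JFrame(title);
                frame.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
                frame.getContentPane().add(new JLabel(new ImageIcon(image)));
                frame.pack();
                frame.setVisible(true);
            }
        });
    }

    public static void main(String[] args) {
        BufferedImage image = new BufferedImage(200, 150, BufferedImage.TYPE_INT_RGB);
        // Only try to open a window when a display is actually available.
        if (!GraphicsEnvironment.isHeadless()) {
            show(image, "Shot Pattern");
        }
    }
}
```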

Our Picture

We coded up a picture very quickly, by substituting our own code for the squared circle in the example above. Our version looks like this; the interesting bit is the AffineTransform:

import java.awt.BasicStroke;
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.geom.AffineTransform;
import java.awt.image.BufferedImage;
import java.awt.image.RenderedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class CreatePatternImage {

    final String folder = "Data\\";

    public static void main(String[] args) {
        CreatePatternImage image = new CreatePatternImage();
        image.doit();
    }

    public void doit() {
        RenderedImage rendImage = patternImage();

        try {
            File file = new File(folder + "patternImage.png");
            ImageIO.write(rendImage, "png", file);

            file = new File(folder + "patternImage.jpg");
            ImageIO.write(rendImage, "jpg", file);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public RenderedImage patternImage() {
        int width = 2048;
        int height = 1536;

        BufferedImage bufferedImage 
          = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        Graphics2D g2d = bufferedImage.createGraphics();
        g2d.setColor(Color.white);
        g2d.fillRect(0, 0, width, height);
        AffineTransform tx 
          = new AffineTransform(   1.0,   0.0, 
                                   0.0,  -1.0, 
                                1024.0, 768.0);
        g2d.setTransform(tx);

        ShotPattern pattern = new ShotPattern(folder + "PB270011.bmp");

        g2d.setColor(Color.GREEN);
        g2d.fillOval(pattern.centerOfMass().getX(), 
                     pattern.centerOfMass().getY(), 
                     50, 50);

        g2d.setColor(Color.BLACK);
        g2d.setStroke(new BasicStroke(3));
        g2d.drawLine(-25, 0, 25, 0);
        g2d.drawLine(0, -25, 0, 25);

        g2d.setColor(Color.RED);
        for (Hit hit : pattern.hits) {
            g2d.fillOval(hit.getX(), hit.getY(), 10, 10);
        }

        g2d.dispose();
        return bufferedImage;
    }
}

OK, well. The program draws the center of mass as a green circle of diameter fifty. It draws a crosshair at the center of the page, and for each hit it draws a red circle of diameter ten. Some methods should be extracted to make it clearer, but we were here to draw a picture, not to create beautiful code. Yet.
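One bit worth convincing yourself about is the AffineTransform. A couple of lines will demonstrate what the matrix does to a point; this little check is mine, not part of our program:

```java
import java.awt.geom.AffineTransform;
import java.awt.geom.Point2D;

public class TransformCheck {
    public static void main(String[] args) {
        // The same matrix as in patternImage(): flip the Y axis,
        // then translate so pattern coordinate (0, 0) lands at the
        // center of the 2048 x 1536 page.
        AffineTransform tx = new AffineTransform(1.0, 0.0, 0.0, -1.0, 1024.0, 768.0);

        Point2D center = tx.transform(new Point2D.Double(0, 0), null);
        Point2D above = tx.transform(new Point2D.Double(0, 100), null);

        // (0, 0) maps to (1024, 768); a point "above" center in pattern
        // coordinates maps to (1024, 668), i.e. higher on the page.
        System.out.println(center + " / " + above);
    }
}
```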

The AffineTransform, as of course you see immediately, inverts the Y axis to accommodate the fact that BMP files are upside down, and translates the axes so as to get the whole picture displayed. The result looks like this:

[image: the full shot pattern picture]

… except that the picture you see here is scaled down to 500 pixels wide instead of 2048; click it for the full-size version. You may be able to see some odd color artifacts in the picture. Those might be due to the JPG conversion: the PNG version doesn’t have the artifacts. To give you a sense of the detail, here’s a subset of the JPG, at full scale:

[image: a subset of the JPG, at full scale]

We actually built this code in a few passes. First we put in the hits loop; then the green center of mass; oops, then we moved the center of mass before the hits, so that it wouldn’t cover any dots that were underneath it; then we coded the horizontal part of the crosshair, then the vertical. We looked at the picture each time. Perhaps we fiddled some of the circle diameters a few more times – I don’t recall.

So … what did we learn? Well, we like the picture: it’s good looking and evocative. It looks like a shotgun blast on a target. Is the center of mass correct? We don’t really know, but it looks intuitively good. Is it even in the same place each time we run the program? We have no real reason to believe that it is, except that we think the code doesn’t change it.

The picture gave us ideas for more pictures that we’d like to have, and that’s good, because our vision of the eventual product is very focused on graphical displays of information. We have no more information than we did before regarding whether the picture has found all the pixels, but it might be possible to get that information by displaying all of the pixels as well. Maybe I’ll try that just for fun, over the holidays.
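If I do try it, the heart of the thing might look something like this. The class and method names are invented, and the real version would probably go through ShotPattern rather than poking at the image directly; this is just a sketch of copying every black input pixel onto the output picture, in a contrasting color, so the raw pixels can be compared with the red hit circles.

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class PixelPlot {

    // Copy every black pixel of the input onto a white output image,
    // drawn in blue so it stands out against the red hit circles.
    public static BufferedImage plotBlackPixels(BufferedImage input) {
        BufferedImage output = new BufferedImage(
            input.getWidth(), input.getHeight(), BufferedImage.TYPE_INT_RGB);
        Graphics2D g2d = output.createGraphics();
        g2d.setColor(Color.white);
        g2d.fillRect(0, 0, input.getWidth(), input.getHeight());
        g2d.dispose();
        for (int y = 0; y < input.getHeight(); y++) {
            for (int x = 0; x < input.getWidth(); x++) {
                // Mask off the alpha byte; a black pixel has zero RGB.
                if ((input.getRGB(x, y) & 0xFFFFFF) == 0) {
                    output.setRGB(x, y, Color.blue.getRGB());
                }
            }
        }
        return output;
    }
}
```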

Would the picture we have here have given me more confidence that I had found all the pixels in the original file? I’m not sure. It looks like the BMP picture, and that’s good. But the dots on the BMP are so small that it’s hard to compare. This picture might have led me to draw another that would be “better” for determining whether we found all the clumps, but I’m not at all sure what that picture would be. We do like the picture, though – it’s good for looking at and gaining confidence.

This picture is entirely untested other than by eye. If the transformation is off a bit, or the crosshair isn’t right in the middle, we’ll never know it. I’m not very concerned about that. But I have decades of programming experience that get me more than ready to gloss over the absence of tests and accept what my eyes tell me. That’s not consistent with what Chet and I teach. It’s troubling how easy it is to step away from the near-total confidence that TDD gives.

We did learn one more thing. Chet brought along the original paper target, and since this picture was so easy to see, we compared the paper to the picture. We are now totally certain that the picture does not include some of the holes in the paper. Somewhere in between taking the picture and however Chet converted it down to monochrome, lots of holes were lost: no pixels at all in the BMP file, where the human eye can readily see the pellet strike in the paper. From the small areas we looked at, it is quite credible that the paper contains 1500 holes, while the input BMP file only contains 773 contiguous black areas. Very interesting.

Note that the 773 vs 1500 question hides an important issue. We don’t have an image processing problem (yet): we have an image creation problem. We’ve already been talking about how we would take the pictures of the targets, and think that we’ll need to produce special target paper for the customers to shoot at. We’ll try taking the pictures with the targets on a light box, so that we have light on dark instead of dark on light.

But the key fact is this: there does not exist a way to process the BMP file and find the 1500 holes! The pixels are not there at all. Looks like Chet’s scheme to justify a more expensive camera is a lock now.

Summing Up

Drawing the picture was easy enough, and it was fun trying to remember what I once knew about affine transformations. We got a few interesting pictures while putting the values in the wrong place: it’s much easier to do on paper with a matrix.

With our big red dots we could see the picture much more clearly. It gave us a good intuitive sense of where the center of mass was, which looks good. We were also encouraged to compare the paper sheet to the picture, which we were never tempted to do with the original BMP file, and that led to the discovery that there are no pixels at all where there are definitely holes in the paper. (Many of the holes in the paper are nearly healed, which is why they don’t show up, I imagine.)

We didn’t write a test all day, and didn’t even run the ones we have. This leaves us with an odd feeling. Of course we just have a tiny bit of code, and it “obviously” works, but we don’t have the same kind of certainty that we would with TDD code. When we change the display code, and we will, our only way of knowing that it is broken may be to look at the pictures again. That’s definitely troubling.

We’ll keep an eye on that, and of course you’ll keep an eye on us as well, calling us out when we mess up. That will be soon, I’m sure …