Oddly, I take as today’s writing prompt a strange tweet from a person with a curious name.

Here’s the tweet in question:

@agileklzkittens: "It is not a developer's moral or professional responsibility to mitigate dysfunctionality from above. Despite whatever @RonJeffries or @unclebobmartin tells you."

While Bob Martin and I surely agree on many things, the area where we most firmly disagree is surely that of moral and professional responsibility, so to find our names together under that heading is surprising, at least to me. Nonetheless, I am moved to write a bit, prompted by Mr Kittenz’ tweet.

My Philosophical Outlook

I’m not sure where I first read of the “Veil of Ignorance” notion, but it is due to the philosopher John Rawls. Realizing that we are all biased by our own situations, Rawls suggested that when making a moral judgment, we imagine that we do not know who we are and what our personal situation is.

Associated with this idea are two principles. Liberty says that we should provide people with the most freedom possible, without impinging on others’ freedom. Difference says that we should guarantee each individual an equal chance to prosper, so that any inequalities in our “social contract” should work to the advantage of the less prosperous.

There’s a nice little writeup here, which I cribbed for the above notes.

In my own making of social, moral, or philosophical decisions, I try to place myself, not just behind the veil of ignorance, but in the situation of those who are, in whatever way, beset by events. (Mentally in their position, that is. I do not go out and try to get actually beset by racism, sexism, genderism, poverty … the list is endless. I try to imagine what it is like for those less fortunate and less privileged than I am, and to work both to improve recognition of their situation and to improve the situation itself. I do not claim to be very good at the work, but I’m getting better at understanding and trying to help where I can.)

I also have a non-zero-sum attitude. I believe that our universe and our intelligence make it quite possible for everyone, worldwide, to have the same benefits. I think there are plenty of resources and plenty of money and plenty of intelligence for solving problems — if only we would.

Down to Cases

We should turn now to the question of what “moral or professional responsibility” we have to “mitigate dysfunctionality from above”. Mr Kittenz does not specify what kind of dysfunction he has in mind, so I’ll just treat that as an opportunity to talk about things I’ve seen.

Unfair Distribution of Pay

I’ve seen a manager, in a situation where the company was losing money, induce their employees to take huge pay cuts, while insisting that their own compensation could not be reduced at all. Well, their house, their rules, but I’d rate that as about a Category 6 level of Tacky.

What is the responsibility of the worker in that situation? I don’t see any specifics, but I think they’d be wise to look out for themselves and seriously consider moving to a less perilous and more fair working situation.

Product Does Actual Harm

What if the dysfunction from above was causing our product to be literally dangerous to its users? Maybe there is undue time pressure, or pressure to reduce testing.

If the pressure is literally going to harm people, I think I’d argue that the developers have a responsibility not to allow the harm. There are many ways they might accomplish that, including simply doing the necessary work to keep the harm away. They might also push back, escalate the situation, become whistleblowers, resign, and so on.

I think that responsibility isn’t about “mitigating dysfunction” so much as it is about simply not working so as to harm people.

Does that mean that developers should not work on surveillance tech, or weapons tech, or the like? That’s not a call that is easy to make without details.

I worked for some years at Strategic Air Command, including working on software that assisted in targeting nuclear weapons. There is no doubt in my mind that nuclear weapons are immoral. YMMV, I suppose. But the purpose of the US effort was to assure that the result of use of nuclear weapons would be so horrible that no one would ever use them.

I knew the people in that line of work, and they were not Dr Strangelove, General Buck Turgidson, or Major “King” Kong. They knew, we all knew, the horrible cost of a nuclear war and believed that making it sufficiently horrible was the best way to prevent it. So far, the idea has worked.

I can readily imagine that many of my readers would refuse to work in that situation, and today, I might make the same decision. At the time, it seemed to be the logical thing to do.

I think developers do have a moral and professional responsibility not to support work, not to do work, that is harmful to other humans. I’d probably add that the work should not be harmful to animals or the planet, either.

General Morality Answer

I think I’d say that the general rules include not doing evil or harmful things ourselves, and not helping others do evil or harmful things.

General Professional Answer

I think one has a responsibility to do one’s work in a way that is not harmful to users, to one’s work colleagues, and to the company one works for. That last bit is tricky.

The company we work for, almost certainly, has as one of its key strategies to extract as much work from the employees as it can, at the lowest possible cost. That’s the nature of capitalism. It is also the nature of fascism, authoritarianism, and very nearly all the available forms of management and government.

And, to a degree, it’s appropriate. If we want to accomplish a thing, doing so at minimal cost is at least one of the objectives one might have. There might be other objectives counter to that. If we are making some high-craft item, custom furniture or knives or pots, we may well use slow tedious hand work, rather than stamping out pots and running them through a spray painter that splats color onto them before they are baked by the thousands.

Our pot-making might take hours of our time per pot, while the factory spits them out for 39 cents each. Even so, when we make our hand-crafted pot, we don’t want to waste time: we want to spend time wisely, providing value along the way.

And this brings me to the notion of how we do our software, how much of ourselves we put into it, how much pride we want to take in it. And how much joy.

Pride and joy. I’ve heard that phrase. And in work, they do go together. It’s hard to enjoy work in which we cannot take pride. And I, for one, do not want to do work that I do not enjoy and cannot take pride in.

I want to work in a place and at a pace that lets me find things to be proud of, and things in which to find joy, every day. I would like everyone to find work with those properties.

If I were in a company that, through “dysfunction”, interfered with my ability to be proud and to find joy, I would want to change that, or to move on, to go somewhere else. Do I have a moral or professional responsibility to fix the problem?

No, but I have a personal responsibility to myself, which might best be met by mitigating that dysfunction somehow.

And, because of the veil of ignorance, the liberty principle, and the difference principle, I feel like the other people in the organization have an equal right to feelings of pride and joy, and I feel that those principles suggest that I do have a moral responsibility toward those around me.

So … Yes.

So … and I didn’t expect to wind up quite here … yes, we do have some personal, moral, professional responsibility to mitigate dysfunction wherever we find it. It’s not about “agile”, but it might be about kittens, and not killing them, nor letting them be killed.