There were a lot of comments on the original article about accountability sinks. People seem to feel strongly about human judgement being replaced by formal processes. So let’s dive into the comments!
Ben Pace writes:
That’s a completely understandable position, but when you read about what actually went wrong at LAX, the argument about disincentivization loses some weight (from the original Asterisk article):
Firing Wascher would not do much to solve these issues. Fixing the underlying problems, on the other hand, would. After all, if your approach to safety is hiring only supermen who make no errors, then you have already failed. Meanwhile, threatening Wascher, and in effect all air traffic controllers, would lead to cover-ups and make identifying those underlying issues much harder.
All that being said, it’s obvious that blameless culture does not work in some contexts. Specifically, it does not work when there is a huge incentive to cheat. If blameless culture were used, say, in a governmental procurement agency, pretty soon all the contracts would be awarded to friends and relatives.
However, I used to feel that we could get away with blameless culture in cases where the incentives of the organization and the employee are aligned. For example, neither the doctor nor the hospital wants patients to die. The hospital can therefore assume no ill intent on the part of the doctor. Or consider pilots: neither they nor the airlines want planes to crash…
But other comments made me question my assumptions. Here’s one from philh:
That was intriguing enough to make me read the linked article and oh, boy:
You are left wondering whether reckless human behaviour can get any more terrifying. But those doubts are promptly resolved when you move to the comment by Brinedew:
Brinedew dryly comments:
Clearly, in the real world, there’s a need to balance accountability with the ability to cope with unintentional errors. Multiple commenters have pointed to the concept of “just culture”. Some of them have also mentioned the book by Sidney Dekker.
From the blurb at Amazon:
Feels relevant, but I haven’t read the book, so no guarantees.
The prize for the most insightful comment goes to hliyan:
- About 8 years ago I was gifted a copy of Ray Dalio’s Principles. Being a process aficionado who thought the way to prevent bureaucracy was to ground process in principles, I was very excited. But halfway through I gave up. All the experience, the observations, the case studies that had led Dalio to each insight, had been lost in the distillation process. The reader was only getting a Plato’s Cave version.
I was aware of the value of practical examples in education — a tradition going back to Comenius and his School by Play — and in conveying knowledge more broadly. After all, my original article was essentially a collection of practical examples. But I had never considered the idea of using examples to set rules, whether at the workplace or in legal contexts, instead of relying on unambiguous, cut-and-dried laws and processes.
And it’s not only that examples are easier to relate to. They have the nice property of not spelling out the rule directly, just showing one instance of it. It is left to the practitioner to infer the underlying rule. Instead of saying “In situation X, do Y,” it says “Here are some examples of how a decent human being would behave in situations similar to X.” Heck, there can even be multiple mutually contradictory examples, making it clear that in the end, it’s up to each individual to interpret the spirit of the law rather than blindly follow its letter.