baboon6 Posted January 12, 2014 The book isn't without its issues but definitely worth reading. Here's a more critical review: http://online.wsj.com/news/articles/SB10001424127887324549004579065750053378722
Tony Evans Posted January 12, 2014 I am less than impressed by the review: "One crucial fact must be kept in mind: none of the roughly 70,000 nuclear weapons built by the U.S. since 1945 has ever detonated inadvertently or without proper authorization" but: "thousands of missiles are hidden away, literally out of sight, topped with warheads and ready to go" -- and "every one of them is an accident waiting to happen." Uh?? Good performance so far is not a guarantee for the future. It would have been true, at 8 AM on the morning of September 11, 2001, that thousands of Islamists wished the US ill and wanted to kill as many Americans as they could, but not one mass casualty incident had been caused by them inside the US. It would also have been a true statement that a mass casualty incident caused by Islamist terrorists was an event waiting to happen. That's why so much attention is paid to nuclear surety. The high complexity of the systems, combined with the cost of even a single failure, makes it so.
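A minimal statistical sketch of the "good performance so far is not a guarantee" point (an illustration only, under the admittedly unrealistic assumption that the ~70,000 weapons can be treated as independent trials): a clean record bounds the underlying per-weapon accident probability from above, but only weakly, at roughly 3/N (the "rule of three").

```python
# Illustrative only: treat each of the ~70,000 weapons as one Bernoulli trial
# with unknown accident probability p, and 0 observed accidents.
# The one-sided 95% upper confidence bound on p solves (1 - p)**n = 0.05.

def p_upper_95(n: int) -> float:
    """Exact 95% upper bound on p given zero failures in n trials."""
    return 1.0 - 0.05 ** (1.0 / n)

n = 70_000
print(f"exact 95% bound: {p_upper_95(n):.2e}")  # ~4.3e-05 per trial
print(f"rule of three:   {3 / n:.2e}")          # ~4.3e-05, the quick approximation
```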
Phil Posted January 12, 2014 Scott Sagan did some interesting work on how politics and power can corrupt the lessons to be learned from previous nuclear surety failings, thus making it more likely for similar events to occur in the future.
Tony Evans Posted January 13, 2014 Scott Sagan did some interesting work on how politics and power can corrupt the lessons to be learned from previous nuclear surety failings, thus making it more likely for similar events to occur in the future. Of course I can't go into details, but just about every regulation or procedure we used to follow in special weapons security was the kind of thing that naturally gives rise to the thought, "I wonder who fucked this up the first time?" So while the potential for not learning lessons exists, it always seemed to me that everything we did was the result of lessons learned.
Phil Posted January 13, 2014 (edited) @Tony (this quote thing doesn't seem to be working) I have no doubt. But he did do some interesting research on organisational learning. He didn't conclude it didn't happen - just that it can be corrupted by parochial interests and organisational politics. It's a sociological take on something that a lot of traditional management science takes for granted: that learning occurs, and that it occurs in a rational manner. I would argue that organisational learning is far more complex than that and that a number of barriers to it exist - cultures of blame, basic information difficulties, different interpretations and so forth. Edited January 13, 2014 by Phil
Tony Evans Posted January 16, 2014 (edited) I have no doubt. But he did do some interesting research on organisational learning. He didn't conclude it didn't happen - just that it can be corrupted by parochial interests and organisational politics. It's a sociological take on something that a lot of traditional management science takes for granted: that learning occurs, and that it occurs in a rational manner. I would argue that organisational learning is far more complex than that and that a number of barriers to it exist - cultures of blame, basic information difficulties, different interpretations and so forth. The thing is, just because something can happen, that doesn't mean it will happen. US nuclear surety measures aren't perfect, by any means, but they tend to err in the direction of squeezing too much, rather than too little, out of a teachable moment or event. Once again, I can't give you details, but it's true. Edited January 16, 2014 by Tony Evans
Adam_S Posted January 16, 2014 http://www.bbc.co.uk/news/world-us-canada-25753040 Thirty-four US Air Force officers in charge of launching nuclear missiles have been suspended over accusations they cheated on proficiency tests. The Air Force said a small number of staff had been texting answers to the routine tests to others, while others had known but failed to report it. The ranks involved range from 2nd lieutenants to captains. The cheating allegations emerged during investigations into alleged drug use by personnel at other bases. Air Force Secretary Deborah Lee James told a news conference the cheating involved officers based at the Malmstrom Air Force Base in Montana, and related to a monthly test all nuclear missile staff must take. "Some officers did it," she said of the cheating. "Others apparently knew about it, and it appears that they did nothing, or at least not enough, to stop it or to report it."
Archie Pellagio Posted January 16, 2014 "Pssst...what's question three? What does The Big Red Button do?"
Phil Posted January 16, 2014 I have no doubt. But he did do some interesting research on organisational learning. He didn't conclude it didn't happen - just that it can be corrupted by parochial interests and organisational politics. It's a sociological take on something that a lot of traditional management science takes for granted: that learning occurs, and that it occurs in a rational manner. I would argue that organisational learning is far more complex than that and that a number of barriers to it exist - cultures of blame, basic information difficulties, different interpretations and so forth. The thing is, just because something can happen, that doesn't mean it will happen. US nuclear surety measures aren't perfect, by any means, but they tend to err in the direction of squeezing too much, rather than too little, out of a teachable moment or event. Once again, I can't give you details, but it's true. I'm not saying these measures are shit. I am simply saying that research bears out that learning processes in organisations, even ones given responsibility for some very hazardous operations, are constantly compromised and are vulnerable to organisational politics. There's further research that shows that technical and social redundancy can actually increase risk in a number of circumstances. Snook wrote an interesting book on the 2x Blackhawks shot down in 1994 called Friendly Fire. He dealt with how the social redundancy of the AWACS crew probably contributed to the accident. Perrow makes a good case for technical redundancy often making systems more dangerous as it increases the two properties which he thinks make systems prone to failure - complexity and tight-coupling. He argues complex, tightly coupled systems are very vulnerable to failure. Technical redundancy increases complexity and coupling - therefore such measures can make the situation more dangerous.
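The redundancy point lends itself to a toy calculation (a hypothetical sketch with assumed numbers, not a model taken from Perrow or Snook): if adding a backup channel also adds even a small common-mode failure path - the extra switchover logic, shared supports, new interactions - the "redundant" design can end up more likely to fail than the simple one.

```python
# Toy numbers, chosen only to make the arithmetic visible.
p_channel = 0.01   # assumed probability that a single safety channel fails on demand

# Simple design: one channel, no redundancy.
p_single = p_channel

# "Redundant" design: two channels that must both fail, plus a common-mode
# term introduced by the redundancy itself (switchover logic, shared power,
# added interactions) that defeats both channels at once.
p_common = 0.02    # assumed common-mode failure probability
p_redundant = p_common + (1 - p_common) * p_channel ** 2

print(f"single channel failure prob:     {p_single:.4f}")     # 0.0100
print(f"redundant-with-coupling failure: {p_redundant:.4f}")  # ~0.0201
# With truly independent channels and no added coupling, the pair would fail
# with probability 1e-4; here the coupling term dominates and the "safer"
# design fails about twice as often as the simple one.
```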
Tony Evans Posted January 18, 2014 1. I am simply saying that research bears out that learning processes in organisations, even ones given responsibility for some very hazardous operations, are constantly compromised and are vulnerable to organisational politics. There's further research that shows that technical and social redundancy can actually increase risk in a number of circumstances. Snook wrote an interesting book on the 2x Blackhawks shot down in 1994 called Friendly Fire. He dealt with how the social redundancy of the AWACS crew probably contributed to the accident. 2. Perrow makes a good case for technical redundancy often making systems more dangerous as it increases the two properties which he thinks make systems prone to failure - complexity and tight-coupling. He argues complex, tightly coupled systems are very vulnerable to failure. Technical redundancy increases complexity and coupling - therefore such measures can make the situation more dangerous. 1. Just try organizational politics on the flight deck of a carrier, or within the confines of a rifle squad. Nuclear surety systems are larger and more bureaucratic, of course, but they tend to partake of that dynamic more than the other. There are some things, you know, where people actually GAS about getting things right, because failure is just too costly. 2. I'm a software developer. Believe me, I understand system complexity. I also understand that a lot of times systems are just naturally and irreducibly complex. So you baby them along and make them work. You don't snivel about the risks involved. You live with them.
Phil Posted January 18, 2014 (edited) Nuclear surety systems are larger and more bureaucratic, of course Well that's entirely the point, isn't it? A rifle squad or a flight deck is not the same thing as 20th Air Force or whatever - it's an entirely different level of analysis. There are some things, you know, where people actually GAS about getting things right, because failure is just too costly. That's right, failure is just too costly. Often to careers. I'm a software developer Everyone here, Tony, knows all the jobs you've had so no need to remind me. I also understand that a lot of times systems are just naturally and irreducibly complex. So you baby them along and make them work. You don't snivel about the risks involved. You live with them. That's the main point of Perrow's work - you do have to live with them when they are in being. So the bigger question is the normative one of who decides who has to live with these complex, and therefore risky, systems which, Perrow and others argue, are prone to what he calls "normal accidents". Edited January 18, 2014 by Phil
Tony Evans Posted January 20, 2014 1. Well that's entirely the point, isn't it? A rifle squad or a flight deck is not the same thing as 20th Air Force or whatever - it's an entirely different level of analysis. 2. That's right, failure is just too costly. Often to careers. 3. Everyone here, Tony, knows all the jobs you've had so no need to remind me. 4. That's the main point of Perrow's work - you do have to live with them when they are in being. 5. So the bigger question is the normative one of who decides who has to live with these complex, and therefore risky, systems which, Perrow and others argue, are prone to what he calls "normal accidents". 1. Actually that's only part of the point. It is possible for large organizations to design around and rely on personnel reliability and integrity. 2. Please note that in a healthy nuclear surety system, putting career goals above program objectives is considered a personal failure that has to be fixed, not a system feature that has to be lived with. Note that the actions that have started and fueled this discussion are in fact considered aberrant and culpable personal failures, not normative organizational behavior. 3. Then, by the same token, no need to lecture about complexity theory, hmmm? Anybody in my technical subculture is well aware of it, right? 4. Cute. "...while [complex systems] are in being." Like one could rationally choose not to have them. 5. But here we see where your head is really at. You truly believe that complexity is optional. It ain't. There are as many dimensions to a problem as exist within the problem space. There's no deciding them away.
Gunguy Posted January 20, 2014 But Tony, Obama said he could fix the complicated health care mess and with a wave of his hand make it all better. You mean he might have lied? Oh, the shock!!!! I know this is off topic but it was begging me to comment.......
Phil Posted January 20, 2014 1. Actually that's only part of the point. It is possible for large organizations to design around and rely on personnel reliability and integrity. How then? I've explained why I think organisational learning can be compromised. There are some researchers who agree with you, but what you say doesn't address, for example, the socio-technical interface. And quite what is reliability and integrity? 2. Please note that in a healthy nuclear surety system, putting career goals above program objectives is considered a personal failure that has to be fixed, not a system feature that has to be lived with. Yet it has happened. People have made honest mistakes, know they'll get slammed for it and cover up. For examples, I gave you Sagan who explains his political theory of organisational learning in his book. Of course it has to be lived with (that is different from condoning or tolerating) because it is all based on an inherent property of most organisations: people make mistakes, they make slips, sometimes the systems are so built that it is inevitable. People get blamed, and when people get blamed they often try to avoid that culpability. Blame corrupts learning, blame meets a very fundamental social need, so blame is ubiquitous. 3. Then, by the same token, no need to lecture about complexity theory, hmmm? Anybody in my technical subculture is well aware of it, right? Please do if you like - always interesting to hear different takes on theories. Like one could rationally choose not to have them. You can't choose to rid yourself of them all. But you can certainly make choices about some complex systems. http://www.theguardian.com/sustainable-business/nuclear-power-germany-renewable-energy You truly believe that complexity is optional. It ain't. Of course it is. We might want outputs that mean we need complex systems but those outputs are socially determined. In modern life we cannot avoid complexity if we wish to continue as we are, but we certainly can make choices about complex systems.
Tony Evans Posted January 21, 2014 1. How then? I've explained why I think organisational learning can be compromised. 2. but what you say doesn't address, for example, the socio-technical interface. 3. And quite what is reliability and integrity? 4. Yet it has happened. People have made honest mistakes, know they'll get slammed for it and cover up. For examples, I gave you Sagan who explains his political theory of organisational learning in his book. 5. Of course it has to be lived with (that is different from condoning or tolerating) because it is all based on an inherent property of most organisations: people make mistakes, they make slips, sometimes the systems are so built that it is inevitable. People get blamed, and when people get blamed they often try to avoid that culpability. Blame corrupts learning, blame meets a very fundamental social need, so blame is ubiquitous. 6. Please do if you like - always interesting to hear different takes on theories. 7. You can't choose to rid yourself of them all. But you can certainly make choices about some complex systems. http://www.theguardian.com/sustainable-business/nuclear-power-germany-renewable-energy 8. Of course it is. We might want outputs that mean we need complex systems but those outputs are socially determined. In modern life we cannot avoid complexity if we wish to continue as we are, but we certainly can make choices about complex systems. 1. Just because learning can be compromised doesn't mean it must be. The general approach is multiple, independent monitors. That doesn't guarantee that every problem will be detected or mastered, but it lowers the odds of failure considerably (see the sketch at the end of this post). For example, the modern cooperative flight deck environment is the result of learnings throughout an entire industry made up of numerous profit-seeking, almost characteristically selfish business entities. The autocratic flight deck, where the pilot-in-command ruled with a figurative iron fist, was just too much trouble, and an entire culture of command was forced to change in the interest of improved safety and reliability. Rickover's nuclear Navy is another example. There's an organization that actually eschewed technical solutions in favor of human ones. It turned out that, given the right motivations, humans could be more reliable than machines in managing complex and potentially dangerous technologies. 2. I googled "socio-technical interface". I found enough meaningless gobbledygook to fill up encyclopedias. I don't think I'll lose much sleep over leaving that unaddressed. 3. You're kidding, right? 4. "[H]onest" mistakes? No such thing where the discipline of technology is concerned. There's always a cause, rooted somewhere in incompetence or ignorance. Either can be the result of poor personal discipline, poor management, or both. That's why multiple, independent monitors are necessary -- it makes sweeping poor performance under the rug that much more difficult. 5. "Blame corrupts learning"? Reads like a sound bite from a motivational lecture. As such, it is just so much nonsense. Reminds me of that Crichton book, Rising Sun, in which the author claimed that the Japanese don't fix blame, they fix the problem. Well, that is somewhat true of their business culture, but it leads to things like the "window man" phenomenon, and other organizational inefficiencies. The fix is not to eliminate a problem, but to shuffle it off to Siberia.
And, as TEPCO proved in the case of Fukushima, avoiding the appearance of blame can lead to the avoidance of necessary work to actually fix the situation. This, of course, is where ruthlessness and integrity are necessary features of those in charge of complex systems. From personal experience, Russians are actually better at this than a lot of people. 6. The point was that there was no call for you to lecture me about complexity. You claim to have known my profession. It should have been obvious that I wasn't in need of teaching on the subject. 7. Trading one set of complexities for another. Imagine the kind of shenanigans that will ensue as the German government, by arbitrary fiat, tries to reduce energy consumption. The complexity of the nuclear power plant (to the degree that it is truly complex, and not just perceived to be) will be exchanged for a complexity of regulations, incentives, enforcements, etc. 8. Nonsense. I work on highly successful and highly complex taxi cab dispatching and accounting software. If my management made the "choice" to reduce complexity in pursuit of simplicity and cheaper -- but by no means improved -- reliability, we'd indeed have less complex software. We'd also have less successful software, because somebody else would engage the complexities of the industry and have a more salable product. Choices about complexity are an illusion. The complexity of a system is determined by the complexity of the problem domain. Period.
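For what it's worth, the "multiple, independent monitors" argument in point 1 above can be put in back-of-the-envelope terms (assumed numbers, a sketch only): if each monitor independently misses a problem with probability q, the chance that all n of them miss it is q to the power n, which falls off quickly - provided the monitors really are independent, which is exactly the property the social-redundancy examples earlier in the thread call into question.

```python
q = 0.10  # assumed probability that any one monitor misses a given problem

for n in range(1, 5):
    p_all_miss = q ** n  # the problem slips through only if every monitor misses it
    print(f"{n} independent monitor(s): P(all miss) = {p_all_miss:.4f}")
# 1 -> 0.1000, 2 -> 0.0100, 3 -> 0.0010, 4 -> 0.0001
# If the monitors share information, culture or incentives, their misses are
# correlated and the combined miss rate stays much closer to q.
```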
Phil Posted January 21, 2014 1. Socio-technical is gobbledygook? Right. 36 years of peer reviewed research and a central concept in flight deck design and because our resident contrarian hasn't heard of it before it's a silly concept? It's a bit like me stating complexity theory is gobbledygook. You can imagine how seriously I took the remainder of your post, Tony. 2. If blame doesn't corrupt learning why are there confidential reporting systems in place in many airlines and railway companies and NASA? What do you think might be their motivations here, Tony? Spending all that money on motivational nonsense? Again there's been several decades of research on blamism. Peer reviewed and adopted by industry. 3. Complex systems. It's pretty obvious my main point concerned hazardous complexity. Hence my point about Perrow's normative stance on it. We do indeed have a choice about complexity since as I've said social choices determine which systems we wish to use. And again a flight deck or nuclear power or surety systems are hazardous. Your cab software isn't. So socio-technical interface is relevant and isn't rubbish. Your flight deck example is a perfect example. Airlines are also a perfect example of large for profit organisations running expensive confidential reporting systems too. They might, Tony, be basing these on some serious peer reviewed and valid research across decades. Sidney Dekker and James Reason talk about human error and blame. You should do some reading before sounding off with your pedestrian knowledge on the matters of organisational reliability, learning and blame.
Tony Evans Posted January 21, 2014 (edited) 1. Socio-technical is gobbledygook? Right. 36 years of peer reviewed research and a central concept in flight deck design and because our resident contrarian hasn't heard of it before it's a silly concept? It's a bit like me stating complexity theory is gobbledygook. You can imagine how seriously I took the remainder of your post, Tony. 2. If blame doesn't corrupt learning why are there confidential reporting systems in place in many airlines and railway companies and NASA? What do you think might be their motivations here, Tony? Spending all that money on motivational nonsense? Again there's been several decades of research on blamism. Peer reviewed and adopted by industry. 3. Complex systems. It's pretty obvious my main point concerned hazardous complexity. Hence my point about Perrow's normative stance on it. We do indeed have a choice about complexity since as I've said social choices determine which systems we wish to use. And again a flight deck or nuclear power or surety systems are hazardous. Your cab software isn't. 4. So socio-technical interface is relevant and isn't rubbish. Your flight deck example is a perfect example. Airlines are also a perfect example of large for profit organisations running expensive confidential reporting systems too. They might, Tony, be basing these on some serious peer reviewed and valid research across decades. 5. Sidney Dekker and James Reason talk about human error and blame. You should do some reading before sounding off with your pedestrian knowledge on the matters of organisational reliability, learning and blame. 1. If it had any practical meaning, I think I would have heard of it, having worked in the technological end of corporate America for twenty years. All "36 years of peer reviewed research" means to me is 36 years of academics talking to each other. That's not contrarianism, that's a very simple capability test, usually phrased as "what have you done for me lately?" 2. The purpose behind confidential reporting systems is to find out who to blame, Phil. The whole point is to gain a starting point for a fruitful investigation into otherwise unreported problems. And the investigation will find out where the problem is and, beyond finding a way to fix the problem, bring discipline where it is needed. WRT "adopted by industry", once again, you couldn't prove it by my experience, in several different environments. Of course, you can say that I just fell into holes in coverage. Quite possible, but there is also the reality that the validity of an idea is matched by the ubiquity of its application. 3. Social choices determine which systems we use? Since when has the adoption of any complex system relied on a social choice? Sticking more closely to the topic than we have been, what social choice caused the existence of nuclear weapons? The choice was made for us by technological imperatives. They could be developed, so they were. After that, they followed a logic of their own. Once they existed, large states had to have them, and had to deal with their consequences. And it is not a realistic choice to set them aside. The same kind of thing could be said of our automotive economy, the internet, etc. There was no point in history where there was a clear contingency to have them or not to have them, and there isn't now. WRT "hazardous complexity", it turns out that the more hazardous a system is, the more complex it tends to be.
If your socio-technical theorists were correct, the more hazardous a system, the simpler and more transparent we would try to make it. That works with things like murder -- the simple answer is don't do it. Committing a murder increases the complexity of your problems enormously. But when you have a hazardous system whose existence can't be avoided -- see immediately preceding paragraph -- detecting and mitigating potential failures becomes much more important than simplicity and transparency. Also, to a technologist, complexity is pretty much just complexity, regardless of the motivation. Cars, for example, have become more and more reliable at the same time that they have become more and more complex, simply because the market demands it. The same is true of software. The market demands more capable, yet more reliable software as an initial condition to a sale. People talk about complex government software environments with millions of lines of code? Guess what -- industry cranks that kind of stuff out on commercial projects all the time. My own company's apparently negligible software suite has over 100 thousand lines of code in only one of its numerous applications. 4. I can see the same expedients arrived at from simple study of the problems involved, and application of common-sense solutions. In fact, that's exactly how they were arrived at. In the flight deck example, it was pretty obvious that autocracy in the cockpit wasn't working. So they modified the dynamic. It didn't take any academic theorizing to arrive at that. 5. The problem you're having in putting your point across is that practical people who work with complex systems tend to be able to find their own workable solutions. Academics make big claims about how their work changed the world, but the reality is that industry and government -- including the military -- adopt this or that practice based on their own experience of what works. If it matches some academic theory, it's almost always by sheer accident. At best, one can say that the power of theory is descriptive, not prescriptive. BTW, Perrow on Three Mile Island? What he didn't understand about the design of the system was that all of the sensor systems and work rules were governed by the reality that you can't staff a nuclear power control room with nuclear engineering PhDs. It's all well and good to say that more knowledgeable people with simpler systems would have done better. It is probably in fact quite correct to say that. The problem is that knowledgeable people were and are in short supply, and their capabilities have to be applied at the level of system design. There's just not enough of them to go around to apply their capabilities at the level of systems operation. Edited January 21, 2014 by Tony Evans
Phil Posted January 21, 2014 (edited) If it had any practical meaning, I think I would have heard of it, having worked in the technological end of corporate America for twenty years. All "36 years of peer reviewed research" means to me is 36 years of academics talking to each other. That's not contrarianism, that's a very simple capability test, usually phrased as "what have you done for me lately?" It simply means the interaction of people and technology, Tony. It's not rocket science. It's been around as an area of research for as long as you've been alive, I imagine. It's hardly a case of academics talking to each other either - read any modern inquiry into a disaster from pretty much anywhere in the world and you will find it is based on a foundation of socio-technical thinking. The purpose behind confidential reporting systems is to find out who to blame, Phil. But that is a big fat negative. Confidential information systems exist to generate more information and requisite variety so organisations can learn from near-misses and so forth. Lowering culpability normally increases near-miss reporting rates. Which means an organisation receives more safety information to use to make that organisation more reliable. Once culpability is restored then reporting drops again dramatically. An FAA experiment conducted in 1968-71 saw reports of near miss collisions increase dramatically when immunity was offered. When that was withdrawn reporting rates dropped. There are a number of other natural experiments analysed along airline / air traffic control lines to triangulate these findings. They all found that culpability was linked to information flow - qualitative studies show that pilots were indeed motivated to report more because they wouldn't get sacked for it. So what have the academics done for us? Made your flights a lot safer by drawing attention to the fact that blame reduces the capacity of organisations to learn by restricting information flow. Quite possible, but there is also the reality that the validity of an idea is matched by the ubiquity of its application. Typing 'confidential reporting system' into Google gives you examples from railways, medicine, NASA, nuclear power companies and airlines to name a few. All existing and operating. What has this got to do with nuclear surety systems? Well they are run by humans, humans blame, the military especially likes to break swords over knees - if blamism reduces learning potential it follows that nuclear surety systems may not be as reliable as many think. Is there evidence this happens - yes, read Sagan for them. The choice was made for us by technological imperatives Technological determinism is pretty old fashioned now. Nuclear weapons were developed and utilised for a purpose. Pretty much all established technology exists through social want. Now sure if you want power, then perhaps you have to deal with the fact that nuclear power is the only choice: but nonetheless the nuclear power-plant exists for society's want of power. If your socio-technical theorists were correct, the more hazardous a system, the simpler and more transparent we would try to make it. Well, no, not necessarily - that assumes that reliability is a key goal. Very often it is not. So there is often no incentive to simplify. Often designers will indeed try to simplify complex systems. Often though you can't - especially hazardous ones - as you've argued yourself.
And this is the main point of Perrow - such systems will by their nature suffer accidents, therefore there is a normative debate to be had about who decides whether these systems should be built. You could argue that society as a whole has decided by want of energy that such systems are acceptable. As we have seen opinion changes rapidly on the methods of generating that power despite our desire for it not changing. Nonetheless, it does not make that normative question disappear - who decides which complex and hazardous systems we should tolerate and to what extent? It's not a hippy question - it is a very real debate in Japan for example. Also, to a technologist, complexity is pretty much just complexity, regardless of the motivation. Fine, that's as may be, but technologists are just that. In fact, that's exactly how they were arrived at. In the flight deck example, it was pretty obvious that autocracy in the cockpit wasn't working. So they modified the dynamic. Again that's not true at all. The flight-deck is just one part of the problem. It is one component. The aircraft and its crew work as a system - there are numerous socio-technical interfaces. There's the example of cabin crew seeing engines on fire and not reporting it to the flight-deck. Why on earth wouldn't they? The answer lies in looking at the socio-technical aspects. There is absolutely tonnes of research on aviation safety and thoughts on whole-crew management and so forth. It goes well beyond common sense. Common sense tells you if a cabin crew member sees an engine on fire they tell the pilot. Yet, it's happened. If it matches some academic theory, it's almost always by sheer accident. At best, one can say that the power of theory is descriptive, not prescriptive. That's just your bias. Read any modern inquiry into a disaster, or any modern guidance on some ISOs or business continuity or organisational learning and underneath it is a lot of academic theory. Accident investigation or organisational reliability has benefited an enormous amount from research on the matter over the last few decades. Certainly plenty of academics talk copious amounts of bollocks but there is a lot of scope to turn theory into practice in areas like organisational learning. We see it with confidential reporting systems and modern accident and disaster inquiries which are holistic and systemic. What he didn't understand about the design of the system was that all of the sensor systems and work rules were governed by the reality that you can't staff a nuclear power control room with nuclear engineering PhDs. Socio-technical. See it? Some agree with you - that Perrow is technologically deterministic and that the causes of Three-Mile Island were socio-technical - the interaction of complexity, tight coupling and the people who were there to stop it happening. James Reason has written a lot of interesting work on the human - technical interface in practice. Neither technical nor social answers are satisfying in most cases. If someone, for example, drew that lesson and changed staffing, they would then potentially make the plant a safer place (but Perrow would argue never completely safe). Is that a technological response to a technologically complex system? No of course it isn't. I'm not proclaiming Perrow a bible here, I disagree with him on a few things but few disagree that complex, tightly coupled systems are inherently less reliable because the interactions are less predictable. That is a lesson that cuts across disciplines. Edited January 21, 2014 by Phil
Tony Evans Posted January 22, 2014 It simply means the interaction of people and technology, Tony. It's not rocket science. It's been around as an area of research for as long as you've been alive, I imagine. It's hardly a case of academics talking to each other either - read any modern inquiry into a disaster from pretty much anywhere in the world and you will find it is based on a foundation of socio-technical thinking. Modern incident investigation is based on root cause analysis. Whatever is discovered is discovered, whether it is social or technical. What you're doing here is playing a pair of eights like it was a full house. But that is a big fat negative. Confidential information systems exist to generate more information and requisite variety so organisations can learn from near-misses and so forth. Lowering culpability normally increases near-miss reporting rates. Which means an organisation receives more safety information to use to make that organisation more reliable. Once culpability is restored then reporting drops again dramatically. An FAA experiment conducted in 1968-71 saw reports of near miss collisions increase dramatically when immunity was offered. When that was withdrawn reporting rates dropped. There are a number of other natural experiments analysed along airline / air traffic control lines to triangulate these findings. They all found that culpability was linked to information flow - qualitative studies show that pilots were indeed motivated to report more because they wouldn't get sacked for it. So what have the academics done for us? Made your flights a lot safer by drawing attention to the fact that blame reduces the capacity of organisations to learn by restricting information flow. What you're doing is totally ignoring the vast majority of confidential information systems, from whistleblower facilities to anonymous crime tip lines, designed to root out the culpable. Also, even when a system is intentionally designed just to collect technical or statistical information, the investigation can and does lead to the discovery of culpable activity that has to be addressed. The idea that everything bad is an innocent accident is nonsense. ...if blamism reduces learning potential it follows that nuclear surety systems may not be as reliable as many think. Is there evidence this happens - yes, read Sagan for them. You throw the word "blamism" around like it was magic. (And, perhaps, for you it is a piece of sympathetic magic.) In any case, nothing is ever all that simple. There are numerous cases in nuclear weapons surety where honesty about a mistake or a problem has led to improvements. You should read Command and Control -- in trying to make his argument about what he calls "the illusion of safety", the author inadvertently documents several events where improvements were made without anyone getting canned or going to jail. The initial design failings of the Minuteman launch control system are an example. But, by the same token, there are instances where culpable action or inaction requires discipline and yes, even blame. To deny that is to deny one of the most important realities of the social dimension you are so interested in. Technological determinism is pretty old fashioned now. Nuclear weapons were developed and utilised for a purpose. Pretty much all established technology exists through social want.
Now sure if you want power, then perhaps you have to deal with the fact that nuclear power is the only choice: but nonetheless the nuclear power-plant exists for society's want of power. What you really mean by "old fashioned" is "not liked", in the same sense that Zhivago's poetry was "not liked" -- it reminds certain people of realities that they'd rather not face. In any case, this is not about technological determinism. This is about technological imperatives, which is an entirely different thing. In reality, there was no social choice to discover how the atom worked and what it could do. The research was done by a handful of physicists, using relatively few resources, while everybody else was concerned with their daily lives. But once the research was done, the implications were unavoidable. There was hardly a social want for nuclear weapons or the problems they cause. Even soldiers -- perhaps especially soldiers -- would get rid of them if they could just flip a switch and make them go away. But that choice simply does not exist, and never has. WRT nuclear power plants, I never denied that they were optional. All I said was if you want to replace them with something else, you're trading one set of complex problems for another. Well, no, not necessarily - that assumes that reliability is a key goal. Very often it is not. So there is often no incentive to simplify. Reliability is a central objective of nuclear surety. The whole point is that weapons are not used inadvertently or in an unauthorized fashion, yet at the same time be certain of use when use is properly authorized. Reliably so. Often designers will indeed try to simplify complex systems. Often though you can't - especially hazardous ones - as you've argued yourself. And this is the main point of Perrow - such systems will by their nature suffer accidents, therefore there is a normative debate to be had about who decides whether these systems should be built. Ummm...such systems by their nature are more likely to suffer accidents, all other things being equal. But there is no natural law that demands all other things remain equal. What usually happens with complex systems is that a lot of money is spent to detect and mitigate all of those accidents waiting to happen. And that kind of approach does work. Otherwise all of the money invested in quality assurance would be meaningless waste. And your car wouldn't go 100,000 miles before any serious maintenance is necessary. You could argue that society as a whole has decided by want of energy that such systems are acceptable. As we have seen opinion changes rapidly on the methods of generating that power despite our desire for it not changing. Nonetheless, it does not make that normative question disappear - who decides which complex and hazardous systems we should tolerate and to what extent? It's not a hippy question - it is a very real debate in Japan for example. And? Of course that's the case. But alternatives are not non-complex and without their own dangers. Some even come with what we would now consider to be negative guarantees. Reducing power consumption, for example, guarantees less stuff. Less stuff would almost certainly include less expenditure on health and safety, since those things are luxuries, not basic needs. Fine, that's as may be, but technologists are just that. Horse feathers. Human factors engineering was invented by technologists, for technological purposes, precisely because technologists are a lot more aware of the social than you think.
Even our mundane taxi dispatch system is designed with user capabilities and needs in mind. In fact a lot of what I do now is design and implement functionality to accommodate the humans that come in contact with the system, both inside our organization and outside of it. Again that's not true at all. The flight-deck is just one part of the problem. It is one component. The aircraft and its crew work as a system - there are numerous socio-technical interfaces. There's the example of cabin crew seeing engines on fire and not reporting it to the flight-deck. Why on earth wouldn't they? The answer lies in looking at the socio-technical aspects. There is absolutely tonnes of research on aviation safety and thoughts on whole-crew management and so forth. It goes well beyond common sense. Common sense tells you if a cabin crew member sees an engine on fire they tell the pilot. Yet, it's happened. And yet none of that requires a sociologist or anthropologist to correct. That kind of thing comes out in an incident investigation, including reasons why or why not. Common sense does tell you what to do next. And they've been doing it in aviation -- to take a case in point -- pretty much since the beginning. Like you said, it's not rocket science. Can't have it both ways, Phil.
Tony Evans Posted January 22, 2014 That's just your bias. Read any modern inquiry into a disaster, or any modern guidance on some ISOs or business continuity or organisational learning and underneath it is a lot of academic theory. Accident investigation or organisational reliability has benefited an enormous amount from research on the matter over the last few decades. Certainly plenty of academics talk copious amounts of bollocks but there is a lot of scope to turn theory into practice in areas like organisational learning. We see it with confidential reporting systems and modern accident and disaster inquiries which are holistic and systemic. Once again, you're playing a pair like it was a big hand. Everything you claim could -- and in most cases probably did -- happen without the intervention of academic research. It's not like the academy has an exclusive position in analysis and mitigation/correction. Socio-technical. See it? Some agree with you - that Perrow is technologically deterministic and that the causes of Three-Mile Island were socio-technical - the interaction of complexity, tight coupling and the people who were there to stop it happening. James Reason has written a lot of interesting work on the human - technical interface in practice. Neither technical nor social answers are satisfying in most cases. If someone, for example, drew that lesson and changed staffing, they would then potentially make the plant a safer place (but Perrow would argue never completely safe). Is that a technological response to a technologically complex system? No of course it isn't. I'm not proclaiming Perrow a bible here, I disagree with him on a few things but few disagree that complex, tightly coupled systems are inherently less reliable because the interactions are less predictable. That is a lesson that cuts across disciplines. In that case "socio-technical" is a meaningless term. Hierarchies are a natural reality, beyond the social or technical, whether taken together or independently. In this case, as in the case of almost any complex system, we're talking about the hierarchy of knowledge. Most knowledge is concentrated in a small space at the top of an organization or discipline, and it is economically and physiologically impossible to evenly distribute it throughout all human brains, even just the ones particularly interested in a given problem space. So we automate to multiply and redistribute the knowledge of experts. That was the purpose of the automation at Three Mile Island, and in every other commercial nuclear power plant. There was no social decision to make it that way. But neither was there any technologically determinative reason. It was simply a consequence of the world being the way it is. I think that is a learning you need to fully comprehend before you go on.
Phil Posted January 22, 2014 The idea that everything bad is an innocent accident is nonsense. I've never claimed it was. You've constructed that argument in your head and it's false to attribute it to me. To deny that is to deny one of the most important realities of the social dimension you are so interested in. Again I never made such a claim. There was hardly a social want for nuclear weapons or the problems they cause. If nobody wants nuclear weapons why do we have them? Why do some countries not have them when they could? Surely then there must be other factors at play if not all technically able countries have nuclear weapons. Somebody, somewhere, for whatever reason just said "no". All I said was if you want to replace them with something else, you're trading one set of complex problems for another. And there's a vast difference between trading hazardous systems for relatively non-hazardous ones and calling it the simple substitution of one complex system for another. Reliability is a central objective of nuclear surety. I have no doubt that is the stated objective of the system as a whole. But again we both know there are differences between stated system goals and what actually occurs for various reasons. And that kind of approach does work. It never works 100% of the time. And some would argue that the consequences of failure are so high that a non-100% safety rate is not acceptable. The precise numbers aside, it boils down to the fact that risk is tolerated only to a certain extent. Human factors engineering was invented by technologists I'm not entirely sure what you think a technologist is, hence why I didn't tackle it. I would assume that psychologists might have a large role to play in human factor thinking. Well, I don't assume - I know they do. And yet none of that requires a sociologist or anthropologist to correct. So really what you are doing is rejecting evidence-based practice? Fine by me. You can trust in common sense. Good luck! It's not like the academy has an exclusive position in analysis and mitigation/correction. Again who said that they were omnipotent??! Most knowledge is concentrated in a small space at the top of an organization or discipline I would say that that is completely the opposite of what actually happens. But there's no point having the debate with you because producing decades of research across several disciplines that has been incorporated into practice doesn't quite cut it with you.
Tony Evans Posted January 23, 2014 (edited) Phil, The dripping disdain is just killing me... Anyway, after going round and round on this, I realized what the disconnect is here. You're promoting a completely false dichotomy. It would never have occurred to me -- or any serious student of technology that I've ever known or heard of -- that there wasn't a social dimension to technology. Or a technological dimension to society. As far as I am -- or have ever been -- concerned the two are inseparable. So naturally I would take the suggestion of a "socio-technical interface" as nonsense. The idea that a divide exists between them, and that such a divide has to be addressed, is simply ludicrous. WRT the rest, no, I'm not inventing positions for you and putting words in your mouth. I'm taking you at face value. You seem to have an overwhelming interest in avoiding blaming anyone for anything. Sorry, but accountability is usually part of any serious attempt to fix things. The avoidance of accountability is a very real part of human nature, but it's not a reason to want to stop holding people accountable. If we wanted to find out everything we desired about murder, for example, we could, just by telling murderers that they'll get off, if they just come in and tell their stories, completely and honestly. And no, that's not argumentum ad absurdum. Your example of near-miss midair collisions includes incidents (and a lot of them) where somebody is culpably negligent and should be held accountable. Avoiding blame in the interest of finding out more about them will not change the fact that many, and maybe even most, could be averted by airmen and air traffic controllers following existing rules and procedures conscientiously. IOW, you could find out as much as you wanted, and change rules and procedures to suit, and you would still have people goofing up for no better reason than they're lazy and stupid. Now, I know you're going to say this proves your point -- you can't fix people. But that's not what I'm getting at. The point is that the organization can and does learn how to minimize problems, and mitigate them where it can. Hence all of the rules and procedures that do exist. If people aren't perfect in their application, that doesn't mean the organization has failed to learn. WRT the concentration of knowledge, yes, it is true that a lot of operational know-how is distributed throughout the organization. But the overall design of systems relies on the high level of knowledge concentrated in problem domain experts. For example, a power plant operator may know a lot about running a particular set of control systems, and the idiosyncrasies of his particular site. But the plant itself would never have existed without the expertise of a very few power, structural, and electrical engineers. And the design of the control systems the operator knows so well could never have been produced by the operator himself. They are the product of a relatively few systems engineers who know how to map a large set of diverse outputs from all kinds of sensors into an interface the operator can understand and respond to. Edited January 23, 2014 by Tony Evans
rmgill Posted January 23, 2014 I would submit that human interface design concepts ARE something that smart technologists will be aware of; poor ones will not be aware of them. There's a whole school of interface design which is to take those into account for how an application or a system is used/interacted with by a user. Ergonomics is one aspect, but intuitiveness or ease of use is another. Consistent interface is another aspect. There are folks who are just utterly clueless about it because they understand the underlying software and thus don't see the lack of ease of use of the interface that someone walking up and using it will.
Tony Evans Posted January 23, 2014 I would submit that human interface design concepts ARE something that smart technologists will be aware of; poor ones will not be aware of them. There's a whole school of interface design which is to take those into account for how an application or a system is used/interacted with by a user. Ergonomics is one aspect, but intuitiveness or ease of use is another. Consistent interface is another aspect. There are folks who are just utterly clueless about it because they understand the underlying software and thus don't see the lack of ease of use of the interface that someone walking up and using it will. If your interface sucks, you lose in the marketplace. And it's always been that way. The steering wheel was a better interface than the tiller, and won the market on ships, cars, and planes (except for special cases of the latter, where the stick was better, such as the tight cockpit of the fighter). The idea that technologists have only recently considered what it is humans are doing with their products, and how -- and then only under the tutelage of academic researchers -- is pure garbage.
Phil Posted January 23, 2014 (edited) You're promoting a completely false dichotomy. It would never have occurred to me -- or any serious student of technology that I've ever known or heard of -- that there wasn't a social dimension to technology. But I'm not promoting a false dichotomy. Before we go down the route of angels dancing on pin-heads, you qualified the above claim twice. Not only students. But serious students. I'd make the claim that your enlightened and ideal serious student of technology is a rarer creature than you imagine. But then you regard your experience and your thought processes with an almost pathological level of generalisability. That and you're not quite grasping what I am saying. I will use an example perhaps. In the 1960s a factory roof collapsed on a snowy day. It seemed a technical failure. The components and/or materials couldn't take the weight and so collapsed. But then a researcher looked at it and saw that the design assumed a level covering of snow with the weight thereby distributed. They also discovered that 20 other roofs of the same design had collapsed that night because drift snow had built up only on certain parts of the roof. What seemed a technical failure was in fact a fallible decision on the part of the designer. And that fallible decision propagated wildly. You talk of root cause analysis - the whole point is that prior to much research in the 60s and 70s the root cause would have been identified as a technical failure (that was the initial conclusion actually). But in reality it was a set of fallible decisions that caused the collapse. Source: Barry Turner - Man Made Disasters. James Reason - Human Error also talks about fallible decisions. You seem to have an overwhelming interest in avoiding blaming anyone for anything. Sorry, but accountability is usually part of any serious attempt to fix things. Some common sense would ironically help here. An organisation with no culpability would lead to an anything-goes culture and would ironically reduce reporting information as there'd be nothing to report. You're taking what should be a common sense position on the level of culpability an act should receive, and taking it to some theoretical and abstract extreme. But the overall design of systems relies on the high level of knowledge concentrated in problem domain experts. Oh I imagine so. But who is most familiar with a system on a daily operating basis? Who learns about it at the coal-face and how does that information get to these high level domain persons? I am sure they are indeed most familiar with the design - but they are often some of the least informed people regarding how that design interacts and operates on a routine basis. But what it all boils down to is you rejecting research and substituting it with nothing. Indeed worse than nothing, you assume a level of common sense which even a cursory knowledge of some accidents and disasters shows is clearly absent in the extreme. Edited January 23, 2014 by Phil