RE: [Root_Cause_State_of_the_Practice] Re: Human Behavioral Technology

  • Terry Herrmann
    Message 1 of 27, Dec 1, 2007
      Bill,
       
      I have experience (as I'm sure many others do) being on the oversight end (self-assessment, peer industry assessment, INPO) and the receiving end of these activities.  How did you determine the error rate is 1/3?  What I'm used to is that the group that identifies an AFI begins by collecting information regarding observed conditions and behaviors.  Opinions are discouraged and opportunity is provided for clarification of the information and rebuttal of conclusions.  The discussion is fact-based and the information must be validated and verified to be correct. There is typically a very productive discussion.
       
      That certainly wasn't always the case a decade or more ago and isn't the case when the group on the receiving end is in a position of denial or the oversight group comes in with an agenda they aren't willing to relinquish.
       
      How do you distinguish between a symptom and a "real problem"?  What criteria do you use for "real problems"?

      Thanks,
                    Terry Herrmann


      To: Root_Cause_State_of_the_Practice@yahoogroups.com
      From: williamcorcoran@...
      Date: Fri, 30 Nov 2007 08:44:26 -0500
      Subject: Re: [Root_Cause_State_of_the_Practice] Re: Human Behavioral Technology

      Steve,
       
      You bring up so many interesting points that some of them will probably not be given their due.
       
      Let's talk about criticism by oversight, e.g., an Area for Improvement (AFI).
       
      There are at least two points to be made.
      1. My experience is that when an oversight group/organization/agency makes a criticism, it is usually based on a symptom of a real problem. About a third of the time the real problem is not the one they have you ticketed for (see the sketch after this list). In these cases, if you work on the AFI you are distracting yourself from the real problem.
      2. If you try to plead innocent of the AFI, the oversight group/organization/agency will often interpret your commitment to the truth as an affront to their infallibility. (Life isn't fair.) And you will be punished one way or another for your effrontery. This will get you further off the real problem.
      3. One way out is to find your own problems first and inform the oversight group/organization/agency convincingly so that they will not spend time finding problems that you don't have.
      4. Nothing makes an oversight group/organization/agency feel more comfortable than evidence that their charges are good at finding their own problems.
      5. Nothing makes them feel worse than evidence that their charges are in denial on real problems.
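      As a rough sketch of what a figure like the "about a third" in point 1 presupposes: it takes a tally of AFIs tracked all the way to closure, and even then a modest sample leaves a wide band around the estimate. The counts below are made up purely for illustration; nothing in this thread supplies real data.

      ```python
      # Hypothetical tally: of 30 AFIs tracked to closure, suppose 10 turned out to
      # point at something other than the real problem. A Wilson score interval shows
      # how loosely an "about a third" figure is pinned down by a sample of that size.
      import math

      def wilson_interval(hits, n, z=1.96):
          """95% Wilson score confidence interval for a binomial proportion."""
          p_hat = hits / n
          denom = 1 + z**2 / n
          center = (p_hat + z**2 / (2 * n)) / denom
          half = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
          return center - half, center + half

      lo, hi = wilson_interval(10, 30)
      print(f"observed rate {10/30:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")  # about (0.19, 0.51)
      ```

      With only 30 tracked AFIs the plausible range runs from roughly one fifth to one half, which is why a rate like this is best read as an experience-based impression rather than a measured fraction.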
      Take care,
       
      Bill Corcoran
      Mission: Saving lives, pain, assets, and careers through thoughtful inquiry.
      Motto: If you want safety, peace, or justice, then work for competency, integrity, and transparency.
       
      W. R. Corcoran, Ph.D., P.E.
      NSRC Corporation
      21 Broadleaf Circle
      Windsor, CT 06095-1634
      Voice and voice mail: 860-285-8779
       
      ROOT CAUSE INVESTIGATION HELP LINE 860-285-8779
       
      Join the on-going discussion of Root Cause Analysis problems, puzzles, and progress at http://groups.yahoo.com/group/Root_Cause_State_of_the_Practice/
       
      Subscribe to "The Firebird Forum" by sending an e-mail to TheFirebirdForum- subscribe@ yahoogroups. com
      ----- Original Message -----
      Sent: Thursday, November 29, 2007 1:52 PM
      Subject: RE: [Root_Cause_State_of_the_Practice] Re: Human Behavioral Technology

      This discussion - and this post - brought some thoughts to mind:
       
      There are a number of MIL-STDs (with ANSI/ASQC counterparts) which exist specifically for inferring the quality or "acceptability" of a population of items based on a statistically significant sample.  An example web site is: http://www.sqconline.com/.  Whether these standards would apply to this discussion I will leave to folks in the forum who have more expertise in statistical analysis than I do...
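      For concreteness, the arithmetic behind those standards is essentially the operating-characteristic (OC) curve of a sampling plan: inspect n items and accept the lot if at most c defectives are found. The sketch below is a generic single-sampling illustration; the plan n=80, c=2 and the defect rates are arbitrary assumptions, not values taken from any MIL-STD or ANSI/ASQC table.

      ```python
      # Acceptance probability of a single-sampling plan: P(accept) = P(X <= c)
      # where X ~ Binomial(n, p) and p is the true lot defect rate.
      from math import comb

      def prob_accept(p_defective, n, c):
          """Probability the lot is accepted under an (n, c) single-sampling plan."""
          return sum(comb(n, k) * p_defective**k * (1 - p_defective)**(n - k)
                     for k in range(c + 1))

      for p in (0.005, 0.01, 0.02, 0.05):
          print(f"true defect rate {p:.1%}: P(accept) = {prob_accept(p, n=80, c=2):.2f}")
      ```

      Roughly speaking, the standards boil down to tables of (n, c) pairs chosen so that this curve passes through agreed-upon producer's and consumer's risk points.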
       
      An analog of the tendency to draw conclusions from limited data (i.e., as in the ORSE example) may also be seen in the commercial nuclear industry today in both INPO evaluations and a company's internal assessment team activities. Both activities - by the nature of the available time and resources - look at a relatively limited dataset in a less than scientifically rigorous way...  Following their activities, they are "expected" - or claim - to produce incisive and insightful conclusions as to the strengths or weaknesses of the organization, in a wide range of functional areas.
      • There seems to be a strong cognitive element in how these conclusions are reached.
      • What is also usually stated is "...the causes which contributed to this AFI are...".
      Some potential Catch-22 situations arise as a result:
      1. If the organization performs a more formal analysis of an "Area For Improvement" (AFI) and produces an evidence-based conclusion that the original AFI was incorrect or incomplete, they run the risk of being characterized as "defensive" or "isolated" (which then can become a future AFI...).
      2. If the organization simply accepts the AFI without further validation, AND the AFI was incorrect (or incomplete), they run the risk of either "fixin' what ain't broke" or developing a false dependence on the resulting corrective actions...
      What is also curious is that often the AFIs seem to be at or very close to the mark concerning organizational flaws...
       

      Steve Marrs

      -----Original Message-----
      From: Root_Cause_State_of_the_Practice@yahoogroups.com [mailto:Root_Cause_State_of_the_Practice@yahoogroups.com] On Behalf Of Oldnuke640@aol.com
      Sent: Thursday, November 29, 2007 11:29 AM
      To: Root_Cause_State_of_the_Practice@yahoogroups.com
      Subject: Re: [Root_Cause_State_of_the_Practice] Re: Human Behavioral Technology

      In a message dated 11/28/2007 6:14:05 A.M. Eastern Standard Time, williamcorcoran@sbcglobal.net writes:
      I have changed the item to:

      01.311 Comment: If you see one person in an organization behave a certain way in a single certain situation it is either a cultural norm or an aberration.

      One of the problems with the above approach is that there may be multiple variations of cultural norms within a large organization.  The culture in security may be different from the culture in OPS - which may be different from the culture in maintenance.  Indeed, the Davis-Besse safety culture report (on the NRC web page) noted significant safety culture variability between the major departments.  This was considered one of the reasons why they were so badly organizationally siloed in 2002.
       
      There is always a natural tension between wanting to simplify things - and needing to describe the fine detail of a complex system.  I recall some old Navy guys (ORSE Board types) who insisted they could predict the grade on the ORSE within the first 5 minutes of arrival on the ship (a small sample size).  I wonder how much of this prediction was subsequently validated by assigning the first-impressions grade?  (This is a rhetorical question only! - no need for rebuttal by you old ORSE Board guys out there :-)
       
      The sample size must be large enough to capture enough of the meaningful detail.  If only we had the validated research to tell us definitively just how big to make the sample...
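      For what it's worth, the textbook starting point (without any claim that assessment observations behave like independent random samples) is the sample size needed to estimate a proportion to a chosen margin of error. The margins and confidence level below are illustrative assumptions only.

      ```python
      # Classic margin-of-error sizing for a proportion, using the worst case p = 0.5.
      import math

      def sample_size_for_proportion(margin, z=1.96, p=0.5):
          """Smallest n with z * sqrt(p*(1-p)/n) <= margin (normal approximation)."""
          return math.ceil(z**2 * p * (1 - p) / margin**2)

      print(sample_size_for_proportion(0.10))  # 97 observations for a +/-10% margin
      print(sample_size_for_proportion(0.05))  # 385 observations for a +/-5% margin
      ```

      Numbers like these are one reason a five-minute walk-through, however experienced the eye, is a very small sample.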




    • exiled2
      Message 2 of 27, Dec 2, 2007
        Oldnuke640@... wrote:
        >
        > The sample size must be large enough to capture enough of the
        > meaningful detail. If only we had the validated research to tell us
        > definitively just how big to make the sample...
        >

        Well, actually, there is a validated method for calculating the sample
        sizes you need for different purposes. Wikipedia has a little gem of a
        write-up on it here: http://en.wikipedia.org/wiki/Statistical_power
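
        As a concrete illustration of the kind of calculation that article covers, here is a minimal power-based sizing sketch for detecting a difference between two proportions (say, two departments' rates of some observed behavior). The proportions, significance level, and power below are assumed purely for illustration.

        ```python
        # Approximate n per group for a two-sided two-proportion z-test at given
        # significance (alpha) and power, using the normal approximation.
        import math
        from statistics import NormalDist

        def n_per_group(p1, p2, alpha=0.05, power=0.80):
            z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
            z_beta = NormalDist().inv_cdf(power)
            variance = p1 * (1 - p1) + p2 * (1 - p2)
            return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

        print(n_per_group(0.20, 0.35))  # -> 136 observations in each group
        ```

        The practical point matches the thread: detecting anything but a large difference takes far more observations than an assessment team can usually collect.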