
closing without having to save

  • Nash, CJ (C)
    Message 1 of 23, Feb 4, 2009
      Hi there,
       
        Still fighting with that "save" problem.  Wonder if any of you know the "trick" to this.  If I open something, then decide that I didn't want it and try to close it, JMP forces me to do a "save as".  It doesn't give me an option just to close it without saving, or allow me to "save" material in the "normal" manner, as in updating what I already have done.  In the cases where I am working on something and just want to save the work, I can only do a "save as" instead of having JMP update what I've done.  The only "work around" I can figure out is to have a general "trash this" file to hoard all the junk I don't want to save, and another "figure out how to do this" file to save the work I would have rather updated to an already existing file.  The problem with this approach is that I have a gazillion files that I have to figure out how to name in a manner that I can actually find the most updated version of my work.
       
      So what's the secret?
       
      I've also stopped using Journal because it saves things in a redundant manner.  If I have something open that I want to save to the journal, it saves that item and EVERYTHING associated with it, even though I already have all those other things saved in the Journal.  So I end up with a great big journal with about 5 useful items and then the same "supporting" files repeated for each and every one of those 5 items.  Gobs of redundancy.  So I no longer use Journal.
       

      CJ

    • Nash, CJ (C)
      Message 2 of 23, Feb 4, 2009
        Howdy, hope everyone is staying warm. 
         
        Does anyone have any suggestions for good reference material?  I'm specifically looking for user-friendly instructional material for JMP.  I have a gob of JMP books and they have loads of good information about stuff,...if you have already created it.  What I need is a book that tells HOW to do stuff.  The JMP books all assume you already know how to create things.  I waste hours, sometimes days, just trying to find stuff.
         
        Ex: Prediction Profiler and prediction formula. 
         
        I've spent most of the day plus yesterday just trying to figure out how to make a prediction profiler.  The only thing I've been able to do is make a bunch of little ones with pieces of my data through the "Fit Model" platform. I need to know "how" to make one cumulative prediction profiler that has all my factors and all my responses in one profiler, not a bunch of separate ones.  I found Profiler under the graph area, but that wants me to write a prediction formula and I haven't got a clue how to do that or even where to find instructions on how to make one so I can learn it.
         
        Asking people here where I work is not helpful because they laugh and say "that's why we don't use JMP".  They're amazed that I'm spending so much time trying to learn this. 
         
        Any how-to books out there?  I've been all over the web site, and all that has is "promotional stuff" or information that's in the books, which isn't all that helpful to a novice.  I am trying not to call the help desk too often or they'll get tired of hearing from me.
         
        Thanks
         
        CJ Nash

        (847)-808-3525

         


      • Mark A Anawis
        Message 3 of 23, Feb 4, 2009

          Hi CJ,
                 Let's not give up on Mr. Journal just yet. He's our friend. As to your problems, what version of JMP and operating system are you using? I don't see these problems with JMP 7 on Windows XP. It might be easier to give me a call and talk through your problem.
                  Mark


          Mark A Anawis, MA, CSSBB
          Senior Scientist
          ADD
          On Market Quality Engineering
          Abbott
          100 Abbott Park Road
          Bldg. AP8B-3/Dept. 04Z7
          Abbott Park, IL 60064-3500
          U.S.A.
          office (847)-937-4347
          fax (847)-938-2219

          Mark.Anawis@...





        • Mark A Anawis
          Message 4 of 23, Feb 4, 2009

            Hi again CJ,
                    When you use "Fit Model", you need to put all your y response columns in the Model Specification window in the "Pick Role Variables" and all your x variables in the "Construct Model Effects" windows along with any interaction terms. Then when you run the model, under the "Least Squares Fit" arrow, select "Profilers" and "Profiler". You should get all the ys on the left side stacked up as rows and all the xs and interaction terms at the bottom lined up as columns. As to books, I would recommend getting a paper copy of JMP 6 and looking at the User Guide and Statistics and Graphics Guide books especially. Even though you probably have a higher version, it's easier than reading all those pdfs on the Help Menu. I think they stopped giving out paper books after JMP 6.
            Mark




          • Nash, CJ (C)
            Message 5 of 23, Feb 4, 2009
              Hahahaha I like that "Mr. Journal" thing.  I have visions of Mr. Potato head dancing in my mind! 
               
              I am heading out the door for a meeting as I type.  I'll give you a call tomorrow if that would be good for you.
               
              Thanks Mark!  :-)

              CJ





            • Mark A Anawis
              Message 6 of 23, Feb 4, 2009

                Hi CJ,
                     Thurs is fine from 1-2PM or 3-5PM.
                Mark




              • John A. Wass
                Message 7 of 23, Feb 4, 2009

                  CJ:  I'm shocked that you are NOT finding 'how to do it' in all of the JMP books.  As with SAS, there are 2 flavors of JMP books: i) the manuals (there are currently 5), which come either on paper or under Help/Books on the main menu bar, or ii) the "books by users" such as JMP Start Statistics, JMP for Basic Univariate and Multivariate Statistics, etc.  All of these include how-to's.

                   

                  Now as to your specific problem, the reference manuals will be of more help; use the DOE Manual and the Stats & Graphics Manual.  You use the Prediction Profiler for screening (i.e., one output, many inputs) and the Contour Profiler for the response surface (few inputs, more than one output); this will allow you to fine-tune more than one output at once.  Please see pages 31-37 in the DOE Manual for the Prediction Profiler and pp. 306-335 in the Stats & Graphics Manual for the Contour Profiler.  More on this is described under multivariate analysis in JMP for Basic Univariate and Multivariate Statistics, pp. 333-350.
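
                  If the prediction formulas have already been saved to the data table, both profilers can also be launched straight from the Graph menu or by script. A rough JSL sketch, where the formula column names are placeholders for whatever the fit actually saved:

                      // Hypothetical prediction formula columns previously saved from a model fit
                      Profiler( Y( :Pred Formula Y1, :Pred Formula Y2 ) );          // Graph > Profiler
                      Contour Profiler( Y( :Pred Formula Y1, :Pred Formula Y2 ) );  // Graph > Contour Profiler
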

                   

                  John Wass

                   




                • Nash, CJ (C)
                  Message 8 of 23, Feb 6, 2009
                    Good morning! 
                     
                    I talked to Mark yesterday and he was a BIG help.  I figured I'd tell you how we "fixed" my current problems in case you run into a "newbie" with the latest problems I've been experiencing.
                     
                    Really easy fix.  My preferences were set wrong.  In Reports, I had "close report action" set to save/autosave because I mistakenly thought it would work like a normal Windows "save": ask me where I wanted it saved the first time, update it thereafter instead of doing a "save as" each and every time, and just close something without saving when I click the red X to close.  Instead it was forcing me to save everything, even stuff I did not want to save.  Oops.  I now know that "save/auto save" prompts you with a "save as" for whatever you're trying to close, even if you just want to trash it.  So now I have it set to discard.
                     
                    The problem I was having with the prediction profiler was as follows.  I was trying to make one from Graph > Profiler, but it wanted me to add a prediction formula.  I didn't know anything about formulas and thought I had to create them from scratch; that's why it wouldn't let me do it.  Now I know how to save the necessary columns and formulas and how to place the model on the data table panel so I can find it again.  If this level of detail is in the books, it's scattered around through different sections, and I didn't have enough base knowledge in JMP to find all the pieces and put them together. 
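
                    For reference, a minimal JSL sketch of that last piece, assuming a hypothetical formula column named Pred Formula Yield (whatever name the fit's Save Columns > Prediction Formula command actually creates):

                        // Graph > Profiler wants a formula column, not raw data columns.
                        // Save the fitted model back to the table first (Save Columns >
                        // Prediction Formula), then point the standalone Profiler at it:
                        Profiler( Y( :Pred Formula Yield ) );
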
                     
                    I also didn't know how to get it to generate the profiler from stuff like Fit Model, because the book assumes you know to go to Fit Model.  I thought all the examples in the book were generated through the option on the Graph menu. (Yes, a novice does miss the obvious; that's why the "Dummies" guides are so popular.)
                     
                    I am making my own "JMP for Dummies" manual as I learn this stuff, though.  So it's not all going to waste.  And as I dig through all the information trying to figure out how to do stuff, I find cool items by accident,...and add them to my "Dummies document".
                     
                    Oh, and you'll be happy to know,....ugh do I even tell you,...(cringe)  yes, I've found a couple "script" things that I've actually used...    So the toe is in the water,...thanks to you guys.  :-)
                     

                    CJ

                    (847)-808-3525

                     



                  • Nash, CJ (C)
                    Message 9 of 23, Feb 12, 2009
                      Confound,...that's a weird word, so I looked it up to find out exactly what it meant.  I got...confuse, throw, fox, befuddle, discombobulate (really I got that), baffle, bedevil, bewilder, mixed up...
                       
                      So here's my question.
                       
                      If confound means confuse, why do we use confound in statistics as opposed to confuse?  Everybody knows what confuse means, but confound,...well that's just confounding.  (Ok ok quit booing)  lol
                       
                      Really though, is there something that the word confound addresses that confuse doesn't?
                       
                      Thanks!
                       

                      CJ

                    • Mark A Anawis
                      Message 10 of 23, Feb 12, 2009

                        Confound it, CJ! You're over-analyzing everything. lol. Seriously, in DOE, confounding, or aliasing, happens when factor and/or interaction effects cannot be separated. For example, if you perform a DOE with 3 factors (A, B, C) at 2 levels (low, high), you would need 2^3 = 8 runs to estimate all factors and interactions: A, B, C, AB, AC, BC, ABC. However, if you run a half-factorial, then you need only 2^(3-1) = 4 runs, but the penalty you pay is that you get confounding of factor effects, so that you are estimating A+BC, B+AC, C+AB and you don't get ABC. What this means is that if the A+BC effect is significant, you cannot determine in this half-factorial design whether it is due to A or BC or a combination of the 2, because A is CONFOUNDED with BC. The same holds for B+AC and C+AB. If you later decide that you want to distinguish between these, you can run a foldover (the other half of the half-factorial) to produce the full factorial.
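
                        A quick numeric sketch of that aliasing in JSL, using the textbook coded settings for the half fraction with defining relation I = ABC (nothing here comes from real data):

                            // 2^(3-1) half fraction: choose the generator C = A*B
                            A = [-1, 1, -1, 1];
                            B = [-1, -1, 1, 1];
                            C = A :* B;                // elementwise product: C = [1, -1, -1, 1]
                            Show( A == (B :* C) );     // prints a matrix of all 1s: the A column is
                                                       // identical to B*C, so A is aliased with BC
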
                        There are other situations besides DOE where confounding is used. For example, there is a confounding variable which is a variable that is associated with both the outcome and the exposure variable. A classic example is the relationship between heavy drinking and lung cancer. Here, the data should be controlled for smoking as it is related to both drinking and lung cancer.
                         I hope this wasn't too long-winded an explanation.
                        Mark




                      • Nash, CJ (C)
                        Message 11 of 23, Feb 13, 2009
                          Actually Mark, your answer was successful on two fronts.  One,...your opening statement gave me a really nice belly laugh, and those are ALWAYS good.  Second, I think you're getting used to the simple way my mind works, lol, because your explanation was easy for me to understand.  Good analogies too; those usually help.  ;-)
                           
                           
                          Thanks  :-)

                          CJ 

                           




                        • schubie728
                          Message 12 of 23, Feb 13, 2009
                            Of course, this makes it sound like confounding or aliasing is always
                            bad, but in actuality we can use it to our advantage when we
                            investigate. When we run DOEs or any experiment, we should really be
                            thinking about and testing specific theories.

                            In the A, B, C 3-factor experiment, I should have theories about the
                            main effects and various interactions of the 3-factors BEFORE I run
                            the experiment. If A is confounded with the B*C interaction, but I
                            have a theory grounded in physics or chemistry that the B*C
                            interaction is unlikely, then if (A)+(B*C) comes up as an important
                            effect, then I can say, "well it sure looks like A is the reason why."

                            Now there may be some cases where we are unsure about a theoretical
                            basis for a potential interaction. For example, if we cannot rule out
                            the A*C interaction using physics/chemistry, then if the alias string
                            (B)+(A*C) comes up 'big', then we can break that alias string apart in
                            our next experiment.

                            Aliasing/confounding helps us save resources and learn faster, and a
                            sequential approach ensures that we head in the direction of greatest
                            improvement.
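
                            As a sketch of that sequential idea (hypothetical coded settings, continuing the 2^(3-1) example above): the foldover flips the sign of the generator, which is exactly what breaks the A = BC alias string:

                                // Foldover: the other half of the 2^(3-1) fraction uses C = -(A*B)
                                A = [-1, 1, -1, 1];
                                B = [-1, -1, 1, 1];
                                C = -(A :* B);             // generator sign flipped
                                Show( A == (B :* C) );     // prints all 0s: in this half B*C equals -A,
                                                           // so pooling both halves lets A and BC be
                                                           // estimated separately
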

                            - Sean



                          • Nash, CJ (C)
                            Message 13 of 23, Feb 13, 2009
                              For the next installment of "CJ wants to know...",
                               
                              This question stems from a need to do something other than give the deer-in-the-headlights look when I'm asked, "How do you know JMP calculated the Prediction Profiler accurately?" 
                               
                              Prediction Profiler: how does JMP come up with the resulting interactive table, or whatever you call it?  Of course, I know that the real, true way to see how accurate it is is to run verification samples based on the settings it predicts, but how does it come up with those settings?  I don't need the actual math; that would bounce so far over my head that I'd never feel the breeze.  I just need to know what it uses.
                               
                              Here's what I think...   Please let me know where I'm off base and if so steer me in the right direction or fill in the gaps.
                               
                              I am assuming (and we all know how flawed that can be) that JMP uses the following data to create an accurate Profiler. (Accurate relative to the data put in.)
                               
                              >Information resulting from the variation in the replicates.
                              >The order of importance you set in the Profiler.
                              >If one response has huge variation and another has very small variation, I'm assuming it takes that into account.
                               
                              Set me straight: exactly what does the Profiler use when it's creating the Prediction Profiler?
                               
                              Thanks and everyone have a happy and fun Valentine's Day.
                               

                              CJ

                               
                            • Nash, CJ (C)
                              Message 14 of 23, Feb 13, 2009
                                You are very correct.  It's pretty important in my work.  I work in analytical and synthesis chemistry, and interaction isn't automatically bad, but I need to know that there is interaction.  Sometimes it's good, sometimes not.   
                                 
                                Ex: One project I was working on gave good but very unexpected results.  Even though the results were much better than expected, we really had to pinpoint where that unexpected result had taken a left turn from what we expected, due to registration regulations.  Being able to identify the exact confounding element was crucial, and we were happy in the end to find out that we wouldn't have unexpected regulatory work to do.  Using DOE saved us a lot of time and money trying to pinpoint that little detail. 
                                 
                                So being able to detect and identify those little confounding details is pretty important here in the lab.
                                 

                                CJ



                                From: GLJUG@yahoogroups.com [mailto:GLJUG@yahoogroups.com] On Behalf Of schubie728
                                Sent: Friday, February 13, 2009 8:37 AM
                                To: GLJUG@yahoogroups.com
                                Subject: [GLJUG] Re: Definition,..curiosity

                                Of course, this makes it sound like confounding or aliasing is always
                                bad, but in actuality we can use it to our advantage when we
                                investigate. When we run DOEs or any experiment, we should really be
                                thinking about and testing specific theories.

                                In the A, B, C 3-factor experiment, I should have theories about the
                                main effects and various interactions of the 3-factors BEFORE I run
                                the experiment. If A is confounded with the B*C interaction, but I
                                have a theory grounded in physics or chemistry that the B*C
                                interaction is unlikely, then if (A)+(B*C) comes up as an important
                                effect, then I can say, "well it sure looks like A is the reason why."

                                Now there may be some cases where we are unsure about a theoretical
                                basis for a potential interaction. For example, if we cannot rule out
                                the A*C interaction using physics/chemistry, then if the alias string
                                (B)+(A*C) comes up 'big', then we can break that alias string apart in
                                our next experiment.

                                Aliasing/confoundin g helps us save resources and learn faster, and a
                                sequential approach ensures that we head in the direction of greatest
                                improvement.

                                - Sean

                                --- In GLJUG@yahoogroups. com, Mark A Anawis <Mark.Anawis@ ...> wrote:
                                >
                                > Confound it, CJ! You're over-analyzing everything. lol. Seriously,
                                in DOE,
                                > confounding, or aliasing, is the measurement of factors and/or
                                interaction
                                > effects which cannot be separated. For example, if you perform a DOE
                                with
                                > 3 factors (A,B,C) at 2 levels (low, high), you would need 2^3=8 runs to
                                > estimate all factors and interactions: A, B, C, AB, AC, BC, ABC.
                                However,
                                > if you run a half-factorial, then you need only 2^(3-1)= 4 runs, but
                                the
                                > penalty you pay is that you get confounding of factors effects so
                                that you
                                > are estimating A+BC, B+AC, C+AB and you don't get ABC. What this
                                means is
                                > that if the A+BC effect is significant you can not determine in this
                                > half-factorial design whether it is due to A or BC or a combination
                                of the
                                > 2 because A is CONFOUNDED with BC. The same holds for B+AC and C+AB. If
                                > you later decide that you want to distinguish between these, you can
                                run a
                                > foldover (the other half of the half-factorial) to produce the full
                                > factorial.
                                > There are other situations besides DOE where confounding is used. For
                                > example, there is a confounding variable which is a variable that is
                                > associated with both the outcome and the exposure variable. A classic
                                > example is the relationship between heavy drinking and lung cancer.
                                Here,
                                > the data should be controlled for smoking as it is related to both
                                > drinking and lung cancer.
                                > I hope this wasn't too long-winded an explanation.
                                > Mark
                                >
                                >
                                >
                                > Mark A Anawis, MA, CSSBB
                                > Senior Scientist
                                > ADD
                                > On Market Quality Engineering
                                > Abbott
                                > 100 Abbott Park Road
                                > Bldg. AP8B-3/Dept. 04Z7
                                > Abbott Park, IL 60064-3500
                                > U.S.A.
                                > office (847)-937-4347
                                > fax (847)-938-2219
                                > Mark.Anawis@ ...
                                >
                                >
                                >
                                >
                                > "Nash, CJ (C)" <cnash2@...>
> Sent by: GLJUG@yahoogroups.com
> 02/12/2009 02:06 PM
> Please respond to
> GLJUG@yahoogroups.com
>
>
> To
> <GLJUG@yahoogroups.com>
> cc
>
> Subject
> [GLJUG] Definition,...curiosity
                                >
                                >
                                >
                                >
                                >
                                >
> Confound,... that's a weird word, so I looked it up to find out exactly what it meant. I got...confuse, throw, fox, befuddle, discombobulate (really I got that), baffle, bedevil, bewilder, mixed up...
>
> So here's my question.
>
> If confound means confuse, why do we use confound in statistics as opposed to confuse? Everybody knows what confuse means, but confound,... well that's just confounding. (Ok ok quit booing) lol
>
> Really though, is there something that the word confound addresses that confuse doesn't?
>
> Thanks!
>
> CJ
>
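
A small numeric sketch of the aliasing described above, in plain Python (just the design columns, not JMP output): with the half-fraction generator C = A*B, the A column is identical to the elementwise product of the B and C columns, which is exactly why the A main effect cannot be separated from the BC interaction in that design.

    import numpy as np
    from itertools import product

    # Full factorial in A and B at two levels (-1, +1); the half fraction sets C = A*B
    AB = np.array(list(product([-1, 1], repeat=2)))   # 4 runs
    A, B = AB[:, 0], AB[:, 1]
    C = A * B                                          # the half-fraction generator

    design = np.column_stack([A, B, C])
    print(design)

    # Aliasing: the A column equals the elementwise product B*C, so a regression
    # cannot tell the A main effect apart from the BC interaction (and so on).
    print(np.array_equal(A, B * C))   # True
    print(np.array_equal(B, A * C))   # True
    print(np.array_equal(C, A * B))   # True (by construction)
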

                              • Mark A Anawis
                                Hi CJ, I can tell you are doing a lot of thinking about this or you re getting a lot of questions. Using Fit Model, you can use observed values or values from
                                Message 15 of 23 , Feb 13, 2009

                                  Hi CJ,
       I can tell you are doing a lot of thinking about this, or you're getting a lot of questions. Using Fit Model, you can use observed values or values from designed experiments to build a model. You can then use several statistics to tell you how well your model explains the data. RSquare tells you the proportion of the variation in the response around the mean that can be attributed to terms in the model rather than to random error. RSquare Adj adjusts this to make it more comparable across models using different numbers of parameters. The closer these are to 1, the better. The Analysis of Variance table tells you whether you have terms in the model which are different from zero (you want p < 0.05). The Lack of Fit table tells you if you have the wrong form for some regressor or too few terms in the model; you want this p >> 0.05. You may not get this test if you don't have replicated points or if your model is saturated. Your parameter and effect test tables tell you which parameters and effects are significant (p < 0.05). The residual plot may be one of the best ways to see how good your model is, since it shows the difference between the predicted and actual values; a perfect fit would have all points along residual = 0.
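
As a rough illustration of how RSquare, RSquare Adj, and residuals are computed, here is plain Python/numpy on made-up data (not JMP itself, just the textbook formulas):

    import numpy as np

    # Made-up data: one factor x, response y with some noise
    rng = np.random.default_rng(0)
    x = np.linspace(0, 10, 20)
    y = 2.0 + 0.5 * x + rng.normal(0, 0.5, x.size)

    # Least-squares fit of y = b0 + b1*x
    X = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    yhat = X @ coef
    resid = y - yhat                      # residuals: actual minus predicted

    n, p = X.shape                        # runs and parameters (including intercept)
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot
    r2_adj = 1 - (1 - r2) * (n - 1) / (n - p)

    print(f"RSquare = {r2:.3f}, RSquare Adj = {r2_adj:.3f}")
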
       Now, moving to the Profiler: it does simultaneous calculations on multiple responses, using a separate model for each response. If you want to optimize the output, you use desirability, where you add maxima, minima, targets, or combinations of these, along with importance weights. You don't have to use desirability (optimization) to use the Profiler. The desirability objective function for multiple-response optimization is based on the geometric mean of the transformed responses, and JMP uses smooth functions to construct the individual desirabilities.
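
And a toy sketch of the geometric-mean desirability idea in plain Python. The straight-line "larger is better" ramp below is an assumption made for illustration; JMP's own desirability functions are smoother and more flexible.

    import numpy as np

    def desirability_larger_is_better(y, low, high):
        """Map a response to [0, 1]: 0 at or below 'low', 1 at or above 'high'."""
        return np.clip((y - low) / (high - low), 0.0, 1.0)

    # Two hypothetical responses at one candidate factor setting
    d_yield  = desirability_larger_is_better(y=82.0, low=70.0, high=95.0)
    d_purity = desirability_larger_is_better(y=98.5, low=97.0, high=99.5)

    # Overall desirability is the geometric mean of the individual desirabilities,
    # so a single very poor response drags the overall score toward zero.
    overall = (d_yield * d_purity) ** 0.5
    print(f"d_yield = {d_yield:.2f}, d_purity = {d_purity:.2f}, overall = {overall:.2f}")
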
                                         Hope this helps,
                                          Mark

                                  Mark A Anawis, MA, CSSBB
                                  Senior Scientist
                                  ADD
                                  On Market Quality Engineering
                                  Abbott
                                  100 Abbott Park Road
                                  Bldg. AP8B-3/Dept. 04Z7
                                  Abbott Park, IL 60064-3500
                                  U.S.A.
                                  office (847)-937-4347
                                  fax (847)-938-2219

                                  Mark.Anawis@...







                                  "Nash, CJ (C)" <cnash2@...>
                                  Sent by: GLJUG@yahoogroups.com

                                  02/13/2009 08:49 AM

                                  Please respond to
                                  GLJUG@yahoogroups.com

                                  To
                                  <GLJUG@yahoogroups.com>
                                  cc
                                  Subject
                                  [GLJUG] How do Profilers determine...





                                  For the next installment of "CJ wants to know...",

                                   
This question stems from a need to do something other than give the deer-in-the-headlights look when I'm asked, "How do you know JMP calculated the Prediction Profiler accurately?"
                                   
Prediction Profiler: how does JMP come up with the resulting interactive table, or whatever you call it? Of course, I know that the real way to see how accurate it is is to run verification samples at the settings it predicts, but how does it come up with those settings? I don't need the actual math; that would bounce so far over my head that I'd never feel the breeze. I just need to know what it uses.
                                   
                                  Here's what I think...   Please let me know where I'm off base and if so steer me in the right direction or fill in the gaps.
                                   
                                  I am assuming (and we all know how flawed that can be) that JMP uses the following data to create an accurate Profiler. (Accurate relative to the data put in.)
                                   
                                  >Information resulting from the variation in the replicates.
                                  >The order of importance you set in the Profiler.
>If one response has huge variation and another has very small variation, I'm assuming it takes that into account.
                                   
Set me straight: exactly what does the Profiler use when it's creating the Prediction Profiler?
                                   
                                  Thanks and everyone have a happy and fun Valentine's Day.
                                   

                                  CJ

                                   


                                • Nash, CJ (C)
                                  OK Last question for the week,...promise. How do you regenerate the DOE Dialog in the Data Table Box. I didn t know what that was when I made my first DOE
                                  Message 16 of 23 , Feb 13, 2009
                                    OK Last question for the week,...promise.
                                     
How do you regenerate the "DOE Dialog" in the data table box?  I didn't know what that was when I made my first DOE, and I deleted it.
                                     
                                    So now how do I go back into this design and find it so I can put it back in the Data Table box?
                                     
                                    Thanks
                                     

                                    CJ

                                  • Mark A Anawis
                                    It s a good thing I already got that box of chocolates for my wife for Valentine s Day or I would have left by now, CJ! OK, here goes, now don t panic when I
                                    Message 17 of 23 , Feb 13, 2009

It's a good thing I already got that box of chocolates for my wife for Valentine's Day, or I would have left by now, CJ! OK, here goes; now don't panic when I use the word "script". I heard that! You need to still have the "Custom Design" window open (or the similar DOE window). If you have closed it, then you'll need to start over, since the data table keeps only the terms for creating a Fit Model. You can check whether the "Custom Design" window is still there by selecting "Window" at the top and seeing if "DOE" is still listed.

So, while you are still in the "Custom Design" window, open the red arrow next to the "Custom Design" oval and select "Save Script to Script Window". Drag your cursor over the entire script to select it, right-click on it and select "Copy". Close the Script Window without saving it. If you have not already done so, generate the data table containing the design from the "Custom Design" window. In the data table, select the red arrow next to the "Custom Design" oval and select "New Property/Script for Custom Design". When the "New Property/Script for Custom Design" window opens, put your cursor in the "Script" section, right-click and select "Paste". In the "Name" section, enter a descriptive name such as "CJs fabulous Custom Design" and select "OK".

You should now see a script with this name on the left side of the data table. When you select the red arrow next to that new script and select "Run Script", you will regenerate the "Custom Design" window.
                                      Hope this was what you needed. Have a great weekend,
                                      Mark


                                      Mark A Anawis, MA, CSSBB
                                      Senior Scientist
                                      ADD
                                      On Market Quality Engineering
                                      Abbott
                                      100 Abbott Park Road
                                      Bldg. AP8B-3/Dept. 04Z7
                                      Abbott Park, IL 60064-3500
                                      U.S.A.
                                      office (847)-937-4347
                                      fax (847)-938-2219

                                      Mark.Anawis@...





                                    • Nash, CJ (C)
                                      Question on the VIF s, I m doing an Augmentation. I did the RSM (three factors, A, B, and C) and the resulting data indicated that we may be able to use
                                      Message 18 of 23 , Feb 18, 2009

Question on the VIFs: I'm doing an augmentation.  I did the RSM (three factors, A, B, and C), and the resulting data indicated that we may be able to use Factor B at a lower volume than expected when paired with Factor A.  Both products were factors in the original RSM.  So, I'm doing an augmentation.  Two of the factors (A and C) are remaining unchanged, and the values for B need to be lowered.  My goal is to run new test runs and then take another look at the RSM, which will contain all three original factors with more runs, of course, but with factor B at a wider spread than in the original RSM.

Problem: when I try to develop the augmentation model, I get good numbers for G Efficiency (above 50) and the Fraction of Design Space plot stays below 1.0, but my VIFs are throwing me. No matter how many runs I add, I get consistently bad VIF numbers for the effects of factor B (100, 440) and B*B. Those two are 6.9 and 6.8 respectively in the model I am looking at right now, but they have been anywhere from 7.0 and higher (sometimes REALLY higher) on others I've run. All the other VIFs are consistently lower than 2. I have to be doing something wrong, because I ran a model with 100 runs and it gave me a G Efficiency of 45, the Fraction of Design Space plot gave me a sigmoidal curve, and the two B terms were 3.6 and 3.6. 100 is WAY more runs than I need; I just wanted to see if there WAS a number of runs that would give me a good model. JMP is trying to tell me something's wrong, but I've been unable to figure it out thus far.

Another question: when I do the Fit Model for these augmented designs, the Fit Model box that comes up where you input your factors and responses includes "Block" in the model effects. Block isn't a factor or a response; it shows up after I hit "Group new runs into separate block".  I've run the models both with it in and with it removed, because I'm not sure whether it should be there. I don't think so, because when I leave it in and look at the VIF numbers, "Block" gets a VIF consistently above 12, B*B is above 6.00, and B(100, 450) is in line with the other effects, below 2.

                                         

Thanks for any suggestions you may be able to offer.  Remember, though: mathematically, you guys have more functional brain cells tuned to this stuff than I, so Stats for Dummies please.  lol  :-)

                                        LOADS of thanks! 

                                        CJ     (847)-808-3525

                                         

                                      • Mark A Anawis
                                        Hi CJ, High VIF is a sign that you have collinearity of factors. That is, 2 of your factors are related to one another such as if you had length and surface
                                        Message 19 of 23 , Feb 19, 2009

                                          Hi CJ,
   High VIF is a sign that you have collinearity among factors. That is, two of your factors are related to one another, such as if you had length and surface area as x variables. You can often see collinearity in the leverage plots, where it appears as a scrunching up of points along the x-axis. Other ways to detect it: examine correlations and associations between variables; watch for regression coefficients that change wildly when variables are included or excluded; look for large standard errors of the regression coefficients; and note predictor variables with strong relationships to the response that don't show significance. The remedy is to remove one of the variables.
  As to blocking, you do want to check the blocking variable, because if your augmented data set shows a statistically significant difference between block 1 (the original DOE) and block 2 (the augmented runs), then there is some variable that is not part of your design and that you are not controlling.
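
For anyone curious where the VIF numbers themselves come from, here is a bare-bones sketch in plain Python/numpy on made-up factor columns (this is the textbook formula, not how JMP is implemented): each model column is regressed on the others, and VIF = 1 / (1 - R^2) for that regression.

    import numpy as np

    def vif(X):
        """VIF for each column of X: regress it on the remaining columns."""
        n, k = X.shape
        out = []
        for j in range(k):
            y = X[:, j]
            others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
            coef, *_ = np.linalg.lstsq(others, y, rcond=None)
            resid = y - others @ coef
            r2 = 1 - resid @ resid / np.sum((y - y.mean()) ** 2)
            out.append(1.0 / (1.0 - r2))
        return np.array(out)

    # Hypothetical design columns: A and C vary freely, but B is nearly a copy of A,
    # so A and B pick up large VIFs while C stays near 1.
    rng = np.random.default_rng(1)
    A = rng.uniform(-1, 1, 30)
    C = rng.uniform(-1, 1, 30)
    B = A + rng.normal(0, 0.1, 30)          # collinear with A on purpose
    print(vif(np.column_stack([A, B, C])))
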
                                             Hope this helps,
                                             Mark

                                          Mark A Anawis, MA, CSSBB
                                          Senior Scientist
                                          ADD
                                          On Market Quality Engineering
                                          Abbott
                                          100 Abbott Park Road
                                          Bldg. AP8B-3/Dept. 04Z7
                                          Abbott Park, IL 60064-3500
                                          U.S.A.
                                          office (847)-937-4347
                                          fax (847)-938-2219

                                          Mark.Anawis@...





                                        • Nash, CJ (C)
                                          Yes that is very helpful. I knew that the VIF showed collinearity, but the information about seeing it is very useful information that I ll add to my
                                          Message 20 of 23 , Feb 19, 2009
Yes, that is very helpful.  I knew that the VIF showed collinearity, but the information about "seeing" it is very useful, and I'll add it to my growing "CJ's Dummies Guide to DOE".
 
I just couldn't figure out what I was doing that was creating the collinearity.  Found out, though... and of course, with every discovery there arises another question.
 
Seems I WAS trying to make the Augment feature do something it wasn't designed to do.  I now know that Augment can be used to add runs within the factor ranges that I set for the original design, i.e., factor A at a range of 2 to 4 and factor B at a range of 20 to 40.  However, it wasn't designed to extend the range of a factor in the manner I was attempting.  I was trying to run factor A over a range of 2 to 8 instead of at its original range of 2 to 4.  So JMP was looking at ALL the information for factor B as a whole (the original 15 runs plus the 8 runs added through augmentation), but it was only using the range designated for the added 8 runs for factor A... because that is what I was unknowingly telling it to do: look at 22 runs for factor B and only 8 for factor A.
 
So, I went in and changed the coding for factor A to include the range of all the combined runs, and that took care of the problem.  I don't know if you're supposed to do this, but it worked.
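
A quick numeric illustration of the coding arithmetic involved, in plain Python with hypothetical values (not a claim about exactly what JMP computes internally):

    # Factor coding maps the factor range [low, high] onto [-1, +1]:
    #   coded = (value - center) / half_range
    # Hypothetical factor A: original range 2..4, extended range 2..8.
    def coded(value, low, high):
        center = (low + high) / 2.0
        half_range = (high - low) / 2.0
        return (value - center) / half_range

    # The same run at A = 4 under the two codings:
    print(coded(4.0, 2.0, 4.0))   #  1.00 -> a high corner of the original design
    print(coded(4.0, 2.0, 8.0))   # -0.33 -> only a third of the way across the new range

    # If the original runs keep the old coding while the added runs use the new one,
    # the columns built from A (and A*A) mix two different scales, which can make
    # them look far more correlated than the design really is.
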
                                             
So here is the question that this spurs: if I run an experiment and the resulting data tells me that I set the range of one factor too narrow, what is the correct way to fix it? Consider that you don't have the resources to start the whole thing over, just enough to add a few runs to complete the picture.
 
Possible solutions that come to mind (though I have no clue if they'd work properly) are:
 
Sit down at the computer, make a new design which encompasses the correct ranges, then go into the data table and change the first 15 values that JMP generates for each factor: input the factor values and data obtained from the completed DOE, and complete the experiments for the remaining runs.  The problems that come to mind: you would have to be careful about which JMP-designated runs you replaced with the actual runs already completed, and would changing the values of the factors in the data table screw up the math and give less accurate results?
 
Or design a matching DOE with the requirements of the new factor range, join the data to the original DOE, and proceed.  The problem with this is that it seems as if I'd run into the same analysis issue that extending the range by changing the coding might present.
                                             
                                            Input?.....  
                                             
                                            Thanks :-)

                                            CJ

                                            (847)-808-3525

                                             

