P values and the lot of editors
In my experience (as a section editor) most colleagues are aware of the limitations of relying exclusively on P values. These simply indicate whether a significance test passes or fails and do not necessarily indicate practical/clinical significance or meaningfulness. Confidence intervals of differences provide improved measures of uncertainty, and effect sizes indicate the magnitude of differences and changes. My personal preference is to have, where possible, all three. By doing so, the reader is provided with as much information as possible to judge whether or not authors' claims about differences and changes can be substantiated.
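To make the point concrete, here is a minimal sketch (not from the post; the data and function name are invented for illustration) of reporting all three quantities for a two-group comparison: a two-sided p value, a 95% confidence interval of the difference, and Cohen's d as the effect size. For simplicity it uses a normal (z) approximation, which is reasonable for large samples; for small samples a t distribution would be the usual choice.

```python
import math
from statistics import mean, stdev

def report(group_a, group_b):
    """Return (p value, 95% CI of the difference, Cohen's d) for two groups."""
    na, nb = len(group_a), len(group_b)
    ma, mb = mean(group_a), mean(group_b)
    sa, sb = stdev(group_a), stdev(group_b)
    diff = ma - mb
    # Standard error of the difference between independent means
    se = math.sqrt(sa**2 / na + sb**2 / nb)
    z = diff / se
    # Two-sided p value via the normal approximation (math.erf gives the CDF)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    # 95% confidence interval of the difference
    ci = (diff - 1.96 * se, diff + 1.96 * se)
    # Cohen's d: difference scaled by the pooled standard deviation
    pooled = math.sqrt(((na - 1) * sa**2 + (nb - 1) * sb**2) / (na + nb - 2))
    d = diff / pooled
    return p, ci, d
```

A reader given all three numbers can see not just whether a difference is "significant" but how large it is and how precisely it has been estimated, which is the substance of the argument above.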
Appeals can and do work, and Ian offers useful advice. For those of you who are unaware of Day and Gastel's (2006) How to Write and Publish a Scientific Paper (6th edn), published by Cambridge University Press (ISBN 0-521-671767-1), or O'Connor's (1991) Writing Successfully in Science, published by Chapman and Hall (ISBN 0-412-44630-8), I urge you to seek them out. For those of you who are aware of them, the occasional refresher is recommended.
They are excellent texts and should adorn the bookshelves of all researchers and scientific writers. In particular, Day and Gastel have a chapter entitled "The Review Process (How to Deal with Editors)" that is germane to the topic of this thread.
Editors and their reviewers spend considerable time fulfilling their task and (usually) provide extensive feedback to authors. Some authors feel aggrieved that their manuscripts have been rejected, while many reviewers castigate editors for wasting their time with irredeemably bad submissions. As with most things, the majority of cases lie somewhere between these extremes, but editors constantly have to adjudicate.
Edward M Winter
Professor of the Physiology of Exercise
The Centre for Sport and Exercise Science
Sheffield Hallam University
Collegiate Crescent Campus
SHEFFIELD S10 2BP
Tel: 0114 225 4333 (International +44 114 225 4333)
Fax: 0114 225 4341 (International +44 114 225 4341)