Re: Bias in research, and a solution
- I have also followed the thread over recent days. I have to say that Carl Foster's response is spot on for me.
I just cannot see how adding a formal proposal review process would improve either system throughput or the ultimate quality of publications, to be honest. A new review step gets added, to be performed by the same scientists who, we argue, are already failing to provide timely reviews of manuscripts. Being one of the links in this chain, I don't want a system that further increases my review load. As it is, I figure that with a 25% acceptance rate I "owe" the system about 4 thoughtful reviews per accepted manuscript of my own. If my reviewing load goes up, I know that the quality of my reviews will go down.
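The "4 reviews per accepted manuscript" figure is simple arithmetic, and a minimal sketch makes the underlying assumption explicit (the function name, and the assumption of one review contributed per submission, are my own illustration, not the original poster's):

```python
def reviews_owed(acceptance_rate: float, reviews_per_submission: float = 1.0) -> float:
    """Back-of-envelope reviewer 'debt': each accepted manuscript of mine
    represents, on average, 1 / acceptance_rate submissions, and each
    submission consumed `reviews_per_submission` reviews from the community."""
    return reviews_per_submission / acceptance_rate

# At a 25% acceptance rate, assuming one review contributed per submission:
print(reviews_owed(0.25))  # 4.0 reviews owed per accepted manuscript
```

Any extra review step per submission, such as the proposed proposal review, multiplies this debt, which is the point being made.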
As an author, I have recently suffered the indignity of having what I thought was some of my most relevant work thrown back in my face by an associate editor who clearly did not even read my manuscript abstract. This rocked my confidence in the peer review system to the core. But, I don't see how that problem goes away with the current proposal, for all of its good intentions. Too many studies, too few print pages.
Faculty of Health and Sport
Service Box 422
University of Agder
+47 3814 1347
+47 9161 4587
- Hello all:
I’ll chip in my opinion for what it’s worth. Please accept my apologies for typos and words out of place, as I am doing this on an iPhone in an airport.
I’m sure I will abuse the English language someplace along the way.
It certainly is an interesting discussion, and it is discussions such as these that often lead to better science and better publications.
Overall, the ideas presented are not too far-fetched, but how does one make decisions regarding studies that vary from their original intent? Let me give you two examples that we have recently had to deal with.
One. We recently finished, and will have published in JAMA, the results of a large clinical intervention trial examining hemoglobin A1c in four treatment groups with type 2 diabetes: control, aerobic training only, resistance training only, and a combination of both. The intervention lasted 9 months. N = 262.
Easy enough so far, but now let's look at it from start to finish. It took 7 years to pull off the study: 2 years of NIH applications, 4+ years of intervention, and half a year for data cleaning, analysis and publication. During the course of the study we said we would recruit folks with an HbA1c of 7.5; yet by the end of the study we had to accept those with 7.0. Why? In the year 2000, NHANES data indicated that 7.5 was the accepted threshold for being too high. During the course of the study, however, physicians appeared to get on board, and proper medications brought patients under much better control, such that current NHANES data now attest that 7.0 is high.
Question. Did we meet our study objective and does our paper still get published?
(We did register it with clinicaltrials.gov, which anyone can use regardless of the study. So perhaps the push would be to have investigators register trials there – it is free – and not reinvent a wheel already in place.)
Overall, I guess we did not do so badly, JAMA is certainly good. But consider the context of the situation.
Critics? At this point someone will probably say, 'Well, this isn't sports science, it is a clinical trial using exercise as a modality.' Though I would argue to the contrary, let me put forth a second scenario.
Two. Over the last year I performed a study examining time trial performance relative to the administration of an herb: control vs. treatment. Intended enrollment was N = 20, yet only 14 finished, as the study was too strenuous for many experienced riders; it required intense 'pre-exhaustion' bouts of exercise.
Did I pass or did I fail the ‘pre-pub’ agreement?
So, while what is proposed is novel and important, when do the criteria of success become muddled by objectivity vs. subjectivity?
Assorted. In my opinion, the journal industry is in an interesting position. Most journals still insist on publishing in hard copy rather than switching to an e-publishing-only format. One could easily argue both sides of the publication agenda, as publishing houses need to make money. However, the world is rapidly moving toward e-publishing, and it is only a matter of time before e-publishing overtakes hardcopy publishing and scientists move toward rapid publication to secure their careers rather than wait out the delay of hardcopy publication. It is also more advantageous on the whole, as 'important' findings are rapidly distributed to areas in need of answers (e.g., the most severe diseases).
MSSE and BJSM both face this problem. MSSE – in my opinion – handles it better and publishes in hardcopy more rapidly. Using BJSM as an example, we had an article e-published in April 2008, yet not 'officially' published until October 2009. I'm sorry, but that is simply too long to wait, and I am sure it's a massive headache for the editors.
However, BJSM, again in my opinion, is such a lovely journal because it publishes so broadly and on so many diverse topics that MSSE wouldn't touch. Personally, I don't think BJSM should change its publishing goal, but rather use the opportunity to expedite publication by e-publishing articles at a more rapid pace. Yes, I know they have an e-first route, but please don't water down your publications by excluding articles on 'Generalised Ligament Laxity and Shoulder Dislocations after Sports Injuries.' This is great stuff.
Reviewing. This is such a thankless task, and I am afraid the acerbic reviewers will never go away. This falls under the direct responsibility of the editor. Quite frankly, there is no justifiable reason for mean-spirited reviews; it is plainly apparent that reviewers sometimes champion their own agenda so that they can protect their little 'research cartel' and not be competed against. Further, some reviewers simply do not understand the difference between a large clinical trial and a small exercise physiology trial.
Again, borrowing from our group's experience, in the past few years we have published a number of papers from a study known as DREW (N = 462). Our first paper went to JAMA; subsequent papers went to other journals. Over a 6-month period I submitted 2 papers to MSSE, both of which ended up going to the same reviewer, who wrote back scathing reviews and accusations of data mining, dividing publications, etc.
Laced between his derisive comments were good points; however, the editor chose to let the 'crap' run through as well. Firstly, these types of reviews should never be permitted to see the light of day, and secondly, if they are, they should be sanitized.
A particularly amazing comment from this reviewer was that our subsequent papers could have been covered online in a table. Given that there are only a handful of people performing trials of this size, I am fairly certain that this individual has never worked with a trial of similar size and was fairly 'ignorant' of how much data sits in one of these data sets.
All right then... enough bitching, moaning, whinging, weeping and gnashing of teeth. Hopefully, someplace between the lines, you get my overall points.
1. Why not work together with the CONSORT group to orchestrate a clinicaltrials.gov-like repository, whereby CONSORT journals insist that protocols be deposited prior to running the trial? Better yet, why not encourage the CONSORT group to work with clinicaltrials.gov so there is one repository instead of dozens? Or designate several recognized registration sites as acceptable. JAMA and other journals do this. In essence, trials are registered as to their intent to study a topic and method.
2. I am against pre-publication agreements based on intent, as I do not believe they will promote good science and perhaps might even promote lazy science. A certain degree of competition is healthy.
3. Once registered, if an investigator deviates from their protocol, they are responsible for defending it in their methods. Deviations occur for perfectly good reasons that are beyond prediction and planning. This does not (always) mean the study is a wash.
4. Encourage journals to publish full articles as e-pubs as soon as possible, with an appropriate volume and page number. The aforementioned example sat online with no reference information for much too long. I'd be happy to pay $500 or more for an article published in 3 months versus a lower fee for waiting over a year and a half in some cases.
Ok then – enough babbling from me. Thanks for listening. Besides, my thumbs are exhausted.
Conrad Earnest, PhD, FACSM
Director: Exercise Biology Laboratory
Division of Preventive Medicine
Pennington Biomedical Research Center
6400 Perkins Road Baton Rouge, LA 70808-4124
Hi Will and all,
I have not followed this discussion as closely as I would have liked, but in principle what Will and Patria propose is a good idea. The Lancet journal has been reviewing and publishing study protocols for a few years now with a provisional commitment to publication (see http://www.thelancet.com/protocol-reviews).
'The Lancet will assess protocols of randomised interventions, systematic reviews and meta-analyses, observational studies, and selected phase I and II studies (novel intervention for a novel indication; a strong or unexpected beneficial or adverse response; or a novel mechanism of action). Our aim is threefold: to encourage good principles in the design of clinical research, to publicise a list of "accepted" protocols, and to make a provisional commitment to publication (see below) of the main clinical endpoints of the study. That commitment is made, obviously, before the results of the study are known.'
This is a step in the direction of your proposal, though clearly not as far-reaching. Carl's (and others') point about the problem of securing enough reviewers for this model is well taken. However, reviewing a proposal/protocol need not be as onerous as reviewing a full paper, and the task could be done largely in checklist format. The Lancet requires authors to give the following information in their protocol:
Background, including rationale and any previous systematic review(s)
Design (eg, randomised, parallel-group, double-blind), including:
Inclusion and exclusion criteria
Intervention(s) or method
How randomised (eg, call to central office; for RCTs)
How allocation is concealed (for RCTs)
Primary and any secondary endpoint(s)
Side-effects reporting and quantification (eg, WHO scale)
Statistical analysis plan, including:
Sample size and power calculations
Type of analysis (eg, ITT)
Planned subgroup analyses
Ethical issues, including:
Ethics committee approval
Informed consent form and information sheet
Interim analyses and stopping rules
Is there an independent data-monitoring committee?
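The "sample size and power calculations" item in the checklist above can be illustrated with a minimal sketch for a two-group comparison of means (the function and its defaults are my own illustration using the standard normal approximation, not part of the Lancet's requirements):

```python
import math
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per group for a two-sided, two-sample
    comparison of means, via the normal approximation:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_power = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    return math.ceil(2 * (z_alpha + z_power) ** 2 / effect_size ** 2)

# A moderate standardised effect (Cohen's d = 0.5) at conventional settings:
print(n_per_group(0.5))  # 63 per group under this approximation
```

A reviewer checking a protocol against this item would simply confirm that the stated N is consistent with the declared effect size, alpha and power.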
With this information provided, when it comes to the review of the full paper after the study is complete, the Lancet editors look for major deviations from protocol, poor reporting or over-interpretation of data, loss of originality or topicality, and submission unreasonably after the planned submission date. All of these would be reasons for rejection. If people 'did what it said on the tin' originally, however, then the paper should/would be published irrespective of so-called 'positive' or 'negative' findings.
The proposal may be ambitious, but it is an idea worth pursuing in my view.
- My comments to Will with his responses below:
From: Paul Laursen
Sent: Monday, 1 November 2010 9:52 p.m.
To: Will Hopkins
Subject: FW: Bias in research, and a solution
I know I promised I'd get back to you regarding this post. Like everyone, probably, I too am routinely disappointed and frustrated when my work, and that of my colleagues, is rejected by reviewers and/or editors, sometimes even when the reviewers' comments seem generally positive. The exercise can really leave you wondering. I must also admit that your new concept has taken me a while to digest, mostly because the traditional research process is fairly well ingrained in me now. After working through your new concept, the thoughts that came into my head mirrored at least two of Carl's points. Carl's second point highlights the reviewer issue, which adds to the admin load for everyone.
[Will] I don't think so. The proposal will be a short document, and largely done on a form with prompts (assays for dependent variable, predictors, mechanism variables, subject characteristics; design; analysis...). Also, would you turn down invitations to see what research others are proposing?
And for students already on tight timelines, further holdups mean setbacks in starting the research, and inevitably delays in completion.
[Will] Two-week turnaround.
The other issue that immediately came to mind was Carl's last point. When I go back through my publications, I too can see that at least half arose from collected data in which I found something I didn't expect to find, something that maybe hadn't been reported previously.
[Will] I am disappointed that you and Carl think that what I am proposing precludes publication of serendipitous findings. A guarantee of publication applies to the main question you have proposed to answer. Of course you can publish other stuff, but a meta-analyst will probably assign it a lower weight, because it is likely to be biased: interesting and unexpected findings are more likely to be Type 1 errors. That's one of John Ioannidis's main points.
Anyway, I too applaud you and Patria for attempting to think outside the box here to try to solve this issue, but I'm not sure it will work at the proposal registry stage. Finally, if, as Carl mentions, it is really a throughput issue, in that we have more papers than journals to publish them, do we then need more journals in the sport sciences?
[Will] I am suggesting FEWER journals: ONE, but I know that won't happen. Anyway, with publication on the Web and not in print, the volume can be as high as you like. It pays for itself on a per-paper basis.
Paul Laursen |Performance Physiologist
NZ Academy of Sport North Island
- Hi Will and others
Some interesting ideas and arguments are coming through on this one. I agree that the current system is not working, especially for us sports scientists, but I also agree with many of the ideas already posted, especially Carl Foster's. I hate paying page charges, not only because I find it difficult to actually get the funds from my small institution, but because I don't think I should be paying for others to print my work. What of the authors who cannot afford these page charges? Are they to lose out?
I am surprised by Carl's comment on the volume of material for publication. He suggests that a major reason for editors not accepting papers is the journal's limited capacity to get them edited and printed. This would suggest that some papers are rejected not because they are poor papers but because the journal publishers have filled their quota of "accepted" publications for that month and can't take on any more. I might be naive here, but I would like to think that all publishable articles were published.
I like the idea of a publication process based on peer-reviewed proposals, but how would you handle retrospective papers? For instance, you might find through a study that a certain biomarker is a good indicator of training performance, but this may not have been the major emphasis of the study when initially proposed. I guess you might get around this by stating up front that the findings were not the initial intent of the study, but that as they are unique/interesting/challenging you have decided to use them. Perhaps have a compulsory paragraph at the start of the paper in which authors have to say whether the paper is within the scope of the original proposal and, if not, why not. You could state your case there.
After reading Alan's comments, you might actually have different areas within the journal, e.g. one section for publications arising from peer-reviewed proposals and another for research that was not proposed in advance. Look forward to more comments.
Associate Professor of Exercise and Sport Science
Department of Social Science, Parks, Recreation, Tourism & Sport
6th Floor Forbes Building
P O Box 84
Lincoln University 7647
Christchurch, New Zealand
p +64 3 325 3838 extn: 8565 | m +64 021 257-2600 | f +64 3 325 3857
e mike.hamlin@... | w http://www.lincoln.ac.nz
- Hello all,
I have read this discussion with interest. Whilst many of the contributors carry heavy editorial and teaching demands, I thank them for their input on this very interesting topic.
I would agree that the workload of journal editorial staff and reviewers is considerable. Whilst I can appreciate that a journal such as the Lancet has the resources to accommodate review and publication of proposals, journals of exercise and sport science probably do not have that same luxury.
I would also suggest that a simpler alternative to this problem comes in the form of 'grass roots' education. Within medicine and the allied health professions, it is a simple professional requirement to be competent in the discussion and appreciation of research. Being able to decide what constitutes good and bad research forms part of the clinical reasoning process, or 'evidence-informed practice'. Practitioners may not be researchers themselves, but they are able to examine the potential evidence for a treatment or investigation and make an informed decision about its relevance to their practice. The principle is that medical and allied health students are taught at a very early stage in their training to critique published research. Not all of it is good. Not all of it is used for clinical decision making.
We also have to ask ourselves why we are performing research in exercise and sport science, and why there should be quality control of proposals. In medicine, a new treatment can have far-reaching community impact; hence quality research, and being able to establish what represents quality research, is important. I'm sure many of you can appreciate the concept of an intervention having 'clinical relevance'. Whilst this arises because the research is soundly developed, conducted and presented, it is also because the research may impact many people in society. From my experience, many sport science researchers perform research because it sounded like a good idea at the time. Other research is just badly conducted and presented. Sport science practitioners also sometimes distance themselves from research in their decision making, relying more on experience. Whilst I can accept that experience does form part of the clinical decision-making process, sport science is no different from medicine in terms of research application. Research is not necessarily the basis for practice, but it should inform applied sport science practice.
In the same way as medical and health practitioners are taught early about experimental design, statistical analysis and so on, it is my contention that exercise and sport science students should also be rigorously educated in a similar way. I appreciate that some institutions are already doing this, but others are not. There is a professional obligation for academic staff to reinforce this aspect in students of exercise and sport science.
There is always going to be substandard research in medicine and sport science, but the knock-on effect of a student well educated in the research process is that, hopefully, there will be better published research, the editorial process of sport science journals will become easier, and applied decisions will be made using good-quality evidence.
Feedback and comments welcomed