Systematic reviews, with or without meta-analysis, are attractive. They appear to offer much more secure answers by taking ‘the totality of published evidence’. They can be undertaken without major financial outlay, although they may demand much time from reviewers. They attract readers to a journal. They are now often required, before a new trial is funded, as evidence that there is an unmet need for further evidence. They may be cited more often than other articles. But …
As editor of a journal, Clinical Rehabilitation, I have noticed a major increase in the number of systematic reviews submitted and published. About five years ago we published about 12 a year; now we may publish four in a single (monthly) issue, and we reject about 50% of all submitted reviews. Moreover, it is becoming obvious that some of the reviews submitted are flawed, sometimes seriously so.
Recently I looked at a systematic review of early mobilisation after cardiac surgery, published in 2020. The review identified six studies, and meta-analysis showed a significant difference in the distance walked in six minutes, favouring early mobilisation. I then looked at a paper on the same topic from 2017, which found no strong evidence. I assumed that it had found fewer studies, but in fact it found nine, some four years before the 2020 paper’s search was undertaken. I returned to the 2020 paper and found, in the methods section, that after their search in August 2019 “Four studies were added by manual research after referring to the Ramos Dos Santos et al. study [18].” (This is the 2017 paper.) This means that their search only detected two papers.

My concern was such that I looked for, and found, a paper undertaking a critical analysis of systematic reviews. It is fascinating, and well worth reading. It was written in 2016, and matters are probably worse now. It considered all systematic reviews, not just medical ones, but I do not think that alters the main findings.
The ‘bottom line’ finding is shown in figure four at the end of the paper. Of all systematic reviews produced:
- 3% are “decent and clinically useful”
- 13% are “misleading, abandoned genetics”
- 17% are “decent, but not clinically useful”
- 20% are unpublished
- 20% are “flawed beyond repair”
- 27% are “redundant and unnecessary”
I would like to think that more than 3% of those published in Clinical Rehabilitation are decent and clinically useful. I hope that few are “flawed beyond repair”, but I know that one or two probably are (peer review cannot, and should not be expected to, detect all flaws). I do know that we reject many that are redundant, and many that would not be clinically useful.

So what can a reader do?
The only good quality-control system I am aware of is the Cochrane Collaboration, and generally a Cochrane systematic review can be relied upon. Beyond that, I suggest always asking the following questions:
- is the question clinically relevant or useful? If not, leave it.
- are there other similar reviews? Search the Cochrane Library, including the reviews from Epistemonikos listed under ‘More’ in the tab at the right-hand end.
- if so, look at them and compare
- read the methods, including the search strategy and selection criteria, and ask: would this find all relevant studies?
- if you know of a study that should be included, check whether it is
- last, be very critical, including questioning what biases the authors may unwittingly hold. An objective such as “our goal was to confirm that …” suggests bias!