(5 of 3) All's well that ends well… (Letter to the Editor re Choi et al.)

In a series of blog posts I highlighted how it was possible to block a Letter to the Editor that Professor Martin Röösli and I wrote regarding a recent systematic review and meta-analysis of mobile phone use and the risk of several different cancer types combined. We had concerns about the methodology of the review and the way some of the data were interpreted, and we pointed out missing Conflict-of-Interest information that led to erroneous conclusions about the impact of CoI.

For reference, here are the links to the original paper and the blog posts:

original Choi et al review

Post of our Letter

Peer Review 1

Peer Review 2

Discussion of ‘white hat bias in RF/cellphone health research’

Surprisingly, after we thought this whole saga had reached its end, the editors of the International Journal of Environmental Research and Public Health (IJERPH) got in touch again. Having originally rejected our Letter, they had received a further Letter (or Letters, I don’t know) also expressing concerns about what was done in this review. The Journal had therefore decided to publish our Letter after all, with the other(s) to follow. The final version of our Letter and the peer reviews can now be found here (note this is slightly modified from the version posted on this blog previously):

Comment on Choi, Y.-J., et al. Cellular Phone Use and Risk of Tumors: Systematic Review and Meta-Analysis. Int. J. Environ. Res. Public Health 2020, 17, 8079

It’s great to see that in the end scientific debate was victorious. It remains a shame, however, that it took several months, an initial rejection, and at least one more Letter highlighting concerns about the Choi et al. review to convince the editors of IJERPH that this was the correct approach to science.

(1 of 3) Response to: Choi et al. “Cellular Phone Use and Risk of Tumors: Systematic Review and Meta-Analysis”

This blog post is part of a series of 4 posts. To directly link to other posts, click: post 1, post 2, post 3, post 4

Professor Martin Röösli (@MartinRoosli) and I wrote a Letter to the Editor of the International Journal of Environmental Research and Public Health about a recent systematic review and meta-analysis looking at mobile phone use and tumour risk. As is customary, the authors of that paper were invited to respond to our concerns, which they did. Surprisingly, the journal then decided not to publish the Letter and the response to it. This is an incomprehensible editorial decision (although rejecting such letters seems to be becoming more common, possibly because it costs journals money and often highlights problems with their peer review), and I assume that the authors of the original paper were just as surprised and irritated, having spent a considerable amount of time writing their response.

The journal did, however, also include the peer reviews of the Letter and the response in their rejection email. To say that these were inappropriate and clear evidence of how the peer-review system can be abused by activist-scientists would be putting it mildly. Given this, we appealed the editorial decision, but, as is common in such situations, this was declined without explanation.

So, being unable to publish our Letter where it belongs, we decided to publish it here. Given that the reviews were anonymous (enabling the reviewers to get away with this), we decided to publish these as well, in a 3-post set. I hope that the authors of the response will agree to have their response published too, so that this becomes a 4-post series.

In any case, below is our Letter…

We welcome the updated systematic review and meta-analysis of case-control studies of mobile phone use and cancer by Choi et al., which was recently published in this journal [1]. Given the uncertainties that continue to surround the issue of radiofrequency radiation exposure and cancer risk, regular synthesis of the available epidemiological evidence remains important, and the synthesis published by Choi et al. provides a timely update. However, Choi et al. have made several peculiar decisions in their synthesis which limit the inferences that can be made, and which deserve further discussion.

Firstly, the main meta-analysis shown in Figure 2 in [1] combines case-control studies of different benign and malignant tumours, including tumours of the head but also non-Hodgkin’s lymphoma and leukaemia, and provides one meta-analytic summary of these. It is not common practice to combine different outcomes with different aetiologies in one meta-analytic summary [2] and, given the substantial heterogeneity observed, it is highly questionable whether the common risk estimate for diseases with different aetiologies that Choi et al. try to estimate in their meta-analysis even exists. [additional note, added in blog only: the issue here relates to combining different, arbitrary, cancers, and does not imply RF can only have an effect on one endpoint. For example, ‘all cancers’ is often studied]. It would be more appropriate to conduct separate meta-analyses by type of tumour, and Choi et al. have indeed done these as well. These results are provided in the Online Supplement (Table S3) and do not provide summary evidence of excess tumour risk for any particular individual tumour type.

Choi et al. further presented subgroup analyses of studies conducted by Hardell et al., studies by the INTERPHONE consortium, and a group of miscellaneous case-control studies. They identify interesting differences between those three subgroups, and conduct further analyses to explore possible reasons for the observed differences. Interestingly, Choi et al. fail to notice the most obvious conclusion from these subgroup analyses, namely that the INTERPHONE-related studies and the miscellaneous studies are largely in agreement and do not point to an excess cancer risk from mobile phone use. Evidence of large excess cancer risks is almost exclusively based on the studies by the Hardell group, as already described in earlier meta-analyses [3, 4]. In fact, the relative excess risks of 90% (30%–170%) and 70% (4%–180%) reported by the Hardell group (Table 1 and Figure 2) associated with any mobile phone use are implausibly high, and do not triangulate [5] with evidence from other epidemiological sources, such as prospective cohort studies [6, 7] and incidence trends [8]. Incidence trend analyses are generally considered a weak study design, but in this specific case of a clear change in exposure of virtually the whole population, limited confounding factors that may change over time, and reliable cancer registries, incidence trends are important for evidence evaluation and plausibility considerations.

Even when exposure-response associations are observed (Table 3), and the INTERPHONE studies and miscellaneous studies provide relatively consistent estimates (Odds Ratios of 1.25 (0.96–1.62) and 1.73 (0.66–4.48), respectively) of some excess risk associated with an, arbitrary, cumulative call time of at least 1,000 hours, the evidence from the Hardell studies similarly provides an implausibly high Odds Ratio of 3.65 (1.69–7.85), out of line with all evidence from other sources. The INTERPHONE team have spent considerable effort trying to evaluate whether observed increased and decreased risks could be the result of recall and selection bias [9–13], and a recent study found some indication of reverse causality as an explanation for seemingly protective effects of mobile phone use [14]. It is therefore surprising that Choi et al. have not similarly discussed the likelihood of bias away from the null in the Hardell studies. Disregarding the implausible risk reported by the Hardell group, a summary risk point estimate based on all other case-control studies for 1,000+ cumulative hours of use would be in the order of 1.30–1.50, which triangulates much better with other lines of research.
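[Additional illustration, added in blog only: for readers curious how such a pooled figure is obtained, the sketch below shows, in rough form, how odds ratios can be combined by inverse-variance weighting on the log-odds scale. It uses only the two non-Hardell subgroup summaries quoted above as input, so it will not exactly reproduce the 1.30–1.50 range mentioned in the Letter, which is based on the underlying individual studies.]

```python
import math

def pooled_or(estimates):
    """Fixed-effect inverse-variance pooling of odds ratios on the log scale."""
    num = den = 0.0
    for or_, lo, hi in estimates:
        log_or = math.log(or_)
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from the 95% CI
        w = 1.0 / se ** 2                                # inverse-variance weight
        num += w * log_or
        den += w
    pooled = num / den
    se_pooled = math.sqrt(1.0 / den)
    lo, hi = math.exp(pooled - 1.96 * se_pooled), math.exp(pooled + 1.96 * se_pooled)
    return math.exp(pooled), lo, hi

# Subgroup summaries quoted above for >=1,000 cumulative hours, Hardell group excluded
print(pooled_or([(1.25, 0.96, 1.62),    # INTERPHONE-related studies
                 (1.73, 0.66, 4.48)]))  # miscellaneous case-control studies
```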

Choi et al. argue that a plausible explanation for the observed differences could be that the Hardell studies are of better quality than those in the other two groups, based on individual appraisal of each study using the Newcastle-Ottawa Scale and the National Heart, Lung, and Blood Institute quality assessment tool for case-control studies (Tables S1 and S2). The differences in rating within and between the three groups of case-control studies are minimal, but Choi et al. rated the methodological quality of the Hardell studies a little higher, mainly because they had very high response rates and because they were mostly classified as having excellent, blinded, assessment of exposure compared to the INTERPHONE and miscellaneous studies. This seems to be an error or misunderstanding in the use of these criteria. First, achieving a high participation rate is an asset in an epidemiological study. However, achieving a participation rate of over 80% in a population-based case-control study in Western countries, as reported in the Hardell papers, is highly unusual nowadays. Regardless, one would expect that in a study with such high participation rates the proportion of mobile phone users among controls should closely match the official subscriber statistics, which was not the case for the Hardell studies [4]. Thus, serious concerns remain about how these high participation rates were achieved or calculated.

Secondly, the blinding concept as rated by Choi et al. is inappropriate. Exposure assessment in the INTERPHONE studies was conducted by trained interviewers, which may have made it susceptible to interviewer bias because the interviewers could indeed probably not be blinded to case-control status [15]. However, it is highly unlikely that this would have resulted in greater bias compared to the Hardell studies, in which exposure assessment was based on questionnaire-based self-reporting of mobile phone use by cases and controls who, by definition, are not blinded to their own disease status. Methodological work suggests that both face-to-face interviews and self-administered questionnaires are susceptible to various ‘mode of administration’ biases, but that exposure assessment based on self-administered questionnaires is generally more susceptible to recall bias [15]. As such, the methodology of the Hardell studies should have been classified as being of, at most, comparable quality to the other case-control studies in this review.

Choi et al. further looked at source of funding as a possible explanation for the observed differences, but provided erroneous funding information. Only the Hardell studies received direct funding from interest groups such as the telecom industry [16, 17] and pressure groups [18], but this was not reported by Choi et al. In contrast, the INTERPHONE studies’ industry funding was channelled through a well-established firewall model to avoid influence of the funders on the researchers. There is empirical evidence from human experimental studies that such a funding structure has not resulted in biased study results but rather in higher study quality, whereas direct funding by interest groups may produce biased results [19, 20]. Further, each of the three study groups contributes only to either the ‘funded by industry’ or the ‘not funded by industry’ category (according to Choi et al.), which makes this analysis non-informative.

Given that observational epidemiological studies are susceptible to various biases, which can result in under- as well as over-reporting of true effects, rigorous evaluation is needed to understand why the studies by the Hardell group provide different results from the majority of other case-control studies and from other strands of the epidemiological literature. In the absence of direct evidence for the causes of these differences, triangulation across epidemiological studies susceptible to different types of bias [15], such as case-control studies, cohort studies and ecological studies of cancer incidence, as well as with evidence from animal and laboratory studies, is warranted. Although some uncertainties remain, most notably for the most highly exposed users and for the new GHz frequencies used in 5G, we can be reasonably sure that the evidence has converged to somewhere in the range from an absence of excess risk to a moderate excess risk for the subgroup of people with the highest exposure. Importantly, over time the evidence has reduced the uncertainty regarding the cancer risk of mobile phone use.

Funding

No external funding was obtained for this publication.

Author contributions

FdV drafted the first outline. MR and FdV collaborated on subsequent iterations, and both approved the final version.

Conflicts of Interest

The authors declare no Conflicts of Interest.

MR’s research is entirely funded by public or not-for-profit foundations. He has served as advisor to a number of national and international public advisory and research steering groups concerning the potential health effects of exposure to nonionizing radiation, including the World Health Organization, the International Agency for Research on Cancer, the International Commission on Non-Ionizing Radiation Protection, the Swiss Government (member of the working group “mobile phone and radiation” and chair of the expert group BERENIS), the German Radiation Protection Commission (member of the committee Non-ionizing Radiation (A6) and member of the working group 5G (A630)) and the Independent Expert Group of the Swedish Radiation Safety Authority. From 2011 to 2018, M.R. was an unpaid member of the foundation board of the Swiss Research Foundation for Electricity and Mobile Communication, a non-profit research foundation at ETH Zurich. Neither industry nor nongovernmental organizations are represented on the scientific board of the foundation.

FdV’s research is also funded by public or nonprofit organisations. He is partly funded by the National Institute for Health Research Applied Research Collaboration West (NIHR ARC West) at University Hospitals Bristol NHS Foundation Trust. He has in the past done consulting for the Electric Power Research Institute (EPRI), a nonprofit organisation, not related to the current publication. He is a member of the UK Government’s independent advisory Committee on Medical Aspects of Radiation in the Environment (COMARE).

References

1. Choi Y, Moskowitz J, Myung S, Lee Y, Hong Y. Cellular Phone Use and Risk of Tumors: Systematic Review and Meta-Analysis. Int J Environ Res Public Health. 2020;17:8079. doi:10.3390/ijerph17218079. https://www.mdpi.com/1660-4601/17/21/8079.

2. Borenstein M, Hedges LV, Higgins JPT, Rothstein HR. When Does it Make Sense to Perform a Meta-Analysis? In: Introduction to Meta-Analysis. 1st edition. Chichester, UK: John Wiley & Sons, Ltd; 2009. p. 357–64.

3. Lagorio S, Röösli M. Mobile phone use and risk of intracranial tumors: A consistency analysis. Bioelectromagnetics. 2014;35:79–90.

4. Röösli M, Lagorio S, Schoemaker MJ, Schüz J, Feychting M. Brain and Salivary Gland Tumors and Mobile Phone Use: Evaluating the Evidence from Various Epidemiological Study Designs. Annu Rev Public Health. 2019;40:221–38.

5. Lawlor DA, Tilling K, Smith GD. Triangulation in aetiological epidemiology. Int J Epidemiol. 2016;45:1866–86.

6. Benson VS, Pirie K, Reeves GK, Beral V, Green J, Schüz J. Mobile phone use and risk of brain neoplasms and other cancers: Prospective study. Int J Epidemiol. 2013;42:792–802.

7. Frei P, Poulsen AH, Johansen C, Olsen JH, Steding-Jessen M, Schüz J. Use of mobile phones and risk of brain tumours: Update of Danish cohort study. BMJ. 2011;343:d6387.

8. Karipidis K, Elwood M, Benke G, Sanagou M, Tjong L, Croft RJ. Mobile phone use and incidence of brain tumour histological types, grading or anatomical location: A population-based ecological study. BMJ Open. 2018;8:e024489.

9. Lahkola A, Salminen T, Auvinen A. Selection bias due to differential participation in a case-control study of mobile phone use and brain tumors. Ann Epidemiol. 2005;15:321–5.

10. Vrijheid M, Armstrong BK, Bédard D, Brown J, Deltour I, Iavarone I, et al. Recall bias in the assessment of exposure to mobile phones. J Expo Sci Environ Epidemiol. 2009;19:369–81.

11. Vrijheid M, Richardson L, Armstrong BK, Auvinen A, Berg G, Carroll M, et al. Quantifying the Impact of Selection Bias Caused by Nonparticipation in a Case-Control Study of Mobile Phone Use. Ann Epidemiol. 2009;19:33–41.

12. Vrijheid M, Cardis E, Armstrong BK, Auvinen A, Berg G, Blaasaas KG, et al. Validation of short term recall of mobile phone use for the Interphone study. Occup Environ Med. 2006;63:237–43.

13. Vrijheid M, Deltour I, Krewski D, Sanchez M, Cardis E. The effects of recall errors and of selection bias in epidemiologic studies of mobile phone use and cancer risk. J Expo Sci Environ Epidemiol. 2006;16:371–84.

14. Olsson A, Bouaoun L, Auvinen A, Feychting M, Johansen C, Mathiesen T, et al. Survival of glioma patients in relation to mobile phone use in Denmark, Finland and Sweden. J Neurooncol. 2019;141:139–49.

15. Bowling A. Mode of questionnaire administration can have serious effects on data quality. J Public Health (Oxf). 2005;27:281–91.

16. Hardell L, Mild KH, Carlberg M. Case-control study on the use of cellular and cordless phones and the risk for malignant brain tumours. Int J Radiat Biol. 2002;78:931–6.

17. Hardell L, Hallquist A, Mild KH, Carlberg M, Påhlson A, Lilja A. Cellular and cordless telephones and the risk for brain tumours. Eur J Cancer Prev. 2002;11:377–86.

18. Hardell L, Carlberg M, Söderqvist F, Mild KH. Case-control study of the association between malignant brain tumours diagnosed between 2007 and 2009 and mobile and cordless phone use. Int J Oncol. 2013;43:1833–45.

19. Huss A, Egger M, Hug K, Huwiler-Müntener K, Röösli M. Source of funding and results of studies of health effects of mobile phone use: Systematic review of experimental studies. Environ Health Perspect. 2007.

20. van Nierop LE, Röösli M, Egger M, Huss A. Source of funding in experimental studies of mobile phone use on health: Update of systematic review. Comptes Rendus Phys. 2010;11:622–7.

‘Bending Science’ & the Dirty Electricity Industry

In principle the scientific method has a relatively robust system based on peer review to ensure that any problems in scientific papers are addressed before the paper is published. This methodology is not without its problems, but it is the best we have available, and it is hard to see what other system would do better. Anyway, it works relatively well as a self-correcting scientific methodology.

One of the problems with it, though, is that it is possible to block specific lines of thought, at least for a certain amount of time. For example, if you propose an alternative scientific explanation for a phenomenon with an ‘established’ paradigm, it will be quite difficult to get this published (although, if the evidence is strong enough, it will be published eventually). More problematically, there is abundant evidence of distortion of evidence and of attempts by industry to distort the scientific process, including peer review (most infamously by the tobacco industry, but also in relation to the carcinogenicity of industrial chemicals and to global warming, for example).

Depending on whom you talk to, this may or may not be the case in the assessment of the carcinogenicity of radiofrequency radiation (RF) from mobile phones, and it is EMF (electromagnetic fields) that I’d like to talk about here.

Not the ‘normal’ EMF characterized by the frequency and amplitude (and shape) of the waves, but a “new” metric that is supposedly the real exposure that causes cancer, and a number of other diseases, in humans. Indeed, it is ‘dirty electricity’: an exposure that is defined not by a clear and precise set of quantitative characteristics, but instead mainly by the fact that it can be measured with a dirty electricity dosimeter. Basically, it is a form of RF, but measured by means of voltage changes over time within a certain frequency bandwidth. A better name, therefore, is high-frequency voltage transients (superimposed on 50/60 Hz fields). Those of you who keep a close eye on this blog may remember ‘dirty electricity’, since I have written about it before [direct link]. It is a niche within the EMF research community and, just to make this clear, by no means established or even accepted as a valid scientific hypothesis.
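To make the ‘voltage changes over time within a certain frequency bandwidth’ idea a bit more concrete, here is a minimal sketch of how such a metric could be computed from a sampled voltage waveform. The sampling rate, pass-band, and RMS summary below are my own assumptions for illustration; commercial ‘dirty electricity’ meters use their own, typically proprietary, definitions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1_000_000                     # sampling rate of 1 MHz (assumed for illustration)
t = np.arange(0, 0.1, 1 / fs)      # 100 ms of signal

# A 230 V RMS, 50 Hz mains waveform with some high-frequency 'transients' added on top
mains = 230 * np.sqrt(2) * np.sin(2 * np.pi * 50 * t)
transients = 5 * np.sin(2 * np.pi * 30_000 * t) * (np.random.rand(t.size) > 0.99)
v = mains + transients

# Keep only the kHz-range content; the 4-100 kHz band here is an assumption,
# since meters apply their own pass-bands and weightings
b, a = butter(4, [4_000, 100_000], btype="band", fs=fs)
hf = filtfilt(b, a, v)

# One possible summary metric: the RMS of what remains after filtering out the mains frequency
print("High-frequency residual RMS (V):", np.sqrt(np.mean(hf ** 2)))
```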

Its proponents set themselves against the established EMF research community, which, they claim, is biased because the electric (and mobile phone) industries want to hide the real effects of EMF exposure on humans. Not unexpectedly, this has some traction within the electro-hypersensitivity (EHS) (or idiopathic environmental intolerance attributed to EMF) community, which is unfortunate. Whatever is going on with respect to EHS deserves, in my opinion, more attention, but focussing efforts on something as untested as the dirty electricity hypothesis seems like a bad idea. Similarly, there is the case of a middle school in the US where, instead of first investigating the likelihood of a “normal” explanation for a perceived increased cancer risk amongst teachers, researchers jumped immediately to ‘alternative scientific explanations’. A good analogue from the medical world is homeopathy, which similarly, but at a much larger scale, preys on vulnerable people without any evidence of efficacy above a normal placebo effect.

Indeed, it is similarly possible to buy ‘dirty electricity’ measurement devices, ‘dirty electricity’ filters (to ‘clean’ your environment), and ‘dirty electricity’ services that “solve your problem”. This seems primarily an American, Canadian, and I think Australian, phenomenon, but it is penetrating the UK as well. People make a lot of money on the back of dirty electricity, and as such it is just another industry. It is (still) relatively small, but it is an industry nonetheless. And with it come the issues generally associated with the bigger industries, such as the publication of research of, let’s say, varying quality to back up the industry’s raison d’être, the use of PR and media to generate exposure and create concern in the population, and, and this is what the rest of this post is about, the silencing of critics.

I wrote a review of the epidemiology of dirty electricity, published in 2010 [link to abstract], concluding, based on all the evidence available at the time, that although it was an interesting concept there were so many problems with the published studies that it was extremely hard (probably ‘impossible’ is a better word) to say that dirty electricity was associated with an increased risk of disease or adverse health effects. Last year marked five years since that review was published, and I thought it would be interesting to update it and see whether the proponents of dirty electricity had been able to provide better evidence of its importance.

Coincidentally, others had the same idea, and I collaborated on the updated review with Professor Olsen, a professor of electrical engineering at Washington State University with a nearly endless CV covering many aspects of electrical engineering and EMF exposure measurement and assessment. The work was sponsored by the Electric Power Research Institute (EPRI), who are of course interested in this kind of work. This is, of course, important, and it was highlighted specifically in our publication. At this stage I think it is important to point out that EPRI specifically indicated that they did not want to see anything about our work until it was published. So, in summary, we highlighted the source of funding in all publications, but we were free to do as we wanted. Emphasizing this may seem a bit tedious, but it is quite important in what follows.

So how does this relate to peer review and the influence of industry? I’d like to simply describe what happened next, since most of these things tend to stay behind closed doors. I am sure other researchers will have had much worse experiences than we did but, well, I don’t know about those… and it is my blog… but I’d be quite interested in hearing about them.

*

Anyway, so we conducted our systematic review of the exposure assessment and the epidemiology of ‘dirty electricity’, happy to be guided by what the evidence told us. It was recently published in the ‘Radiation & Health’ section of ‘Frontiers in Public Health’ and at the time of writing has about 1,000 downloads. If you are interested you can find it here as open access [link to full text]. I have much appreciation for the editor-in-chief of ‘Radiation & Health’, who stuck with us throughout the whole peer-review process, whatever his personal opinions on the topic may be (note that I have no idea what these are, which is how it should be); more on this later.

But it did not start with ‘Frontiers in Public Health’.  

The following will be quite familiar to everybody who has ever tried to have a scientific manuscript published, and I’ll call it Stage 1:  

“This decision was based on the editors’ evaluation of the merits of your manuscript….” 

“Only those papers believed to be in the top 20% in a particular discipline are sent for peer review….”

“Overall, I am not convinced that such a spartan review is warranted and worthy of consideration for publication…”

No hard feelings; this is all part of the process.

Stage 2 then follows, in which the paper is externally reviewed but rejected. This also happens regularly, but in this particular case something interesting started to appear. Of the 2–3 reviewers, the manuscript got favourable reviews from two, but always received the minimum possible score from one reviewer (sometimes two, depending on the number of reviewers). Given the stark contrast between the scores we asked the editors about this, and basically, if a manuscript receives one negative review it generally gets rejected. So here is a first point at which it is possible to block the publication of a paper. Indeed, further discussion revealed that, because the scientific proponents of the dirty electricity hypothesis are not that many individuals, an editor looking for appropriate reviewers who types ‘dirty electricity’ into, say, PubMed will end up with the same, relatively small, pool of people. So just the minimum score would be enough, but of course one needs some sort of rationale. This was generally speaking minimal but, more importantly, had nothing to do with the science. For example:

“Although the authors acknowledge their funding source, their bias is obvious. In many respects, it might be better to reject the paper.”

“While the authors claim no conflict of interest, this study was funded by EPRI, which some would consider a conflict of interest.”

Nice, despite the fact that the funder was clearly stated and that we highlighted that the funder had no input into any aspect of the review (see the actual manuscript). However, this is still acceptable, and it is important to make sure any funders and perceived biases are highlighted in a manuscript. Whether it is a reason for rejection is another matter though…

Anyway, it soon got a bit more personal and unfriendly. I don’t know if others ever review papers like this, but I can’t say I was very impressed. Moreover, as a reviewer one should address specific scientific points so that the authors can address these if they are genuine mistakes, provide an explanation of why what they did is scientifically correct, or withdraw the manuscript if there really is an incurable problem. Anyway, at this stage we move into more unscientific, and this time personal, comments:

“It is clear that the authors are unfamiliar…”

“This review appears to be biased..”

“It is clear that the authors are unfamiliar with the biological research within the area of extremely low frequency  (ELF) and radio frequency (RF)”

“They then make a calculation that demonstrates they do not understand dirty electricity.”

“It is clear that the authors are unfamiliar with electricity, the concept of linear and non-linear loads and grounding problems”

The language here is not so much scientific as it is emotional, and clearly aimed at quickly suggesting to an editor, who will mostly skim the reviews, that the work is awful and should be rejected. Another way of doing this is to use other emotive words, such as “flawed”, “false”, etc. in abundance; and presto!:

“I find it deeply flawed with false statements, flawed assumptions and wrong calculations. It seems to be written by someone who has only a superficial knowledge in this field and has a strong bias in favor of the electric utility.”

“I find this review to be scientifically inaccurate, biased, and deeply flawed and would recommend it not be accepted for publication.”  

“I don’t believe that publishing this flawed review is going to benefit science, policy makers, or the public”

Another nice one for the records, and one which I hope none of the readers of this blog (yes, you!) ever uses:

“The authors of this review article have never done any primary research on HFVT and are not considered experts in this field and would not be invited to review articles on this topic.” 

As a side note, I find the following comment quite telling and, teaching epidemiology myself, I would fail a student who wrote this in an exam:

“When an author allots effects from a variety of studies to “unconsidered confounding factors” it makes me question their objectivity.”

*

So yes, this approach worked amazingly well, and some editors fell for it. Not the editor of Frontiers in Public Health: Radiation & Health, though, who decided to go with the science on this one.

Of course, why change a winning strategy: 

“The authors definition of Dirty Electricity speaks of a complete misunderstanding.“

“The authors’ definition implies that they have little to no knowledge of electrical engineering, or more specifically high frequency voltage transients.” 

And my favourite:

“Generally the grammar is correct, but too often the language is in error. Some of the errors are so egregious that they raise the question of the authors understanding of electrical engineering and epidemiology.”

Even our thorough approach was questioned:

“The manuscript’s very length suggest they choose to “overpower” the reader obfuscation.” 

Unfortunately, this did not work this time around, and the paper remained in the peer-review process. This was quite a tedious process, because reviewers would be selected, provide a review, and, once all scientific points had been addressed in our replies, withdraw from the reviewing process; thus stopping the review until another reviewer was found, delaying publication, and inconveniencing us as well as the editor. I consider this plain rude, but what it suggests to me is that we got the science right on this. Aside from delaying publication, there was another benefit to this approach for the ‘dirty electricity’ industry. As it turns out, one can be as rude and personal as possible because, after a reviewer withdraws from the review process, all their comments are erased (an unfortunate consequence of the Frontiers system). I won’t bother you with these since they follow the same pattern as above, but I’d like to highlight one gem of a comment:

“EPRI has many far more qualified employees to write such a review but chose to hire De Vocht and Olsen.”

*

I hope you found my walkthrough of this particular peer-review process entertaining. This is what can happen if you do research that could hamper the profits of an industry. Of course this is nothing new, but it is quite interesting that even a small industry (and one that argues it is fighting against the INDUSTRY) uses the same tactics.

You may have realised that I did not tell you the conclusion of our systematic review. This was done on purpose, because in a way the actual conclusion does not really matter. We conducted the review out of academic interest, so whether our conclusion was that this is indeed a dangerous exposure or that it is nonsense has no impact on either of us. For convenience, here is the link to the full paper [direct link].