This blog post is part of a series of 4 posts. To directly link to other posts, click: post 1, post 2, post 3, post 4
Professor Martin Röösli (@MartinRoosli) and I wrote a Letter to the Editor of the International Journal of Environmental Research and Public Health about a recent systematic review and meta-analysis of mobile phone use and tumour risk. As is customary, the authors of that paper were invited to respond to our concerns, which they did. Surprisingly, the journal then decided not to publish the Letter and the response to it. This is an incomprehensible editorial decision (although rejecting such letters seems to be becoming more common, possibly because it costs journals money and often highlights problems with their peer review), and I assume that the authors of the original paper were just as surprised and irritated, having spent a considerable amount of time writing their response.
The journal did, however, include the peer reviews of the Letter and the response in their rejection email. To say that these were inappropriate, and clear evidence of how the peer-review system can be abused by activist-scientists, would be putting it mildly. Given this, we appealed the editorial decision, but as is common in such situations this was declined without explanation.
So, being unable to publish our Letter where it belongs, we decided to publish it here. Given that the reviews were anonymous (enabling the reviewers to get away with this), we decided to publish these as well, in a 3-post set. I hope that the authors of the response will agree to have their response published as well, so that this will become a 4-post series.
In any case, below our Letter…
We welcome the updated systematic review and meta-analysis of case-control studies of mobile phone use and cancer by Choi et al., which was recently published in this journal [1]. Given the uncertainties that continue to surround the issue of radiofrequency radiation exposure and cancer risk, regular synthesis of the available epidemiological evidence remains important, and the synthesis published by Choi et al. provides a timely update. However, Choi et al. have made several peculiar decisions in their synthesis which limit the inferences that can be drawn, and which deserve further discussion.
Firstly, the main meta-analysis shown in Figure 2 in [1] combined case-control studies of different benign and malignant tumours, including those of the head, but also non-Hodgkin’s lymphoma and leukaemia, and provides one meta-analytic summary of these. It is not common practice to combine outcomes with different aetiologies in one meta-analytic summary [2] and, given the substantial heterogeneity observed, it is highly questionable whether the common risk estimate that Choi et al. try to derive for diseases with different aetiologies even exists. [Additional note, added in blog only: the issue here relates to combining different, arbitrary, cancers, and does not imply RF can only have an effect on one endpoint. For example, ‘all cancers’ is often studied.] It would be more appropriate to conduct separate meta-analyses by type of tumour, and Choi et al. have indeed done these as well. These results are provided in the Online Supplement (Table S3) and do not provide summary evidence of excess tumour risk for any individual tumour type.
Choi et al. further presented subgroup analyses of studies conducted by Hardell et al., studies by the INTERPHONE consortium, and a group of miscellaneous case-control studies. They identify interesting differences between these three subgroups, and conduct further analyses to explore possible reasons for the observed differences. Interestingly, Choi et al. fail to notice the most obvious conclusion from these subgroup analyses: both the INTERPHONE-related studies and the miscellaneous studies are largely in agreement and do not point to an excess cancer risk from mobile phone use. Evidence of large excess cancer risks is almost exclusively based on the studies by the Hardell group, as already described in earlier meta-analyses [3, 4]. In fact, the relative excess risks of 90% (30%-170%) and 70% (4%-180%) reported by the Hardell group (Table 1 and Figure 2) associated with any mobile phone use are implausibly high, and do not triangulate [5] with evidence from other epidemiological sources, such as prospective cohort studies [6, 7] and incidence trends [8]. Incidence trend analyses are generally considered a weak study design, but in this specific case, with a clear change in exposure of virtually the whole population, limited confounding factors that may change over time, and reliable cancer registries, incidence trends are important for evidence evaluation and plausibility considerations.
Even where exposure-response associations are observed (Table 3), and the INTERPHONE studies and miscellaneous studies provide relatively consistent estimates (Odds Ratios of 1.25 (0.96-1.62) and 1.73 (0.66-4.48), respectively) of some excess risk associated with an, admittedly arbitrary, cumulative call time of at least 1,000 hours, the Hardell studies again provide an implausibly high Odds Ratio of 3.65 (1.69-7.85), out of line with all evidence from other sources. The INTERPHONE team have spent considerable effort trying to evaluate whether observed increased and decreased risks could be the result of recall and selection bias [9–13], and a recent study found some indication of reverse causality as an explanation for seemingly protective effects of mobile phone use [14]. It is therefore surprising that Choi et al. have not similarly discussed the likelihood of bias away from the null in the Hardell studies. Disregarding the implausible risks reported by the Hardell group, a summary risk point estimate based on all other case-control studies for 1,000+ cumulative hours of use would be in the order of 1.30-1.50, which triangulates much better with other lines of research.
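[Additional note, added in blog only: readers who want to check the order of magnitude of such a summary estimate can do so from the two non-Hardell subgroup estimates quoted above. The sketch below uses simple fixed-effect inverse-variance pooling on the log odds ratio scale; this weighting scheme and the restriction to just these two subgroup estimates are our illustrative assumptions, not the method used by Choi et al., so the result is only a rough plausibility check.]

```python
import math

# Illustrative only: fixed-effect inverse-variance pooling of the two
# non-Hardell subgroup estimates (OR with 95% CI) for 1,000+ cumulative
# hours of mobile phone use, as quoted in the Letter.
subgroups = [
    (1.25, 0.96, 1.62),  # INTERPHONE-related studies
    (1.73, 0.66, 4.48),  # miscellaneous case-control studies
]

num = den = 0.0
for or_, lo, hi in subgroups:
    log_or = math.log(or_)
    # Recover the standard error from the width of the 95% CI on the log scale
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    w = 1 / se**2  # inverse-variance weight
    num += w * log_or
    den += w

pooled_log_or = num / den
pooled_se = math.sqrt(1 / den)
pooled_or = math.exp(pooled_log_or)
ci_lo = math.exp(pooled_log_or - 1.96 * pooled_se)
ci_hi = math.exp(pooled_log_or + 1.96 * pooled_se)
print(f"Pooled OR ≈ {pooled_or:.2f} (95% CI {ci_lo:.2f}-{ci_hi:.2f})")
```

Pooling only these two subgroup summaries gives a point estimate of roughly 1.3, at the lower end of the 1.30-1.50 range suggested above; a pooled estimate over the individual non-Hardell studies would weight the evidence somewhat differently.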
Choi et al. argue that a plausible explanation for the observed differences could be that the Hardell studies are of better quality than those in the other two groups, based on individual appraisal of each study using the Newcastle-Ottawa Scale and the National Heart, Lung, and Blood Institute quality assessment tool for case-control studies (Tables S1 and S2). The differences in rating within and between the three groups of case-control studies are minimal, but Choi et al. rated the methodological quality of the Hardell studies a little higher, mainly because they reported very high response rates and because they were mostly classified as having excellent, blinded, assessment of exposure compared to the INTERPHONE and miscellaneous studies. This seems to be an error or misunderstanding in the use of these criteria. First, achieving a high participation rate is an asset in an epidemiological study. However, a participation rate of over 80% in a population-based case-control study in Western countries, as reported in the Hardell papers, is highly unusual nowadays. Regardless, one would expect that in a study with such high participation rates, the proportion of mobile phone users among controls should closely match the official subscriber statistics, which was not the case for the Hardell studies [4]. Thus, serious concerns remain about how these high participation rates were achieved or calculated.
Secondly, the blinding concept as rated by Choi et al. is inappropriate. Exposure assessment in the INTERPHONE studies was conducted by trained interviewers, who may have been susceptible to interviewer bias because they could probably not be blinded to case-control status [15]. However, it is highly unlikely that this would have resulted in greater bias than in the Hardell studies, in which exposure assessment was based on questionnaire-based self-reporting of mobile phone use by cases and controls who, by definition, are not blinded to their disease status. Methodological work suggests that both face-to-face interviews and self-administered questionnaires are susceptible to various ‘mode of administration’ biases, but that exposure assessment based on self-administered questionnaires is generally more susceptible to recall bias [15]. As such, the methodology of the Hardell studies should have been classified as of comparable quality to the other case-control studies in this review, at most.
Choi et al. further looked at source of funding as a possible explanation for the observed differences, but provided erroneous funding information. Only the Hardell studies received direct funding from interest groups such as the telecom industry [16, 17] and pressure groups [18], but this was not reported by Choi et al. In contrast, industry funding of the INTERPHONE studies was channelled through a well-established firewall model to avoid influence of the funders on the researchers. There is empirical evidence from human experimental studies that such a funding structure has not resulted in biased study results but rather in higher study quality, whereas direct funding by interest groups may produce biased results [19, 20]. Further, each of the three study groups contributes only to either the ‘funded by industry’ category or not (according to Choi et al.), which makes this analysis non-informative.
Given that observational epidemiological studies are susceptible to various biases, which can result in under- as well as over-reporting of true effects, rigorous evaluation is needed to understand why the studies by the Hardell group provide different results from the majority of other case-control studies and from other strands of the epidemiological literature. In the absence of direct evidence for any causes of these differences, triangulation of epidemiological studies susceptible to different types of biases [15], such as case-control studies, cohort studies and ecological studies of cancer incidence, as well as with evidence from animal and laboratory studies, is warranted. Although some uncertainties remain, most notably for the highest exposed users and for the new GHz frequencies used in 5G, we can be reasonably sure that the evidence has converged to somewhere in the range from an absence of excess risk to a moderate excess risk for a subgroup of people with the highest exposure. Importantly, over time, the evidence has reduced the uncertainty regarding the cancer risk of mobile phone use.
Funding
No external funding was obtained for this publication.
Author contributions
FdV drafted the first outline. MR and FdV collaborated on subsequent iterations, and both approved the final version.
Conflicts of Interest
The authors declare no Conflicts of Interest.
MR’s research is entirely funded by public or not-for-profit foundations. He has served as advisor to a number of national and international public advisory and research steering groups concerning the potential health effects of exposure to nonionizing radiation, including the World Health Organization, the International Agency for Research on Cancer, the International Commission on Non-Ionizing Radiation Protection, the Swiss Government (member of the working group “mobile phone and radiation” and chair of the expert group BERENIS), the German Radiation Protection Commission (member of the committee Non-ionizing Radiation (A6) and member of the working group 5G (A630)) and the Independent Expert Group of the Swedish Radiation Safety Authority. From 2011 to 2018, MR was an unpaid member of the foundation board of the Swiss Research Foundation for Electricity and Mobile Communication, a non-profit research foundation at ETH Zurich. Neither industry nor nongovernmental organizations are represented on the scientific board of the foundation.
FdV’s research is also funded by public or nonprofit organisations. He is partly funded by the National Institute for Health Research Applied Research Collaboration West (NIHR ARC West) at University Hospitals Bristol NHS Foundation Trust. He has in the past done consulting for the Electric Power Research Institute (EPRI), a nonprofit organisation, unrelated to the current publication. He is a member of the UK Government Independent Advisory Committee on Medical Aspects of Radiation in the Environment (COMARE).
References
1. Choi Y, Moskowitz J, Myung S, Lee Y, Hong Y. Cellular Phone Use and Risk of Tumors: Systematic Review and Meta-Analysis. Int J Environ Res Public Health. 2020;17:8079. doi:10.3390/ijerph17218079. https://www.mdpi.com/1660-4601/17/21/8079.
2. Borenstein M, Hedges LV, Higgins JPT, Rothstein HR. When Does it Make Sense to Perform a Meta‐Analysis? In: Introduction to Meta‐Analysis. 1st edition. Chichester, UK: John Wiley & Sons, Ltd; 2009. p. 357–64.
3. Lagorio S, Röösli M. Mobile phone use and risk of intracranial tumors: A consistency analysis. Bioelectromagnetics. 2014;35:79–90.
4. Röösli M, Lagorio S, Schoemaker MJ, Schüz J, Feychting M. Brain and Salivary Gland Tumors and Mobile Phone Use: Evaluating the Evidence from Various Epidemiological Study Designs. Annu Rev Public Health. 2019;40:221–38.
5. Lawlor DA, Tilling K, Smith GD. Triangulation in aetiological epidemiology. Int J Epidemiol. 2016;45:1866–86.
6. Benson VS, Pirie K, Reeves GK, Beral V, Green J, Schüz J. Mobile phone use and risk of brain neoplasms and other cancers: Prospective study. Int J Epidemiol. 2013;42:792–802.
7. Frei P, Poulsen AH, Johansen C, Olsen JH, Steding-Jessen M, Schüz J. Use of mobile phones and risk of brain tumours: Update of Danish cohort study. BMJ. 2011;343:d6387.
8. Karipidis K, Elwood M, Benke G, Sanagou M, Tjong L, Croft RJ. Mobile phone use and incidence of brain tumour histological types, grading or anatomical location: A population-based ecological study. BMJ Open. 2018;8:e024489.
9. Lahkola A, Salminen T, Auvinen A. Selection bias due to differential participation in a case-control study of mobile phone use and brain tumors. Ann Epidemiol. 2005;15:321–5.
10. Vrijheid M, Armstrong BK, Bédard D, Brown J, Deltour I, Iavarone I, et al. Recall bias in the assessment of exposure to mobile phones. J Expo Sci Environ Epidemiol. 2009;19:369–81.
11. Vrijheid M, Richardson L, Armstrong BK, Auvinen A, Berg G, Carroll M, et al. Quantifying the Impact of Selection Bias Caused by Nonparticipation in a Case-Control Study of Mobile Phone Use. Ann Epidemiol. 2009;19:33–41.
12. Vrijheid M, Cardis E, Armstrong BK, Auvinen A, Berg G, Blaasaas KG, et al. Validation of short term recall of mobile phone use for the Interphone study. Occup Environ Med. 2006;63:237–43.
13. Vrijheid M, Deltour I, Krewski D, Sanchez M, Cardis E. The effects of recall errors and of selection bias in epidemiologic studies of mobile phone use and cancer risk. J Expo Sci Environ Epidemiol. 2006;16:371–84.
14. Olsson A, Bouaoun L, Auvinen A, Feychting M, Johansen C, Mathiesen T, et al. Survival of glioma patients in relation to mobile phone use in Denmark, Finland and Sweden. J Neurooncol. 2019;141:139–49.
15. Bowling A. Mode of questionnaire administration can have serious effects on data quality. J Public Health (Bangkok). 2005;27:281–91.
16. Hardell L, Mild KH, Carlberg M. Case-control study on the use of cellular and cordless phones and the risk for malignant brain tumours. Int J Radiat Biol. 2002;78:931–6.
17. Hardell L, Hallquist A, Mild KH, Carlberg M, Påhlson A, Lilja A. Cellular and cordless telephones and the risk for brain tumours. Eur J Cancer Prev. 2002;11:377–86.
18. Hardell L, Carlberg M, Söderqvist F, Mild KH. Case-control study of the association between malignant brain tumours diagnosed between 2007 and 2009 and mobile and cordless phone use. Int J Oncol. 2013;43:1833–45.
19. Huss A, Egger M, Hug K, Huwiler-Müntener K, Röösli M. Source of funding and results of studies of health effects of mobile phone use: Systematic review of experimental studies. Environ Health Perspect. 2007;115:1–4.
20. van Nierop LE, Röösli M, Egger M, Huss A. Source of funding in experimental studies of mobile phone use on health: Update of systematic review. Comptes Rendus Phys. 2010;11:622–7.