Update on Current Knowledge of RF Safety 2021 (The National Register of RF Workers annual seminar, held in collaboration with Cambridge Wireless on the 28th April)

I presented an update on the epidemiology of radiofrequency radiation (with some links to the 5G debate) to The National Register of RF Workers on April 28 2021. Unfortunately, given the scope of the topic and the allocated timeslot, the update had to be very broad-brush. All information about the event, including the presentations of the other speakers, can be found here.

The whole seminar can be watched online on YouTube:

The slides of my presentation are also available from the first link, but for the purpose of curation I have also added them to my blog.

If you are interested, have a look at the slides, which can be downloaded from the link below. Feel free to comment in the Comments section below.

(5 of 3) All’s well that ends well… (Letter to the Editor re Choi et al.)

In a series of blog posts I highlighted how it was possible to block a Letter to the Editor that Professor Martin Röösli and I wrote regarding a recent systematic review and meta-analysis of mobile phone use and the risk of several different cancer types combined. We had some concerns about the methodology of the review and the way some of the data were interpreted, and we pointed out missing data on Conflicts of Interest (CoI) that led to erroneous conclusions about the impact of CoI.

For reference, here are the links to the original paper and the blog posts:

original Choi et al review

Post of our Letter

Peer Review 1

Peer Review 2

Discussion of ‘white hat bias in RF/cellphone health research’

Surprisingly, after we thought this whole saga had reached its end, the editors of the International Journal of Environmental Research and Public Health (IJERPH) got in touch again. Having originally rejected our Letter, they had received a further Letter (or Letters, I don’t know) also expressing concerns about what was done in this review. The Journal had therefore decided to publish our Letter after all, with the other(s) to follow. The final version of our Letter and the peer reviews can now be found here (note this is slightly modified from the version posted on this blog previously):

Comment on Choi, Y.-J., et al. Cellular Phone Use and Risk of Tumors: Systematic Review and Meta-Analysis. Int. J. Environ. Res. Public Health 2020, 17, 8079

It’s great to see that in the end scientific debate was victorious. It remains a shame, however, that it took several months, an initial rejection, and at least one more Letter highlighting concerns with the Choi et al. review to convince the editors of IJERPH that this was the correct approach to science.

CLASSIC (back by popular demand): 5G and COVID-19: Fact or Fiction?

Below you will find an article I posted previously on my FunPolice blog which, as you may know, I had to stop because ‘.eu’ domains were no longer allowed in the UK post-Brexit (one of the smaller issues, I admit). This one is back by popular demand since, surprisingly, this conspiracy theory is still a thing (Feb ’21)…

Since the COVID-19 outbreak, conspiracy theories have been going around social media about links between COVID-19 and the introduction of the fifth generation of wireless communications technologies (i.e. 5G). Some even claim that exposure to 5G is the real cause of the pandemic; either because it causes COVID-19 or because the pandemic is just a cover-up by governments and industry to hide that they have switched on 5G. As ridiculous as that sounds, this has been gaining ground and has been directly linked to the burning down of communication masts (5G or not…) in the UK and elsewhere. There is an interesting discussion as to why this conspiracy theory has become popular <update: link now gone>. In terms of scientific evidence, there is none that 5G causes COVID-19, and the ‘it’s a 5G cover-up by the world government’ theory is too silly to discuss. There are some studies out there, mainly based on isolated cells and some small animal studies, that suggest radiofrequency radiation (from mobile phones) can affect the immune system. The strength of the scientific basis for this is best summarized by Professor Carpenter, well known in ‘EMF world’:

Another well-known name in ‘EMF world’, Prof Leszczynski, highlighted Martin L. Pall and Arthur R. Firstenberg as two of the main culprits who, amongst other misunderstandings of the scientific evidence, fuelled the flames of this conspiracy <link>. Moreover, and this is an important point, these individuals and groups moved an entirely reasonable question about some of the gaps in knowledge, and enquiries for more research in this area, towards the crackpot science bin that also holds antivaxxers, chemtrails, homeopathy and the like. It is relatively easy to understand why this theory was always going to be highly implausible. Some of the countries hardest hit by COVID-19 were Iran, Spain and France, none of which, as far as I know, had 5G. Unfortunately, all this social media hype drew in some others who, given their training, really should know better. Dr Magda Havas, who previously peddled another failed idea, ‘dirty electricity’, is one of these. She recently published ‘Is there an association between covid-19 cases/deaths and 5G in the United States?’ on her blog (here) (update Feb 21: it seems the page has now been deleted from her website). And of course, this got enthusiastically shared around twitter (and possibly other social media). Like many of these studies, it is based on simple ecological (i.e. group-level) correlations of two variables: in this particular case a direct comparison of the number of COVID-19 cases and deaths in US States where 5G has been activated with US States that don’t have 5G (yet, I suppose). And the conclusion seems quite straightforward, as shown in a Table copied from her blog:

The same number of tests had been done in both sets of States, but the number of cases in US States with 5G is almost twice as high as in States without 5G, and the number of deaths is even 126% higher. She also conducted a t-test to show that this difference was statistically significant (interestingly, this only holds for a 1-tailed test, indicating that she completely discounts the possibility of 5G having a beneficial effect). Let’s have a look at the raw data on which this is based (Excel table also copied from her blog):
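For readers unfamiliar with the 1-tailed versus 2-tailed distinction, here is a minimal sketch. The per-state death counts below are invented for illustration (they are not the numbers from Havas’s table), and I use a standard-library normal approximation to the t distribution to keep the example self-contained:

```python
from statistics import mean, stdev, NormalDist
import math

# Hypothetical per-state death counts, for illustration only
with_5g = [150, 180, 90, 200, 60, 170, 130, 110]
without_5g = [70, 90, 40, 120, 30, 80, 100, 50]

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / math.sqrt(va + vb)

t = welch_t(with_5g, without_5g)
# Normal approximation to the t distribution (stdlib-only shortcut)
p_two = 2 * (1 - NormalDist().cdf(abs(t)))
# A one-tailed test halves the p-value, but is only defensible if the
# direction of the effect was specified *before* looking at the data
p_one = p_two / 2 if t > 0 else 1 - p_two / 2
```

The point is simply that the one-tailed p-value is half the two-tailed one, so a result can cross the 0.05 threshold one-tailed while failing it two-tailed.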

Now, I have only been to a couple of States in the United States, but eye-balling this Figure suggests to me that 5G was not introduced ‘at random’ across the US. Again, I haven’t been there, but Alabama, Mississippi and Kansas just seem different from, say, Florida, New York and California. That also makes sense from a business point of view, because why would you go through the effort of investing in building up a new technological network in a place that is largely agricultural and where barely anybody lives? These kinds of ecological correlation analyses are always quite difficult to interpret, in that it is not too difficult to come up with other reasons why you would observe this correlation. On twitter, my first guess – given the above idea of where you would introduce such a new technology – was that the above correlation was confounded by ‘urbanization’.

Magda Havas was kind enough to link to the primary sources that she used (here and here), so it was straightforward to obtain the same data (well, not completely the same: the COVID-19 numbers had been updated so I will be using data from a couple of days later). That gave me this Figure for COVID-19 related deaths (I am not doing tests or cases because these are too unreliable and directly result from various policies rather than the disease itself), which is pretty much identical to the Figure above:

The mean number of deaths in States with 5G was 136 and in those without 5G 60. This gives an excess of 127%; nearly identical to the original data. So starting from pretty much the same point of departure, I ran a basic log-rate model (because the outcome is the number of deaths per million citizens, this is the correct model specification; I don’t think Magda Havas did that in her T-test above though), and got an 82% higher death rate in States with 5G compared to those without (2-tailed p-value ~ 0.06). This is somewhat lower than the straightforward comparison of averages, but as mentioned above based on a more appropriate model…and it still shows a serious excess mortality risk. I hypothesized that this correlation was confounded by something else and thought that ‘urbanization’ was a likely factor. Urbanization rates for US States are easy to obtain (don’t worry, I will provide the dataset and R script at the end of this blog for you to play with), and I ran that model.
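As an aside, for a log-rate model with a single binary covariate, the fitted rate ratio has a closed form, so the unadjusted comparison can be sketched with round toy numbers (the totals below are invented for illustration, not the actual state data):

```python
import math

# Toy totals, for illustration only: deaths and population (millions)
deaths_5g, pop_5g = 1360, 10.0
deaths_no5g, pop_no5g = 600, 10.0

# In a Poisson log-rate model with one binary covariate, the fitted
# rate ratio is simply the ratio of the two crude death rates
rate_ratio = (deaths_5g / pop_5g) / (deaths_no5g / pop_no5g)
log_rr = math.log(rate_ratio)

# Wald standard error of a log rate ratio from two Poisson counts
se = math.sqrt(1 / deaths_5g + 1 / deaths_no5g)
z = log_rr / se
phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))  # standard normal CDF
p_two_sided = 2 * (1 - phi)
```

This is the correct scale for count data: inference is done on the log of the rate ratio, not on the raw difference in averages.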

The big reveal of this blog is that when adjusting the correlation reported by Havas for the State urbanization rate, the correlation mostly disappears: the excess risk is reduced to about 34% and the p-value is 0.36 (or, for the connoisseur, the observed difference between 5G and non-5G States was not statistically significant).

Nonetheless, although not statistically significant, there is still an excess risk of 34%, which lies between -28% and +147% with 95% certainty. It is straightforward to look into this a bit more. There is more State-wide data available that could potentially contribute to this difference. To explore this, I also linked the data for median household income in each State, the percentage of non-Hispanic whites, each State’s median age (which has a non-linear correlation with the COVID-19 death rate), and the population density. Adjusting for all those factors results in a 48% (-19%, +147%) excess mortality in States with 5G, but this difference again was not statistically significant (p-value ~ 0.21). In fact, there are a couple of States that are clearly different from the rest (outliers: Utah, New York, District of Columbia and Hawaii). When these are removed from the analyses, the difference between 5G and non-5G States is less than 1%, in favour of the 5G States (p-value ~ 0.97).
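The confounding mechanism itself is easy to reproduce end-to-end with simulated data. The sketch below is not my actual analysis (that used R and the real state data); everything here, including the hand-rolled IRLS Poisson fitter, the number of states and the effect sizes, is invented purely to show how a crude 5G effect appears and then vanishes after adjusting for urbanization:

```python
import numpy as np

def fit_poisson(X, y, offset, n_iter=25):
    """Log-linear Poisson regression via iteratively reweighted least squares."""
    # start from an ordinary least-squares fit on the log scale
    beta = np.linalg.lstsq(X, np.log(y + 0.5) - offset, rcond=None)[0]
    for _ in range(n_iter):
        eta = X @ beta + offset
        mu = np.exp(eta)
        z = eta - offset + (y - mu) / mu   # working response
        W = mu                             # working weights
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

# Simulated states: deaths depend ONLY on urbanization, and 5G is
# rolled out in the more urbanized states -- a pure confounding setup
rng = np.random.default_rng(42)
n = 50
urb = np.linspace(0.3, 0.95, n)                   # urbanization rate
g5 = (urb > 0.6).astype(float)                    # 5G goes to urban states
offset = np.log(np.ones(n))                       # population: 1 million each
y = rng.poisson(50 * np.exp(3.0 * (urb - 0.3)))   # no true 5G effect at all

X_crude = np.column_stack([np.ones(n), g5])
X_adj = np.column_stack([np.ones(n), g5, urb])
rr_crude = np.exp(fit_poisson(X_crude, y, offset)[1])
rr_adj = np.exp(fit_poisson(X_adj, y, offset)[1])
# rr_crude is well above 1; rr_adj collapses back towards 1
```

Even though 5G has no effect whatsoever in this simulation, the crude rate ratio is large; adding urbanization to the model removes it, which is exactly the pattern seen in the real data.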

I suppose this is why Havas did not submit the work for peer review. She must have known that the first reviewer to look at this would point this out. I read an argument somewhere that apparently “we have to start somewhere”. This is of course correct: we have to start somewhere, but we shouldn’t show it to anyone else until we have done a good job. I am sure Havas is aware of bias, confounding and multivariable statistical analyses, so the question is “Why did she decide to put this stuff online, if not to join Pall and Firstenberg and fuel the fire of the conspiracy?”.

*

My blog could end here, but there is some more academic work that is easily done and will improve our inferences. We can use a method called ‘inverse propensity weighting’. There is a lot of literature on this, but in (very short) summary: each State is weighted by the inverse of the probability with which the mobile phone operators would have selected that State for 5G. The analysis is then balanced in such a way as if the operators had allocated 5G randomly. That is really beneficial because it makes the study look somewhat like the gold standard for such tests: the randomized controlled experiment (at least with respect to the variables in the exposure propensity model, and yes, it is a bit more complex. Point taken). Anyhow, I did this using all the factors above as well as, additionally, ‘population size’ (because I thought maybe operators went for the largest States first).
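The weighting step can be sketched in a few lines. Again, this is simulated toy data, not the real analysis: 5G adoption is made to depend stochastically on urbanization, death rates depend only on urbanization, and the propensity model is a simple logistic regression fitted by Newton-Raphson:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
urb = rng.uniform(0.2, 0.95, n)                    # urbanization (toy data)
p_true = 1 / (1 + np.exp(-8 * (urb - 0.6)))        # operators favour urban states
g5 = rng.binomial(1, p_true)                       # observed 5G roll-out
rate = rng.poisson(50 * np.exp(2 * (urb - 0.2)))   # deaths/million, urb-driven only

# 1. propensity model: logistic regression of 5G status on urbanization
X = np.column_stack([np.ones(n), urb])
beta = np.zeros(2)
for _ in range(40):                                # Newton-Raphson iterations
    p = 1 / (1 + np.exp(-(X @ beta)))
    W = p * (1 - p)
    beta = beta + np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (g5 - p))
p = np.clip(1 / (1 + np.exp(-(X @ beta))), 0.02, 0.98)  # guard against huge weights

# 2. inverse-propensity (Hajek) weighted mean rates in each arm
w1 = g5 / p
w0 = (1 - g5) / (1 - p)
rr_crude = rate[g5 == 1].mean() / rate[g5 == 0].mean()
rr_ipw = (np.sum(w1 * rate) / np.sum(w1)) / (np.sum(w0 * rate) / np.sum(w0))
```

The weighting up-weights the rare ‘rural state with 5G’ and ‘urban state without 5G’ observations, mimicking a randomized allocation with respect to urbanization, and the crude rate ratio shrinks back towards 1.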

Because this is now kind of a randomized experiment, the difference can be directly compared without adjustment for other factors (like Havas did in the first Table in this blog). And wow, the effect has reduced to as little as +10% with a p-value of 0.75.

A final further improvement is ‘doubly robust adjustment’, and these “best” results show there is actually no evidence of any causal effect from 5G anymore: the difference is less than 1% and the p-value is 0.975.
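For the curious, a doubly robust (AIPW) estimator combines the propensity model with an outcome model, and remains consistent if either one of the two is correctly specified. The sketch below uses the same kind of simulated toy data as above (not the real state data), with a deliberately simple linear outcome model:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
urb = rng.uniform(0.2, 0.95, n)
p_true = 1 / (1 + np.exp(-8 * (urb - 0.6)))
g5 = rng.binomial(1, p_true).astype(float)
rate = rng.poisson(50 * np.exp(2 * (urb - 0.2))).astype(float)

# propensity model: logistic regression via Newton-Raphson
Xp = np.column_stack([np.ones(n), urb])
b = np.zeros(2)
for _ in range(40):
    p = 1 / (1 + np.exp(-(Xp @ b)))
    b = b + np.linalg.solve(Xp.T @ ((p * (1 - p))[:, None] * Xp), Xp.T @ (g5 - p))
p = np.clip(1 / (1 + np.exp(-(Xp @ b))), 0.02, 0.98)

# outcome model: OLS of the death rate on 5G status and urbanization
Xo = np.column_stack([np.ones(n), g5, urb])
g = np.linalg.lstsq(Xo, rate, rcond=None)[0]
m1 = g[0] + g[1] + g[2] * urb      # predicted rate if every state had 5G
m0 = g[0] + g[2] * urb             # predicted rate if no state had 5G

# AIPW ("doubly robust") estimates of the two counterfactual mean rates
mu1 = np.mean(g5 * rate / p - (g5 - p) / p * m1)
mu0 = np.mean((1 - g5) * rate / (1 - p) + (g5 - p) / (1 - p) * m0)
rr_crude = rate[g5 == 1].mean() / rate[g5 == 0].mean()
rr_dr = mu1 / mu0
```

Here the outcome model is even mildly misspecified (linear rather than log-linear), yet because the propensity model is correct the doubly robust rate ratio still lands close to the true value of 1, well below the crude comparison.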

With this, I think we can comfortably say that this conspiracy theory has been debunked.

These analyses can of course be done with better data and possibly better models, so that’s why the dataset and R script can be downloaded here and here. It is also quite straightforward to repeat these analyses for other countries. I am looking forward to your contributions: just email or DM me on twitter if you have done this.

*

Addendum

Someone combining the spatial data on COVID-19 and 5G and looking somewhat amateurishly at the correlation was always going to happen, and it would have been straightforward to do it correctly. We suggested this about 2 weeks ago to the mobile phone operators, which would have addressed the issues we are facing now. Unfortunately, the operators didn’t trust independent research. In fact, they hired a ‘product defence’ consultancy agency to do a review for them – very disappointing (and hopefully a lesson learned).

CLASSIC: Medically unexplained symptoms, electricity, and the general population

This article was published on my previous blog ‘The Fun Police’ back in 2015. I may not necessarily agree with everything in the article anymore, but it remains a post that may be of interest to some…

Occasionally, new papers get published about electrohypersensitivity or, more specifically, idiopathic environmental intolerance attributed to electromagnetic fields (IEI-EMF). I have written about this before, and we still do not know whether it exists or not (although, I suppose, that depends a bit on who you are talking to). Anyway, a new paper was published in the journal Bioelectromagnetics last month, entitled “Does electromagnetic hypersensitivity originate from nocebo responses? Indications from a qualitative study.” You can find the paper here <link>. In a nutshell, it is a qualitative study in which forty self-diagnosed electrohypersensitive people were interviewed and asked whether they first thought exposure to electromagnetic fields was bad and then got ill, or whether they first got ill and then thought it was because of electromagnetic fields. The first sequence of events would be a classic nocebo response; in other words, people think they get ill from something and subsequently become ill when they think they are exposed (the opposite of the well-known placebo effect, if you will). The second order of events supposedly indicates that the nocebo effect is not present and people really do get ill from the exposure to the electromagnetic fields. Looking through the paper, it turns out that for 25 of the 40 participants the EHS self-diagnosis was made over 2 years ago, so at that point one really needs to start wondering how helpful this study is: after all, it is trying to disentangle a set of events for which it is pretty difficult to establish exactly when they first occurred, and which most likely happened very close together in time. To be fair to the author, he (I think) opened up the complete book of tricks and skills required to get the best information possible in this case. I am, however, not very convinced that the approach itself is a very useful method to try and shed some light on this particular problem.
The author concludes that his results do not point to the nocebo effect as an explanation for electrohypersensitivity, because a majority of participants sought, and failed to obtain, medical assistance, and as a result started questioning the effects of electromagnetic fields in their environment on their health. I would say that’s rather dubious given the methodology used and the fact that of the 40, only 23 claimed to have never heard of electromagnetic fields before reaching the relevant stage in the attribution process, which seems very unlikely. Indeed, the results are pretty ambiguous and do not exclude the nocebo effect (in combination with self-attribution of the cause). I think what the author and I agree on is that electrohypersensitivity is a form of MUS (medically unexplained symptoms) which sufferers attribute to electromagnetic fields, without much evidence that this is the case…but I guess this is what the majority of researchers in the field believe (I’d say 97%, but that would get this muddled up with climate change). As also previously mentioned by many people, that does not mean this is not an illness; it’s not great suffering from this (and that is an understatement). Luckily, it seems cognitive behavioural therapy can help. So anyway, in summary, we have not learned much new from this paper. What it does provide, though, is a table of the symptoms this group of people attribute to electromagnetic field exposure from, primarily, wifi routers, mobile phone base stations, mobile phones, DECT phones and electric home appliances. That list shows an interesting, and very close, resemblance to a list of general subjective health symptoms that everybody occasionally suffers from…some more than others. In a 1999 paper from Norway, researchers developed a scoring system to get a handle on how often these kinds of problems occur in the normal, lay population.
You can find the paper here <link> [update 2021: this link does not work anymore. Unfortunately, I do not know what it was]. So let’s take a step back before going all crazy about causality. I have copied the table from the electrohypersensitivity group, calculated the percentages, and added the same numbers (averaged over men and women) from the normal, lay population, just to see if we are actually talking about a problem at all. They don’t all match up, but have a look below:

I don’t know what you think about this comparison, but given that this was a self-selected group of electrohypersensitive people, I was quite surprised how comparable the numbers are. What it really looks like is that this is a group of people not that dissimilar from the general population, who attribute things everybody is occasionally inconvenienced by to a specific exposure (electromagnetic fields in this case). Having said that, as a group they could do with better sleep! This table does not cover the frequency of the symptoms, which may or may not be different from the normal population. Presumably, once the connection is made, these occur more often if exposure is perceived (although blinded trials have not shown any direct link).

In conclusion, unfortunately we have not learned anything new from this paper except, maybe, that IEI-EMF sufferers are not that dissimilar from everyone else. And that, I would say, is quite informative…

CLASSIC: A medieval hermit, a religious nut and a mobile phone enter a pub……….

This article was originally published on my old blog OEHScience in 2014. Since then, my knowledge about ‘Electrohypersensitivity’ has improved and my thoughts on the topic have changed. However, I believe the following article remains of interest. I would also very much appreciate your thoughts on this: there is a discussion section below the article…

Sometimes it is a very good idea to look at a problem from a completely different and sometimes unexpected angle. That’s what this post is all about.

It’s about idiopathic environmental intolerance, or, in medical terminology, “I don’t feel well and my doctor has no idea what it is, but I am pretty sure it’s this thing I am exposed to”. Often, patients claim this is caused by low-level mixtures of chemicals they get exposed to, such as plastics, pesticides, scented products, petroleum products and such, and the symptoms are vague and non-specific (fatigue, headaches, inflammation of joints, and the lot…). Another exposure that is often mentioned is non-ionizing radiation; be it extremely low frequency fields from power distribution, RF from the use of mobile phones or nearby towers, or even from smart meters and the like (yup, in a way that dreaded dirty electricity story again). Strongly implying, maybe unjustifiably, that the suspected causal agent is well established, the most well-known subgroups of the idiopathic environmental intolerance spectrum are also known as multiple chemical sensitivity (MCS) and electro-hypersensitivity (EHS), depending on the suspected (or, well, known…depending on who you talk to) exposure. But if we go back to the broader ‘all-inclusive’ definition, then in summary idiopathic environmental intolerance describes people getting ill but nobody really knows from what (it could even differ from case to case), except for the sufferers who “know” what triggers the effect, and who will often experience symptoms when they see the cause, such as a mobile phone, being used in their vicinity. It may be quite difficult to do these studies because the exposures may differ from case to case, very low concentrations may already be enough for effects to be triggered (like with an allergic reaction), and there may be a delay between the exposure and measurable effects…for example. However, the problem is that in what is generally considered the “gold standard” in these kinds of cases – blinded randomized controlled trials – patients respond as often and as strongly to placebos. So that’s a bit unfortunate.
And then there are the facts that its incidence seems to increase with media attention and that at least a proportion of sufferers respond to some form or another of psychotherapy, implying that this may be of psychological, rather than biological/medical, origin.
Anyway, I don’t know the correct answer here…I don’t even know enough about the subject to make a useful educated guess about causality. I know things can get quite heated when MCS and EHS are discussed, so I will stay away from any claims. Luckily, you may remember from the first sentence that this was not what this post was going to be about…it was going to be about looking at a problem from a completely different angle. It’s about idiopathic environmental intolerance though, so I just needed to colour in the background a bit…

I came across this 2012 study entitled “Taking refuge from modernity: 21st century hermits” by Boyd, Rubin and Wessely from King’s College London, published in The Journal of the Royal Society of Medicine. You can find the abstract and, if you have access to it, the full-text pdf here <link>. Instead of doing a randomized controlled trial, case-control, or other quantitative epidemiological study, or maybe going for the qualitative approach and interviewing sufferers, what struck them was that people suffering from MCS and EHS had stories that were not unique to recent times (more specifically, after the widespread introduction of electricity and, much later, mobile phones) but that had striking similarities to stories from cases reported throughout history – just with a substitution of the purported causal agent. As it turns out, literature and historical texts have made reference to hermits and recluses who turned their backs on society for a multitude of reasons, including religious ones, personal events and societal dissatisfaction. The authors make the case that this may well imply that the underlying psychological processes in the isolation of oneself from society (e.g. in the most severe sufferers from MCS and EHS) should, and I quote, “be seen less as a unique and specific reaction to a hitherto unknown hazard, but more as part of a longer tradition of isolation from an impure world.” So to explore this they identified modern and historical cases to compare their views and beliefs. Pretty cool in my opinion, and an approach I had not seen before…

They identified six EHS and MCS cases from the late 20th and early 21st century and four historical cases from the 3rd, 5th, 17th and early 20th centuries. Case descriptions are found in the paper, but in short the contemporary cases include one MCS case, two EHS cases, and three combined MCS & EHS cases, while the historical cases were Noah John Rondeau (1929-1950; dissatisfied with society), Roger Crab (1652-1657; dissatisfied with society and religious reasons), St Simeon Stylites (415-459; religious reasons) and St Anthony (275-356; religious reasons).
Although the contemporary cases all reported that an important factor for seclusion was the experience of symptoms, they also indicated that disquiet about the ‘ill’ society that resulted from modernity was an important factor; as such, one saw sufferers as ‘an early warning system’. The historical cases similarly described ‘unsatisfaction with the world and its trends’, problems with the ‘slavery of industrialism’, and, earlier, that ‘the body of England has become a monster’; and St Anthony derided the world where ‘everything is sold at its price’…
Both contemporary and historical cases also felt that they had no choice in making the decision to leave society, and that ‘nobody would like this if they had a choice’. They all also felt they were in a constant struggle, whether against evil spirits (the more historical cases, one would suppose), restrictions of society laid out by those in power (everyone, really), or ubiquitous chemicals or radiation (the contemporary cases).

Of course there are many methodological problems with this approach, most notably the small number of cases, but it is a very interesting, left-field approach to a question many people would like a definite answer to. I don’t know whether this could be helpful in other diseases, but I suspect it may well be something that only really works for EHS and MCS. The similarities between the MCS and EHS sufferers and the historical hermits are striking and may suggest some form of causality, if not via a biological/medical pathway. It may provide some leads for treatment of the most severe EHS and MCS cases, and definitely for more research.
Either way, luckily it’s also an interesting read – much more detailed than I have done here – so if you have a spare moment I would recommend it as a read (or, in a more contemporary way…if this were facebook I would have “liked” it).

Peer-review misuse and ‘white hat bias’ in RF research (…a follow-up from last week’s 3-post blog series)

This blog post is part of a series of 4 posts. To directly link to other posts, click: post 1, post 2, post 3, post 4

Last week I published a series of 3 blog posts: the first post was a Letter to the Editor that @MartinRoosli and I wrote to the International Journal of Environmental Research and Public Health, and the 2nd and 3rd posts were the anonymous peer reviews on the basis of which the editors of the journal decided not to publish the Letter.

Since the publication of these 3 posts, interestingly, we have received emails from others who had Letters or Comments discussing errors, limitations or weaknesses in “peer-reviewed” papers on health and cellphones rejected in a similar fashion; some of those in the same journal. None of these were ever published (feel free to send them, and I will publish them here). We also received an email from a peer reviewer of the IJERPH meta-analysis, who had recommended ‘rejection’ of the manuscript based on some of the same methodological issues we pointed out in our Letter, but this recommendation was ignored.

This is a worrying development. The quality of the scientific process relies on debate of the strengths and weaknesses of what are inevitably imperfect studies. My experience with the Letter, together with the experiences of others, indicates a worrying attempt to stifle criticism and healthy scientific debate in research on the adverse effects of radiofrequency radiation and mobile phone use. In contrast, the null studies in this area tend to receive commentaries and letters as well; something we know for a fact because, to my knowledge, these have all been published. This is a good thing. As scientists we need to defend our work from criticism, and where we cannot do this we need to take the criticism into account in the re-interpretation of our findings. The more letters and commentaries, therefore, the better.

The aim and result of the current situation is, of course, that to the untrained eye glancing over the literature, say that of an investigative journalist, the impression is created that there is clear evidence of adverse health effects from mobile phones. Studies showing health effects, regardless of quality, have no accompanying commentaries discussing weaknesses, while null studies do include a set of these. From there, it is only a small step to infer that, because arguments of large impacts on human health from non-ionizing radiation generally do not triangulate with the conclusions of scientific advisory boards and health councils, these narratives must have been silenced. In fact, the experiences above suggest the reverse may well be the case and that, in part, the dominant narrative warning of large population health impacts from mobile phones may be driven by ‘cherry picking’ of evidence and publication bias.

Interestingly, when copying across old posts from my previous blog it turned out I had already experienced a similar attempt at blocking science not agreeing with ‘the narrative of danger’ from being published in the closely-linked world of ‘Dirty Electricity’. I had completely forgotten about this, but it shows this abuse of the peer-review process has been going on for a while now. Have a look at that blog here.

This development negatively impacts the quality of studies in a notoriously difficult field of environmental observational epidemiology. It is a well-known narrative that ‘real science’ in this area is distorted by industry influence, and although this may well have played a part, or may still do (at least to some extent), it is disheartening that activist-scientists are using similar strategies to those allegedly used by industry to try and silence discordant voices.

It seems unlikely we are talking about some coordinated effort here (but I may be wrong). It does, however, point to some underlying idea that this is acceptable practice because of the “greater good”. This is known as ‘white hat bias’:

…a purported “bias leading to the distortion of information in the service of what may be perceived to be righteous ends”, which consists of both cherry picking the evidence and publication bias…the motivation behind this bias is described in terms of “righteous zeal, indignation toward certain aspects of industry”

I am sure activist-scientists see this differently, and there is a Comments section below this post for your thoughts. It is, however, obvious that, when looking at this objectively, this is not a good development for the field. Importantly, eventually this will lead to the downplaying or even ignoring of epidemiological findings compared to the input from other disciplines such as medicine, physics and engineering when public health policies related to RF/EMF environmental exposures are developed…

Back to the set of anonymous peer review reports. It is not overly difficult to form educated guesses as to who these particular anonymous peer reviewers might have been, which further points to problems of ‘white hat bias’ distorting the science. The fact that the authors of the original meta-analysis on tumour risk and mobile phone use, about which we wrote our Letter, chose not to engage at all, despite having had an invitation to publish their response (which I have seen) alongside the Letter and peer review reports, is similarly not very encouraging in terms of open scientific debate and the future of the research area more generally.

(1 of 3). Response to: Choi et al. “Cellular Phone Use and Risk of Tumors: Systematic Review and Meta-Analysis”

This blog post is part of a series of 4 posts. To directly link to other posts, click: post 1, post 2, post 3, post 4

Professor Martin Röösli (@MartinRoosli) and I wrote a Letter to the Editor of the International Journal of Environmental Research and Public Health about a recent systematic review and meta-analysis looking at mobile phone use and tumour risk. As is customary, the authors of that paper were invited to respond to our concerns, which they did. Surprisingly, the journal then decided not to publish the Letter and the response to it. This is an incomprehensible editorial decision (although rejecting such letters seems to be becoming more common, possibly because it costs journals money and often highlights problems with their peer review), and I assume that the authors of the original paper were just as surprised and irritated, having spent a considerable amount of time writing their response.

The journal did, however, also include the peer reviews of the Letter and the response in their rejection email. To say that these were inappropriate and clear evidence of how the peer-review system can be abused by activist-scientists would be putting it mildly. Given this, we appealed the editorial decision, but as is common in such situations this was declined without explanation.

So, being unable to publish our Letter where it belongs, we decided to publish it here. Given that the reviews were anonymous (which enabled the reviewers to get away with this), we decided to publish these as well in a 3-post set. I hope that the authors of the response will agree to have their response published too, so that this will become a 4-post series.

In any case, below our Letter…

We welcome the updated systematic review and meta-analysis of case-control studies of mobile phone use and cancer by Choi et al., which was recently published in this journal [1]. Given the uncertainties that continue to surround the issue of radiofrequency radiation exposure and cancer risk, regular synthesis of the available epidemiological evidence remains important, and the synthesis published by Choi et al. provides a timely update. However, Choi et al. made several peculiar decisions in their synthesis which complicate the inferences that can be made, and which deserve further discussion.

Firstly, the main meta-analysis shown in Figure 2 in [1] combined case-control studies of different benign and malignant tumours, including those of the head, but also non-Hodgkin’s lymphoma and leukaemia, and provides one meta-analytic summary of these. It is not common practice to combine different outcomes with different aetiologies in one meta-analytic summary [2] and, given the substantial heterogeneity observed, it is highly questionable whether the common risk estimate for diseases with different aetiologies that Choi et al. try to combine in their meta-analysis exists. [additional note, added in blog only: the issue here relates to combining different, arbitrary, cancers, and does not imply RF can only have an effect on one endpoint. For example, ‘all cancers’ is often studied]. It would be more appropriate to conduct separate meta-analyses by type of tumour, and Choi et al. have indeed done these as well. These results are provided in the Online Supplement (Table S3) and do not provide summary evidence of excess tumour risk for any particular individual tumour type.

Choi et al. further presented subgroup analyses of studies conducted by Hardell et al., studies by the INTERPHONE consortium, and a group of miscellaneous case-control studies. They identify interesting differences between those three subgroups, and conduct further analyses to explore possible reasons for the observed differences. Interestingly, Choi et al. fail to notice the most obvious conclusion from these subgroup analyses: both the INTERPHONE-related studies and the miscellaneous studies are largely in agreement and do not point to an excess cancer risk from mobile phone use. Evidence of large excess cancer risks is almost exclusively based on the studies by the Hardell group, as already described in earlier meta-analyses [3, 4]. In fact, the relative excess risks of 90% (30%-170%) and 70% (4%-180%) reported by the Hardell group (Table 1 and Figure 2) associated with any mobile phone use are implausibly high, and do not triangulate [5] with evidence from other epidemiological sources, such as prospective cohort studies [6, 7] and incidence trends [8]. Incidence trend analyses are generally considered a weak study design but, in this specific case of a clear change in exposure of virtually the whole population, limited confounding factors that may change over time, and reliable cancer registries, incidence trends are important for evidence evaluation and plausibility considerations.

Even when exposure-response associations are observed (Table 3), and the INTERPHONE studies and miscellaneous studies provide relatively consistent estimates (Odds Ratios of 1.25 (0.96-1.62) and 1.73 (0.66-4.48), respectively) of some excess risk associated with an arbitrary cumulative call time of at least 1,000 hours, the evidence from the Hardell studies again provides an implausibly high Odds Ratio of 3.65 (1.69-7.85), out of line with all evidence from other sources. The INTERPHONE team have spent considerable effort trying to evaluate whether observed increased and decreased risks could be the result of recall and selection bias [9–13], and a recent study found some indication of reverse causality as an explanation for seemingly protective effects of mobile phone use [14]. It is therefore surprising that Choi et al. have not similarly discussed the likelihood of bias away from the null in the Hardell studies. Disregarding the implausible risk reported by the Hardell group, a summary risk point estimate based on all other case-control studies for 1,000+ cumulative hours of use would be in the order of 1.30-1.50, which triangulates much better with other lines of research.
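[additional note, added in blog only: to make the back-of-the-envelope nature of such a summary estimate concrete, here is a minimal sketch of standard fixed-effect inverse-variance pooling of odds ratios on the log scale, applied to the two non-Hardell subgroup estimates quoted above. This illustrates the general technique only; it is not the exact calculation behind the 1.30-1.50 range, which draws on the individual non-Hardell studies rather than the two subgroup summaries.]

```python
import math

def pooled_or(or_cis):
    """Fixed-effect inverse-variance pooling of odds ratios on the log scale.

    or_cis: list of (OR, lower 95% CI, upper 95% CI) tuples.
    Returns the pooled OR.
    """
    num, den = 0.0, 0.0
    for odds_ratio, lo, hi in or_cis:
        log_or = math.log(odds_ratio)
        # SE of log(OR) recovered from the width of the 95% CI
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1 / se**2          # inverse-variance weight
        num += w * log_or
        den += w
    return math.exp(num / den)

# ORs for 1,000+ cumulative call hours: INTERPHONE and miscellaneous subgroups
print(round(pooled_or([(1.25, 0.96, 1.62), (1.73, 0.66, 4.48)]), 2))  # → 1.28
```

The pooled estimate sits close to the INTERPHONE figure because the much wider confidence interval of the miscellaneous subgroup gives it far less weight.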

Choi et al. argue that a plausible explanation for the observed differences could be that the Hardell studies are of better quality than those in the other two groups, based on individual appraisal of each study using the Newcastle-Ottawa Scale and the National Heart, Lung, and Blood Institute quality assessment tool for case-control studies (Tables S1 and S2). The differences in rating within and between the three groups of case-control studies are minimal, but Choi et al. rated the methodological quality of the Hardell studies a little higher, mainly because they had very high response rates and because they were mostly classified as having excellent, blinded, assessment of exposure compared to the INTERPHONE and miscellaneous studies. This seems to be an error or misunderstanding in the use of these criteria. First, achieving a high participation rate is an asset in an epidemiological study. However, achieving a participation rate of over 80% in a population-based case-control study in Western countries, as reported in the Hardell papers, is highly unusual nowadays. Regardless, one would expect that in a study with such high participation rates the proportion of mobile phone users among controls should closely match the official subscriber statistics, which was not the case for the Hardell studies [4]. Thus, serious concerns remain about how these high participation rates were achieved or calculated.

Secondly, the blinding concept as rated by Choi et al. is inappropriate. Exposure assessment in the INTERPHONE studies was conducted by trained interviewers, who may have been susceptible to interviewer bias because they could indeed probably not be blinded to case-control status [15]. However, it is highly unlikely this would have resulted in greater bias compared to the Hardell studies, in which exposure assessment was based on questionnaire-based self-reporting of mobile phone use by cases and controls who, by definition, are not blinded to their disease status. Methodological work suggests that both face-to-face interviews and self-administered questionnaires are susceptible to various ‘mode of administration’ biases, but that exposure assessment based on self-administered questionnaires is generally more susceptible to recall bias [15]. As such, the methodology of the Hardell studies should, at most, have been classified as being of comparable quality to the other case-control studies in this review.

Choi et al. further looked at source of funding as a possible explanation for observed differences, but provided erroneous funding information. Only the Hardell studies received direct funding from interest groups such as the telecom industry [16, 17] and pressure groups [18], but this was not reported by Choi et al. In contrast, the INTERPHONE studies’ industry funding was channelled through a well-established firewall model to avoid influence of the funders on the researchers. There is empirical evidence from human experimental studies that such a funding structure has not resulted in biased study results but in higher study quality, whereas direct funding by interest groups may produce biased results [19, 20]. Further, the three study groups each contribute only to either ‘funded by industry’ or not (according to Choi et al.), which makes this analysis non-informative.

Given that observational epidemiological studies are susceptible to various biases, which can result in under- as well as over-reporting of true effects, rigorous evaluation is needed to understand why the studies by the Hardell group provide results that differ from the majority of other case-control studies and from other strands of the epidemiological literature. In the absence of direct evidence for any causes of these differences, triangulation of epidemiological studies susceptible to different types of biases [15], such as case-control studies, cohort studies and ecological studies of cancer incidence, as well as with evidence from animal and laboratory studies, is warranted. Although some uncertainties remain, most notably for the highest-exposed users and for the new GHz frequencies used in 5G, we can be reasonably sure that the evidence has converged to somewhere in the range from an absence of excess risk to a moderate excess risk for a subgroup of people with the highest exposure. Importantly, over time, the evidence has reduced the uncertainty regarding the cancer risk of mobile phone use.

Funding

No external funding was obtained for this publication.

Author contributions

FdV drafted the first outline. MR and FdV collaborated on subsequent iterations, and both approved the final version.

Conflicts of Interest

The authors declare no Conflicts of Interest.

MR’s research is entirely funded by public or not-for-profit foundations. He has served as advisor to a number of national and international public advisory and research steering groups concerning the potential health effects of exposure to nonionizing radiation, including the World Health Organization, the International Agency for Research on Cancer, the International Commission on Non-Ionizing Radiation Protection, the Swiss Government (member of the working group “mobile phone and radiation” and chair of the expert group BERENIS), the German Radiation Protection Commission (member of the committee Non-ionizing Radiation (A6) and member of the working group 5G (A630)) and the Independent Expert Group of the Swedish Radiation Safety Authority. From 2011 to 2018, M.R. was an unpaid member of the foundation board of the Swiss Research Foundation for Electricity and Mobile Communication, a non-profit research foundation at ETH Zurich. Neither industry nor nongovernmental organizations are represented on the scientific board of the foundation.

FdV’s research is also funded by public or nonprofit organisations. He is partly funded by the National Institute for Health Research Applied Research Collaboration West (NIHR ARC West) at University Hospitals Bristol NHS Foundation Trust. He has in the past done consulting for the Electric Power Research Institute (EPRI), a nonprofit organisation, not related to the current publication. He is a member of the UK Government Independent Advisory Committee on Medical Aspects of Radiation in the Environment (COMARE).

References

1. Choi Y, Moskowitz J, Myung S, Lee Y, Hong Y. Cellular Phone Use and Risk of Tumors: Systematic Review and Meta-Analysis. Int J Environ Res Public Health. 2020;17:8079. doi:10.3390/ijerph17218079. https://www.mdpi.com/1660-4601/17/21/8079.

2. Borenstein M, Hedges LV, Higgins JPT, Rothstein HR. When Does it Make Sense to Perform a Meta‐Analysis? In: Introduction to Meta‐Analysis. 1st edition. Chichester, UK: John Wiley & Sons, Ltd; 2009. p. 357–64.

3. Lagorio S, Röösli M. Mobile phone use and risk of intracranial tumors: A consistency analysis. Bioelectromagnetics. 2014;35:79–90.

4. Röösli M, Lagorio S, Schoemaker MJ, Schüz J, Feychting M. Brain and Salivary Gland Tumors and Mobile Phone Use: Evaluating the Evidence from Various Epidemiological Study Designs. Annu Rev Public Health. 2019;40:221–38.

5. Lawlor DA, Tilling K, Smith GD. Triangulation in aetiological epidemiology. Int J Epidemiol. 2016;45:1866–86.

6. Benson VS, Pirie K, Reeves GK, Beral V, Green J, Schüz J. Mobile phone use and risk of brain neoplasms and other cancers: Prospective study. Int J Epidemiol. 2013;42:792–802.

7. Frei P, Poulsen AH, Johansen C, Olsen JH, Steding-Jessen M, Schüz J. Use of mobile phones and risk of brain tumours: Update of Danish cohort study. BMJ. 2011;343:d6387.

8. Karipidis K, Elwood M, Benke G, Sanagou M, Tjong L, Croft RJ. Mobile phone use and incidence of brain tumour histological types, grading or anatomical location: A population-based ecological study. BMJ Open. 2018;8:e024489.

9. Lahkola A, Salminen T, Auvinen A. Selection bias due to differential participation in a case-control study of mobile phone use and brain tumors. Ann Epidemiol. 2005;15:321–5.

10. Vrijheid M, Armstrong BK, Bédard D, Brown J, Deltour I, Iavarone I, et al. Recall bias in the assessment of exposure to mobile phones. J Expo Sci Environ Epidemiol. 2009;19:369–81.

11. Vrijheid M, Richardson L, Armstrong BK, Auvinen A, Berg G, Carroll M, et al. Quantifying the Impact of Selection Bias Caused by Nonparticipation in a Case-Control Study of Mobile Phone Use. Ann Epidemiol. 2009;19:33–41.

12. Vrijheid M, Cardis E, Armstrong BK, Auvinen A, Berg G, Blaasaas KG, et al. Validation of short term recall of mobile phone use for the Interphone study. Occup Environ Med. 2006;63:237–43.

13. Vrijheid M, Deltour I, Krewski D, Sanchez M, Cardis E. The effects of recall errors and of selection bias in epidemiologic studies of mobile phone use and cancer risk. J Expo Sci Environ Epidemiol. 2006;16:371–84.

14. Olsson A, Bouaoun L, Auvinen A, Feychting M, Johansen C, Mathiesen T, et al. Survival of glioma patients in relation to mobile phone use in Denmark, Finland and Sweden. J Neurooncol. 2019;141:139–49.

15. Bowling A. Mode of questionnaire administration can have serious effects on data quality. J Public Health (Bangkok). 2005;27:281–91.

16. Hardell L, Mild KH, Carlberg M. Case-control study on the use of cellular and cordless phones and the risk for malignant brain tumours. Int J Radiat Biol. 2002;78:931–6.

17. Hardell L, Hallquist A, Mild KH, Carlberg M, Påhlson A, Lilja A. Cellular and cordless telephones and the risk for brain tumours. Eur J Cancer Prev. 2002;11:377–86.

18. Hardell L, Carlberg M, Söderqvist F, Mild KH. Case-control study of the association between malignant brain tumours diagnosed between 2007 and 2009 and mobile and cordless phone use. Int J Oncol. 2013;43:1833–45.

19. Huss A, Egger M, Hug K, Huwiler-Müntener K, Röösli M. Source of funding and results of studies of health effects of mobile phone use: Systematic review of experimental studies. Environmental Health Perspectives. 2007.

20. van Nierop LE, Röösli M, Egger M, Huss A. Source of funding in experimental studies of mobile phone use on health: Update of systematic review. Comptes Rendus Phys. 2010;11:622–7.

‘Bending Science’ & the Dirty Electricity Industry

In principle the scientific method has a relatively robust system, based on peer review, to ensure that any problems in scientific papers are addressed before the paper is published. This system is not without its problems, but it is the best we have available, and it is hard to see what other system would do better. Anyway, it works reasonably well as a self-correcting scientific mechanism.

One of the problems with it, though, is that it is possible to block specific lines of thought – at least for a certain amount of time. For example, if you propose an alternative scientific explanation for a phenomenon with an ‘established’ paradigm, it will be quite difficult to get this published (although, if the evidence is strong enough, it will be published eventually). More problematically, there is abundant evidence of industry distorting evidence and attempting to distort the scientific process, including peer review (most infamously, of course, by the tobacco industry, but also in relation to the carcinogenicity of chemicals used in industry and global warming, for example).

Depending on who you talk to, this may or may not be the case in the assessment of the carcinogenicity of radiofrequency radiation (RF) from mobile phones, and it is EMF (electromagnetic fields) I’d like to talk about here.  

Not the ‘normal’ EMF characterized by the frequency, amplitude (and shape) of the waves, but a “new” metric that is supposedly the real exposure that causes cancer, and a number of other diseases, in humans. This is ‘dirty electricity’: an exposure defined not by a clear and precise set of quantitative characteristics, but mainly by the fact that it can be measured by a dirty electricity dosimeter. Basically, it is a form of RF, but measured by means of voltage changes over time within a certain frequency bandwidth. A better name, therefore, is high-frequency voltage transients (superimposed on 50/60 Hz fields). For those of you who keep a close eye on this blog, you may remember ‘dirty electricity’, since I have written about it before [direct link]. It is a niche within the EMF research community and, just to make this clear, by no means established or even accepted as a valid scientific hypothesis.
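To give a flavour of what such a meter measures, here is a minimal Python sketch of a ‘high-frequency voltage transient’ style metric: high-pass filter the mains waveform and summarise the average rate of change of what remains. The 4 kHz cutoff, the one-pole filter and the |dV/dt| summary are all my own assumptions for illustration; commercial ‘dirty electricity’ meters use their own (proprietary) filters and units.

```python
import math

def hf_transient_metric(samples, dt, cutoff_hz=4000.0):
    """Illustrative 'dirty electricity' metric: average |dV/dt| (V/s) of the
    signal after a first-order high-pass filter removes mains-frequency content.
    Filter design and cutoff are hypothetical choices, not a real meter spec."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    alpha = rc / (rc + dt)
    hp, prev_x, prev_y = [], samples[0], 0.0
    for x in samples[1:]:
        y = alpha * (prev_y + x - prev_x)   # one-pole high-pass filter step
        hp.append(y)
        prev_x, prev_y = x, y
    # mean absolute rate of change of the high-passed signal
    return sum(abs(hp[i] - hp[i - 1]) for i in range(1, len(hp))) / ((len(hp) - 1) * dt)

fs = 1_000_000                        # 1 MHz sampling rate
dt = 1.0 / fs
t = [i * dt for i in range(20_000)]   # 20 ms, i.e. one 50 Hz cycle
clean = [325 * math.sin(2 * math.pi * 50 * x) for x in t]                    # pure mains
dirty = [v + 5 * math.sin(2 * math.pi * 20_000 * x) for v, x in zip(clean, t)]  # + 20 kHz ripple

print(hf_transient_metric(clean, dt) < hf_transient_metric(dirty, dt))  # True
```

The point of the sketch is simply that such a metric collapses everything above the cutoff into a single number, which is exactly why ‘dirty electricity’ is so poorly defined as an exposure: very different waveforms can produce the same reading.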

Its proponents set themselves against the established EMF research community which, they claim, is biased because the electric (and mobile phone) industries want to hide the real effects of EMF exposure on humans. Not unexpectedly, this has some traction within the electro-hypersensitivity (EHS) (or idiopathic environmental intolerance attributed to EMF) community, which is unfortunate. Whatever it is that is going on with respect to EHS deserves, in my opinion, more attention, but focussing the efforts on something as untested as the dirty electricity hypothesis seems like a bad idea. Similarly, in the case of a middle school in the US where there was a perceived increased cancer risk amongst teachers, instead of first investigating the likelihood of a “normal” explanation, researchers jumped immediately to ‘alternative scientific explanations’. A good analogue from the medical world is homeopathy, which similarly, but at a much larger scale, preys on vulnerable people without any evidence of efficacy above a normal placebo effect. Indeed, it is likewise possible to buy ‘dirty electricity’ measurement devices, ‘dirty electricity’ filters (to ‘clean’ your environment), and ‘dirty electricity’ services that “solve your problem”. This seems primarily an American, Canadian, and I think Australian thing, but it is penetrating the UK as well. People make a lot of money on the back of dirty electricity, and as such it is just another industry. It is (still) relatively small, but it is an industry nonetheless. And with it come the issues generally associated with the bigger industries, such as the publication of research of, let’s say, varying quality to back up the industry’s raison d’être, the use of PR and media to generate exposure and create concern in the population, and – and this is what the rest of this post will be about – attempts to silence critics.

I wrote a review about the epidemiology of dirty electricity, published in 2010 [link to abstract], concluding, based on all available evidence at the time, that although it was an interesting concept there were so many problems with the published studies that it was extremely hard (probably ‘impossible’ is a better word) to say that dirty electricity was associated with an increased risk of disease or adverse health effects. Last year marked five years since that publication, and I thought it would be interesting to update the review and see if the proponents of dirty electricity had been able to provide better evidence of this being important.

Coincidentally, others had the same idea, and I collaborated on the updated review with Professor Olsen, a professor of electrical engineering at Washington State University with a nearly endless CV covering many aspects of electrical engineering and EMF exposure measurement and assessment. The work was sponsored by the Electric Power Research Institute (EPRI), who are of course interested in this kind of work. This is important, and it was highlighted specifically in our publication. At this stage I think it is important to note that EPRI specifically indicated that they did not want to see anything of our work until it was published. So, in summary, we highlighted the source of funding in any publications, but we were free to do as we wanted. Emphasizing this may seem a bit tedious, but it is quite important in what follows.

So how does this relate to peer review and the influence of industry? I’d like to just describe what happened next, since most of these things tend to stay behind closed doors. I am sure other researchers will have had much worse experiences than we had but, well, I don’t know about those…and it is my blog…although I’d be quite interested in hearing about them.

*

Anyway, so we conducted our systematic review of the exposure assessment and the epidemiology of ‘dirty electricity’, happy to be guided by what the evidence told us. It was recently published in the ‘Radiation & Health’ subsection of ‘Frontiers in Public Health’ and at the time of writing has about 1,000 downloads. If you are interested you can find it here as open access [link to full text]. I have much appreciation for the editor-in-chief of ‘Radiation & Health’, who stuck with us throughout the whole peer-review process (more on this later), whatever his personal opinions on the topic may be (note that I have no idea what these are, which is how it should be).

But it did not start with ‘Frontiers in Public Health’.  

The following will be quite familiar to everybody who has ever tried to have a scientific manuscript published, and I’ll call it Stage 1:  

“This decision was based on the editors’ evaluation of the merits of your manuscript….” 

“Only those papers believed to be in the top 20% in a particular discipline are sent for peer review….”

“Overall, I am not convinced that such a spartan review is warranted and worthy of consideration for publication…”

No hard feelings, this is all part of the process.

Stage 2 then happens subsequently, in which the paper is externally reviewed, but rejected. This also happens regularly, but in this particular case something interesting starts to appear. Of the 2-3 reviewers, the manuscript got favourable reviews from two, but always received the minimum score possible from one reviewer (sometimes two, depending on the number of reviewers). Given the stark contrast between the scores, we asked the editors about this, and the answer was basically that if a manuscript receives one negative review it generally gets rejected. So here is a first step where it is possible to block the publication of a paper. Indeed, further discussion revealed that because the scientific proponents of the dirty electricity hypothesis are not that many individuals, an editor looking for appropriate reviewers who types ‘dirty electricity’ into, say, PubMed will end up with the same, relatively small, pool of people. So just the minimum score would be enough, but of course one needs some sort of rationale. This was generally speaking minimal but, more importantly, had nothing to do with the science. For example:

“Although the authors acknowledge their funding source, their bias is obvious.  In many respects, it might be better to reject the paper.” “While the authors claim no conflict of interest, this study was funded by EPRI, which some would consider a conflict of interest.”

Nice, despite the fact that the funder was clearly stated and we highlighted that the funder had no input into any aspect of the review (see the actual manuscript). Still, this is acceptable, and it is important to make sure any funders and perceived biases are highlighted in a manuscript. Whether it is a reason for rejection is another matter though…

Anyway, it soon got a bit more personal and unfriendly. I don’t know if others ever review a paper like this, but I can’t say I was very impressed. As a reviewer one should address specific scientific points so that the authors can address these if there is a genuine mistake, provide an explanation of why what they did is scientifically correct, or withdraw the manuscript if it really is an incurable problem. Anyway, at this stage we move into more unscientific, and this time personal, comments:

“It is clear that the authors are unfamiliar…”

“This review appears to be biased..”

“It is clear that the authors are unfamiliar with the biological research within the area of extremely low frequency  (ELF) and radio frequency (RF)”

“They then make a calculation that demonstrates they do not understand dirty electricity.”

“It is clear that the authors are unfamiliar with electricity, the concept of linear and non-linear loads and grounding problems”

The language here is not so much scientific as it is emotional, and clearly aimed at quickly suggesting to an editor, who will mostly skim the reviews, that the work is awful and should be rejected. Another way of doing this is to use other emotive words, such as “flawed”, “false”, etc. in abundance; and presto!:

“I find it deeply flawed with false statements, flawed assumptions and wrong calculations.  It seems to be written by someone who has only a superficial knowledge in this field and has a strong bias in favor of the electric utility. “

“I find this review to be scientifically inaccurate, biased, and deeply flawed and would recommend it not be accepted for publication.”  

“I don’t believe that publishing this flawed review is going to benefit science, policy makers, or the public”

Another nice one for the records, and which I would hope none of the readers of this blog (yes, you!) ever uses:

“The authors of this review article have never done any primary research on HFVT and are not considered experts in this field and would not be invited to review articles on this topic.” 

As a side note, I find the following comment quite telling, and, teaching epidemiology myself I would fail the student who wrote this in an exam:

“When an author allots effects from a variety of studies to “unconsidered confounding factors” it makes me question their objectivity.”

*

So yes, this worked amazingly, and some editors fell for this approach. Not the editor of Frontiers in Public Health: Radiation & Health though, who decided to go with the science on this one.

Of course, why change a winning strategy: 

“The authors definition of Dirty Electricity speaks of a complete misunderstanding.“

“The authors’ definition implies that they have little to no knowledge of electrical engineering, or more specifically high frequency voltage transients.” 

And my favourite:

“Generally the grammar is correct, but too often the language is in error. Some of the errors are so egregious that they raise the question of the authors understanding of electrical engineering and epidemiology.”

Even our thorough approach was questioned:

“The manuscript’s very length suggest they choose to “overpower” the reader obfuscation.” 

Unfortunately, this did not work this time around, and the paper remained in the peer-review process. This was quite a tedious process because a group of reviewers would be selected, provide a review, and when all scientific points were addressed in our replies they would withdraw from the reviewing process; thus stopping the review until another reviewer was found, delaying publication, and inconveniencing us as well as the editor. I consider this plain rude, but what it suggests to me is that we got the science right. Aside from delaying publication, there was another benefit to this approach for the ‘dirty electricity’ industry. As it turns out, one can be as rude and personal as possible because, after withdrawing from the review process, all comments were erased (an unfortunate consequence of the Frontiers system). I won’t bother you with these since they follow the same pattern as above, but I’d like to highlight one gem of a comment:

“EPRI has many far more qualified employees to write such a review but chose to hire De Vocht and Olsen.”

*

I hope you found my walkthrough of this particular peer-review process entertaining. This is what can happen if you do research that can hamper the profits of an industry. Of course this is nothing new, but it is quite interesting that even a small industry (and one that argues it fights against the INDUSTRY) uses the same tactics.  

You may have realised I did not tell you the conclusion of our systematic review. This was done on purpose, because in a way the actual conclusion does not really matter. We conducted this out of academic interest, so whether our conclusion was that this is indeed a dangerous exposure, or that this is nonsense, has no impact on either of us. For convenience, here is the link to the full paper [direct link]

A meal to (never) die for

Clean eating, raw food, paleo, and, I’d imagine, quite a lot of other such diets and lifestyles seem to be everywhere nowadays; whether it is a friend telling you that you just “have to do this, because…”, or whether it is on Twitter, Facebook, and in tabloid “science updates”. Of course, the most irritating one of these is the ‘Instagram diet’, which can best be described as a lifestyle based on taking a photo of your best-looking dishes – usually involving an avocado – and posting it online for your followers to admire; a preferred lifestyle option for C-list celebrities.

It seems you cannot open a newspaper, tabloid, or magazine, look at your Twitter feed, or open Facebook without someone having posted a story on what to eat so as not to die, what not to eat because you will die, which superfoods to add to your meal to…well…live longer, healthier, better, and stronger, and how to prevent getting old and/or getting Alzheimer’s by adjusting your diet (provided you have followed other advice, otherwise you would have died well before this became an issue). Even though some of this is more or less (well no, it actually is) made up, much of this advice seems to be grounded in ridiculous over-interpretation of epidemiological evidence. Very often it is also the result of absurd extrapolation of results from studies in cells or animals to humans. Some of the ingredients in, say, the avocado may contain a nutrient which was measured in a cell line to have some anti-cancer properties in situ. Indeed, cancer is usually the target of this type of “nutritional advice”. It is always best to link this type of advice to a disease with a complex aetiology that may occur some time far away in the future because, if it were some acute effect (say, a poisoning), it would be easy to check, wouldn’t it. Also, it should be something people really worry about: cancer and Alzheimer’s disease are good candidates in that respect, for obvious reasons. Before going on, I would like to emphasize that I don’t have a problem with nutritional epidemiology, or with dietary or nutritional advice; there are some great studies out there, as well as some good advice. The following just serves as an illustration of the nonsense that can result when some of the worse, clickbait-type advice is taken literally… …that, and, assuming you make it all the way to the end, you can also make a great meal for family and friends!
So, I have taken one of my favourite recipes, which is also not overly complicated, and went on an internet hunt for the supposed health benefits and risks for all the ingredients (and yes, I do most of the cooking at home…). Subsequently, I have re-written the recipe to demonstrate how great my chosen recipe is.

A meal to (never) die for

List of ingredients (3-4 people):

2 tablespoons of an anti-inflammatory sauce with additional benefits on genes linked to cancer. This is supplemented with beneficial fatty acids, vitamins E and K, and loaded with anti-oxidants; the latter are biologically active and help fight serious diseases, including heart disease and stroke. The sauce also does not cause weight gain and obesity, and may help to fight Alzheimer’s Disease (I know, told you!), rheumatoid arthritis, and type 2 Diabetes. Did I mention it has anti-cancer properties? It has the added benefit of having anti-bacterial properties. [source]

Two units of an anti-microbial plant, which regulates your blood sugar, improves your bone health (effectively preventing the development of osteoporosis), and prevents cancer. Unfortunately, it can lead to bad breath and unpleasant body odour, and can cause bloating, cramping and diarrhoea, as well as eye and mouth irritation. More seriously, one can also develop an intolerance and allergy. So this is the dare-devil in me…adding this to my food! [source]

1 tablespoon of an anti-cough medicine, which also lowers the blood pressure, and boosts the immune system. In fact, any leftovers can be used as a disinfectant to rid the home of dangerous air pollutants. It also improves your mood, but as a downside: it is also used in pesticides. Oh, this needs to be chopped. [source]

One large pinch of a pain killer, which also fights inflammation while simultaneously reducing blood cholesterol, triglyceride levels, and platelet aggregation and increasing the body’s ability to dissolve fibrin (so the cardiovascular benefits are off the scale!). If that is not enough already, it clears congestion, boosts immunity, but also stops the spread of prostate cancer! It prevents stomach ulcers, helps you to lose weight, and lowers your risk of Type 2 Diabetes.  [source]

75ml of a, and I quote, “cure-all”. In fact, it has been credited with helping the Roman army succeed: it helped soldiers survive battle as well as the alien climates encountered during campaigns. Quite a surprise that this is not a part of every meal really. This cure-all is also loaded with anti-ageing antioxidants, is cholesterol free, sodium free, and fat free. Without doubt it helps prevent heart disease, including lowering of cholesterol, and cancer. [source]

1 tablespoon of a powder that provides energy, lowers blood pressure, improves brain functions, treats depression and heals wounds. However, this is a dangerous substance, and should only be consumed in moderation! It is associated with obesity, diabetes, dementia, cardiovascular disease, macular degeneration, Alzheimer’s disease, increased blood glucose levels, kidney ailments, gout, heart problems, hyperactivity, cancer and cavities. So this is bad stuff, and despite its benefits you may want to see if you can find a replacement for this….or just live dangerously. [source]

200 ml of a special liquid which is mineralizing, alkalizing, and can help repair the gut lining, which eases joint pain, inflammation, eczema and digestive issues, prevents cellulite, and builds strong bones, hair and nails. [source]

Crumble 100 grams of a medicine that improves heart health, fights arthritis, prevents osteoporosis, enhances memory, boosts the immune system, has anti-cellulite and anti-inflammatory properties, and improves your dental health. [source]

Importantly, this is a superfood-based meal, so chop 200 grams of a low-calorie, high-fiber, zero-fat vegetable, which is also high in iron and vitamin K, and filled with antioxidants. It also has anti-inflammatory properties, lowers cholesterol levels, helps your vision and supports your immune system. High in calcium, it also prevents osteoporosis and maintains a healthy metabolism. Oh, and it detoxes the body (excellent, whatever that is). [source]

And finally, the above is all supplemented by 400 grams of carbohydrates, the primary fuel source of your body, including a lot of fibers, which help fight chronic diseases, including obesity and Type 2 Diabetes, and promote digestive health. The selenium in it protects your cells from molecular damage, while manganese, also included, regulates your blood sugar. [source]

So all in all, this is quite epic. But the benefits do not just stop here! No, further health benefits result from non-included ingredients! You may have noticed that this recipe includes no meat. Vegetarianism, it turns out, wards off disease, helps keep your weight down, increases the length of life, builds strong bones, reduces your risk of food-borne illnesses, eases the symptoms of menopause, increases your energy, reduces the likelihood of constipation, haemorrhoids and diverticulitis, and does not include toxic chemicals. It is also better for the environment, helps reduce famine, and you will spare animals. [source]

So, we now have all the ingredients, and feel a lot better and healthier already. If I eat this regularly, which I actually do, it is highly unlikely I will die in the near future (ignoring the probability of an accident). I mean, I am protected from cancer, Type 2 Diabetes, obesity, osteoporosis, Alzheimer’s disease, microbial infection, and many other things. Although not very likely, I will probably not find myself on the battlefield in the near future; but if I do, I definitely have an edge on any of my enemies (so be warned if you were to critique this blog post; just saying…). …that’s pretty reassuring. I am practically immortal…

.

.

.

Oh right, you were curious about the recipe itself?

Step 1: Heat the oil in a large frying pan. Add the onions, thyme and some seasoning. Sauté for 10 mins until softened, then add the chilli flakes, vinegar, sugar and stock. Increase the heat and cook for another 10 mins.

Step 2: Meanwhile, boil a large pan of water, add some salt and cook the spaghetti following pack instructions, adding the kale for the final 4 mins of cooking. Drain and return to the pot with a little of the cooking water. Tip in the onion mixture and half the cheese, and toss together. Serve topped with the remaining cheese.

Spaghetti with caramelised onion, kale & Gorgonzola. The recipe can be found here:  https://www.bbcgoodfood.com/recipes/spaghetti-caramelised-onion-kale-gorgonzola

Enjoy your next dinner!

Electrification and the diseases of other causes

I was recently asked to review a new book for an academic journal. The book, “The Invisible Rainbow”, is quite interesting and well written, but does require you to believe in a conspiracy, started at the moment of its invention, to hide the real detrimental effects of non-ionising radiation; so yeah, there is that… (I will put a link to the actual review up here once published). Part of the evidence in the book that the introduction of electricity in society has resulted in an increase in the numbers of newly diagnosed cases of a variety of different diseases builds on work by Samuel Milham, who grouped these “Diseases of Civilization”; principally cancer, diabetes, and heart disease, but including some others as well. Dr Milham described his hypothesis in a book, ‘Dirty Electricity: Electrification and the diseases of civilization’, which I believe is considered a classic in certain groups, as well as in a number of scientific papers (main link, link 2, link 3). Together with Igor Burstyn, also a contributor to this blog, I wrote a refutation of one of these papers (link), but the epidemiological principles in that letter apply equally to the other papers dealing with electricity as the causal factor for the ‘diseases of civilization’. Anyway, this blog is not aimed at discrediting that hypothesis per se, and I do have some sympathy for the difficulty of trying to investigate such a generic hypothesis in the absence of funding to obtain the most appropriate data and do it properly (as a result, one is forced to rely on publicly available data), but I would like to discuss my interpretation of those data here. Hopefully, someone “in the know” will read this and will use the comments section to explain what is wrong with my inferences, and why therefore electricity is the cause of these ‘diseases of civilization’.
I am not trying to be cynical or cause some angry never-ending spat here or, for example, on twitter; I am genuinely interested in the whys and hows, and I think so would quite a lot of other people…. 

The data I am using is provided in several tables in “The Invisible Rainbow”, which I copied across (and so can you if you buy the book; as said, if you find this topic of interest it is an interesting book to have), and describes the level of household electrification in all US states in the years 1931 and 1940, together with the state-level mortality rate (per 100,000 people) for cancer, diabetes and heart disease. I have recreated the figure in the book related to the cancer mortality rate below (for reference, this is Figure 6, page 252, but without names of the states added):

The interpretation of this figure, and of similar figures for rural diabetes and heart disease mortality, is that you can see a clear and positive correlation between the rural cancer mortality rate (which, incidentally, should be on the Y-axis and not the X-axis) and the percentage of electrification (which, vice versa, should have been on the X-axis), which is taken to show that more exposure to electricity leads to a higher incidence of cancer in rural areas. Now there is an immediate and obvious problem with this interpretation: it typically takes a long time – decades – between exposure to a carcinogen and a cancer growing large enough to be detectable (called the lag), so the 1940 electrification is not the relevant exposure here. This problem would be less of an issue, if at all, for rural heart disease and diabetes… Regardless, let’s ignore that for now and look at this in a bit more detail (I will only look at cancer in this blog; results for the other outcomes are pretty much comparable). If we overlay the 1931 and 1940 data (and switch the axes around), we get the following figure:

This shows that from 1931 to 1940 household electrification increased somewhat, as you would expect, and that the overall cancer mortality rate increased as well. It also shows that this increase in the cancer rate was higher for areas with higher household electrification. So all in all that seems pretty conclusive, and is indeed interpreted as such. This is, however, a comparison of two cross-sectional datasets, which is generally not considered very strong evidence. We can do better and look at the difference from 1931 to 1940. After all, if electricity indeed causes cancer then a change in the electrification rate should also result in a change in the cancer mortality rate (again, ignoring the lag). With the data provided in the book this is easily done. Surprisingly, at least to me, in some states the percentage of household electrification went down from 1931 to 1940. That seems unlikely, but let’s just assume the numbers are correct and have a look at only the states where electrification increased, to see if this confirms the hypothesis.

So that’s interesting. Over the 9-year period, in the states where household electrification increased, the increase ranged from 0.8% to just over 16%, while the corresponding cancer mortality rate in these states increased by between about 3 and 52 per 100,000 people. Importantly for our story though, there is, if anything, a negative correlation between the increase in electrification and the rural cancer mortality rate: the higher the increase in electrification, the lower the increase in cancer mortality (on average). In other words, these data really do not suggest an association between electrification and rural cancer rates. We can do the same for those strange states where household electrification went backwards, and see pretty much the same picture, but mirrored. Again, there is very little evidence of a correlation; the apparent decrease is only there because of the 2 (only 2!) states where both the electrification rate and the cancer mortality rate went down. If it wasn’t for those two states – New Jersey and especially Massachusetts – there would be no correlation at all.

From these two figures we can pretty much conclude that there is hardly a correlation, and certainly no evidence of a (causal) association between the two. In fact, it is possible to show this even more clearly: we can take the average difference in cancer rates of those states where the household electrification rate changed by no more than +/- 1%, which can be interpreted as the result of any changes over that decade other than electrification (for example, improvements in medical diagnostics, improvements in medical techniques, faster/easier transport to hospitals, but maybe also some other environmental exposure (we don’t know)), and subtract this from the measured difference in rural cancer mortality rates. What’s left then may be associated with electricity. So I did this, and the result is shown in the figure below:

The figure has 4 sections, with states where both the electrification rate and the rural cancer rate increased in the top-right and, conversely, where they both declined in the bottom-left (both after subtracting 24 per 100,000 population from the reported rates, as described above). The grey line shows the remaining correlation, and I am sure you can appreciate that it is pretty much horizontal; from which we can conclude that there is no meaningful correlation, let alone a (causal) association, between rural cancer mortality and household electrification (strictly speaking, there is a small positive correlation, indicating that the cancer mortality rate increases on average by 0.03% per 10% additional electrification; in other words, not relevant).

Now, just in case someone (not you, obviously…) were to point out that I may just have picked cancer because that was the only outcome with no association, for example because of the lag, these are the figures for rural diabetes mortality and rural heart disease mortality, respectively:

If we are to conclude anything from the above figures, I think we can pretty much say that heart disease mortality is not related to household electrification either, while the more a state electrified its households, the more its diabetes mortality rate decreased.

The above all fits with the generally accepted alternative hypothesis: that improvements in medical diagnosis and registration have resulted in increased identification and correct registration of new cases of various diseases, and hence an apparent increase in disease risk, but not necessarily in mortality from those diseases. In a typical constructivist approach, I suggest that the impact of new developments and technologies is best shown by the fact that the largest decrease in the cancer rate was observed for Massachusetts, known for its science and technology, as well as for already having a >90% household electrification rate in 1930-1940; using a similar approach, I suggest that the benefits of medical therapies, which are related to electrification, are best shown by the decrease in diabetes mortality risk with increased electrification, as shown in the figure above.

Of course, there is also room for other (environmental and, most importantly, lifestyle) exposures to play a role in this, and to contribute to the increased incidence rates of cancer, diabetes and heart disease observed in these data and continuing in national statistics. However, it is very unlikely that electrification is one of those….