The EMF/RF exposure of a public health researcher…

So I recently got my hands on a new GQ 390V2 EMF Multi-Field/Multi-Function Meter, which measures magnetic fields, electric fields, and radiofrequency radiation. It’s a relatively cheap meter, and nowhere near the kind of kit professionals use, but it seems to be in the right ballpark (it’s comparable to the popular Trifield TF2) and is fun to play with.

This is not a commercial post, or even a comment on whether this particular device is any better or worse than alternatives. It just happens to be the one I have, and it seems to do the job (of playing around with it). If you have a specific interest in the measurement of EMF/RF, here are the official specs:

Although these kinds of devices are not good enough for professional purposes, they are quite useful for source identification and for getting a reasonably accurate idea of your exposure. They seem to be quite popular because they are essentially plug-and-play, so people with some interest in the levels of RF/EMF exposure in their home, neighbourhood or office environment can start measuring without requiring too much expertise. For obvious reasons, they are also popular with people suffering from electro-hypersensitivity (or, to be more precise given that the causal link with EMF/RF exposure is not established, idiopathic environmental intolerance attributed to EMF). You can identify sources of higher exposure levels and, if you want, reduce exposure levels or avoid such areas.

So anyway, I have one of these now, and given my BSc/MSc training in occupational hygiene it should be no surprise that I like to measure things. This particular device has a data logger, so measurement data can be stored and analysed afterwards (which is ace).

So today is the first day of using this device.

I have got quite a lot of other stuff to do as well, so the trial run is aimed at answering the question “What are the magnetic field (in milligauss), electric field (in V/m) and RF (in mW/m2) exposure patterns of a Professor sat at a desk in his office during 3 hours of online meetings?” Quite niche…but it is a start.

So here are the results (MF in black, EF in green, and RF in blue):

So that all looks very reasonable (if you can see it, there is quite a lot in the figure). Average exposure is not an entirely informative exposure metric, but just for interest: the arithmetic mean (i.e. the average) magnetic field exposure over that whole period is 0.26 mG, for the electric field it is 1.43 V/m, and for RF it is 0.31 mW/m2 (or, interestingly, 31,404 pW/cm2).

Peak exposures, always of interest (especially if one is worried about EMF exposures), were 0.30 mG, 87.10 V/m and 50.4 mW/m2, respectively.

So how does this compare to guidelines? The ICNIRP 2020 reference levels for exposure to electromagnetic fields from 100 kHz to 300 GHz, averaged over 30 minutes and the whole body, are 27.7 V/m for the general public (61 V/m for occupational exposure) and 2 W/m2 (10 W/m2 for occupational exposure). Note that these refer to RMS values rather than the means or peaks I showed, and that they are specific to 30-400 MHz (the reference levels for 400-2000 MHz vary with frequency, with for example the occupational incident power density equal to ‘frequency in MHz’/40). More info about these reference levels here.
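
As an aside, the frequency dependence is easy to capture in code. Below is a minimal R sketch of the whole-body incident power density reference levels as I read the ICNIRP (2020) tables; treat it as an illustration and check the guidelines themselves before relying on it:

```r
# Hedged sketch: ICNIRP (2020) whole-body, 30-min averaged incident
# power density reference levels in W/m^2, as I read the tables.
# f_mhz is the frequency in MHz; public = TRUE gives general public
# levels, FALSE gives occupational levels.
icnirp_power_density <- function(f_mhz, public = TRUE) {
  if (f_mhz >= 30 && f_mhz < 400) {
    if (public) 2 else 10
  } else if (f_mhz >= 400 && f_mhz < 2000) {
    if (public) f_mhz / 200 else f_mhz / 40  # the frequency-dependent band
  } else if (f_mhz >= 2000 && f_mhz <= 300e3) {
    if (public) 10 else 50
  } else {
    NA  # outside the 30 MHz - 300 GHz range covered here
  }
}

icnirp_power_density(900)         # ~4.5 W/m^2, general public at 900 MHz
icnirp_power_density(900, FALSE)  # 22.5 W/m^2, occupational at 900 MHz
```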

Bottom line: the levels in my office are quite nice. One or two peaks were higher than the reference levels, but remember that the latter are averaged over 30 minutes.

There are also some unofficial limit values, which came up in a recent Twitter discussion (here). For example, there is something called the Building Biology Precautionary Guidelines for Sleeping Areas (based on the BioInitiative Report, so hhhmmm…) which say (link):

  • AC magnetic fields: 0.2–1 mG
  • AC electric fields: 0.3–1.5 V/m
  • Body voltage: less than 10–100 mV
  • Radiofrequency radiation: less than 10 μW/m²

So not too bad either, except for RF (although meeting that one seems impossible in any situation with a mobile phone or wifi anywhere in the vicinity).

And then there are the ‘International Guidelines on Non-Ionising Radiation’. This is a bit of a misnomer, in that these are essentially guidelines proposed by the UK electrosensitivity organisation (ES-UK). Needless to say, they are not too keen on EMF/RF exposure, and their proposed guidelines are indeed very low; my personal exposure was well above these:

These guidelines are based on something called the ‘EUROPAEM EMF Guideline 2016’ (reference), which in turn is also based on the BioInitiative Report. There are different ways of describing that report, but essentially it is the result you get when you list every reported biological and health effect without taking study quality into account. The validity of these guidelines is questionable, at best, so I am not overly worried that my workplace exceeds them; most of the world does…

Anyway, these data provide some insight into what the magnetic field (in milligauss), electric field (in V/m) and RF (in mW/m2) exposure patterns of a Professor sat at a desk in his office during 3 hours of online meetings are…and how these compare with useful and less useful exposure guidelines.

More such measurement campaigns to come. If nothing else, then just for fun…

Mobile phone use and colorectal cancer risk: the latest thing.

Through the years, the radiofrequency radiation involved in the use of mobile phones has been linked to a plethora of negative effects on human health and wellbeing. The effect which, for better or worse, makes it into the news and onto social media most often is, not surprisingly, cancer. Many studies aimed at elucidating mobile phones’ cancer-causing potential have been conducted, but to date the results remain ambiguous. The World Health Organization’s International Agency for Research on Cancer (IARC) currently classifies radiofrequency radiation as 2B, or ‘possibly carcinogenic to humans’, a classification it shares with many other exposures (list). Since its classification in 2011 there has been much debate about this, with people arguing either way, but despite new research (including in animals, but that will be another post) 2B continues to be about right.

The specific cancers that mobile phone use has been linked to vary a bit over time, but have included gliomas, meningiomas, schwannomas, cancers of the thyroid, parotid gland, breast and testis, and I have probably missed some.

A more recent one is that it causes colorectal cancer. This hypothesis is based on the observation that an increase in colorectal cancer incidence has been observed in young people in recent decades, especially in developed areas of the world, but not in older people. While the obvious explanation would be that this is the result of lifestyle and diet, the inductive reasoning of some people led them to propose the radiofrequency radiation from mobile phones as a cause. Or, in other words: “young people put their phones in the back pockets of their jeans, but old people don’t. The closer to a phone, the higher the radiation exposure, so this would result in an increase in colorectal cancer”. This hypothesis was covered by Microwave News in 2019 (“Colorectal Cancer Soaring in Young Adults; Are Smartphones in the Mix? Epidemiologist De-Kun Li Wants To Know”).

An alternative hypothesis has been put forward more recently as a result of a case-control study; it links ‘blue light’, or ‘artificial light at night’, rather than the radiation itself, to colorectal cancer, and is hypothesised to work through its impact on melatonin production (link to paper). This hypothesis seems more plausible to me than the radiation one, and also ties in better with other evidence about light-at-night, shiftwork, sleep disturbance and health effects.

Anyway, going back to the first hypothesis: if radiofrequency radiation were an important factor, then maybe we can quickly look at this. Not in a scientific causal-inference kind of way, but in a quick correlation kind of way?

Indeed, data on colorectal cancer incidence are available from the IARC GLOBOCAN website (link) (more info on the GLOBOCAN numbers here; it is not straightforward) and mobile phone subscription rates per 100 people are available from the ITU (link). We do need to take temporality into account, so I linked the 2020 national incidence data to the 2010 national subscription rates. That seems about right, given that the latency time for colorectal cancer is about 5-10 years (reference). Without too much effort I could do this for 181 countries, which seems a reasonable enough number to look at some correlations.
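
For anyone who wants to reproduce the linkage, the core of it is just a merge on country, pairing the lagged exposure with the later outcome. A minimal R sketch (the file and column names are hypothetical placeholders, not the actual GLOBOCAN/ITU export names):

```r
# Hedged sketch of the data linkage; file and column names are made up.
incidence <- read.csv("globocan_crc_2020.csv")  # country, asr_2020
subscr    <- read.csv("itu_mobile_2010.csv")    # country, subs_per100_2010

# Merging by country name pairs the 2010 exposure with the 2020 outcome,
# building in the ~10-year lag discussed above.
crc <- merge(incidence, subscr, by = "country")
nrow(crc)  # should end up near the 181 countries used here
```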

The obvious first look is the correlation between national mobile phone subscription rates (per 100 people) and colorectal cancer incidence. This can be seen in the figure below, with the colours indicating different global regions:

So yeah, that looks like there is a correlation in support of the hypothesis. Indeed, the correlation coefficient is 0.57 (95%CI 0.47, 0.67; p<0.0001).
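
(For reference, that correlation is a one-liner in R, using the hypothetical column names from the sketch above:)

```r
# Pearson correlation, 95% CI and p-value between 2010 subscription
# rates and 2020 colorectal cancer incidence.
cor.test(crc$subs_per100_2010, crc$asr_2020)
```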

Upon further eye-balling of the graph, however, it becomes clear that we may be looking at differences between countries in terms of development, which is associated with various lifestyle factors and patterns, but also with the quality of cancer registries etcetera. As an indicator of such differences, the ‘Human Development Index’ is also provided by GLOBOCAN, and indeed this is similarly correlated with colorectal cancer incidence…but, crucially, also with mobile phone subscription rates (note that I have removed country names for clarity):

So the next step is to conduct a multivariable regression so that the effect of mobile phone penetration rates, as a proxy for “mobile phone-induced RF exposure”, is adjusted for differences in development, as a proxy for loads of other things between countries, including lifestyle, diet and cancer registry quality.

When we do that, we find that colorectal cancer incidence in 2020 remains correlated with human development, with each unit increase (note that the scale of the actual index is 0-1) associated with an increase of about 38 (95%CI 32, 44) incident cases per 100,000 people (p<0.0001). The correlation with mobile phone subscription rate has disappeared, however (p~0.13); importantly, the point estimate now indicates a small negative correlation of about -0.02 (95%CI -0.04, 0.01), rather than the positive one we saw before. So very little evidence of an association between the two.
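
The adjustment step itself is a short bit of R; a minimal sketch, assuming the merged data frame from above with an added (hypothetical) hdi column holding the Human Development Index:

```r
# Multivariable regression: the mobile phone 'effect' adjusted for HDI.
fit <- lm(asr_2020 ~ subs_per100_2010 + hdi, data = crc)
summary(fit)   # HDI dominates; the subscription coefficient shrinks
confint(fit)   # 95% CIs for both coefficients
```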

There are still considerable differences between the 181 countries, but we can use the current data to look at more comparable subsets of countries by only including countries from the same geographical region. I have run these models, and the results are presented in the table below for regions with at least 10 countries:

Region               Point estimate   p-value
Northern Europe      -0.08            0.20
Southern Europe      -0.02            0.69
Eastern Europe       -0.08            0.25
South-eastern Asia   -0.01            0.71
Western Asia         -0.03            0.41
Eastern Africa        0.05            0.25
Western Africa        0.01            0.76
Caribbean            -0.03            0.26
South America         0.38            0.00

These are remarkably consistent for most regions, indicating no evidence of a correlation (if anything, there is a stronger indication of a negative correlation).

Of course your eyes would immediately be drawn to South America! What is going on there? As it turns out, development and mobile phone subscription rates are so strongly correlated (r~0.9) that the multivariable model goes wrong (the problem known as multicollinearity). It’s a bit technical, but the strong correlation distorts the adjusted effects so much that, for the positive mobile phone effect to appear, the estimated correlation with development had to be reversed; something not only highly implausible, but also the opposite of what the data show when plotted. So despite some possible initial enthusiasm from some readers, we will have to ignore that analysis. It does show you how tricky the analysis of data is though, even with relatively simple things such as done here!
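
The diagnosis is easy enough to check yourself: look at the correlation between the two predictors within the region, or compute variance inflation factors. A hedged sketch, again with the hypothetical column names from above (vif() is from the car package):

```r
# Collinearity diagnostics for the South America subset.
sa <- subset(crc, region == "South America")  # 'region' is hypothetical
cor(sa$subs_per100_2010, sa$hdi)  # ~0.9 here

library(car)
vif(lm(asr_2020 ~ subs_per100_2010 + hdi, data = sa))
# With two predictors, VIF = 1/(1 - r^2); r ~ 0.9 gives VIF ~ 5.3,
# a warning that the adjusted estimates are unstable.
```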

So in conclusion, what can we say about this hypothesis of RF radiation as a cause of colorectal cancer? Not that much, I am afraid. These are country-level data only, for one exposure and one outcome year (2010 and 2020, respectively), and they were not (easily) available for different age categories separately.

However, there is some, if limited, information in these analyses. They have not made this particular hypothesis relating RF radiation to increased colorectal cancer risk any more plausible, and that in itself is valuable. What we now need to do is wait for a real scientific analysis, ideally with more appropriate data…

…’Twas ever thus.

UPDATE 07/05/2021: Coincidentally, today a study was published in the international peer-reviewed journal “Gut” with the title “Sugar-sweetened beverage intake in adulthood and adolescence and risk of early-onset colorectal cancer among women” (direct link). The title is quite self-explanatory, and it provides an alternative, and in my eyes more plausible, hypothesis for the observed differences in colorectal cancer incidence between younger and older segments of the population.

Update on Current Knowledge of RF Safety 2021 (the National Register of RF Workers annual event, held in collaboration with Cambridge Wireless on 28 April)

I presented an update on the epidemiology of radiofrequency radiation (with some links to the 5G debate) to the National Register of RF Workers on 28 April 2021. Unfortunately, given the scope of the topic and the allocated timeslot, the update had to be very broad brush. All information about the event, including the presentations of other speakers, can be found here.

The whole seminar can be watched online on YouTube:

The slides of my presentation are also available from the first link, but for the purpose of curation I have also added them to my blog.

If you are interested, have a look at the slides, which can be downloaded from the link below. Feel free to comment in the Comments section below.

(5 of 3) All’s well that ends well…(Letter to the Editor re Choi et al.)

In a series of blog posts I highlighted how it was possible to block a Letter to the Editor that Professor Martin Roosli and I wrote regarding a recent systematic review and meta-analysis of mobile phone use and the risk of a number of different cancer types combined. We had some concerns about the methodology of the review and the way some of the data were interpreted, and we pointed out missing data on Conflicts of Interest that led to erroneous conclusions about the impact of CoI.

For reference, here are the links to the original paper and the blog posts:

original Choi et al review

Post of our Letter

Peer Review 1

Peer Review 2

Discussion of ‘white hat bias in RF/cellphone health research’

Surprisingly, after we thought this whole saga had reached its end, the editors of the International Journal of Environmental Research and Public Health (IJERPH) got in touch again. Having originally rejected our Letter, they had received a further Letter (or Letters, I don’t know) also expressing concerns about what was done in this review. The Journal had therefore decided to publish our Letter after all, with the other(s) to follow. The final version of our Letter and the peer reviews can now be found here (note this is slightly modified from the one posted on this blog previously):

Comment on Choi, Y.-J., et al. Cellular Phone Use and Risk of Tumors: Systematic Review and Meta-Analysis. Int. J. Environ. Res. Public Health 2020, 17, 8079

It’s great to see that, in the end, scientific debate was victorious. It remains a shame, however, that it took several months, an initial rejection, and at least one more Letter highlighting concerns with the Choi et al review to convince the editors of IJERPH that this was the correct approach to science.

Reflections on provocation study for electrosensitivity

Over the past months, in discussions with a variety of people suffering from electrosensitivity (ES) or, in a broader sense, advocates of ES, it was regularly pointed out to me that a provocation study I considered to be methodologically pretty good was in fact riddled with problems. However, when I asked why, I never received any follow-up. I still wanted to know, so I put that specific question to Twitter…Twitter always knows the answer.

Some interesting discussions followed, the conclusions of which I thought I’d discuss below.

However, first the paper, by researchers from my Alma mater, itself for reference:

van Moorselaar, Slottje, Heller, van Strien, Kromhout, Murbach, Kuster, Vermeulen, and Huss. Effects of personalised exposure on self-rated electromagnetic hypersensitivity and sensibility – A double-blind randomised controlled trial. Environment International 2017; 99: 255-262 <direct link>

In short, in double-blind testing in the home environment of each participant, the researchers tested whether people with self-diagnosed ES were able to detect exposure to the specific type of EMF to which they had reported being immediately sensitive in unblinded sessions conducted prior to testing. The researchers concluded that participants were not able to detect exposure under double-blind conditions.

You can read the details yourself, but some of the strengths were that each participant was only tested with the exposure they were sensitive to (‘personalised exposure’), in their home environment to minimize stress, and that each of the 10 repeated measurements was only conducted once a participant indicated they were ready for the next session and no longer felt any effects from the prior measurement.

But back to twitter.

An important criticism which was immediately put forward was that ES is a complex condition, and for most sufferers effects would only appear later, up to days after exposure. That is a good point, but the study specifically aimed to include only those people who indicated they would ‘feel’ exposure within minutes. This might be a small subgroup of all ES sufferers, but obviously if immediate reactions could be picked up, it would be in this group. So this would be a limitation of external validity, but not a problem with the study design.

A second criticism that was brought up, and which comes up for all such provocation studies, is that ES can only be self-diagnosed. In the absence of an objective diagnostic measure there is, therefore, no way of knowing whether all participants truly have ES. This is an important point, and something that cannot be directly addressed, although the researchers conducted non-blinded sessions with participants prior to the actual tests to try and confirm that each participant was ES and did respond to a particular EMF exposure.

A third point that was brought up was that ES does not work like an ‘on/off switch’, and that in conducting repeated measures, effects from one exposure could extend into subsequent measurements, thereby biasing the results. In other words, the ‘washout period’ was not sufficiently long.

I contacted Dr Anke Huss, the senior researcher of this study, about this, and asked if she could have a quick look at the proportions of participants who correctly identified whether they were exposed or not in each of the 10 repeated measurements. I realised that the washout period would not be relevant for the first measurement, and that, plausibly, had that period been too short, participants would have indicated they were exposed more often in subsequent sessions.

Of course this was not a massively novel insight; indeed, Dr Huss and her colleagues had already looked at this and put it in the supplementary material of the paper (and I had forgotten it was there). She sent me the corresponding figure:

As you can see from the above figure, just under 50% of participants correctly identified the exposure condition in the first session; as would be expected by chance. There is also not a clear pattern in the subsequent sessions.
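
If you want to convince yourself that ‘just under 50% correct’ really is what chance looks like, a quick binomial test does the job. The counts below are made up for illustration; the actual numbers are in the paper’s supplement:

```r
# Hypothetical example: 18 of 42 participants correct in session 1.
# Under the null hypothesis of pure guessing, p = 0.5.
binom.test(x = 18, n = 42, p = 0.5)  # p-value well above 0.05
```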

The researchers’ conclusion that “double blind testing showed they were not able to detect exposure better than chance” seems a valid one, providing little evidence that this subgroup of people with self-diagnosed ES could feel they were exposed.

In interpreting the above, it is important to note that although this seems a very strong study, for which no important problems have yet been identified (but there is a comment section below this article), it does little to prove or disprove whether the symptoms of electrosensitivity really are caused by exposure to electromagnetic radiation. The issues with external validity and self-diagnosis mentioned above are valid and important points which, hopefully, will be addressed in future research using different epidemiological methods.

CLASSIC (back by popular demand): 5G and COVID-19: Fact or Fiction?

Below you will find an article I posted previously on my FunPolice blog which, as you may know, I had to stop because ‘.eu’ extensions were not allowed anymore in the UK post-Brexit (one of the smaller Brexit issues, I admit). This one is back by popular demand since, surprisingly, this conspiracy theory is still a thing (Feb ’21)…

Since the COVID-19 outbreak, conspiracy theories have been going around social media about links between COVID-19 and the introduction of the fifth generation of wireless communications technologies (i.e. 5G). Some even claim that exposure to 5G is the real cause of the pandemic; either because it causes COVID-19 or because the pandemic is just a cover-up by governments and industry to hide that they have switched on 5G. As ridiculous as that sounds, this has been gaining ground and has been directly linked to the burning down of communication masts (5G or not…) in the UK and elsewhere. There is an interesting discussion as to why this conspiracy theory has become popular <update: link now gone>. In terms of scientific evidence, there is none that 5G causes COVID-19, and the ‘it’s a 5G cover-up by the world government’ claim is too silly to discuss. There are some studies out there, mainly based on isolated cells and some small animal studies, that suggest radiofrequency radiation (from mobile phones) can have an impact on the immune system. The strength of the scientific basis for this is best summarized by Professor Carpenter, well-known in ‘EMF world’:

Another well-known name in ‘EMF world’, Prof Leszczynski, highlighted Martin L. Pall and Arthur R. Firstenberg as two of the main culprits who, amongst other misunderstandings of the scientific evidence, fuelled the flames of this conspiracy <link>. Moreover, and this is an important point, these individuals and groups moved an entirely reasonable question about some of the gaps in knowledge, and enquiries for more research in this area, towards the crackpot science bin that also holds antivaxxers, chemtrails, homeopathy and the likes. It is relatively easy to understand why this theory was always going to be highly implausible: some of the countries hardest hit by COVID-19 were Iran, Spain and France, none of which, as far as I know, had 5G.

Unfortunately, all this social media hype drew in some others who, given their training, really should know better. Dr Magda Havas, who previously peddled another failed idea, ‘dirty electricity’, is one of these. She recently published ‘Is there an association between covid-19 cases/deaths and 5G in the United States?’ on her blog (here) (update Feb 21: it seems the page has now been deleted from her website). And of course, this got enthusiastically shared around Twitter (and possibly other social media). Like many of these studies, it is based on simple ecological (i.e. group-level) correlations of two variables: in this particular case, a direct comparison of the number of COVID-19 cases and deaths in US States where 5G has been activated with US States that don’t have 5G (yet, I suppose). And the conclusion seems quite straightforward, as shown in a Table copied from her blog:

The same number of tests had been done in both sets of States, but the number of cases in US States with 5G is almost twice as high as in States without 5G, and the number of deaths is even 126% higher. She also conducted a t-test to show that this difference was statistically significant (interestingly, this only holds for a 1-tailed test, indicating that she completely discounts the possibility of 5G having a protective effect). Let’s have a look at the raw data on which this is based (Excel table also copied from her blog):

Now I have only been to a couple of States in the United States, but eye-balling this Figure suggests to me that 5G was not introduced ‘at random’ across the US. Again, I haven’t been there, but Alabama, Mississippi and Kansas just seem different from, say, Florida, New York and California. That also makes sense from a business point of view, because why would you go through the effort of investing in building up a new technological network in a place that is largely agricultural and where barely anybody lives? These kinds of ecological correlation analyses are always quite difficult to interpret, in that it is not too difficult to come up with other reasons why you would observe the correlation. On Twitter, my first guess – given the above idea of where you would introduce such a new technology – was that the correlation above was confounded by ‘urbanization’.

Magda Havas was kind enough to link to the primary sources that she used (here and here), so it was straightforward to obtain the same data (well, not completely the same: the COVID-19 numbers had been updated so I will be using data from a couple of days later). That gave me this Figure for COVID-19 related deaths (I am not doing tests or cases because these are too unreliable and directly result from various policies rather than the disease itself), which is pretty much identical to the Figure above:

The mean number of deaths in States with 5G was 136 and in those without 5G it was 60. This gives an excess of 127%, nearly identical to the original data. So, starting from pretty much the same point of departure, I ran a basic log-rate model (because the outcome is a number of deaths per million citizens, this is the correct model specification; I don’t think Magda Havas did that in her t-test above, though), and got an 82% higher death rate in States with 5G compared to those without (2-tailed p-value ~ 0.06). This is somewhat lower than the straightforward comparison of averages, but as mentioned above it is based on a more appropriate model…and it still shows a serious excess mortality risk. I hypothesized that this correlation was confounded by something else and thought that ‘urbanization’ was a likely factor. Urbanization rates for US States are easy to obtain (don’t worry, I will provide the dataset and R script at the end of this blog for you to play with), and I ran that model.
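
For transparency, here is a minimal sketch of the kind of log-rate model I mean (the column names are placeholders; the actual script is linked at the end of this post). The 5G indicator g5 is coded 0/1, so exp(coef) gives the death-rate ratio of 5G versus non-5G States:

```r
# Quasi-Poisson log-rate model: deaths with a log population offset.
m0 <- glm(deaths ~ g5 + offset(log(pop)),
          family = quasipoisson(link = "log"), data = states)
exp(coef(m0)["g5"])  # ~1.82 here, i.e. an 82% higher death rate

# The same model with urbanization added as a confounder:
m1 <- update(m0, . ~ . + urban)
exp(coef(m1)["g5"])  # drops to ~1.34, no longer significant
```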

The big reveal of this blog is that when adjusting the correlation reported by Havas for the State urbanization rate, the correlation mostly disappears: the excess risk is reduced to about 34% and the p-value is 0.36 (or, for the connoisseur, the observed difference between 5G and non-5G States was not statistically significant).

Nonetheless, although not statistically significant, there is still an excess risk of 34%, which lies between -28% and +147% with 95% certainty. It is straightforward to look into this a bit more. There is more State-wide data available that could potentially contribute to this difference. To explore this, I also linked in data on the median household income in each State, the percentage of non-Hispanic whites, the States’ median age (which has a non-linear correlation with the COVID-19 death rate), and the population density. Adjusting for all those factors results in a 48% (-19%, +147%) excess mortality in States with 5G, but this difference again was not statistically significant (p-value ~ 0.21). In fact, there are a couple of States that are clearly different from the rest (outliers: Utah, New York, District of Columbia and Hawaii). When these are removed from the analyses, the difference between 5G and non-5G States is less than 1%, in favour of the 5G States (p-value ~ 0.97).

I suppose this is why Havas did not submit the work for peer review. She must have known that the first reviewer to look at this would point this out. I read an argument somewhere that apparently “we have to start somewhere”. This is of course correct: we have to start somewhere, but we shouldn’t show it to anyone else until we have done a good job. I am sure Havas is aware of bias, confounding and multivariable statistical analyses, so the question is “Why did she decide to put this stuff online, if not to join Pall and Firstenberg and fuel the fire of the conspiracy?”.

*

My blog could end here, but there is some more academic work that is easily done and will improve our inferences. We can use a method called ‘inverse propensity weighting’. There is a lot of literature on this, but in (very short) summary: each State is weighted by the inverse of the probability with which the mobile phone operators would have selected that State for 5G. The analysis is then balanced in such a way as if the operators had allocated 5G randomly. That is really beneficial because it makes the study look somewhat like the gold standard for such tests: the randomized controlled experiment (at least with respect to the variables in the exposure propensity model, and yes, it is a bit more complex; point taken). Anyhow, I did this using all the factors above as well as, additionally, ‘population size’ (because I thought maybe operators went for the largest States first).
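
In case you want to try the propensity step yourself before downloading the script, the logic is roughly as follows (again with placeholder variable names):

```r
# Step 1: model the probability that a State got 5G from its
# characteristics (the exposure propensity model).
ps_fit <- glm(g5 ~ urban + income + pct_white + median_age +
                pop_density + pop,
              family = binomial, data = states)
ps <- fitted(ps_fit)

# Step 2: weight each State by the inverse probability of the
# 'treatment' it actually received.
states$w <- ifelse(states$g5 == 1, 1 / ps, 1 / (1 - ps))

# Step 3: weighted, otherwise unadjusted comparison of death rates.
ipw <- glm(deaths ~ g5 + offset(log(pop)),
           family = quasipoisson, weights = w, data = states)
exp(coef(ipw)["g5"])  # ~1.10 with p ~ 0.75 in these data
```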

Because this is now kind of a randomized experiment, the difference can be directly compared without adjustment for other factors (like Havas did in the first Table in this blog). And wow, the effect is reduced to as little as +10%, with a p-value of 0.75.

A final further improvement is ‘doubly robust adjustment’, and these “best” results show there is actually no evidence of any causal effect from 5G anymore: the difference is less than 1% and the p-value is 0.975.
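
The simplest doubly robust variant just combines the two previous steps: keep the propensity weights and put the covariates back into the outcome model, so the estimate holds up if either of the two models is correctly specified. A sketch:

```r
# Doubly robust sketch: IPW weights plus covariate adjustment in the
# outcome model (consistent if either model is right).
dr <- glm(deaths ~ g5 + urban + income + pct_white + median_age +
            pop_density + offset(log(pop)),
          family = quasipoisson, weights = w, data = states)
exp(coef(dr)["g5"])  # ~1.00 with p ~ 0.975
```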

With this, I think we can comfortably say that this conspiracy theory has been debunked.

These analyses can of course be done with better data and possibly better models, so the dataset and R script can be downloaded here and here. It is also quite straightforward to repeat these analyses for other countries. I am looking forward to your contributions: just email or DM me on Twitter if you have done this.

*

Addendum

Someone combining the spatial data on COVID-19 and 5G and looking somewhat amateurishly at the correlation was always going to happen, and it would have been straightforward to do it correctly. We suggested this about 2 weeks ago to the mobile phone operators; it would have addressed the issues we are facing now. Unfortunately, the operators didn’t trust independent research. In fact, they hired a ‘product defence’ consultancy to do a review for them – very disappointing (and hopefully a lesson learned).

CLASSIC: Medically unexplained symptoms, electricity, and the general population

This article was published on my previous blog ‘The Fun Police’ back in 2015. I may not necessarily agree with everything in the article anymore, but it remains a post that may be of interest to some…

Occasionally, new papers get published about electrohypersensitivity or, more specifically, idiopathic environmental intolerance attributed to electromagnetic fields (IEI-EMF). I have written about this before, and we still do not know whether it exists or not (although, I suppose, that depends a bit on who you are talking to). Anyway, a new paper got published in the journal Bioelectromagnetics last month, entitled “Does electromagnetic hypersensitivity originate from nocebo responses? Indications from a qualitative study.” You can find the paper here <link>.

In a nutshell, it is a qualitative study in which forty self-diagnosed electrohypersensitive people were interviewed and asked whether they first thought exposure to electromagnetic fields was bad and then got ill, or whether they first got ill and then thought it was because of electromagnetic fields. The first sequence of events would be a classic nocebo response; in other words, people think they get ill from something and subsequently become ill when they think they are exposed (the opposite of the well-known placebo effect, if you will), while the second order of events supposedly indicates that the nocebo effect is not present and people do get ill from the exposure to the electromagnetic fields.

Looking through the paper, it turns out that for 25 of the 40 participants the EHS self-diagnosis was made over 2 years ago, so at that point one really needs to start wondering how helpful this study is: after all, it is trying to disentangle a set of events for which it is pretty difficult to establish exactly when they first occurred, and which most likely happened very close together in time. To be fair to the author, he (I think) opened up the complete book of tricks and skills required to get the best information possible in this case. I am, however, not very convinced that the approach itself is a very useful method to try and shed some light on this particular problem.

The author concludes that his results do not point to the nocebo effect as an explanation for electrohypersensitivity, because a majority of participants sought, and failed to obtain, medical assistance, and as a result started questioning the effects of electromagnetic fields in their environment on their health. I would say that’s rather dubious given the methodology used and the fact that, of the 40, only 23 claimed to have never heard of electromagnetic fields before reaching the relevant stage in the attribution process, which seems very unlikely. Indeed, the results are pretty ambiguous and do not exclude the nocebo effect (in combination with self-attribution of the cause).

I think what the author and I agree on is that electrohypersensitivity is a form of MUS (medically unexplained symptoms) which sufferers attribute to electromagnetic fields, without much evidence that this is the case…but I guess this is what the majority of researchers in the field believe (I’d say 97%, but that would get this muddled up with climate change). As also previously mentioned by many people, that does not mean this is not an illness; it is no fun suffering from this (and that is an understatement). Luckily, it seems cognitive behavioural therapy can help.

So anyway, in summary, we have not learned much new from this paper. What it does provide, though, is a table of the symptoms attributed to electromagnetic field exposure from, primarily, wifi routers, mobile phone base stations, mobile phones, DECT phones and electric home appliances, as reported by this group of people.
That list shows an interesting, and very close, resemblance to a list of general subjective health symptoms that everybody occasionally suffers from…some more than others. In a 1999 paper from Norway, researchers developed a scoring system to get a handle on how often these kinds of problems occur in the normal, lay population. You can find the paper here <link> [update 2021: this link does not work anymore. Unfortunately, I do not know what it was]. So let’s take a step back before going all crazy about causality. I have copied the table from the electrohypersensitivity group, calculated the percentages, and added the same (averaged over men and women) from the normal, lay population, just to see if we are actually talking about a problem at all. They don’t all match up, but have a look below:

I don’t know what you think about this comparison, but given that this was a self-selected group of electrohypersensitive people, I was quite surprised how comparable the numbers are. What it really looks like is that this is a group of people not that dissimilar from the general population, who attribute things everybody is occasionally inconvenienced by to a specific exposure (electromagnetic fields in this case). Having said that, as a group they could do with better sleep! This table does not cover the frequency of the symptoms, which may or may not be different from the normal population. Presumably, once the connection is made, these occur more often if exposure is perceived (although blinded trials have not shown any direct link).

In conclusion, unfortunately we have not learned anything new from this paper except, maybe, that people with IEI-EMF are not that dissimilar from everyone else. And that, I would say, is quite informative…

CLASSIC: Should we trust low to moderate increased risks in observational epidemiology, like ever…?

The article below was originally published in 2015 and, because I think it may still be of interest, has been transferred to this new blog.

A relatively short post this month, and it is also dealing with something we all know. However, sometimes it is important to reiterate the stuff everybody already knows. So that they remember it, and, you know…so everybody actually knows it. So welcome to the wonderful world of residual confounding. Just to explain, in case epidemiology is not the driving force of your life: a confounder is a factor that is correlated with the exposure of interest and also with the disease you are interested in (and yes, it is not on the causal pathway and stuff). So in graphical form this looks like:

To give you an example: people who smoke tobacco are, on average, also more likely to drink alcohol, and both cause cancer (and smoking does not cause alcohol consumption or vice versa). So you can imagine that if you do a study on the effect of alcohol consumption on cancer risk, you also need to ask people about smoking, since there will be more people who smoke tobacco (which also causes cancer, as you may know) in the group of people who drink alcohol. If you don’t ask this, then part of the cancer cases you think have been caused by alcohol are actually caused by tobacco, and this gets worse, on average, for people drinking more (because the two are correlated).

Anyway, this implies it is important that all confounding factors are taken into account when you do an epidemiological study. This is, however, not as easy as it sounds: the fact that something could be a confounding factor needs to be known before you start collecting data, because that is the moment you decide what to ask participants about. For (retrospective) observational epidemiology there is an extra problem: if the information about the confounder was not collected, say, twenty years back, how are you going to estimate its effect at all (one solution is through modelling, such as what we used here: <link>)?

If the effect of the exposure of interest on the disease is really big, then this is not necessarily a massive problem (if you are only interested in whether something is a true risk factor). For example, if you work on a building site, then working at heights is correlated with being exposed to dust, and both can cause premature death. However, if you are interested in the risk of falling from heights and mortality, measuring dust exposure, despite it being a confounder, is not that important (if you think this is a stupid example, I am not entirely unsympathetic…and would very much welcome a better example in the comments below!).

But most of these big risk factors are kind of already known; what if we are studying things with low or moderate increased risks? And by that I mean those with less than two-fold increased risks (odds ratios or relative risks below 2); say, for example, the health risks of air pollution, the studies of whether electromagnetic fields may cause cancer, basically any study on cancer risk and single food ingredients, etcetera. I did not choose these examples at a whim; they have low-to-moderate risks and the potential for confounding is very large (for example from socio-economic status, other nutrients, exercise,…).

Initially just for myself, I thought I’d run some simulation studies to familiarize myself again with the impact of this (e.g. what if exercise wasn’t measured in a study on cancer risk and, say, the drinking of green tea?). It really drove home the point of how much of an issue this residual confounding is, so I thought I’d share it with you. I simulated a case-control study of 1000 people (500ish cases and 500ish controls) and, for the sake of argument, did not assume there was any measurement error. Now what if the exposure of interest was not a cause of the disease (i.e. OR=1), but it was correlated with another factor that was – but this factor was not taken into account in the statistical model?
I simulated different correlations between the two factors (range 10% to 80%, with 100% being perfect correlation) and also varied the odds ratio of the unmeasured factor from 1 (no effect either) to 2.2 (a more than two-fold increased risk). I then plotted the odds ratio of the (unrelated!) risk factor that I got from the statistical logistic model; a sketch of the simulation code is below. Have a look at the figure that follows it! The x-axis is the OR of the unmeasured confounding factor, the coloured lines are for the different correlations, and the black line is the true OR (i.e. this is 1, because it was not truly a risk factor).
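
Here is a hedged sketch of what one run of that simulation looks like in R (not my exact script; one combination of confounder OR and exposure-confounder correlation):

```r
# Exposure X has no true effect (OR = 1) but is correlated with an
# unmeasured confounder C that does affect disease risk.
set.seed(42)
n       <- 1000   # ~500 cases and ~500 controls on average
rho     <- 0.6    # latent correlation between X and C
or_conf <- 1.5    # odds ratio of the unmeasured confounder

# Correlated binary exposure and confounder via a shared latent normal:
z <- rnorm(n)
X <- as.numeric(rho * z + sqrt(1 - rho^2) * rnorm(n) > 0)
C <- as.numeric(z > 0)

# Disease depends on C only; the intercept gives roughly 50% cases.
p <- plogis(log(or_conf) * C - 0.2)
D <- rbinom(n, 1, p)

# 'Naive' model leaving C out, as if it was never measured:
exp(coef(glm(D ~ X, family = binomial))["X"])      # biased above 1
# Including the confounder recovers the (true) null:
exp(coef(glm(D ~ X + C, family = binomial))["X"])  # ~1
```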

Now I think this figure is pretty worrying! If the correlation between the two is moderate (40%, the green line), you start to see an increased risk where none exists from an OR of the unmeasured confounder of only about 1.4; and the effect quickly gets much stronger with higher correlations. For example, a moderate correlation of 60% is really very common, and so is an OR of 1.5 (the yellow line), and this would give you a wrong point estimate for the effect of 1.3, or a 30% increased risk. As a comparison for that 30%: the increased risk of mortality from long-term exposure to outdoor sulphur (in PM2.5 specifically) is 14% (95% confidence interval 6%-23%) per 200 ng/m3 (reference to the ESCAPE study: Beelen et al. 2015). So the results of this simulation are quite realistic… Now, more importantly, for detection of a statistically significant risk the lower limit of the 95% confidence interval should be above 1. I have done this for my simulated study in the figure below:

In other words, for moderate correlations (>0.6) you only need a pretty small unmeasured confounder of about OR=1.4 (40%) to conclude that the exposure you were interested in (and remember, which does not actually cause anything) causes the disease. Even with a correlation of only 40% (the grey line), a missed confounder with an OR of about 1.8 (still moderate, really) will have you conclude that you just found a (new!) causal factor for the disease!

So is there a point to this? There is, of course…it’s a warning that if we find odds ratios below about 2, we should really spend a lot more time thinking about residual confounding. In fact, I think this should be a mandatory section in the Discussion of observational epidemiology papers. This is already done in most papers, but usually the focus is really on the effect we observed (let’s face it, a p-value below 5% will be more likely to get published, unfortunately), with little description of other causes and residual confounding. In fact, and I am proud to say that I used these words in a paper once, we should be more “epistemologically modest” in observational epidemiology and not make such big claims.

The real message, as such, is of course that next time you publish a paper, or next time you read a Daily Mail article about what causes cancer this time, you think about confounding……and that you think about The Fun Police’s blog! Next time you attend a seminar or conference, and someone presents a new finding with a low or moderate risk, please stand up, raise your hand and say “You should have a look at this blog by this guy…”

CLASSIC: A medieval hermit, a religious nut and a mobile phone enter a pub…

This article was originally published on my old blog OEHScience in 2014. Since then, my knowledge about ‘electrohypersensitivity’ has improved and my thoughts on the topic have changed. However, I believe the following article remains of interest. I would also very much appreciate your thoughts on this: there is a discussion section below the article…

Sometimes it is a very good idea to look at a problem from a completely different and sometimes unexpected angle. That’s what this post is all about.

It’s about idiopathic environmental intolerance or, in medical terminology, “I don’t feel well and my doctor has no idea what it is, but I am pretty sure it’s this thing I am exposed to”. Often, patients claim this is caused by low-level mixtures of chemicals they get exposed to, such as plastics, pesticides, scented products, petroleum products and such, and the symptoms are vague and non-specific (fatigue, headaches, inflammation of joints, and the lot…). Another exposure that is often mentioned is non-ionizing radiation, be it extremely low frequency fields from power distribution, RF from the use of mobile phones or nearby towers, or even from smart meters and the likes (yup, in a way that dreaded dirty electricity story again).

Strongly implying, maybe unjustifiably, that the suspected causal agent is well established, the best-known subgroups of the idiopathic environmental intolerance spectrum are also known as multiple chemical sensitivity (MCS) and electro-hypersensitivity (EHS), depending on the suspected (or, well, known…depending on who you talk to) exposure. But if we go back to the broader ‘all-inclusive’ definition, then in summary idiopathic environmental intolerance describes people getting ill while nobody really knows from what (it could even differ from case to case), except for the sufferers, who “know” what triggers the effect and will often experience symptoms when they see the cause, such as a mobile phone, being used in their vicinity.

It may be quite difficult to do these studies, because the exposures may differ from case to case, very low concentrations may already be enough for effects to be triggered (as with an allergic reaction), and there may be a delay between the exposure and measurable effects…for example. However, the problem is that in what is generally considered the “gold standard” in these kinds of cases – blinded randomized controlled trials – patients respond as often and as strongly to placebos. So that’s a bit unfortunate. And then there are the facts that its incidence seems to increase with media attention and that at least a proportion of sufferers respond to some form or another of psychotherapy, implying that this may be of psychological, rather than biological/medical, origin.
Anyway, I don’t know the correct answer here…I don’t even know enough about the subject to make a useful educated guess about causality. I know things can get quite heated when MCS and EHS are discussed, so I will stay away from any claims. Luckily, you may remember from the first sentence that this was not what this post was going to be about…it was going to be about looking at a problem from a completely different angle. It is about idiopathic environmental intolerance, though, so I just needed to colour in the background a bit…

I came across this 2012 study entitled “Taking refuge from modernity: 21st century hermits” by Boyd, Rubin and Wessely from King’s College London, published in the Journal of the Royal Society of Medicine. You can find the abstract and, if you have access to it, the full-text PDF here <link>. Instead of doing a randomized controlled trial, a case-control or other quantitative epidemiological study, or maybe going for the qualitative approach and interviewing sufferers, what struck them was that people suffering from MCS and EHS had stories that were not unique to recent times (more specifically, after the widespread introduction of electricity and, much later, mobile phones) but that had striking similarities to stories from cases reported throughout history – just with a substitution of the purported causal agent. As it turns out, literature and historical texts have made reference to hermits and recluses who turned their backs on society for a multitude of reasons, including religious ones, personal events and societal dissatisfaction. The authors make the case that this may well imply that the underlying psychological processes in the isolation of oneself from society (e.g. by the most severe sufferers from MCS and EHS) should, and I quote, “be seen less as a unique and specific reaction to a hitherto unknown hazard, but more as part of a longer tradition of isolation from an impure world.” So to explore this they identified modern and historical cases to compare their views and beliefs. Pretty cool in my opinion, and an approach I had not seen before…

They identified six EHS and MCS cases from the late 20th and early 21st century and four historical cases from the 3rd, 5th, 17th and early 20th centuries. Case descriptions are found in the paper, but in short the modern cases include one MCS case, two EHS cases, and three combined MCS & EHS cases, while the historical cases were Noah John Rondeau (1929-1950; dissatisfied with society), Roger Crab (1652-1657; dissatisfied with society and religious reasons), St Simeon Stylites (415-459; religious reasons) and St Anthony (275-356; religious reasons).
Although the contemporary cases all reported that an important factor for seclusion was the experience of symptoms, they also indicated that disquiet about the ‘ill’ society that resulted from modernity was an important factor; as such, one saw sufferers as ‘an early warning system’. The historical cases similarly described ‘dissatisfaction with the world and its trends’ and problems with the ‘slavery of industrialism’, and earlier still that ‘the body of England has become a monster’, while St Anthony derided the world where ‘everything is sold at its price,…
Both contemporary and historical cases also felt that they had no choice in making the decision to leave society, and that ‘nobody would like this if they had a choice’. They all also felt they were in a constant struggle, whether against evil spirits (the more historical cases, one would suppose), the restrictions of society laid out by those in power (everyone, really), or ubiquitous chemicals or radiation (the contemporary cases).

Of course there are many methodological problems with this approach, most notably the small number of cases, but it is a very interesting, left-field approach to a question many people would like a definite answer to. I don’t know whether this could be helpful for other diseases; I suspect it may well be something that only really works for EHS and MCS. The similarities between the MCS and EHS sufferers and the historical hermits are striking, and may suggest some form of causality, if not via a biological/medical pathway. It may provide some leads for the treatment of the most severe EHS and MCS cases, and definitely for more research.
Either way, luckily it’s also an interesting read – much more detailed than I have given here – so if you have a spare moment I would recommend it (or, in a more contemporary way…if this were Facebook I would have “liked” it).