‘Bending Science’ & the Dirty Electricity Industry

In principle, the scientific method has a relatively robust system, based on peer review, to ensure that any problems in scientific papers are addressed before publication. This methodology is not without its problems, but it is the best we have available, and it is hard to see what other system would do better. In any case, it works relatively well as a self-correcting scientific method.

One of the problems with it, though, is that it is possible to block specific lines of thought – at least for a certain amount of time. For example, if you propose an alternative scientific explanation for a phenomenon with an ‘established’ paradigm, it will be quite difficult to get it published (although, if the evidence is strong enough, it will be published eventually). More problematic, there is abundant evidence of industry distorting evidence and attempting to distort the scientific process, including peer review (most infamously, of course, by the tobacco industry, but also in relation to the carcinogenicity of chemicals used in industry and to global warming, for example).

Depending on who you talk to, this may or may not be the case in the assessment of the carcinogenicity of radiofrequency radiation (RF) from mobile phones, and it is EMF (electromagnetic fields) I’d like to talk about here.  

Not the ‘normal’ EMF, characterized by the frequency, amplitude, and shape of the waves, but a “new” metric that is supposedly the real exposure causing cancer, and a number of other diseases, in humans. This is ‘dirty electricity’: an exposure defined not by a clear and precise set of quantitative characteristics, but mainly by the fact that it can be measured with a dirty electricity dosimeter. Basically, it is a form of RF, but measured by means of voltage changes over time within a certain frequency bandwidth. A better name, therefore, is high-frequency voltage transients (superimposed on 50/60 Hz fields). Those of you who keep a close eye on this blog may remember ‘dirty electricity’, since I have written about it before [direct link]. It is a niche within the EMF research community and, just to make this clear, by no means established or even accepted as a valid scientific hypothesis.
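To make the idea of measuring “voltage changes over time” a bit more concrete, here is a minimal Python sketch of that kind of metric. The formula (mean absolute rate of change of the sampled voltage) and all the numbers are my own illustrative assumptions, not the actual algorithm of any commercial meter; the point is only that a rate-of-change metric is dominated by small high-frequency ripple rather than by the large 50/60 Hz wave itself.

```python
import math

def transient_metric(samples, dt):
    """Hypothetical 'dirty electricity' reading: the mean absolute rate of
    change |dV/dt| of a sampled voltage waveform. Illustrative only."""
    diffs = [abs(b - a) / dt for a, b in zip(samples, samples[1:])]
    return sum(diffs) / len(diffs)

fs = 1_000_000            # assumed 1 MHz sampling rate
dt = 1.0 / fs
n = fs // 50              # one full cycle of 50 Hz mains

# 'clean' 230 V RMS, 50 Hz mains voltage
clean = [230 * math.sqrt(2) * math.sin(2 * math.pi * 50 * i * dt)
         for i in range(n)]
# same waveform with a small (5 V) 10 kHz ripple superimposed
dirty = [v + 5 * math.sin(2 * math.pi * 10_000 * i * dt)
         for i, v in enumerate(clean)]

print(transient_metric(clean, dt))  # driven by the slow 50 Hz swing
print(transient_metric(dirty, dt))  # much higher, despite the tiny ripple
```

Note how a 5 V ripple (about 1.5% of the mains amplitude) multiplies the reading several-fold, which is exactly why this metric behaves so differently from amplitude-based field measurements.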

Its proponents set themselves against the established EMF research community, which, they claim, is biased because the electric (and mobile phone) industries want to hide the real effects of EMF exposure on humans. This, not unexpectedly, has some traction within the electro-hypersensitivity (EHS) (or idiopathic environmental intolerance attributed to EMF) community, which is unfortunate. Whatever it is that is going on with respect to EHS deserves, in my opinion, more attention, but focussing the efforts on something as untested as the dirty electricity hypothesis seems like a bad idea. Similarly, in the case of a middle school in the US where an increased cancer risk amongst teachers was perceived, instead of first investigating the likelihood of a “normal” explanation, researchers jumped immediately to ‘alternative scientific explanations’. A good analogue from the medical world is homeopathy, which similarly, but on a much larger scale, preys on vulnerable people without any evidence of efficacy above a normal placebo effect.

Indeed, in much the same way it is possible to buy ‘dirty electricity’ measurement devices, ‘dirty electricity’ filters (to ‘clean’ your environment), and ‘dirty electricity’ services that “solve your problem”. This seems primarily an American, Canadian, and I think Australian thing, but it is penetrating the UK as well. People make a lot of money on the back of dirty electricity, and as such it is just another industry. It is (still) relatively small, but it is an industry nonetheless. And with it come the issues generally associated with the bigger industries, such as the publication of research of, let’s say, varying quality to back up the industry’s raison d’être, the use of PR and media to generate exposure and create concern in the population, and – and this is what the rest of this post will be about – the silencing of critics.

I wrote a review of the epidemiology of dirty electricity, published in 2010 [link to abstract], concluding, based on all the evidence available at the time, that although it was an interesting concept there were so many problems with the published studies that it was extremely hard (probably ‘impossible’ is a better word) to say that dirty electricity was associated with an increased risk of disease or adverse health effects. Last year marked five years since its publication, and I thought it would be interesting to update the review and see whether the proponents of dirty electricity had been able to provide better evidence of its importance.

Coincidentally, others had the same idea, and I collaborated on the updated review with Professor Olsen, a professor in electrical engineering at Washington State University with a nearly endless CV covering many aspects of electrical engineering and EMF exposure measurement and assessment. The work was sponsored by the Electric Power Research Institute (EPRI), who are of course interested in this kind of work. This is important, and it was highlighted specifically in our publication. At this stage I think it is also important to note that EPRI specifically indicated that they did not want to see anything of our work until it was published. So, in summary, we highlighted the source of funding in any publications, but we were free to do as we wanted. Emphasizing this may seem a bit tedious, but it is quite important in what follows.

So how does this relate to peer review and the influence of industry? I’d like to just describe what happened next, since most of these things tend to stay behind closed doors. I am sure other researchers will have had much worse experiences than we had but, well, I don’t know about those…and it is my blog…though I’d be quite interested in hearing about them.

*

Anyway, so we conducted our systematic review of the exposure assessment and the epidemiology of ‘dirty electricity’, happy to be guided by what the evidence told us. It recently got published in the ‘Radiation & Health’ subsection of ‘Frontiers in Public Health’ and at the time of writing has about 1,000 downloads. If you are interested, you can find it here as open access [link to full text]. I have much appreciation for the editor-in-chief of ‘Radiation & Health’, who stuck with us throughout the whole peer-review process (more on this later) despite whatever his personal opinions on the topic may be (note that I have no idea what these are, which is how it should be).

But it did not start with ‘Frontiers in Public Health’.  

The following will be quite familiar to everybody who has ever tried to have a scientific manuscript published, and I’ll call it Stage 1:  

“This decision was based on the editors’ evaluation of the merits of your manuscript….” 

“Only those papers believed to be in the top 20% in a particular discipline are sent for peer review….”

“Overall, I am not convinced that such a spartan review is warranted and worthy of consideration for publication…”

No hard feelings, this is all part of the process.

Stage 2 happens subsequently, in which the paper is externally reviewed, but rejected. This also happens regularly, but in this particular case something interesting started to appear. Of the 2–3 reviewers, the manuscript got favourable reviews from two but always got the minimum score possible from one (sometimes two, depending on the number of reviewers). Given the stark contrast between the scores, we asked the editors about this; basically, if a manuscript receives one negative review it generally gets rejected. So here is a first point where it is possible to block the publication of a paper. Indeed, further discussion revealed that because the scientific proponents of the dirty electricity hypothesis are not that many individuals, an editor looking for appropriate reviewers who types ‘dirty electricity’ into, say, PubMed will end up with the same, relatively small, pool of people. So just the minimum score would be enough, but of course one needs some sort of rationale. This was, generally speaking, minimal, but more importantly had nothing to do with the science. For example:

“Although the authors acknowledge their funding source, their bias is obvious. In many respects, it might be better to reject the paper.”

“While the authors claim no conflict of interest, this study was funded by EPRI, which some would consider a conflict of interest.”

Nice, despite the fact that the funder was clearly stated and we highlighted that the funder had no input into any aspect of the review (see the actual manuscript). However, still acceptable, and it is important to make sure any funders and perceived biases are highlighted in a manuscript. Whether it is a reason for rejection is another matter though…

Anyway, it soon got a bit more personal and unfriendly. I don’t know if others ever review a paper like this, but I can’t say I was very impressed. As a reviewer, one should address specific scientific points so that the authors can correct these if there is a genuine mistake, provide an explanation of why what they did is scientifically correct, or withdraw the manuscript if it really is an incurable problem. Anyway, at this stage we move into more unscientific, and this time personal, comments:

“It is clear that the authors are unfamiliar…”

“This review appears to be biased..”

“It is clear that the authors are unfamiliar with the biological research within the area of extremely low frequency (ELF) and radio frequency (RF)”

“They then make a calculation that demonstrates they do not understand dirty electricity.”

“It is clear that the authors are unfamiliar with electricity, the concept of linear and non-linear loads and grounding problems”

The language here is not so much scientific as it is emotional, and clearly aimed at quickly suggesting to an editor, who will mostly skim the reviews, that the work is awful and should be rejected. Another way of doing this is to use other emotive words, such as “flawed” and “false”, in abundance; and presto!:

“I find it deeply flawed with false statements, flawed assumptions and wrong calculations. It seems to be written by someone who has only a superficial knowledge in this field and has a strong bias in favor of the electric utility.”

“I find this review to be scientifically inaccurate, biased, and deeply flawed and would recommend it not be accepted for publication.”  

“I don’t believe that publishing this flawed review is going to benefit science, policy makers, or the public”

Another nice one for the records, and which I would hope none of the readers of this blog (yes, you!) ever uses:

“The authors of this review article have never done any primary research on HFVT and are not considered experts in this field and would not be invited to review articles on this topic.” 

As a side note, I find the following comment quite telling; teaching epidemiology myself, I would fail a student who wrote this in an exam:

“When an author allots effects from a variety of studies to “unconsidered confounding factors” it makes me question their objectivity.”

*

So yes, this worked amazingly, and some editors fell for this approach. Not the editor of Frontiers in Public Health: Radiation & Health though, who decided to go with the science on this one.

Of course, why change a winning strategy: 

“The authors definition of Dirty Electricity speaks of a complete misunderstanding.”

“The authors’ definition implies that they have little to no knowledge of electrical engineering, or more specifically high frequency voltage transients.” 

And my favourite:

“Generally the grammar is correct, but too often the language is in error. Some of the errors are so egregious that they raise the question of the authors understanding of electrical engineering and epidemiology.”

Even our thorough approach was questioned:

“The manuscript’s very length suggest they choose to “overpower” the reader obfuscation.” 

Unfortunately for them, this did not work this time around, and the paper remained in the peer-review process. It was quite a tedious process, though, because a group of reviewers would be selected, provide a review, and, when all scientific points were addressed in our replies, withdraw from the reviewing process; thus stopping the review process until another reviewer was found, delaying publication, and inconveniencing us as well as the editor. I consider this plain rude, but what it suggests to me is that we got the science right on this. Aside from delaying publication, there was another benefit to this approach for the ‘dirty electricity’ industry: as it turns out, one can be as rude and personal as possible because, after withdrawing from the review process, all comments were erased (this is an unfortunate consequence of the Frontiers system). I won’t bother you with these since they follow the same pattern as above, but I’d like to highlight one gem of a comment:

“EPRI has many far more qualified employees to write such a review but chose to hire De Vocht and Olsen.”

*

I hope you found my walkthrough of this particular peer-review process entertaining. This is what can happen if you do research that could hamper the profits of an industry. Of course, this is nothing new, but it is quite interesting that even a small industry (and one that argues it fights against the INDUSTRY) uses the same tactics.

You may have realised that I did not tell you the conclusion of our systematic review. This was on purpose, because in a way the actual conclusion does not really matter. We conducted this out of academic interest, so whether our conclusion was that this is indeed a dangerous exposure or that it is nonsense has no impact on either of us. For convenience, here is the link to the full paper [direct link].
