
Thursday, March 19, 2020

Health Effects From 5G Networks

5G is a high-bandwidth cellular service coming to our area, so our local government is holding hearings for community input.  One can argue that the new antennas are ugly, but those who are vehemently opposed to the upgrade on the basis of purported ill health effects don't have evidence on their side.

I wrote an op-ed piece for our local paper on the topic, but the coronavirus is getting all the attention, so my piece might not appear.  I am pasting a copy below for those of you who may be interested in the topic or who are concerned about your health.  In a nutshell: don't worry!

My piece (submitted to the Moscow-Pullman Daily News):


Important policy decisions about 5G need to be informed by the science.  Rather than debating the science, we should use the scientific consensus, or better yet, consilience.  But how can the average citizen determine what is true?  What is the scientific consensus?  Which “expert” should we believe? 



The most reliable experts are those whose own scientific careers are dedicated to research areas that have bearing on the topic.  Highly regarded scientists produce knowledge that forms the foundations on which future researchers and technologists build.  The least reliable sources are those that cherry-pick data to support their desired conclusions, claim without proof that the consensus view is wrong, or call upon conspiracy theories.



Why do I discount claims that 5G has adverse health effects?



First, I apply the smell test to see if the claims make sense.  In high-density population centers, all human-made sources of electromagnetic waves add up to a mere 1/1000 of the light intensity of the sun (also an electromagnetic wave) at the earth's surface.  Human-made electromagnetic sources don't have that much oomph.  Or consider the cells in our body, which are kept at a toasty 98.6 °F or so.  The energy imparted to cells and molecules by thermal buffeting at this temperature is huge compared with the energy of the electromagnetic waves produced by technology.  How huge?  Like a freight train barreling down the tracks compared to a bee leisurely searching for nectar.  How can 5G have adverse health effects if other ambient influences are so large in comparison?
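For readers who like numbers, here is a little back-of-the-envelope script that makes the freight-train-versus-bee comparison concrete.  The 28 GHz frequency is just one representative millimeter-wave 5G band, and comparing a single photon's energy with the thermal energy scale kT is illustrative, not a full dosimetry calculation:

```python
# Back-of-the-envelope: thermal energy scale at body temperature vs.
# the energy carried by a single 5G photon (illustrative numbers only).
K_BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K
H_PLANCK = 6.62607015e-34   # Planck constant, J*s

body_temp_k = 310.15  # about 98.6 degrees F, in kelvin
freq_5g_hz = 28e9     # 28 GHz, a representative millimeter-wave 5G band

thermal_energy = K_BOLTZMANN * body_temp_k  # ~kT, energy of thermal buffeting
photon_energy = H_PLANCK * freq_5g_hz       # energy of one 28 GHz photon

print(f"kT at body temperature: {thermal_energy:.2e} J")
print(f"28 GHz photon energy:   {photon_energy:.2e} J")
print(f"ratio (thermal/photon): {thermal_energy / photon_energy:.0f}")
```

Even a single thermal kick carries a couple hundred times the energy of a 5G photon, before one even accounts for how weak the fields are at typical distances from an antenna.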



Next, we can go to the literature.  However, individual papers can be unreliable, and many of the studies report only correlations.  Correlation does not prove causation, as can be illustrated with examples such as the near-perfect correlation between autism diagnoses and organic food sales, or the more humorous correlation between deaths due to falling televisions and undergraduate enrollment at US universities.  Taking an arithmetic average of the results of such correlational studies also makes little sense.  The task of interpreting the literature as a whole is compounded by the fact that journals are biased against null results; "A black hole is found at the center of our galaxy" is a much more exciting headline, and much more likely to be published, than "Researchers cannot find any black holes."
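The point about correlational studies is easy to demonstrate with made-up data: any two quantities that merely trend upward over the same years will correlate almost perfectly, with no causal link whatsoever.  A minimal sketch (the numbers are invented, not real autism or sales figures):

```python
# Two made-up time series that both simply grow over the same years.
# They have nothing to do with each other, yet correlate almost perfectly.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2000, 2020)

series_a = 10 + 2.0 * (years - 2000) + rng.normal(0, 1, years.size)  # e.g., diagnoses
series_b = 50 + 5.0 * (years - 2000) + rng.normal(0, 3, years.size)  # e.g., sales

r = np.corrcoef(series_a, series_b)[0, 1]
print(f"Pearson correlation: {r:.3f}")  # close to 1, despite no causal link
```

The shared upward trend alone produces the high correlation; averaging many such studies would only average many artifacts.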



The best summary of the literature can be found in meta-studies, which aggregate the results of many publications to extract a more reliable connection between cause and effect.  These studies start by setting criteria for selecting papers for inclusion, such as requiring a minimum sample size to improve statistics, demanding double-blind protocols to remove bias, and excluding work based on surveys in which variables are not well controlled.  These criteria must be chosen BEFORE the researcher looks at any specific paper, to avoid introducing a selection bias that favors a particular result.  Such studies show no adverse health effects of 5G.



Finally, I look for experiments that control the cause and observe the effect directly.   Hyperelectrosensitivity, a purported sensitivity to electromagnetic waves, is simple to test in double-blind experiments.  In such studies, subjects are exposed to electromagnetic stimuli at random times and their reactions are recorded.  Both the subjects and the researchers are unaware of the timing of the stimulus; this avoids cues that could be perceived by the subjects and prevents the scientists from applying their own biases.   There is no observed correlation between the presence of an electromagnetic wave and the subjects' reactions.  But show the subjects a cell phone, and they react.  No well-controlled double-blind study shows hyperelectrosensitivity (see tinyurl.com/rumdkvv).
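The logic of such a double-blind provocation study can be illustrated with a toy simulation.  In this hypothetical model, the reaction probability depends only on whether the subject can see a phone, not on whether the field is actually on; all of the numbers are invented for illustration:

```python
# Toy double-blind provocation study: exposure is randomized and hidden;
# subjects in this model react when they *see* a phone, not when the
# field is actually on (a nocebo-style response).
import numpy as np

rng = np.random.default_rng(42)
n_trials = 10_000

field_on = rng.integers(0, 2, n_trials)     # hidden: is the emitter actually on?
phone_shown = rng.integers(0, 2, n_trials)  # visible cue, independent of the field

# Reaction probability depends only on the visible cue.
react_prob = np.where(phone_shown == 1, 0.8, 0.1)
reacted = rng.random(n_trials) < react_prob

corr_field = np.corrcoef(field_on, reacted)[0, 1]
corr_cue = np.corrcoef(phone_shown, reacted)[0, 1]
print(f"reaction vs. actual field: r = {corr_field:+.3f}")  # near zero
print(f"reaction vs. visible cue:  r = {corr_cue:+.3f}")    # strongly positive
```

The simulated reactions correlate strongly with the visible cue and not at all with the hidden exposure, which is exactly the pattern the real double-blind studies report.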



In response to my letter of February 7th, George Bedirian points to Americans for Responsible Technology (ART) as “a science-based grassroots organization.” Their website cherry-picks publications that support ART’s position and ignores the rest.  One of their founding principles is, “We reject the rush to roll out 5G technology across America.”  No individuals are associated with the website, and no reasoned arguments are offered.  The only input accepted from a visitor is a monetary donation.  This is not a science-based organization.



The real benefits of 5G in telemedicine, information, and entertainment far exceed the health risks, which are almost certainly nonexistent.

Friday, August 16, 2013

One of my strangest papers


Regardless of whether or not its conclusions turn out to be true, this is a very interesting thought provoking paper...


 
 "We are pleased to inform you that your manuscript has been accepted for publication as a Regular Article in Physical Review A."  Even after decades of research, these words never cease to brighten my spirits.

I admit that a recent paper by Shoresh Shafei and me, "The paradox of the many-state catastrophe of fundamental limits and the three-state conjecture," is a bit odd.  We submitted it to Physical Review A, the premier physics journal, with the attitude, "Heck, what have we got to lose by submitting it to a journal that will most likely reject it?"  Papers that are so far off the beaten path usually don't fare well.  That's why it is so satisfying that it was accepted so enthusiastically, with no resistance.

The paper was reviewed by two individuals, who both liked it.  In the words of Reviewer 1, "I really enjoyed reading this manuscript, which is well written and reads well. It is the last (for the time being I guess) of a long series of papers on the subject by the main author, who knows the subject perfectly well. In spite of some length, and a tendency to repeat concepts which have been already made clear, I must say, again, that I find the manuscript agreeable." The reviewer is right that this paper might be the end of one particular line of work, at least for the time being, which seeks to understand certain fundamental issues in what has become an applied research field.  The reviewer was also right that the paper was a bit wordy.  I developed this bad habit in response to continual misunderstanding of our work by many of my colleagues.  Perhaps this wordiness served its purpose.

The second reviewer opened his/her review with, "Regardless of whether or not its conclusions turn out to be true, this is a very interesting thought provoking paper on fundamental limits of nonlinear susceptibilities.  It should be useful for those seeking to design optimal nonlinear optical materials.  It is well thought out and has computer-modeling evidence to support it."  Reviewer 2 captures my thoughts on the paper.  The work brings up ideas that challenge our past results and leads us into unknown territory.  This type of work is not so common these days in mainstream fields such as mine.

The reviews go on to suggest changes, which we diligently implemented, leading to the paper being accepted for publication.  A copy of the pre-edited version can be found on the physics archives.  We will post the final version on the archives when the paper appears for publication.

So what's so strange about this research?  An answer requires a short introduction.

There is a quantity called the hyperpolarizability, which quantifies the strength of interaction between light and materials.  For those of you interested in a more in-depth explanation, please check out the tutorial on NLOsource.com.

The concept of a hyperpolarizability was originally applied to molecules, but it also applies to quantum dots, multiple quantum wells, quantum wires -- pretty much anything.  Since practical devices are based on it, making it as big as possible is often the goal.  Since the hyperpolarizability is fundamental to nonlinear light-matter interactions, it can give insights into basic science.

Given the importance of the hyperpolarizability, I calculated its fundamental limits back in 1999 and published the results in Physical Review Letters.  Aside from a few early emails expressing mild interest, the work remained largely unnoticed until a critical comment penned by Champagne and Kirtman appeared a few years later in PRL, along with my response.  The process of writing my response got me thinking again about limits, which gave me ideas that led to a series of papers that vindicated my approach but raised additional questions.

Any quantum system is represented by a spectrum of states, each having a characteristic energy.  Based on intuition, I guessed that at the limit, only three states contribute to the hyperpolarizability.  This was later called the three-level ansatz (Ansatz is German for an educated guess or starting assumption).  There were still too many parameters remaining, and if they could have arbitrary values, there would be no limit.  Next, I used the sum rules, which relate these parameters to each other, to further simplify the equations.  The sum rules are neat because they come directly from the Schrödinger equation without any approximations, and they must hold for any system.
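For the mathematically inclined, the sum rules in question are the generalized Thomas-Reiche-Kuhn sum rules.  In terms of the transition moments x_{mn} = <m|x|n> and the state energies E_n, they read (standard form, one equation for each pair of indices m and p):

```latex
\sum_{n} \left[ E_n - \tfrac{1}{2}\left( E_m + E_p \right) \right] x_{mn}\, x_{np}
  = \frac{\hbar^2 N}{2 m_e}\, \delta_{mp}
```

where N is the number of electrons and m_e is the electron mass.  Because they follow directly from the Schrödinger equation and the commutation relations, they hold for any Hamiltonian of the standard kinetic-plus-potential form.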

The combination of sum rules and the three-level ansatz leads to a limit, which turned out to be not a single number but a function of the number of electrons in the system, N, and the energy of the first excited state, E10.  This too made lots of sense, because the limit must depend on the size of the system, which is related to N and E10.  The hyperpolarizability is like an area.  It is nonsensical to ask for the limit of area, but determining the largest possible area as a function of perimeter leads to insights about geometry.
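Explicitly, the off-resonance bound from that 1999 paper takes the form (quoting the published result):

```latex
\beta_{\mathrm{max}} = \sqrt[4]{3}\, \left( \frac{e \hbar}{\sqrt{m_e}} \right)^{3} \frac{N^{3/2}}{E_{10}^{7/2}}
```

so the largest possible hyperpolarizability grows with the number of electrons N and falls off steeply as the first excited-state energy E_{10} increases.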

Luckily, I had just edited a book on nonlinear optical materials, which contained a tabulation of all the molecules that had been measured before the book appeared in print.  A plot comparing these molecules with the limit showed that they all obeyed the theory.  So far so good.  However, the best molecules were a factor of 30 below the limit.  Molecules are hard to make, and they come in many shapes and flavors.  This gap suggested that there may be whole classes of molecules yet to be discovered that could fill the void.  Alternatively, it might be that no stable molecules with the required structure exist.  This set off an explosion of work in my group that was funded by the National Science Foundation for almost a decade and is still going strong.

To make a long story short, it is possible to "make" all sorts of quantum systems as theoretical models.  We can be like gods, holding the nuclei in positions that they would normally refuse to occupy in the real world.  This allows us to see how electrons would behave in all sorts of weird configurations.  We designed "electromagnetic bottles" that coax the electrons into highly peculiar orbits by simply adjusting a couple of parameters.  No matter that the energies we need might exceed the energy capacity of the world over the next decade; we can know the result.  We even allowed electrons to interact with each other in the most bizarre ways, ways that would make them blush.  Quantum mechanics and electromagnetic theory can predict the behavior of nature to unprecedented accuracies of many decimal places, so we can be confident that our musings correspond to a reality, however impractical.

These investigations found that the best systems, independent of the approach used to make them, yield a hyperpolarizability of 0.70899, where the limit is 1.  In every case, when the quantum system is at this extreme, we find that it is described by only three states.  Somehow, the three-level ansatz is always obeyed.  We also did what are called Monte Carlo studies, where instead of calculating the hyperpolarizability from the Schrödinger equation, we determined all the parameters by randomly picking them under the constraint that they obey the sum rules.  Over millions of runs, the largest values that we got were 1, consistent with the prediction of our limit theory, and the three-level ansatz continued to be verified when the hyperpolarizability was 1.

All quantum systems must obey the sum rules, but these equations are obeyed by more general systems than are described by the Schrödinger equation.  We therefore hypothesized that the values between 0.70899 and 1 are the domain of exotic Hamiltonians governing phenomena that have not yet been discovered. We are busily pursuing this idea, but more on that later.

Aside from this curious 30% gap, the theory seems to correctly predict an upper bound, and the calculations verify the three-level ansatz.  This state of affairs left many questions unanswered, but things looked to be self-consistent.

The three-level ansatz is the key.  It is a guess that always seems to hold, but has never been rigorously proven.  Our attempts to prove the three-level ansatz basically boiled down to showing that when an M-level model is reduced to an (M-1)-level model, the hyperpolarizability gets larger.  Since the two-level model was previously proven to be unphysical, by induction, the three-level model would remain standing as the model that yields the maximum.

Shoresh took a different approach.  He started with a four-level model and varied the parameters using sliders in a popular program called Mathematica.  Like an audiophile adjusting the levels on an equalizer, he watched how the hyperpolarizability ebbed and waned with various choices of parameters.  He demanded that the equations obey the sum rules, but did not restrict the number of states. To his surprise, he found that when the second and third states become degenerate (of the same energy), the hyperpolarizability is 1.28, breaking the limit.  He continued to add states and found a pattern: for a system with M states, if all the states save the ground state and the highest-energy state are degenerate, the hyperpolarizability is bigger than 1 and gets bigger as more states are added.  For a system with an infinite number of states, the hyperpolarizability becomes infinite.  We dubbed this behavior the many-state catastrophe.

The many-state catastrophe has many implications.  First, it invalidates the three-level ansatz by counterexample; the largest hyperpolarizability is not given by a three-state system.  Second, it shows that there is no limit. However, both of these results run counter to all observations.  As we have seen over and over, there is an observed limit, and three states always dominate the response at the limit.  Granted, we have sampled fewer than 1,000,000,000 systems, so perhaps we have missed the cases that invalidate our theory.

Having an infinitely-degenerate system that leads to infinite hyperpolarizability is clearly unphysical.  It appears that the three-level ansatz, though quite simple, somehow acts to restrict the space of all possible quantum systems that obey the sum rules to the ones that are physical.  How it can possibly do this blows my mind.

The three-level ansatz has problems because it leads to a 30% overestimate of what is observed, but that's pretty close for a guess.  There are also other mathematical issues with the theory that I will not explain here; nevertheless, the theory appears to be highly predictive and has been successfully applied in many studies.  For example, the theory has led to a new paradigm for making better molecules.

In a sense, we have been spending lots of time coloring, but can't tell what fraction of the page we've covered.


The fact that the many-state catastrophe invalidates the three-level ansatz, while the ansatz is nevertheless always found to hold, raises the possibility that it is true for all real systems but unprovable unless we can find a different general constraint that limits systems to being real ones.  Such an additional constraint would then allow us to prove the three-level ansatz as its consequence.  However, given the generality of the problem, finding such a constraint may not be possible.  We have been testing various classes of quantum systems, so in some ways, we have been coloring in the regions corresponding to real systems.  If we could somehow show that we have colored in the whole region corresponding to all possible REAL systems, and that the three-level ansatz always holds there, that would also constitute a proof.  In a sense, we have been spending lots of time coloring, but can't tell what fraction of the page we've covered.

Our work stirs up mathematical inconsistencies and proposes ideas that are unproven and might even be wrong, but the implications are tantalizing.  Mathematicians can most likely point to defects that invalidate much of what we have done, while technologists may complain that we have not really made a practical advance.  The fact that nature seems to behave according to the predictions of our theory is an indication that we are at least on the right path, and the fact that the theory may not be derivable by deduction from known physics is thrilling.

In summary, the many-state catastrophe leads us to propose that the three-level ansatz is the correct constraint for enforcing nature's will, restricting the sum rules to the realm of the real world.  If the three-level ansatz, which has so far been observed to be correct in the real world, is not provable, then it may be a fundamental principle.  The chances of this being the case are slim, but the quest to find the proof will undoubtedly uncover many treasures.

I agree with the reviewer that this stuff is interesting even if it is wrong.


Friday, June 1, 2012

Fame for its own sake


After giving a talk in Milan, my host, another visitor from the US, and I had lunch. Discussion meandered from our work to our profession and then to the topic of how trailblazers often do not get credit for their discoveries. A book that I am now reading, "The Infinity Puzzle," describes this phenomenon in the development of the standard model of particle physics.

In 1783, the geologist John Michell wrote a letter to Henry Cavendish proposing the existence of a super-massive body whose gravity was so great that even light could not escape. The letter was published in the Philosophical Transactions of the Royal Society in 1784, yet this incredible human mind is not generally recognized, for the very reason that it should be: it was way ahead of its time.

People who end up getting credit for a discovery usually live at a time when others are around to appreciate the work. Being a good communicator also helps. Our conversation at the outdoor cafe culminated in an interesting question. Would it be better to enjoy fame and fortune in this life for work that is posthumously found to be wrong; or, to make a discovery that is only appreciated long after we are gone?

I would rather play a part in the development of a new paradigm of thought that is right than to get credit for a transient fad that ends up being wrong. What would you prefer?