Monday, January 31, 2011

Heavy Matters

I am now sitting at my desk and preparing for class, leafing through an old, heavy, dust-covered book with the simple title Gravitation, by Misner, Thorne and Wheeler. As an undergraduate, I had great aspirations that included reading this 1200-page tome. As is the case with many of my books, it sat on my shelf for decades, unnoticed.

It all started a few minutes ago, when I recalled that this book clearly explained differential forms, a topic that I plan to introduce in my class today. So, I climbed my library ladder and hauled it to my desk. Paging through the book, I marveled at its beauty, both in presentation style and illustrations. I still consider it a difficult read, but I am now better equipped to understand the physics.

This wonderful book not only took the authors and their helpers many person-years of effort, but also covers a range of topics developed by the most brilliant minds of the past century. I am thankful for the efforts of all those who contributed to this field and, as a result, enriched my life.

The dedication says it best:

We dedicate this book to our fellow citizens who, for love of truth, take from their own wants by taxes and gifts, and now and then send forth one of themselves as dedicated servant, to forward the search into the mysteries and marvelous simplicities of this strange and beautiful Universe, our home.

Thursday, January 27, 2011

Me as a reviewer and a complainer about modern-day publication practices

In the past, I have complained about reviewers who have evaluated my papers. However, it is a time-consuming job with almost no rewards, so I do appreciate their efforts. It is a service that we are all expected to provide. Given the time others have spent on my papers, I feel obligated to return the favor.

The most rewarding reviews are those from which I learn. I spent this morning reviewing a paper by some very distinguished scientists in my field. While I admit to the possibility that I may be wrong, I believe that there are serious issues with their paper, which will require attention before it is suitable for publication.

As usual, one activity sends my mind jumping to other thoughts. This paper is one in a series that tries to correct errors in the literature while simultaneously introducing new science. This got me thinking again about the curse of information overload.

I am concerned that modern-day science, with the huge number of venues available for disseminating research results, is producing too much information along with lots of junk. The signal-to-noise ratio is dropping while the whole system is bursting at the seams. People are becoming more specialized and less aware of other work. Since researchers are judged on numbers of publications and citations, they overload the existing top journals with so many papers that editors often cut good ones based on arbitrary guidelines. More second-tier journals are popping up to meet the growing demand from authors.

It's getting difficult for me to find useful information in the literature. For example, when searching electronically using very specific keywords, I get too many irrelevant hits that take forever to sort. It is also frustrating to have done what I believe to be great work in the past, only for it to be ignored for 20 years. Even more annoying is seeing the very topic of my research appearing many years later in Nature or Physical Review Letters with no citations to my papers. It is even more irksome when the modern work is but a subset of my original research, yet gets lots of recognition.

Sometimes, I send these modern-day authors reprints of my older papers. Some will respond apologetically, pleading ignorance of my research, and then continue not to cite my work. Others ignore my emails. These are indicators of a system that is not serving its purpose of producing research that benefits society.

My review reminded me of a past era of greater responsibility in publishing. Perhaps I view the past with unfounded fondness. However, I can attest to the fact that the authors of the manuscript that I have just reviewed are interested in seeking the truth. I therefore feel confident that they will carefully consider my comments and will only move forward with a revised manuscript if they are certain that they can make a real contribution to the field.

Below, I include a copy of my review for all to read. I, of course, will not reveal the identity of the authors, nor the journal to which this paper has been submitted. I take the risk of being exposed as the reviewer, but I am sure that they will have already guessed my identity from the flavor of my review; and I will not deny being the reviewer if asked. Having gotten this off my chest, I need to get back to writing a proposal and grading homework. Perhaps I can then squeeze in a few moments to think about physics, and achieve the bliss that accompanies such thoughts.

And now, finally, the review:

The authors do some combinatorial wizardry to determine the coefficients of the various orders of the nonlinear birefringence. I am not willing to check all of the math, but from what I have checked, I trust that this is done correctly. However, I have a serious concern that may invalidate the approach, as I describe below.

The fundamental property of a material is its nonlinear susceptibility, not the nonlinear birefringence. The nonlinear susceptibility is what governs the physics of light-light interactions, while the nonlinear birefringence is the quantity measured. They are related through the constitutive relationship D = epsilon E = E + 4 pi P. In relating the two, one takes a square root of a power series in the field, with the susceptibilities as coefficients. The crux of what I believe to be the fallacy of this paper is the assumption that n_m is related only to chi^(m+1). In expanding the square root, one gets cross terms that are products of lower-order nonlinear susceptibilities, which coincidentally may look like the expressions one sees in cascading calculations. The authors have in effect expanded the square root only to the first term. I believe that if the calculation is done properly, it may be impossible to define unique constants of proportionality. However, under certain approximations, it may be possible to define unique constants in the spirit of the authors' original intention.
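
Schematically, and only as a sketch (Gaussian units, a scalar field, and dispersion suppressed), the expansion I have in mind is

\begin{align*}
D &= \epsilon E = E + 4\pi P, \qquad
P = \chi^{(1)}E + \chi^{(2)}E^{2} + \chi^{(3)}E^{3} + \cdots, \\
n &= \sqrt{\epsilon}
   = n_{0}\sqrt{1 + \frac{4\pi\left(\chi^{(2)}E + \chi^{(3)}E^{2} + \cdots\right)}{n_{0}^{2}}}\,,
   \qquad n_{0} \equiv \sqrt{1 + 4\pi\chi^{(1)}}, \\
  &\approx n_{0}
   + \frac{2\pi}{n_{0}}\,\chi^{(2)}E
   + \left[\frac{2\pi}{n_{0}}\,\chi^{(3)}
   - \frac{2\pi^{2}}{n_{0}^{3}}\left(\chi^{(2)}\right)^{2}\right]E^{2}
   + \cdots.
\end{align*}

Already at this order the coefficient of E^2 mixes chi^(3) with a (chi^(2))^2 cross term, so the measured n_2 is not proportional to chi^(3) alone; the same mixing propagates to all higher orders.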

A second problem along these lines is the neglect of the imaginary parts of the susceptibility. While experiments are performed off resonance, there is always a small imaginary part. The cross terms that I mention above can include products of imaginary parts that give a real response. Since effects due to the imaginary parts may become large for higher-order susceptibilities, they also need to be considered in the calculation. The fact that, in practice, higher-order susceptibilities are by necessity more resonantly enhanced is a problem in applying this theory to real experiments at ultra-high intensities, and should be mentioned.
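
Schematically, writing chi^(2) = chi' + i chi'', the cross term above contributes

\mathrm{Re}\left[\left(\chi^{(2)}\right)^{2}\right] = \chi'^{\,2} - \chi''^{\,2}

to the real part of the response, so products of imaginary parts feed directly into the measured birefringence even far from resonance.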

Nonlinear dichroism can also lead to polarization rotation, a common way of measuring the nonlinear birefringence. This contribution might also be large in practical experiments. While a good experimentalist would take this into account, I am concerned that a blind application of the authors' results could lead to the unintended consequence of more junk in the literature. Thus, if it is not accounted for explicitly, I would suggest adding at least a cautionary note.

I believe that the above issues need to be carefully addressed before the manuscript can be reconsidered for publication.

As a more minor point, in the introduction the authors mention that the proportionality constant depends on the number of eigenmodes. I usually associate this factor with the number of degenerate frequencies. Is it true that if the frequencies are the same in a pump-probe geometry one gets the factor of 2/3? I thought that if the propagation directions are different in the non-collinear polarization geometry, the effect shows up in the tensor properties of the susceptibility. That is, if the polarizations are different, then one is probing that particular component of the nonlinear susceptibility tensor. In any case, the meaning of eigenmode needs to be clarified or the sentence needs to be reworded. The use of the expression "eigenmode" in the rest of the paper may also need to be reconsidered.

I find the issue of cross terms to be a major one. If the authors choose to argue that the results hold in certain limiting cases, the resulting loss of generality might diminish the relevance of the paper. If the calculation includes the terms that I believe to be missing, the coefficients will no longer be well defined. In either case, I believe that this paper may need major revision before it is suitable for publication. If I am wrong in my assessment, then the paper may be suitable for publication after minor revisions. In that scenario, it would be useful if the authors provided a more detailed explanation of the relationship between the nonlinear birefringence and the nonlinear susceptibility.

Wednesday, January 26, 2011

An exhausting paper is finally done

On August 8th, 2010, I reported that we had finally completed a piece of work that, over the last few years, found us oscillating from moment to moment between peaks of elation and troughs of depression. We have finally completed the manuscript, and it will be submitted for publication shortly. My email to my coauthors says it all:

Dear Coauthors,

I am attaching the final version of our long-awaited paper. I believe that it is finally ready to go out for review.

I want to praise Shengting and Xavi for doing a marvelous job with the hard work of applying group theory to a complex molecule, which ended up leading to a simple but powerful result. I also thank each of you for your hard work at various stages of this project.

For me, this was the most exhausting piece of work ever, but I think that the results were worth the wait. If there are no objections, I am directing Xavi to submit the paper to Physical Review A and to update the paper on the archives.

If we ever meet again in the same geographic location, we deserve to have a party!

Friday, January 21, 2011

The end of civilization

At some point in the middle of the 20th century, science made a critical transition: it had become impossible for one human brain to understand all of physics. The brilliant physicist Hans Bethe commented in his memoirs on that sad day when he realized that he no longer could comprehend it all.

As the knowledge base and complexity of our society grow, we become more dependent upon specialization. Computer programmers write software for computers designed by hardware engineers who use chips made by companies that buy silicon from mines whose employee retirement plans are managed by portfolio gurus who use computer software, and so on. The interdependencies loop around our society, leading to all sorts of feedback.

If one genius cannot understand all of physics, it is only natural to conclude that it is impossible for any one individual to understand everything. Given the impossibility of fully understanding the complexity of our sociotechnological system, well-intentioned policies may do more harm than good. Since laws and institutions are required to keep society running smoothly, it is a delicate balancing act to protect freedom of ideas and innovation while simultaneously designing constraints that do not lead to the unraveling of civilization.

As a physicist whose work relies heavily on quantum mechanics, I am comfortable with the notion that it is impossible to control all factors simultaneously. It is impossible to force even a single simple particle to have both a definite position and a definite momentum. By fixing one parameter, the other becomes uncertain. In a complex system such as a society of billions of interacting individuals, the phenomenon of chaos also becomes possible. I worry that control imposed by a few well-intentioned policymakers can lead to grave consequences. The modern practice of ideology-based legislation, which intentionally ignores facts, could make matters worse.

The sub-prime mortgage crisis, which I discussed in a previous post, is an example of how several interacting factors can lead to a financial crisis. I also argued that the consequences could have been the same even if all parties had acted within the law (which they didn't).

Potential problems go well beyond the financial market. The complexity of our society makes it vulnerable to chaotic fluctuations and to outright collapse. Writing in the September issue of Scientific American, Danny Hillis points out the May 6, 2010 computer glitch that caused the Dow to plummet 1,000 points before recovering by the end of the day. On November 19th, 2009, a single faulty circuit board in a computer in Salt Lake City triggered a cascade of failures that prevented air traffic control computers across North America from communicating with each other, resulting in hundreds of flight cancellations. In the blackout of 2003, power lines near untrimmed trees shorted, causing a shutdown that, due to a faulty computer at one power plant, cascaded into the shutdown of 100 power plants in the northeastern United States and Ontario, affecting 55,000,000 people. These events were not the result of malevolence or human greed, but of system complexity.

Our society and physical infrastructure are evolving as a result of our actions. Computer networks are growing, and the interdependence between systems is increasing. This growth is leading to advances that allow us to track flights and scan bar codes on our smartphones. These same underlying technologies could lead to catastrophic collapse. The truth of the matter is that we have created a monster that we cannot control. It is a monster that serves us, but one that could lead to our demise.

Our chaotic system cannot be effectively controlled from above, nor am I a Luddite espousing a return to simpler, tech-free times. We must recognize that along with progress there will be painful setbacks, and that as civilization becomes more complex, the chances of setbacks grow. The answer lies not in reactionary legislation, which can itself stifle innovation and drive instabilities, but in low-tech redundancy. For example, being independent of the telecommunications network, ham radios are an excellent alternative to cell phones.

I am confident in the human spirit's ability to innovate, which, if unrestrained, will undoubtedly lead to ever more wondrous ideas and technologies. In my opinion, the end of our civilization will not be brought upon us by the hand of a fluctuation, but by the bliss of complacency and ignorance.

Differential Geometry

I have been enjoying the last couple of days preparing for my Statistical Mechanics class. My new angle is to throw in a bit of differential geometry to expose students to its beauty as well as to give them a new and powerful tool to solve problems.

This morning, I was going over my notes on canonical transformations - a topic that I also covered in Classical Mechanics last semester. One concept that I still find a tad uncomfortable is the manipulation of the quantities q, the coordinate, and q dot, its associated velocity. Central to the calculus of variations is the assumption that these two variables are independent. I can logically understand how this is the case, but my gut protests.
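
The resolution I keep reminding myself of is that the independence lives only in the two slots of the Lagrangian; the relation between q and q dot re-enters through the total time derivative when the action is varied. As a sketch,

\delta S = \int_{t_1}^{t_2}\left(\frac{\partial L}{\partial q}\,\delta q
 + \frac{\partial L}{\partial \dot q}\,\delta\dot q\right)dt
 = \int_{t_1}^{t_2}\left(\frac{\partial L}{\partial q}
 - \frac{d}{dt}\,\frac{\partial L}{\partial \dot q}\right)\delta q\,dt,

with \delta\dot q = d(\delta q)/dt and \delta q vanishing at the endpoints. The partial derivatives treat q and q dot as independent arguments of L; only the total derivative d/dt knows that they are related along an actual path.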

One of my colleagues, who shares an interest in the beauty of differential geometry, gave me the book Applied Differential Geometry, by William L. Burke. I noticed recently the most magnificent dedication on its inner cover, which reads, "To all those who, like me, have wondered how in the hell you can change q dot without changing q."

I can't wait to dive into this wonderful book, but it will have to wait until I meet a looming proposal deadline and have prepared and delivered an invited talk in a few days. I must get back to preparing for my class, which meets in 45 minutes. Mornings before class are one of my favorite times...

Monday, January 17, 2011

An intellectual creation that exceeds its creator

A few posts back, I discussed Monte Carlo studies that shed light on many unexplained observations. For those of you interested in seeing a preprint, it can be found on the physics archives at: http://lanl.arxiv.org/PS_cache/arxiv/pdf/1101/1101.1041v1.pdf. The manuscript is now under review at the Journal of the Optical Society of America B.

I am still excited by our observations because of the breadth of understanding that has resulted. While this paper may not be appreciated by a large number of people, it is certainly on my list of the top 5 papers that I have published over my career. Ironically, while my most highly-cited papers report solid science that has been useful to many other researchers, I prefer to judge my work by the degree to which it opens my mind and by the awe and wonder that it elicits. Ideas that paint the universe with the broadest of brushes are king. This recent work is an intellectual creation that, like a brilliant child, has grown beyond its creators. I marvel at all that it continues to teach us as well as its ability to inspire new lines of research.

The abstract and conclusion say it all:

ABSTRACT: Studies aimed at understanding the global properties of the hyperpolarizabilities have focused on identifying universal properties when the hyperpolarizabilities are at the fundamental limit. These studies have taken two complementary approaches: (1) Monte Carlo techniques that statistically probe the full parameter space of the Schrodinger Equation using the sum rules as a constraint; and, (2) numerical optimization studies of the first and second hyperpolarizability where models of the scalar and vector potentials are parameterized and the optimized parameters determined, from which universal properties are investigated. Here, we employ an energy spectrum constraint on the Monte Carlo method to bridge the divide between these two approaches. The results suggest an explanation for the origin of the factor of 20-30 gap between the best molecules and the fundamental limits and establish the basis for the three-level ansatz.

CONCLUSION

Classifying Monte Carlo simulations using an energy spectrum function resolves several long-standing questions. First, our work shows the centrality of energy spacing in determining the intrinsic nonlinear response. While a broad range of transition moments are observed in atoms and molecules, the energy spacing - as characterized by the energy parameter E - varies little between systems. Indeed, the importance of the energy parameter in attaining larger hyperpolarizabilities has been demonstrated in several experimental studies.[6, 37]

...

Monte Carlo calculations using the energy classification scheme have bridged the divide between Monte Carlo simulations and potential energy optimization studies. The power of the Monte Carlo technique lies in the fact that all possible Hilbert spaces are probed, leading to very broad and fundamental relationships. Using energy classifications allows the parameter space to be reduced to subsets that describe atoms and molecules. Future refinements may lead to more specific design guidelines for making improved molecules for a variety of applications. The potential for discovering new fundamental science with this approach is of equal importance.
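
For readers who want a flavor of the constraint at the heart of the method, the sum rules referred to in the abstract are the generalized Thomas-Reiche-Kuhn sum rules, which follow directly from the Schrodinger equation and, if I have the normalization right, read

\sum_{n=0}^{\infty}\left(E_{n} - \tfrac{1}{2}\left(E_{m} + E_{p}\right)\right) x_{mn}\, x_{np}
 = \frac{\hbar^{2} N}{2 m}\,\delta_{mp},

where the x_{mn} are transition moments, the E_n are energies, N is the number of electrons, and m is the electron mass. The Monte Carlo runs sample moments and energies subject to these relations, and the fundamental limits of the hyperpolarizabilities follow from the same constraints.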

Friday, January 14, 2011

The Mortgage Crisis

I am on the way home from Case Western. My colloquium went well, and my visit with faculty members was stimulating and informative. Travel continues to be exhausting. My flight from Detroit to Cleveland three days ago was delayed for three hours due to a blizzard. Getting in late and having to get up early each day compounded my weariness. This morning I got up at 4:30 am EST, or 1:30 am in Pullman. The flight from Cleveland to Cincinnati went smoothly. After I ran the length of the terminal in Cincinnati to make my tight connection, the gate agent informed me that my seat to Seattle had been upgraded to business class, so I enjoyed a good meal and a long snooze. Hopefully, I will be ready for my ice hockey game tonight in Moscow.

There must be some principle of nature that forbids air travel to go smoothly. Upon landing on time in Seattle, I learned that my flight to Pullman was delayed. So, I decided to pull out my laptop and complete a posting about my thoughts on the mortgage crisis, made poignant by my colleagues in Cleveland who told me a story of a friend who purchased an $800,000 home for about $250,000.

My thesis is that the sub-prime mortgage crisis, which has led the world into a great recession, was caused by both good intentions as well as opportunistic greed.

The simple fact is that homeowners will default on a mortgage if they can't afford to make payments. Conservatives blame the Community Reinvestment Act, which encouraged lending to a group of borrowers who did not qualify for traditional loans. Liberals blame greedily-exuberant mortgage brokers who approved unqualified borrowers. And what of the stupidity of people who borrow more than they can afford? I would also pin some of the blame on Ginnie Mae and Fannie Mae, which provided a ready supply of cash to organizations that sold mortgages. When the housing bubble burst, Ginnie Mae and Fannie Mae held almost 20% of the sub-prime loans. Many parties were responsible.

Then came securitization of mortgages. Financial institutions bought mortgages and bundled them together in the same way that an equity fund bundles stocks and sells shares to investors. Under the assumption that housing prices would not drop, the rating agencies gave these bundled mortgage products their highest ratings. If a borrower defaulted, the bank owned the home, which it could sell to offset losses. What could be safer?

Consumers demanded high money-market interest rates in a low-inflation economy that could not realistically meet these demands. The specialists responded by being highly creative. Exacerbated by the combination of the government's good intentions, exuberant mortgage sales, and consumer greed, this led to an unsustainable and unstable state.

The whole game unraveled when housing prices started to drop. At this point, many mortgages went underwater (the home is worth less than what the borrower owes the bank), making it impossible for banks to recover losses. But perhaps worst of all, the bundled mortgages combined both good and bad loans, so it was difficult to ascertain the risk of the whole package. This lack of information made it difficult to sort things out and made the banks overly cautious. This extreme caution brought lending to a trickle. That's when the government stepped in with the bailout.

Repercussions continue to propagate through the housing market. Homeowners with underwater mortgages find it more cost-effective to default than to sell their homes at a loss. Even homeowners who can afford to make payments choose strategic default. Mortgage Bankers Association of America president John Courson criticized homeowners for this practice. Ironically, the MBAA itself strategically defaulted on its $79 million headquarters building, saving tons of cash by moving five blocks away to another building.

The direction of finger pointing depends on one's ideology. As another example of supreme irony, consider the Obama administration's criticism of predatory lending -- the idea that mortgages were given to people who could not afford the payments, for a big profit to the lender. While making such accusations, the administration announced the first-time home buyers program, which provides incentives to people who would not normally qualify for a loan to buy a home. While the intentions may be noble, the net result may be the same: borrowers who cannot afford the monthly payments.

I would argue that even if all parties had acted within the law (which they didn't), the net result would have been similar. The problem is that our society is highly complex. The answer is not in legislating detail. It is tempting to outlaw specific financial instruments, but I claim that people will find new ways to make a buck, with ever more dire consequences. Clamping down on creativity may prevent specific problems but could stifle the true innovation that moves our civilization forward. The trick is to mitigate the downside without suppressing progress. Sadly, the world is too complex for any human, or even a collection of geniuses, to take all factors into account and produce constraints without unintended consequences. We must accept the downs as a necessary consequence of progress.

My flight is scheduled to depart shortly, so I have to run. In my next post I will continue along these lines to speculate about the inevitable demise of our civilization. Have a good day...

Saturday, January 8, 2011

Genius and insanity

As I prepare for my colloquium that I will be giving at Case Western Reserve University, I have been thinking more broadly about smart materials. The morphing materials that I see in the far future are made of many integrated Photomechanical Optical Devices (PODs). As more nonlinear units are interconnected to enable interactions, the system becomes more intelligent - being able to process more information at greater levels of sophistication. At some point, one can imagine the system going through a transition to high-level intelligence, popularly referred to as emergence.

Interacting nonlinear systems are also known to become chaotic under certain conditions. As the complexity of a nonlinear system increases, so does its propensity for becoming chaotic. Highly intelligent humans are often quirky, and many geniuses are known to have been insane. This appears to be a universal quality of intelligence, whether its basis is in the interaction of neurons, electronic components, or PODs. While I often wonder if our creations will ultimately result in our doom, being an eternal optimist, I believe that our intellect will allow us to anticipate and mitigate disasters - provided that ideologues and politicians do not stand in the way.

Now that the semester is about to begin, my life is becoming chaotic. I have manuscripts to write, papers to review, proposals to write, a plagiarist to deal with in my capacity as a journal editor, and classes to prepare - not to mention doing research and trying to generate an interesting thought in the midst of the mayhem. I don't know how I will get through the semester. The good news is that several papers have been accepted or are being returned with minor revisions. Life moves on...

Saturday, January 1, 2011

A new beginning

The 21st century was ushered in by the ambitious Human Genome Project, which was completed in 2003 - well ahead of schedule. The end of the 21st century's first decade saw the creation of artificial life.

In the September 2010 issue of Scientific American, Arthur Caplan, of the University of Pennsylvania, writes:

"J. Craig Venter announced in May that he and his colleagues had made a new living bacterium from a genome they decoded, artificially rebuilt and then stuck into the cored-out remains of the bacterium Mycoplasma. When the hybrid bug began to reproduce, it became the first artificial organism, putting to rest the ancient and tenacious conceit that only a deity or some special power can create the sparks of life."

He concluded the short article with,

"Some people may feel that creating new organisms somehow imperils the dignity of life. I don't think it does. At bottom this is a triumph of knowledge. We can confirm the value we place on life when we understand better how it works."

Hear, hear!

Since 2003, the cost of sequencing a human genome has fallen by a factor of a thousand, making inexpensive gene sequencing of individuals possible by the end of this decade. As a result, a future physician may be able to run a simple test at the office that guides the synthesis of designer medications optimized to an individual's biochemistry to combat life-threatening diseases.

Technology has an impressive track record for improving our lives with better health and freedom from drudgery, unleashing the human spirit for nobler pursuits.

May your New Year be filled with health, prosperity and the deep satisfaction of meaningful pursuits.