Tuesday, November 30, 2010

Self deception and academic honesty

The scientific method is a powerful paradigm for finding the truth. Still, it is very easy to get fooled, especially in fields where the data is noisy or the samples are unreliable. Polymer physics is one of these fields.

A polymer is like a bowl of spaghetti with intertwined strands of varying lengths. Between the strands are spaces, whose sizes and shapes are randomly distributed. The distribution of these voids can be mathematically modeled by what is called, not surprisingly, a distribution function.

The distribution function depends on the sample's history and how it was originally made. Imagine a bowl of hot steaming pasta that is slowly cooled until it forms a solid rubbery clump. The resulting structure will be relatively compact with few air spaces. On the other hand, toss the hot pasta in the air and let it land in an ultra-cold bowl so that it freezes before it is able to settle down. In this case, one will find gaps of various sizes between the strands. Similarly, the properties of a polymer will depend on how it is made as well as its thermal history. Thus, even when two samples are made the same way, if one of them is subsequently heated and cooled, its properties can change dramatically. To make things worse, polymer samples are notorious for not having the same properties from place to place within the material. This is called inhomogeneity.

Thus, when a measurement does not give an expected result, it is easy to attribute it to inhomogeneity or to issues with how the sample was made. In our lab, we try to reproduce the same measurements many times on different samples to make sure that we are not being fooled by an anomaly. However, when doing huge numbers of measurements on materials that are known to vary from sample to sample, it is far too easy to keep the ones that we believe are right and to throw away the data that does not conform to our expectations.

As I have mentioned in numerous posts in the past, our lab discovered many years ago that some polymeric materials self heal after being burned with a laser. This work was the topic of a master's thesis and a Ph.D. dissertation. Presently, two students are working in my lab on this project under funding from the U.S. Air Force. After doing a series of very careful experiments, one of my students noticed a pattern suggesting that the polymers might not be self healing.

My initial response was that we had to verify the result. I suggested that the student try several variations of the experiment so that we could narrow down the problem; indeed, we found that when the experiment was redesigned a year ago, the method of ensuring that the laser was stable had acquired a subtle flaw that allowed the laser power to drift slightly without detection. What appeared to be self healing was due to a variation of the laser power. When the laser power happened to be stable, we saw no self healing, which we attributed to a bad sample, and when the laser power varied in just the right way, the sample appeared to heal. Once we had taken all this into account across the whole set of data, self healing was no longer observed.

This set my mind racing, going over all our past results, wondering if it was possible that we had been fooling ourselves all along. I pictured the humiliation of informing the funding agencies of our grand error and returning the unspent funds. As a result, I would lose the support of two graduate students and a month of summer salary for myself. But my greatest concern was for the student who had spent two years of his life working on the project, only to be left with no dissertation. The thought was most devastating in light of the fact that he is one of the best students in our department, both bright and hardworking.

After settling myself, I began to carefully assess our body of work. The fact that so many independent experiments had confirmed self healing, using other apparatus without this problem or totally different techniques, convinced me that this was a temporary setback. Thinking it through, I realized that we had also changed the way we prepare our samples, so I had the student fix his experiment to properly stabilize the laser and dig up the older samples for experimentation.

Last evening, I got the excited email that he was again observing self healing.

This incident reminds me of how science is a dynamic and churning process, where mistakes are made and corrected. Like a fractal, this dynamic is found at the level of the individual, who adjusts to problems on a daily basis, and in the scientific community as a whole, where the work of many labs and scientists takes a seemingly random walk that eventually zooms in on the truth.

Science is like an adventure. If the path were well defined, not only would the process be dull, but the whole enterprise would be worthless. Too often, the public and even university administrators believe that discovery is a linear process. At our university, we are required to write a memo each year requesting an extension for a student if he or she has been a PhD candidate for more than 3 years. And, we are faulted for students who take a longer time to graduate. This mentality implies that the PhD degree is the result of a fixed amount of effort, and it does not take into account the unexpected, which scientists have learned to expect.

I have no idea where my scientific adventure will take me next, but I know that it will exceed my expectations. More important to me are the trials and tribulations of the process of discovery. It is truly a most exciting adventure.

Saturday, November 20, 2010

Simple Scaling and too much to do in too little time

This was a good week. Nathan defended his dissertation with flying colors, and our manuscript for Advanced Materials, a high-impact journal, was accepted. Our paper is a comment on another paper that previously appeared in Advanced Materials.

One central theme of our work using fundamental limits and sum rules to understand the nonlinear-optical response is the idea of scale invariance. The Schrödinger equation has the property that the shape of the wavefunction does not change when the width of the potential energy function is decreased by a scaling factor b and the depth of the well is simultaneously increased by b squared. Under such a transformation, the wavefunction is compressed by the factor b, but otherwise its shape remains the same. We call this simple scaling.
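The scaling argument can be written out explicitly. Here is a sketch for the one-dimensional case, with b the scaling factor described above:

```latex
% Simple scaling: if \psi(x) solves the Schrodinger equation with
% potential V(x), then the compressed function \psi(bx) solves it
% for the narrower, deeper potential b^2 V(bx).
\[
  -\frac{\hbar^2}{2m}\,\psi''(x) + V(x)\,\psi(x) = E\,\psi(x).
\]
Let $\phi(x) = \psi(bx)$, so that $\phi''(x) = b^2\,\psi''(bx)$. Then
\[
  -\frac{\hbar^2}{2m}\,\phi''(x) + b^2 V(bx)\,\phi(x) = b^2 E\,\phi(x),
\]
so $\phi$ is an eigenfunction of the scaled potential $b^2 V(bx)$
with energy $b^2 E$: the wavefunction is compressed by $b$, and its
shape is otherwise unchanged.
```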

The nonlinear-optical quantity of interest to many applications is called the hyperpolarizability. Making the hyperpolarizability as large as possible is an ongoing area of intense research activity. To nobody's surprise, a larger molecule will generally have a larger hyperpolarizability. To better understand what makes a material tick, we have defined a quantity called the intrinsic hyperpolarizability, which is simply the hyperpolarizability of a quantum system divided by its fundamental limit. Interestingly, the intrinsic hyperpolarizability is invariant under simple scaling, so large and small molecules that are related to each other by simple scaling will have the same intrinsic hyperpolarizability.
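For reference, the definition can be written compactly. The expression for the limit below is quoted from memory of the published sum-rule result, so the numerical prefactor should be checked against the original papers:

```latex
% Intrinsic hyperpolarizability: the hyperpolarizability normalized
% by the fundamental (sum-rule) limit.
\[
  \beta_{\mathrm{int}} = \frac{\beta}{\beta_{\max}},
  \qquad
  \beta_{\max} = \sqrt[4]{3}\,
    \left(\frac{e\hbar}{\sqrt{m}}\right)^{3}
    \frac{N^{3/2}}{E_{10}^{7/2}},
\]
where $N$ is the number of effective electrons and $E_{10}$ is the
energy gap to the first excited state. Because $\beta_{\max}$ absorbs
the $N^{3/2}/E_{10}^{7/2}$ size dependence, $\beta_{\mathrm{int}}$ is
invariant under simple scaling.
```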

In examining all the molecules that had been studied for nonlinear-optical applications over a three-decade period, we found that the large range of hyperpolarizability values could mostly be accounted for by simple scaling. Thus, researchers were making molecules larger and larger, but the best intrinsic hyperpolarizabilities remained static at about 0.03 - suggesting that a factor of 30 improvement should be possible; getting there, however, would undoubtedly require a major paradigm shift.

Several years ago, I published a paper that showed how to calculate a related quantity called the intrinsic two-photon absorption (TPA) cross-section. More recently, Javier Perez-Moreno and I published a paper that introduced a rough rule of thumb for determining the intrinsic TPA cross-section: simply divide the measured cross-section by the square of the number of electrons. My earlier paper showed how to determine the number of electrons.

To my delight, my paper on TPA gets lots of citations, not because of what I believe is the beautiful physics of the work, but because scientists refer to my method of counting the number of effective electrons - a quite trivial (and approximate) procedure. To my horror, when comparing molecules, most researchers then go on to divide by the number of electrons rather than by the square of the number of electrons as suggested by Javier's work. This leads to a flawed comparison between molecules.

A recent paper in Advanced Materials reported on TPA cross section measurements of a new class of dendrimers, molecules with ever-branching pieces much like veins and arteries. They referred to my paper when calculating the number of electrons; but, as is usually the case, they divided by the number of electrons and found that the new dendrimer class was an order of magnitude better than the best existing dendrimers - an impressive improvement.

In our comment on this paper, we reanalyzed the data using the N squared rule and found that the new materials were in fact two orders of magnitude better. In addition, the dendrimers within each class, though of vastly differing sizes, all had approximately the same intrinsic TPA cross section. Thus, we were able to show that the authors had made an even more important discovery than they had realized. Usually, comments on a paper point out a negative flaw, leading to strong rebuttals and counter-rebuttals. In this case, all parties were winners.
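The difference between the two normalizations is easy to see with a toy calculation. The numbers below are invented for illustration (they are not the measured dendrimer values); only the N versus N-squared rules come from the discussion above:

```python
# Hypothetical TPA cross sections (arbitrary units) and effective
# electron counts for a small and a large molecule in the same family.
molecules = {
    "small dendrimer": {"sigma": 200.0, "N": 10},
    "large dendrimer": {"sigma": 20000.0, "N": 100},
}

for name, m in molecules.items():
    per_n = m["sigma"] / m["N"]         # common, flawed normalization
    per_n2 = m["sigma"] / m["N"] ** 2   # scale-invariant normalization
    print(f"{name}: sigma/N = {per_n:g}, sigma/N^2 = {per_n2:g}")
```

Dividing by N makes the large molecule look ten times better; dividing by N squared reveals that the two have the same intrinsic response, which is exactly the pattern we found within each dendrimer class.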

Unfortunately, these small successes were overshadowed by a pile of work. After the Thanksgiving break, I am going to an NSF meeting in Hawaii (I hate to travel and I hate hot and humid places), where I will be reporting on the results of our projects. I need to prepare a glitzy poster as well as an oral presentation. This, on top of being hopelessly behind in preparing problem sets/solutions, grading, and catching up on lectures for my graduate mechanics class. In addition, I need to write a pile of recommendations and read a 350 page dissertation; the defense will take place early Monday morning.

Smack in the middle of this stressful week, after months of waiting for an estimate, a flooring contractor handed us an estimate and told us that he could get new floors in before Thanksgiving. Things moved fast, requiring us to immediately move large and heavy furniture back and forth between two rooms, which included taking down built-in cabinets and then replacing them, as well as painting all the walls. After spending three solid days on manual labor (actually a satisfying break from work), my time pressures have become critical. I cringe at the accumulating piles of manuscripts waiting to be written and the papers that I need to review for journals.

So, how did I handle the stress? I squandered a couple hours writing about my frustrations on this blog. And now, back to work...

Thursday, November 18, 2010

Looking back

I was recently asked by a colleague to send him a short history of my work. While I do not feel particularly successful as measured by standard metrics, I felt satisfaction in the accomplishments of my career. More to the point, I recalled the thrills involved in the process of seeking understanding.

One such thrilling moment came in the late 1980s while I was working in a dark lab, illuminated only by the colorful lights on the panel of electronics and the pristine fine lines of the laser beam that bounced around the experiment. The idea of my work was simple. I wanted to understand how molecules embedded in a polymer were able to reorient. So, I passed a beam of light through the material to probe the molecular orientation and applied an electric field to coax the molecules to reorient.

On a microscopic scale, the polymer is like a bowl of cold spaghetti and the molecules like embedded toothpicks. The applied electric field acts only on the molecules while the polymer restricts their movement. Before running the experiment, I turned up the voltage by hand and watched the bar graph on my lock-in amplifier panel swell in response. This simple action may seem no more significant than watching the glowing bar grow when turning up the volume of a stereo; but the realization that the bar graph represented the orientation of the tiny molecules sent chills down my spine. I spent several minutes turning the voltage up and down, picturing the rotation of the little toothpicks and the stretching of the spaghetti.

I eventually got back to work and recorded reams of data, analyzed it, and developed a quantitative description of the process. The polymer acted as a large spring whose stiffness depends on temperature. The work eventually appeared in a journal and formed the basis of future developments in our field. However, when I look at the plot in my publication that very coldly states, "figure so and so shows a plot of the intensity as a function of applied voltage," the image of the tiny molecules responding to my rude intervention brings back the thrill of doing my work.

Almost every piece of work tells a similar story. Recalling the work is like visiting with old friends and family and reliving the happiness of those moments that have been frozen in our memories. But, I delight in the fact that I continue to form new memories doing new experiments and developing new theories with my crew of students and colleagues. The life of science is truly privileged, and I am thankful that I live in a time when this selfish pursuit of passion can be turned, by others in the future, to the benefit of society.

I close with a short and terse summary of my work. Hopefully, I will have time in the future to share with you the stories that go along with each piece of work. For now, this is a bookmark to remind me of my past.

At Bell Labs, with Ken Singer and Sohn, I developed the thermodynamic model of poling (now called the SKS model), which has since been vastly improved upon by Dalton and Jen. I also measured the electro-optic coefficient in corona-poled side-chain dye-doped polymers to demonstrate that large poling fields were possible. During my time at Bell Labs, I also developed, with Carl Dirk, the missing-state analysis, which is used to determine the importance of excited-state contributions to the second hyperpolarizability, and proposed and demonstrated that centrosymmetric molecules, such as the squaraines, should have the largest second hyperpolarizability. Also at that time, I showed experimentally and modeled theoretically the various mechanisms that contribute to the electrooptic effect (first and second order). The reorientational mechanism was used by W. E. Moerner to develop polymeric materials with a large photorefractive effect. In addition, I showed that the tensor properties of the second-order susceptibility could be controlled by applying uniaxial stress to a polymer while poling it with an electric field.

At WSU, my group, in collaboration with Carl Dirk at UTEP and Unchul Paek of Bell Labs, was the first to fabricate single-mode polymer optical fiber doped with nonlinear-optical molecules, which have a large intensity-dependent refractive index. Later, we demonstrated a nonlinear directional coupler in dual-core fibers. In separate work, we used dye-doped fibers with a large photo-mechanical effect to demonstrate the first all-optical circuit in which sensing, logic, information transmission, and actuation were all performed optically in one system to stabilize the position of a mirror to within 3 nm. This same system was found to be mechanically and optically multistable. After this proof of concept, we showed that this system could be miniaturized into a small section of fiber that combines all device functions into a single discrete device that can be easily integrated with many others. This work suggests that it may be possible to make ultra-smart morphing materials.

In other work using optical fiber, we demonstrated that we could write (and erase/rewrite) a hologram in a fiber waveguide and use it for image correction with phase conjugation. Similar fibers were used to demonstrate nonlinear optical processes using twisted light (i.e. a beam with orbital angular momentum) and showed the advantages of using such light to measure the nonlinear-optical properties of a material as well as its use for optical limiting applications.

More recent work has focused on using sum rules to build a broad understanding of the nonlinear-optical response. This work was motivated by my calculations showing that there is a fundamental limit to the nonlinear-optical response. This has led to the concepts of scale invariance and intrinsic hyperpolarizabilities, which can be used to directly compare the nonlinear-optical response of molecules of differing shapes and sizes. More importantly, these concepts have led us to theoretical studies that have suggested new paradigms for large-nonlinearity molecules - which have been experimentally demonstrated. Also, this work has shown that quantum systems whose nonlinear-optical response is at the quantum limit share certain universal properties.

Our most recent work is focused on understanding our discovery of self-healing in dye-doped polymers. We find that when certain molecules are embedded in a polymer, the system recovers after being damaged by a high-intensity light source. These same molecules degrade irreversibly in liquid solution. This work has applications in making materials that withstand higher intensities.

Tuesday, November 16, 2010

Illogical Scientists

I know that I am often illogical about lots of things; it's a trait that makes us human. However, this does not excuse sloppy thinking by scientists. Scientists often complain about the innumeracy of the general population, but we are among the most egregious offenders, attaching undue meaning to numbers - as in our reliance on the h-index to quantify the performance of an individual researcher with a single number.

The h-index is a measure based on citations to a researcher's work. The Wikipedia page on the h-index gives a description of how it is computed, what it purportedly measures, and criticisms of its use. No one would disagree that there are many factors that affect the h-index, making it a highly inaccurate metric. So, why do so many people use it? I believe that it is sheer intellectual laziness. What could be simpler than using a single number for ranking a group of individuals?
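For readers unfamiliar with the computation, here is a minimal sketch of how the h-index is calculated from a list of per-paper citation counts (the counts below are invented for illustration):

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # still at least `rank` papers with `rank` citations
        else:
            break
    return h

# Invented example: six papers with these citation counts.
print(h_index([25, 8, 5, 3, 3, 0]))  # -> 3
```

Note what the single number hides: a researcher with one paper cited a thousand times and a researcher with three papers cited three times each can end up with similar h-indices.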

One of my beefs with the h-index is that it favors scientists who manage large research groups that publish lots of papers and produce lots of PhDs, who then get jobs doing similar work and generate more citations for their research adviser. Is it healthy to reward good managers of science for being good scientists? Perhaps.

Scientists are well aware of the importance of their own h-index on career advancement. It is all too common for reviewers to point out that some important references are missing in a manuscript. As you might guess, reviewers often push for their own publications to be cited. Placing such a high degree of importance on a single indicator has a perverse effect on the scientific enterprise.

Perhaps I am selfish, but I live to think and to produce new ideas. It makes me happy. I want to be in the trenches, scribbling equations with my mechanical pencil on smooth white sheets of paper and tinkering in the lab with neat equipment that I have designed and built with my own hands - all in the company of students who are excitedly doing the same. I understand that there needs to be a balance between managing a lab and doing the work. A PI who spends all the time working in the lab is not passing along his or her expertise to others. It is also inefficient to be involved with all the mundane activities that go along with doing research. More ideas can be investigated when the students have a large degree of autonomy, so while difficult, I try to keep a healthy distance.

On the other extreme is the professor who writes lots of huge grant proposals and uses his or her mega funding to hire an army of postdocs who generate a large chunk of the ideas, advise the graduate students, and implement the work. Such an individual is doing a service to society by concentrating a critical mass of intellectual capital to solve important problems. It is an efficient division of labor where grantsmanship brings in the funds and the most capable people do the work.

I want to make it clear that I am not criticizing the big research groups. They provide an important service to the scientific community by producing the next generation of scientists and adding substantially to the knowledge base. Given that some of my work is amenable to large-scale science, I often question my decision to limit the size of my research group. Am I letting down my employer by bringing in a few hundred thousand dollars of funding per year rather than millions? My decision to operate on a smaller scale is based on my conviction that my science produces value to both the university and to society. Arguably, my group produces more publications and more citations per research dollar than many others. Imagine if a researcher's productivity were measured in citations per dollar of funding.

If truth be told, I would be dissatisfied working in a field with hundreds of researchers each working on a tiny piece of a puzzle. Rather, I derive satisfaction from thinking about things from a unique and broad perspective. As a result, I have far fewer colleagues doing similar work, and my students will have fewer job opportunities - unless our work leads to a breakthrough. But I justify our work in my conviction that it has the potential for making a large impact in the long term.

However, I am concerned that the emphasis on large research groups garnering huge research grants will squeeze out the smaller groups, where breakthroughs in new thinking are usually generated. Given the present-day economic challenges, researchers with larger grants will undoubtedly be held in even higher regard by their cash-strapped administration.

A second perversion of the scientific enterprise is recognition for simply publishing in a prestigious journal. The quality of a paper should be judged on its own merits, not by its venue. The vicious cycle is reinforced by "me too" papers that cite high-visibility journals to imply that the author's work is in an important research area.

Consider the work of Victor Veselago, who in 1967 investigated the properties of materials with a negative refractive index. His work got little attention (and few citations for decades) until the recent explosion of research on metamaterials, results of which are routinely published in Nature and in Science with sleek color graphics and exciting titles. Behind the present-day superstars stand their lesser-known brethren, such as Veselago, who are laying the foundations for future revolutions.

Recently, the program director of an agency that funds my work sent me the following email:

I am collecting information to prepare for my upcoming internal review and I am collecting interesting accomplishments for highlights. This request does not substitute the formal annual progress report specified in the grant document that is due at your grant anniversary date of each year. Please provide a (no more than) one page summary of the significant accomplishments and publications (such as Science or Nature publications) in your program...

Clearly, this program director deems it necessary to justify his funding choices to his superiors based not necessarily on the quality of the work, but on where the work is published. What I find more distressing is that scientists in increasing numbers believe that a publication in these high-profile journals carries greater importance.

Many years ago, one of my colleagues suggested that I package my work in a way that would make it publishable in Science or Nature. I did not choose science as a career to focus on marketing. Rather, I am motivated on a daily basis by the promise of understanding something new about nature and sharing my understanding with others who are similarly driven. Nothing beats the eager faces in the classroom, arguing with me about my rendition of nature's hidden treasures. Nor the excited chatter of my graduate students as they report on new results or insights. Even day-to-day difficulties bring to light the pleasures of seeking the truth. At the end of the day, the negative aspects of the scientific culture fade into the background, where they belong. But, on occasion, my bliss is interrupted by that unpleasant pang of realization that scientists hold positions that are not supported by reason. We should know better. Shame on us.