Friday, December 24, 2010
Michael Cohen instilled in me a deep appreciation for truly understanding physics. Just as Michael Cohen felt that he would never attain the depth of understanding commanded by his adviser, THE Richard Feynman, I too feel that I will never approach the physical intuition of Michael Cohen. It is fortunate that the singularities we call great physicists are born with abilities far superior to those of their contemporaries.
After more than three decades, I recall little from my undergraduate flirtation with GR. However, some of the mathematical formalism of differential forms has taunted me for much of my career. I recall Jeffrey Cohen mentioning a paper on the properties of a black hole in some complex geometry, whose derivations filled forty pages of an article in The Physical Review. Using the trickery of differential forms, he was able to solve the problem in just a few steps.
The trick was to formulate the problem in a coordinate independent way, then to project the results into the coordinate system that reflected the symmetry of the problem. In contrast, the Physical Review paper used the inelegant brute-force approach of picking the coordinate system up front, and then by necessity painstakingly plodding through all the messy mathematics.
Given the complexity of the problems that we work on as a matter of daily routine in our research, I am always looking for simplifying tools. In teaching various classes, my intention is to sneak in a little bit of differential forms to whet the appetites of my acolytes and to teach my old brain some new tricks. Furthermore, the geometric interpretation of the mathematics adds a deeper layer of understanding.
In the upcoming spring semester, I am teaching graduate statistical mechanics for the first time. As usual, preparing for a new course is filled with grand excitement. You can imagine my elation when I realized that a homework assignment in the textbook could be done with ease using differential geometry. Since then, it has been difficult for me to think about anything else.
The problem is a simple one that normally requires a bit of math. The student is to show that the 6N-dimensional volume element in phase space for an N-particle system is invariant under a canonical transformation. To put this into simple English, the problem seeks to show that a transformation of coordinates does not change the nature of the results. By reformulating the problem so that the volume element is represented as a wedge product of what are called one-forms, the volume element is shown to be unchanged whenever the so-called Poisson bracket yields unity -- the defining requirement of a canonical transformation. Thus, the problem is solved without the need for messy mathematics.
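To see the mechanics in a concrete case, here is a quick symbolic check (my own sketch, not part of the assignment) using the textbook action-angle transformation for a harmonic oscillator. The Jacobian determinant works out to unity, which is exactly the statement that the volume element dq ∧ dp equals dQ ∧ dP.

```python
import sympy as sp

# A symbolic check: the action-angle transformation for a harmonic
# oscillator is canonical, so its Jacobian determinant -- and hence
# the phase-space volume element dq ^ dp -- is preserved.
Q, P, w = sp.symbols('Q P omega', positive=True)
q = sp.sqrt(2 * P / w) * sp.sin(Q)   # old coordinate in terms of new
p = sp.sqrt(2 * P * w) * sp.cos(Q)   # old momentum in terms of new

J = sp.Matrix([[sp.diff(q, Q), sp.diff(q, P)],
               [sp.diff(p, Q), sp.diff(p, P)]])
det = sp.simplify(J.det())
print(det)  # 1, so dq ^ dp = dQ ^ dP
```

For an N-particle system the same cancellation happens factor by factor in the wedge product, which is why the differential-forms route is so short.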
This realization makes me feel like a kid at Christmas. Ironically, tonight is Christmas Eve, the focal point of my family's celebration. My father has made what may be his last trip to Pullman from Philadelphia. He is 94 and still lives on his own, drives a car, and prepares meals for senior citizens at the Ukrainian Cultural Center in Fox Chase, Pennsylvania. Though still vigorous, his body betrays the telltale signs of wear and tear of old age. Both of my children are home for the holidays, and the fragrances of the traditional Ukrainian foods simmering on the stove and in the oven permeate the house. As I write this post, my wife is busily making last-minute preparations.
It is fortunate for me that my family values my passion for physics, and allows me to occasionally be a recluse. Just a few minutes ago, my wife called out a query about my whereabouts. I simply answered, "I am excited about something." Though she undoubtedly had some mundane duty for me to perform, she immediately signaled her understanding of my state of mind, and left me alone. I am truly fortunate to be living with someone who shares in my passions.
The intensity and meaningfulness of the spirituality that I derive from physics far exceeds all others, including the times in my distant past when I had embraced religion. As my family turns in for the night, I continue to sit at my desk, full of excitement in my new-found understanding, and looking forward to sharing this understanding with my family and my students. It is a truly privileged life that allows me to rekindle the child-like wonder of Christmas on a daily basis.
Wednesday, December 22, 2010
This morning, I got an email from a colleague who is many years my senior. Though retired, he is still actively pursuing research that is moving him into my areas of expertise. He was particularly interested in the suitability of using three-level models. He had recalled me stating in the past that such models are highly inaccurate, but more recently, saw some of my papers that relied on three-level models. He was also curious about the resurgence of an old classical model of the nonlinear-optical response proposed by Miller, and whether or not such models were meaningful. Below was my response.
It seems that research goes through cycles, and I have already observed at least one period! Let me address the three-level model first. One of the problems is that there are two definitions of a three-level model, and I am guilty of not being careful enough in stressing the distinction in my papers. The three-level model can be either a first approximation or it can be an exact theoretical construct, as I describe below.
Most researchers who apply the three-level model do so in the spirit of a first approximation. At certain wavelengths and for certain molecules, this may be a good approximation. However, many researchers apply this approximation with impunity. They get away with it since almost everybody does it because of its simplicity. In analogy, the two-level model for beta can be highly inaccurate, yet it has formed the basis of organic NLO for decades. For several years, we have been working on a paper that shows that a three-level model can predict the linear and NLO response of AF455. In that paper, we show that the absolute TPA spectrum, beta, and linear absorption are all described by a three-level model without any adjustable parameters: each parameter in the model is separately measured. We also have a physical interpretation of why such a simple model works. This may be a rare case where the three-level model works.
There are many papers that have shown that more states are required to get the correct magnitude of the nonlinear response. On the other end of the spectrum are the quantum calculations, which sometimes use more than 100 states to predict a single number. Given that there are many ways for 100 states to lead to the same number, I also take exception to using such models to gain an understanding of the origin of the NLO response. The bottom line is that the NLO response is a complex phenomenon.
In a paper with my former student Perez-Moreno, we showed that even when on the two-photon resonance in a two-photon absorption experiment, higher excited states can also make significant contributions. This is most likely the source of the comment from Eric.
Then there is the three-level ansatz, which I originally introduced when calculating the fundamental limits: when the nonlinear response of a molecule is at the fundamental limit, only three states contribute. Thus, to be at the limit, there can be no transitions to other states; otherwise, oscillator strength would be sucked away from the dominant states, and the nonlinear response would be suboptimal. That is the topic of my paper in Nonlinear Optics Quantum Optics.
Because of this state of affairs, I think that the signal-to-noise ratio is very low in NLO research. Thus, my research has migrated away from studying specific molecules to trying to understand properties of quantum systems with nonlinearities that are near the fundamental limit. This work has led to the identification of certain universal properties of a quantum system at the limit, which hints at ways of making better molecules.
Now your main point. The Miller formula is based on a classical oscillator, i.e. Equations 13 and 14 in the paper that you have attached. Thus, it misses all of the intricacies in the SOS expressions, such as resonances at many wavelengths and the presence of continuum states. Therefore I think that its applicability is limited. Any dispersion model with a couple of parameters would do equally well at describing the data. If the goal of the research is to come up with a method to approximate the dispersion of a simple system such as a diatomic molecule, then this approach may be acceptable. But, such work does not lead to a fundamental understanding of the origin of the NLO response at the level that you are seeking.
My interest in the NLO response of simple quantum systems covers air molecules, since they are relatively simple to analyze. Let me know more specifically what you have in mind, and perhaps we can work on this together.
With regards to the Physics Reports paper, my activation barrier is in preparing the outline. I get the feeling that the paper will be very different once we actually get around to doing a literature review. But, I have now added this to my near-term to-do list. My new-year's resolution once again is to give you an outline.
I too wish you and your family a Merry Christmas and a prosperous and healthy New Year.
Scientists continue to learn from each other over their lifetimes through the scientific literature, conferences, and correspondence. In the age of the internet, we are connected to each other literally at the speed of light. Compare our times to those of the great scientists of the seventeenth through the nineteenth centuries, who often had to wait months to get a response to a letter. While there is a downside to technology, such as the wildfire propagation of errors and misinformation, having access to the world from my desk makes me appreciate the age in which I live.
Wednesday, December 15, 2010
Thanks for thinking about awards that would be appropriate for my research area. While I appreciate your confidence that I am a reasonable candidate, I frankly dislike the concept of an award in the sciences and would rather not be nominated. There are many reasons for my distaste for prizes: they distort the motivation for research; they stifle interaction between researchers who worry too much about getting credit for results; they encourage form and marketing over substance; and they consume resources that would be better spent on science. I therefore ask that I not be nominated for an award.
Thanks again for being supportive of our work.
Mark G. Kuzyk
In my older age, I recognize that it is a great privilege to have an employer who supports my addiction to physics, and that awards reflect well on the institution. I truly appreciate my position at Washington State University, where I am encouraged to feed my passions and to infect others around me.
Tuesday, December 14, 2010
Over a decade ago, while on sabbatical in the fall semester, I finally had some time to sit peacefully with paper and pencil in an effort to answer that burning question, "Is there a limit to the nonlinear-optical response?" Many people had made hand-waving estimates based on all sorts of assumptions. My goal was to use rigorous calculations without assumptions to get a result that would universally hold for any quantum system.
The process itself was exhilarating. I had many false starts based on false assumptions and mathematical errors. When I was finally on the right track, the calculation was messy and tedious. As I plodded along, the equations slowly got simpler and simpler, shedding off this term and that. Along the way, I had several terms with infinities, a sure sign of trouble; but, I persevered. As the equations simplified, I noticed with excitement that the infinite terms canceled. Finally, I was left with a simple but beautiful equation. I stared at it with admiration. This was perhaps the first time in my life that I felt I had made a truly fundamental discovery. At that moment, I felt that my life was complete.
However, an interesting result is not always sufficient for a publication. I needed to connect this work with reality. So, I used tabulations of measurements to show that all molecules that had ever been measured fell below my calculated limit. I then submitted my paper to the best physics journal, Physical Review Letters, and waited for what would certainly be accolades from the reviewers. Instead, I got mixed reviews, but in the end, the paper got accepted and published. I had expected that my paper would cause a sensation, but after a couple of nice emails from leaders in the field, it got little notice. Instead, some chemists approached my work with animosity. Who was I to say that there was a limit to what was possible?
At that point, I moved on to other projects, which occupied my time. A couple of years later, two developments got me back into the game of investigating the ramifications of the sum rules and fundamental limits. First, I had found an error in the program that I had used to plot the curve representing the fundamental limit. (My theory was correct.) After correcting the plot, I found that the best known molecules fell a factor of 30 short of the fundamental limit. This gap gave researchers a milestone to beat, and even today, researchers who refer to my original papers do so on the basis that they show there is room for improvement. The second development was that two quantum chemists wrote a comment on my PRL paper. While I believe that I successfully answered their criticisms in my rebuttal (which also appeared in PRL), the more important consequence was that it got me thinking about new ideas. At the same time, a Canadian group nano-engineered a material that breached the factor-of-thirty gap. In a press release from their university, they made the first reference to The Kuzyk Gap. So, my name got associated with the theory not by academicians but by Madison-Avenue types.
The history of my work has taken many turns. The next big leap resulted from meeting David Watkins at the Wine Bar in Pullman. He was the brother-in-law of the mother of one of my daughter's friends. Over a couple of bottles of red wine, it quickly became apparent that David, a mathematician, was an expert in the calculations that I wanted to implement. In fact, he wrote a textbook on the topic. The basic idea was that we would make toy models of quantum systems to understand what properties lead to a large nonlinear response. This work led to our proposal that modulation of conjugation (basically, making speed bumps in molecules to trip up the electrons) was the way to optimize the nonlinear response. Later, in work with my collaborators in Belgium and in China, we used this design principle to demonstrate a molecule with a world-record nonlinear-optical response. This work got all sorts of recognition worldwide.
To put all of this in perspective, my calculations of the limits are very general, and they apply to any quantum system. All of the molecules ever made are but a negligible fraction of the total. The work with real molecules and the calculations using toy models don't even scratch the surface of possibilities. A few years ago, I bought my son a laptop computer with the understanding that he would apply his newly-acquired skills to do some modeling for me.
The idea was simple, yet powerful. He would use Monte Carlo techniques to sample the whole universe of possibilities by randomly picking the properties of a quantum system under the constraints of the sum rules. By repeating this process millions of times, he could build a picture of the essential features of a quantum system that leads to a hyperpolarizability at the fundamental limit. This led to a whole set of new results and confirmed the validity of my models. The problem with the Monte Carlo approach is that it gives such general results that it is difficult to connect them to real systems.
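A toy version of this idea can be sketched in a few lines. This is my own illustration, not the actual research code, and for simplicity it uses the polarizability rather than the hyperpolarizability: in dimensionless form, with energies e_n = E_n0/E_10 ≥ 1 and transition moments constrained by the ground-state sum rule, the intrinsic polarizability can never exceed one, and the random sampling shows exactly that.

```python
import random

# Toy Monte Carlo sketch: sample dimensionless energies e_n >= 1 and
# moments obeying the ground-state sum rule  sum_n e_n * xi_n^2 = 1,
# then evaluate the intrinsic polarizability
#   alpha_int = sum_n xi_n^2 / e_n,
# which reaches 1 only when all oscillator strength sits in the first state.

def sample_system(n_states, rng):
    """Randomly pick energies and moments consistent with the sum rule."""
    e = sorted([1.0] + [1.0 + 9.0 * rng.random() for _ in range(n_states - 1)])
    f = [rng.random() for _ in range(n_states)]     # raw oscillator strengths
    total = sum(f)
    f = [fi / total for fi in f]                    # normalize: sum f_n = 1
    xi2 = [fi / ei for fi, ei in zip(f, e)]         # so sum e_n * xi_n^2 = 1
    return e, xi2

rng = random.Random(0)
best = 0.0
for _ in range(100_000):
    e, xi2 = sample_system(5, rng)
    alpha_int = sum(x / ei for x, ei in zip(xi2, e))
    best = max(best, alpha_int)

print(best)  # creeps toward, but never exceeds, the fundamental limit of 1
```

The real simulations do the analogous thing for the hyperpolarizability, with the full set of sum rules imposed and the response evaluated from a sum-over-states expression.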
More recently, we started a project to classify the Monte Carlo simulations according to the energy-level spacing of the system. For example, molecules, on average, have an energy spectrum that becomes more dense at higher energies. In an atom, the energy of state n is proportional to the reciprocal of n squared, while in a molecule, it might vary as the reciprocal of n cubed. Being very busy this semester, I had put off writing the paper. But now that I am writing it and thinking deeply about the results, I am finding that this approach makes many profound connections with much of our previous work. I have also found, with great relief, that a decade ago I was apparently more clever than I had imagined.
In those calculations, I made one assumption, which we have not been able to prove but appears to be correct. The assumption can be stated as follows: when a quantum system has a hyperpolarizability at the limit, only three states contribute. This assumption was not arbitrary, but based on intuition, which I argued as follows. A two-state system optimizes the polarizability without approximation. The result follows from the simple fact that the effect gets diluted when shared between multiple states. The hyperpolarizability is a much more complex quantity, and such an argument does not obviously hold. In addition, the sum rules - the holy grail of quantum mechanics - demand that at least three states contribute. Putting these two facts together made me settle on three states, for which dilution effects are minimized while the sum rules are obeyed. This is referred to as the three-level ansatz. In German, an ansatz is basically an educated guess. It is common for physicists to make such guesses and then check whether the consequences are consistent with experiment.
In our most recent Monte Carlo simulations, the three-level ansatz is obeyed in all energy classes. Furthermore, as the energy classes are smoothly varied from decreasing to increasing energy density, the nonlinearities behave in the way predicted by our models. What is even more astonishing is that this behavior is observed even for systems with more than three levels. So, results that were calculated for the specific case of molecules with large nonlinearities also seem to hold for systems with 80 states. Furthermore, the present work resolves puzzles that arose in our toy models and sheds light on the reasons underlying the factor-of-thirty gap between theory and experiments.
It is unusual for one piece of work to resolve so many issues. Ironically, Shoresh did this work many months ago and has been bugging me to work on the manuscript. A few days ago, I decided that this was a solid piece of work that needed to be published before we moved on to the really interesting research. In the process of bringing all the results together for publication, I have experienced a moment of clarity that unifies all of the seemingly disconnected pieces. It was a moment to savor and to share on my blog.
But alas, I must get back to working on the manuscript and preparing my lectures for next semester. Perhaps when I look back on this moment, I will chuckle at my naivety. The fact that we can look back at simpler times attests to our steady progress, jumping from one rung to another on the ladder of knowledge and understanding. The calisthenics alone make the process fulfilling, but moments such as this one are rare and precious, deserving of quiet celebration.
Wednesday, December 8, 2010
The article “Meteors in the Telescope” by Allan M. MacRobert in the July 2005 issue of Sky & Telescope jogged a memory that I may have taken a photograph of a meteor through my 6” reflector some 30 years earlier. This got me frantically looking through piles and piles of family photos that had been randomly tossed into several shoe boxes. After an evening of searching (and of course, pausing occasionally to marvel at pictures of me with hair or to be awed by the incredible cuteness of my children), I finally found it! I was disappointed to see that the 3” by 4” photo showed signs of heavy abuse.
This had been my very first attempt at taking a photograph through a telescope, and Saturn was my target. It took me quite some time to center the planet in the telescope. Then I faced the frustration of mounting my camera and aligning it with the eyepiece. To make matters worse, the telescope was incredibly wobbly. However, nothing was worse than trying to focus the image through the finder of my 35mm camera. Finally, my brand new 6” reflector from Edmund Scientific was ready to go, so I pulled the trigger release. It took me all evening to get one shot, and I was mighty tired in school the next day from sleep deprivation.
After getting my pictures developed, I found an out-of-focus and overexposed image of the planet. However, I was proud of having caught what looked like a meteor, so I taped the photo to the refrigerator. Years of re-taping the photo each time it fell off the fridge left the fingerprints, tape marks, and sticky areas. Years of exposure to sunlight caused the colors to fade into a magenta hue with splashes of purple.
I recall having seen a flash of light in the camera’s finder when I took the shot, though this could be a memory that I conveniently added later upon seeing the photo. For posterity, I scanned the legendary photo into my computer at high resolution and carefully inspected it for clues. Could this have been an artifact?
Upon casual inspection, I believe that I indeed had caught a meteor. The head is clearly round, so it’s not a ghost image of Saturn. Furthermore, the small dim ghost of Saturn just below the image is clear evidence that the telescope jittered during the shot. Similarly, there is a small ghost of the meteor, just below and to the left of it - implying that it was not an artifact added at the film lab. The fact that the meteor’s ghost is displaced to the left implies that the meteor had moved after the ghost image was recorded. Though the image of the meteor looks suspiciously like a comet, I do not recall seeing one at the eyepiece. Given its brightness, I am sure that I would not have missed it. And the ghost image suggests that the object recorded was moving pretty quickly.
So, I believe that in my first attempt at astro-photography through a telescope, I nabbed a meteor in its tracks. I have never seen such a marvelously delicate tail in any meteor photographs (note that I did no processing after scanning the image aside from adjusting the color slightly to match the original print). Has anyone ever seen a similar image? I would be interested in hearing from anyone with similar pictures, or anyone with insights that support or refute my claim.
Almost a decade ago, I got back into astro imaging, though for the last few years, I have been too busy to use any of my telescopes. The advantage of being older is having the capital to buy decent equipment. For reference, I include a more recent shot (from 2004) that I took with a webcam coupled to my telescope. Telescopes have come a long way, not to mention the ubiquitous use of CCD cameras rather than film. I should get out more often!
Sunday, December 5, 2010
While on its surface, Jeff Peckman's initiative is not unreasonable (more about that later), a quick perusal of his website (see http://extracampaign.org/) reveals that he believes that extraterrestrials routinely visit the earth and that the government and its minions are involved in a conspiracy to keep the public in the dark. Rather than preparing for the possibility of an ET visit, Peckman's intention was to use a mandated extraterrestrial affairs commission to expose his belief in an ET cover-up.
I am often asked by non-scientists whether I believe in extraterrestrial life. From the scientific viewpoint, my answer is that there is no evidence for ET life. However, the existence of life beyond the earth is a testable hypothesis and therefore fits comfortably within the realm of science.
In the middle of the 20th century, Miller and Urey recreated in a glass vessel what were believed to be the conditions on the young earth: water, methane, ammonia, and hydrogen in the presence of electrical sparks that simulated lightning. According to a recent (2008) reanalysis of the data reported by Johnson et al. in Science, these simple experiments formed 22 amino acids - the basis of life on earth. Many other experiments since then have shown that amino acids can be formed under a variety of conditions, both on the earth and in space. Thus, on the basis of our current knowledge of chemistry, astrobiology, and extrasolar planetary systems, the existence of life beyond earth is not only plausible but highly likely.
While there may be many worlds out there that have simple life forms, only a tiny fraction of them would evolve intelligent life forms. Even so, the numbers of stellar systems are astronomically large, so it is likely that alien intelligent life forms exist.
Some of my friends might argue that a belief in alien life forms is akin to a belief in God. Not so. I would argue that these two beliefs are significantly distinct. My arguments about the plausibility of ET life were based on our knowledge of terrestrial life, general science, and simple extrapolation. However, there is no scientific evidence for God. In fact, the Judeo-Christian God prohibits humans from putting Him to the test. On many levels, God can only exist through belief not through the scientific method. This does not imply that God cannot exist, only that there is no rational method for proof.
Given what we know, it is unlikely that we are alone in the universe.
Tuesday, November 30, 2010
A polymer is like a bowl of spaghetti with intertwined strands of varying lengths. Between the strands are spaces whose sizes and shapes are randomly distributed. The distribution of these voids can be mathematically modeled by what is called, not surprisingly, a distribution function.
The distribution function depends on the sample's history and how it was originally made. Imagine a bowl of hot steaming pasta that is slowly cooled until it forms a solid rubbery clump. The resulting structure will be relatively compact with few air spaces. On the other hand, toss the hot pasta in the air and let it land in an ultra-cold bowl so that it freezes before it is able to settle down. In this case, one will find gaps of various sizes between the strands. Similarly, the properties of a polymer will depend on how it is made as well as on its thermal history. Thus, even when two samples are made the same way, if one of them is subsequently heated and cooled, its properties can change dramatically. To make things worse, polymer samples are notorious for not having the same properties from place to place within the material. This is called inhomogeneity.
Thus, when a measurement does not give an expected result, it is easy to attribute it to inhomogeneity or to issues with how the sample was made. In our lab, we try to reproduce the same measurements many times on different samples to make sure that we are not being fooled by an anomaly. However, when doing huge numbers of measurements on materials that are known to vary from sample to sample, it is far too easy to keep the ones that we believe are right and to throw away the data that does not conform to our expectations.
As I have mentioned in numerous posts in the past, our lab discovered many years ago that some polymeric materials self heal after being burned with a laser. This work was the topic of a masters thesis and a Ph.D. dissertation. Presently, there are two students working in my lab on this project under funding from the U.S. Air Force. After doing a series of very careful experiments, one of my students noticed a pattern that suggested that the polymers might not be self healing.
My initial response was that we had to verify the result. I suggested that the student try several variations of the experiment so that we could narrow down the problem. Indeed, we found that when the experiment was redesigned a year ago, the method of ensuring that the laser was stable had acquired a subtle flaw that allowed the laser power to drift slightly without detection. What appeared to be self healing was due to a variation of the laser power. When the laser power happened to be stable, we saw no self healing, which we attributed to a bad sample; and when the laser power varied in just the right way, the sample appeared to heal. Once we took all this into account across the whole data set, self healing was no longer observed.
This set my mind racing over all our past results, wondering whether we had been fooling ourselves all along. I pictured the humiliation of informing the funding agencies of our grand error and returning the unspent funds. As a result, I would lose support for two graduate students and a month of summer salary for myself. But my greatest concern was for the student who had spent two years of his life working on the project -- being left with no dissertation. The thought was most devastating in light of the fact that he is one of the best students in our department, both bright and hardworking.
After settling myself, I began to carefully assess our body of work. The fact that so many independent experiments confirmed self healing (using other apparatus without this problem or totally different techniques), made me convinced that this was a temporary setback. Thinking it through, I realized that another change that we had made was in the way we were preparing our samples, so I had the student fix his experiment to properly stabilize the laser, and to dig up the older samples for experimentation.
Last evening, I got the excited email that he was again observing self healing.
This incident reminds me of how science is a dynamic and churning process; where mistakes are made and corrected. Like a fractal, this dynamic is found at the level of the individual, who adjusts to problems on a daily basis, and in the scientific community as a whole, where the work of many labs and scientists takes a seemingly random walk that eventually zooms in on the truth.
Science is like an adventure. If the path were well defined, not only would the process be dull, but the whole enterprise would be worthless. Too often, the public and even university administrators believe that discovery is a linear process. At our university, we are required to write a memo each year requesting an extension for a student if he or she has been a PhD candidate for more than 3 years. And, we are faulted for students who take a longer time to graduate. This mentality implies that the PhD degree is the result of a fixed amount of effort, and it does not take into account the unexpected, which scientists have learned to expect.
I have no idea where my scientific adventure will take me next, but I know that it will exceed my expectations. More important to me are the trials and tribulations of the process of discovery. It is truly a most exciting adventure.
Saturday, November 20, 2010
One central theme of our work that uses fundamental limits and sum rules to understand the nonlinear-optical response is the idea of scale invariance. The Schrodinger Equation has the property that the shape of the wavefunction does not change when the width of the potential energy function is decreased by a scaling factor b and the depth of the well is simultaneously increased by b squared. Under such a transformation, the wavefunction is compressed by the factor b, but otherwise, the shape remains the same. We call this simple scaling.
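Simple scaling is easy to verify numerically. The following is my own sketch (with hbar = m = 1, not taken from any of the papers discussed here): diagonalize a discretized Hamiltonian for a square well, shrink the width by b while deepening the well by b squared, and check that the ground-state energy scales by b squared, consistent with a wavefunction that is merely compressed by b.

```python
import numpy as np

# Numerical check of simple scaling (hbar = m = 1): shrink a square
# well's width by b and deepen it by b**2; the energy spectrum should
# then scale by b**2 while the wavefunction shape stays the same.

def ground_energy(width, depth, xmax, n=2000):
    x = np.linspace(-xmax, xmax, n)
    dx = x[1] - x[0]
    V = np.where(np.abs(x) < width / 2, -depth, 0.0)
    # finite-difference Hamiltonian: -(1/2) d^2/dx^2 + V(x)
    H = (np.diag(1.0 / dx**2 + V)
         + np.diag(-0.5 / dx**2 * np.ones(n - 1), 1)
         + np.diag(-0.5 / dx**2 * np.ones(n - 1), -1))
    return np.linalg.eigvalsh(H)[0]

b = 2.0
E_original = ground_energy(width=2.0, depth=5.0, xmax=10.0)
E_scaled = ground_energy(width=2.0 / b, depth=b**2 * 5.0, xmax=5.0)
print(E_scaled / E_original)  # ratio is b**2 = 4 (up to discretization error)
```

The same ratio holds for every bound state, which is the sense in which the shape of the spectrum, and of the wavefunctions, is invariant.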
The nonlinear-optical quantity of interest for many applications is called the hyperpolarizability. Making the hyperpolarizability as large as possible is an ongoing area of intense research activity. To nobody's surprise, a larger molecule will generally have a larger hyperpolarizability. To better understand what makes a material tick, we have defined a quantity called the intrinsic hyperpolarizability, which is simply the hyperpolarizability of a quantum system divided by its fundamental limit. Interestingly, the intrinsic hyperpolarizability is invariant under simple scaling, so large and small molecules that are related to each other by simple scaling will have the same intrinsic hyperpolarizability.
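For reference, the off-resonant limit takes the following form (I am quoting the prefactor from memory, so it should be checked against the original paper), and the intrinsic hyperpolarizability is the ratio of the measured value to it:

```latex
\beta_{\max} = \sqrt[4]{3}\,\left(\frac{e\hbar}{\sqrt{m}}\right)^{3}
               \frac{N^{3/2}}{E_{10}^{7/2}},
\qquad
\beta_{\mathrm{int}} = \frac{\beta}{\beta_{\max}},
```

where N is the number of electrons and E_10 is the energy of the first excited state above the ground state. The N and E_10 dependence is what makes the quantity invariant under simple scaling.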
In examining all the molecules that had been studied for nonlinear-optical applications over a three-decade period, we found that the large range of hyperpolarizability values could mostly be accounted for by simple scaling. Thus, researchers were making molecules larger and larger, but the best intrinsic hyperpolarizabilities remained static at about 0.03 - suggesting that it would be possible to make a factor of 30 improvement; but to get there would undoubtedly require a major paradigm shift.
Several years ago, I published a paper that showed how to calculate a related quantity called the intrinsic two-photon absorption (TPA) cross-section. More recently, Javier Perez-Moreno and I published a paper that introduced a rough rule of thumb for determining the intrinsic TPA cross-section - simply divide the measured cross-section by the square of the number of electrons. My earlier paper showed how to determine the number of electrons.
To my delight, my paper on TPA gets lots of citations, not because of what I believe is the beautiful physics of the work, but because scientists refer to my method of counting the number of effective electrons - a quite trivial (and approximate) procedure. To my horror, when comparing molecules, most researchers then go on to divide by the number of electrons rather than by the square of the number of electrons as suggested by Javier's work. This leads to a flawed comparison between molecules.
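The difference between the two normalizations is easy to see with a toy example. The numbers below are hypothetical, chosen so that the two molecules are related by simple scaling (four times the electrons, sixteen times the cross section): dividing by N makes the larger molecule look four times better, while dividing by N squared shows that the two are intrinsically identical.

```python
# Hypothetical data: (TPA cross section in arbitrary units, effective electron count N)
molecules = {"small": (100.0, 10), "large": (1600.0, 40)}

for name, (sigma, n_electrons) in molecules.items():
    per_electron = sigma / n_electrons       # the flawed comparison
    per_n_squared = sigma / n_electrons**2   # the N^2 rule of thumb
    print(f"{name}: sigma/N = {per_electron}, sigma/N^2 = {per_n_squared}")
```

With the per-electron figure the "large" molecule appears four times better; with the N squared rule both come out equal, so the apparent improvement is nothing more than size scaling.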
A recent paper in Advanced Materials reported on TPA cross section measurements of a new class of dendrimers, molecules with ever-branching pieces much like veins and arteries. They referred to my paper when calculating the number of electrons; but, as is usually the case, they divided by the number of electrons and found that the new dendrimer class was an order of magnitude better than the best existing dendrimers - an impressive improvement.
In our comment on this paper, we reanalyzed the data using the N squared rule and found that the new materials were in fact two orders of magnitude better. In addition, the dendrimers within each class, though of vastly differing sizes, all had approximately the same intrinsic TPA cross section. Thus, we were able to show that the authors had made an even more important discovery than they had realized. Usually, comments on a paper point out a negative flaw, leading to strong rebuttals and counter-rebuttals. In this case, all parties were winners.
Unfortunately, these small successes were overshadowed by a pile of work. After the Thanksgiving break, I am going to an NSF meeting in Hawaii (I hate to travel and I hate hot and humid places), where I will be reporting on the results of our projects. I need to prepare a glitzy poster as well as an oral presentation. This, on top of being hopelessly behind in preparing problem sets/solutions, grading, and catching up on lectures for my graduate mechanics class. In addition, I need to write a pile of recommendations and read a 350 page dissertation; the defense will take place early Monday morning.
Smack in the middle of this stressful week, after months of waiting for an estimate, a flooring contractor handed us an estimate and told us that he could get new floors in before Thanksgiving. Things moved fast, requiring us to immediately move large and heavy furniture back and forth between two rooms, which included taking down built-in cabinets and then replacing them, as well as painting all the walls. After spending three solid days on manual labor (actually a satisfying break from work), my time pressures have become critical. I cringe at the accumulating piles of manuscripts waiting to be written and the papers that I need to review for journals.
So, how did I handle the stress? I squandered a couple hours writing about my frustrations on this blog. And now, back to work...
Thursday, November 18, 2010
One such thrilling moment came in the late 1980s while I was working in a dark lab, illuminated only by the colorful lights on the panel of electronics and the pristine fine lines of the laser beam that bounced around the experiment. The idea of my work was simple. I wanted to understand how molecules embedded in a polymer were able to reorient. So, I passed a beam of light through the material to probe the molecular orientation and applied an electric field to coax the molecules to reorient.
On a microscopic scale, the polymer is like a bowl of cold spaghetti and the molecules like embedded toothpicks. The applied electric field acts only on the molecules while the polymer restricts the movement of the molecules. Before running the experiment, I turned up the voltage by hand, and watched the bar graph on my lock-in amplifier panel swell in response. This simple action may seem no more significant than watching the glowing bar grow when turning up the volume of a stereo; but, the realization that the bar graph represents the orientation of the tiny molecules sent chills down my spine. I spent several minutes turning the voltage up and down, picturing the rotation of the little toothpicks, and the stretching of the spaghetti.
I eventually got back to work and recorded reams of data, analyzed it, and developed a quantitative description of the process. The polymer acted as a large spring whose stiffness depends on temperature. The work eventually appeared in a journal, and formed the basis of future developments in our field. However, when I look at the plot in my publication that very coldly states, "figure so and so shows a plot of the intensity as a function of applied voltage," the image of the tiny molecules responding to my rude intervention brings back the thrill of doing my work.
Almost every piece of work tells a similar story. Recalling the work is like visiting with old friends and family and recalling the happiness of those moments that have been frozen in our memories. But, I delight in the fact that I continue to form new memories doing new experiments and developing new theories with my crew of students and colleagues. The life of science is truly privileged, and I am thankful that I live in a time when this selfish pursuit of passion will be used by others in the future to the benefit of society.
I close with a terse summary of my work. Hopefully, I will have time in the future to share with you the stories that go along with each piece of work. For now, this is a bookmark to remind me of my past.
At Bell Labs, with Ken Singer and Sohn, I developed the thermodynamic model of poling (now called the SKS model), which has since been vastly improved upon by Dalton and Jen. I also measured the electro-optic coefficient in corona-poled side-chain dye-doped polymers to demonstrate that large poling fields were possible. During my time at Bell Labs, I also developed, with Carl Dirk, the missing state analysis, which is used to determine the importance of excited state contributions to the second hyperpolarizability, as well as proposed and demonstrated that centrosymmetric molecules, such as the squaraines, should have the largest second hyperpolarizability. Also at that time, I showed experimentally and modeled theoretically the various mechanisms that contribute to the electrooptic effect (first and second order). The reorientational mechanism was used by W. E. Moerner to develop polymeric materials with a large photorefractive effect. In addition, I showed that the tensor properties of the second-order susceptibility could be controlled by applying uniaxial stress to a polymer while poling it with an electric field.
At WSU, my group, in collaboration with Carl Dirk at UTEP and Unchul Paek of Bell Labs, was the first to fabricate single-mode polymer optical fiber doped with nonlinear-optical molecules, which have a large intensity-dependent refractive index. Later, we demonstrated a nonlinear directional coupler in dual-core fibers. In separate work, we used dye-doped fibers with a large photo-mechanical effect to demonstrate the first all-optical circuit in which sensing, logic, information transmission, and actuation were all performed optically in one system to stabilize the position of a mirror to within 3 nm. This same system was found to be mechanically and optically multistable. After this proof of concept, we showed that this system could be miniaturized into a small section of fiber that combines all device functions into a single discrete device that can be easily integrated with many others. This work suggests that it may be possible to make ultra-smart morphing materials.
In other work using optical fiber, we demonstrated that we could write (and erase/rewrite) a hologram in a fiber waveguide and use it for image correction with phase conjugation. Similar fibers were used to demonstrate nonlinear optical processes using twisted light (i.e. a beam with orbital angular momentum) and showed the advantages of using such light to measure the nonlinear-optical properties of a material as well as its use for optical limiting applications.
More recent work has focused on using sum rules to build a broad understanding of the nonlinear-optical response. This work was motivated by my calculations showing that there is a fundamental limit of the nonlinear-optical response. This has led to the concept of scale invariance and intrinsic hyperpolarizabilities, which can be used to directly compare the nonlinear-optical response of molecules of differing shapes and sizes. More importantly, these concepts have led us to theoretical studies that have suggested new paradigms for large-nonlinearity molecules - which have been experimentally demonstrated. Also, this work has shown that quantum systems whose nonlinear-optical response is at the quantum limit share certain common universal properties.
Our most recent work is focused on understanding our discovery of self-healing in dye-doped polymers. We find that when certain molecules are embedded in a polymer, the system recovers after being damaged by a high-intensity light source. These same molecules degrade irreversibly in liquid solution. This work has applications in making materials that withstand higher intensities.
Tuesday, November 16, 2010
The h-index is a measure based on citations to a researcher's work. The Wikipedia page on the h-index gives a description of how it is computed, what it purportedly measures, and criticisms of its use. No one would disagree that there are many factors that affect the h-index, making it a highly inaccurate metric. So, why do so many people use it? I believe that it is sheer intellectual laziness. What could be simpler than using a single number for ranking a group of individuals?
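For concreteness, the computation the Wikipedia page describes is simple enough to state in a few lines (this snippet is a generic sketch, not tied to any particular citation database): sort the citation counts in descending order and find the largest rank h for which the h-th paper still has at least h citations.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank        # this paper still clears the bar
        else:
            break           # all later papers have even fewer citations
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4: four papers each have at least 4 citations
```

The simplicity is exactly the point of the criticism above: one number, trivially computed, invites ranking people without thought.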
One of my beefs with the h-index is that it favors scientists who manage large research groups that publish lots of papers and produce lots of PhDs who then get jobs doing similar work and generate more citations for their research adviser. Is it healthy to reward good managers of science for being good scientists? Perhaps.
Scientists are well aware of the importance of their own h-index on career advancement. It is all too common for reviewers to point out that some important references are missing in a manuscript. As you might guess, reviewers often push for their own publications to be cited. Placing such a high degree of importance on a single indicator has a perverse effect on the scientific enterprise.
Perhaps I am selfish, but I live to think and to produce new ideas. It makes me happy. I want to be in the trenches, scribbling equations with my mechanical pencil on smooth white sheets of paper and tinkering in the lab with neat equipment that I have designed and built with my own hands - all in the company of students who are excitedly doing the same. I understand that there needs to be a balance between managing a lab and doing the work. A PI who spends all of his or her time working in the lab is not passing along expertise to others. It is also inefficient to be involved with all the mundane activities that go along with doing research. More ideas can be investigated when the students have a large degree of autonomy, so while difficult, I try to keep a healthy distance.
On the other extreme is the professor who writes lots of huge grant proposals and uses his or her mega funding to hire an army of postdocs who generate a large chunk of the ideas, advise the graduate students, and implement the work. Such an individual is doing a service to society by concentrating a critical mass of intellectual capital to solve important problems. It is an efficient division of labor where grantsmanship brings in the funds and the most capable people do the work.
I want to make it clear that I am not criticizing the big research groups. They provide an important service to the scientific community: producing the next generation of scientists and adding substantially to the knowledge base. Given that some of my work is amenable to large-scale science, I often question my decision to limit the size of my research group. Am I letting down my employer by bringing in a few hundred thousand dollars of funding per year rather than millions? My decision to operate on a smaller scale is based on my conviction that my science produces value to both the university and to society. Arguably, my group produces more publications and more citations per research dollar than many others. Imagine if a researcher's productivity were measured in citations per dollar of funding.
If truth be told, I would be dissatisfied working in a field with hundreds of researchers who are each working on a tiny piece of a puzzle. Rather, I derive satisfaction from thinking about things from a unique and broad perspective. As a result, I have far fewer colleagues doing similar work, and my students will have fewer job opportunities - unless our work leads to a breakthrough. But I justify our work with my conviction that it has the potential for making a large impact in the long term.
However, I am concerned that the emphasis on large research groups garnering huge research grants will squeeze out the smaller groups, where breakthroughs in new thinking are usually generated. Given the present-day economic challenges, researchers with larger grants will undoubtedly be held in even higher regard by their cash-strapped administration.
A second perversion of the scientific enterprise is recognition for simply publishing in a prestigious journal. The quality of a paper should be judged on its own merits, not by its venue. The vicious cycle is reinforced by "me too" papers that cite high-visibility journals to imply that the author's work is in an important research area.
Consider the work of Victor Veselago, who in 1967 investigated the properties of materials with a negative refractive index. His work got little attention (and few citations for decades) until the recent explosion of research on metamaterials, results of which are routinely published in Nature and in Science with sleek color graphics and exciting titles. Behind the present-day superstars stand lesser-known brethren such as Veselago, who lay the foundations for future revolutions.
Recently, the program director of an agency that funds my work sent me the following email,
I am collecting information to prepare for my upcoming internal review and I am collecting interesting accomplishments for highlights. This request does not substitute the formal annual progress report specified in the grant document that is due at your grant anniversary date of each year. Please provide a (no more than) one page summary of the significant accomplishments and publications (such as Science or Nature publications) in your program...
Clearly, this program director deems it necessary to justify to his superiors his funding choices not necessarily based on the quality of the work, but on where the work is published. What I find more distressing is that scientists in increasing numbers believe that a publication in these high profile journals carries greater importance.
Many years ago, one of my colleagues suggested that I should package my work in a way that would make it publishable in Science or Nature. I did not choose science as a career to focus on marketing. Rather, I am motivated on a daily basis by the promise of understanding something new about nature and sharing my understanding with others who are similarly driven. Nothing beats the eager faces in the classroom, arguing with me about my rendition of nature's hidden treasures. Nor the excited chatter of my graduate students as they report on new results or insights. Even day-to-day difficulties bring to light the pleasures of seeking the truth. At the end of the day, the negative aspects of the scientific culture fade into the background, where they belong. But, on occasion, my bliss is interrupted by that unpleasant pang of realization that scientists hold positions that are not supported by reason. We should know better. Shame on us.
Wednesday, October 27, 2010
I started getting sick on Thursday and my symptoms peaked over the weekend. Early Saturday afternoon, I had a hankering for hot and sour soup to soothe my beleaguered throat, so I ordered takeout and proceeded to drive to the Emerald, one of our favorite Chinese restaurants in Pullman. I hastily jumped from my car and announced upon entering, "Hot and sour soup for Mark." Immediately upon seeing the quizzical look on the face of the woman at the cash register, I realized that I had ordered my takeout from the New Garden Restaurant.
I was a tad annoyed with myself because I had driven to the other side of town to the Emerald; but, I calmly drove through what was uncharacteristically heavy traffic for Pullman to New Garden. On my way, I dutifully pulled over as two fire engines whizzed by with their screeching sirens and obnoxious horns.
As I approached home, I realized that I had driven past the New Garden restaurant during the fire engine diversion, so, I made a U-turn, and was once again on my way. Upon arriving, I was horrified to see a line of 20 individuals at the cash register. It turns out that a party of 20 had decided on separate checks. I patiently waited for at least 15 minutes for the line to clear.
When I got to the front and announced my name, the young woman at the counter hectically searched for my order, and asked her colleague what had happened to the takeout bag that had been sitting on the counter. The coworker, embarrassed, replied that she had given the order to another customer. So, I had to wait another 10 minutes to get my order filled.
On the bright side, my order would have been cold by the time of my pickup; and, since I had originally forgotten to request that they hold the shrimp, I was able to make this change. My wife was starting to get concerned over my long sojourn. But, relieved at my return and appreciative of the wonderful soup, we blissfully watched TV while nursing our illnesses. My only regret is that I had less time and energy than I needed to finish my work. So, I am now even more behind in my work than ever.
Friday, October 8, 2010
In addition to many stimulating scientific talks, a special session was dedicated to Prof. Zyss's achievements. Several eminent scientists described his research contributions, and past students talked about the profound effect he has had on their careers and their lives.
Joseph Zyss is a multidimensional person who has many interests outside his profession: art and gourmet food. For fun, the organizers arranged for two invited talks devoted to the topics of art in science and the science of food. The culinary talk described the chemical basis for taste and touched on the interesting possibility of chemically synthesizing new foods.
The presentation on art was filled with reproductions of diverse art forms, making connections to science by juxtaposing works of art with mathematical structures and images of real physical systems. The talk ended with the implicit question: will art and science ever converge? My immediate reaction was that science is art.
Mathematics is the color palette, clay, and medium of Physics. While art pierces the soul through images and narrative to elicit emotion, science inspires a sense of awe and wonder directly through the intellect. A piece of art, such as the Pieta, represents a thing of beauty, exquisite in its execution -- recreating the artist's faith and passion. Science, in contrast, through its mathematical structure, encompasses all that is and can be in one grand yet simple set of theories that are intellectually beautiful, reflecting the ingenuity and imagination of generations of scientists.
A painting or a sculpture can recreate only one particular scene as a map portrays in miniature the geography of an area. In contrast, a theory represents everything. It is a kind of universal art that portrays all AND always. It has both predictive power and leads to a deep understanding of all things. To me, the most spiritually intense response comes from seeing the universe through the inner eye of science.
Professor Zyss is a scientist whose life has left and continues to leave many significant brush strokes on the fabric of science. Our ambitions draw each of us to the canvas in hopes that our mark will be permanent, tantalized by the beauty of what has already been revealed, in awe of the larger image that is slowly coming into focus. I am privileged to stand before the canvas, to marvel at its beauty, and to participate in humanity's efforts to leave a mark, even if my work gets covered over by those who follow.
Tuesday, October 5, 2010
The last few days brought some good and bad news. As I had anticipated, Nathan's revised manuscript was accepted for publication. And, Shoresh's and Mark's JOSA B paper, which I had previously reported as being highlighted on the Optical Society of America's website, was the second most downloaded JOSA B paper in September 2010. See http://www.opticsinfobase.org/josab/abstract.cfm?uri=josab-27-9-1849 for a free download of the paper.
The paper that I had submitted to the Journal of Chemical Theory and Computation with David Watkins, over which I waged an all-out battle with the referees, was finally rejected. I had expected this outcome from the outset, but thought it worth a try. My intention was to expose a new audience to our work, but apparently, this was not to be. Below is the strongest criticism of our paper.
"Professor Kuzyk seems strongly taken with his own work but is not sufficiently mindful of the work by others. Perhaps the most egregious example deals with the issue of neglecting the effect of vibronic interactions on the static first hyperpolarizability. As justification he cites two of his own papers, but totally misses the extensive literature in this field showing that the vibronic effect cannot be ignored. There are, in fact, many instances where the vibronic term is comparable to, or larger than, the pure electronic contribution. Hence, the maximum value he uses (derived only by considering electrons) could easily be breached without invoking any exotic systems."
The reviewer is referring to the fact that I cited one of our papers showing that vibronic states do not contribute substantially to the hyperpolarizability. Undoubtedly, in heated debates, the parties involved often speak past each other. To do my part, I am once again plowing through this literature to understand the issue. However, I find it a bit annoying that the reviewer did not point out an error in our logic, but rather made sweeping statements. It is certainly possible for real molecules to have vibronic contributions that are large compared with electronic excitations; but I believe that when a quantum system is designed to have a hyperpolarizability near the fundamental limit, the vibronic contribution will never be as large. And since we are always concerned with the physics of a system near the limits, I believe that we are correct.
It is interesting that the reviewer states that the limit could easily be breached in the presence of vibronic states. Since the best molecules ever measured fall a factor of 30 short of the limit, I do not believe that the reviewer's confidence is justified.
Though this exchange with the reviewer was one of the nastier ones I have experienced, I find it all to be trivial in the larger scheme of things. Happiness is most abundant when I am absorbed in Physics. At some point, I will rewrite this manuscript, taking into account these comments, and will see how our results mesh with the body of scientific understanding. I take solace in the fact that people are reading our papers. I see this review not as a devastating blow, but as an opportunity for new investigations. The last 6 years of my research on fundamental limits were ignited by a comment that was published on my 2000 PRL paper. In the process of accumulating evidence to show the criticisms wrong, I made many discoveries and gained deep insights. These kinds of experiences should not be seen as defeat, but as an invigorating start to a new chapter of research.
Thursday, September 30, 2010
Reviewer #1: This manuscript describes characterization of the response dynamics of cascaded photomechanical devices based on photosensitive liquid crystal elastomers. The idea of these cascaded devices is the most interesting aspect of this work. The authors have built and characterized the time response and fit to a three exponential functions. The authors have identified two possible mechanisms contributing. The paper is well-written, conclusions well-founded and should be of strong interest in the community. Publication is recommended.

Reviewer #2: This an excellent study of cascaded optomechanical devices that take advantage of the properties of liquid crystal elastomers. It should essentially be published as is.

We have resubmitted a revised manuscript, which addresses only a couple of minor issues, so we expect the paper to be accepted soon.
In another post, I reproduced a letter that I sent to the Optical Society of America about allowing authors to submit papers in two-column format. I sent a similar email to Applied Physics Letters. Since I hadn't heard back from them, I assumed that they were blowing me off. I just received an email that the journal has actually implemented the change! They write:
Dear Dr. Kuzyk:
Since your e-mail, we have decided to allow 2-column manuscripts to be submitted. We will try it out and see if we receive e-mails from reviewers who do not like the 2-column formatting.
Thank you again for your help with our APL manuscripts.
Most of today's high-tech marvels are based on the tiny transistor, a device that controls the flow of electrical current. A transistor mediates the flow of one current depending on the properties of a second current. Since the interaction is nonlinear, a transistor can be used to amplify a weak signal, perform logic operations, and serve as a memory element. While a single transistor may be technologically unimpressive with regard to computing power, millions of them working together can perform amazing functions that some day may meet the criteria of intelligence.
I envision a technology made of Photomechanical Optical Devices (PODs), each with the ability to control the flow of light based on the environment (stress, temperature, chemical agents, etc.), having multiple states for a given set of input parameters (i.e. optical and mechanical multi-stability), and having the ability to change shape based on the inputs. Integrating such devices together would lead to ultra-intelligent morphing materials/systems that would enable technologies that are yet to be invented. I have been dabbling with research in this area for 20 years.
The time is ripe to build the scientific foundations for making a novel material/system that has the ability to morph in response to stress or light. In contrast to common materials that are made of atoms or molecules, and interact through electric fields, I envision a system made of microscopic units that each communicate with all the others using light, imbuing it with enormous processing power and intelligence. Add to each unit the ability to respond to stress and perform actuation, and the system gains the ability to intelligently morph between complex structures. Miniaturization of such systems blurs the line between what is conventionally meant by a system and a material - terms that I use interchangeably.
Such materials would fill a new realm of applications. A series of pictures on a piece of film are projected onto a two-dimensional screen to show motion. A smart material, on the other hand, could be made to morph through a series of shapes, leading to true three-dimensional solid-body animations. For example, automobile designers could use them to continually change the shape of a model to test its aerodynamics or aesthetics; a chair could be made to automatically change shape to accommodate a particular body type; and an exact replica of an individual could be made in real time from information sent from a remote location, in effect recreating an animated three-dimensional solid replica of the sender. And you thought picture phones were great! Other applications would include noise cancellation wallpaper, reconfigurable aircraft wings, ultra-stable platforms for precision manufacturing or characterization, and reconfigurable optical filters.
The development of such materials/systems would require extensive research aimed at demonstrating the fundamental building blocks, followed by studies of how a small number of them interact with each other when interconnected with light, and would culminate with the development of fabrication methods that could be used to make a bulk material from a collection of microscopic building blocks. Some aspects of the fundamental physics underlying this idea are in place; that is, photomechanical materials exist, interferometers with such materials can be built into polymer fibers, and a series of gratings can be written into a fiber, which in principle could be made into a network of interacting units. The challenge lies in demonstrating classes of fundamental units that can be naturally integrated, and in understanding how to build a system from optimized units that work together to provide the desired function.
My vision of the fundamental building block is a waveguide-based feedback device, such as an interferometer, that is made in thin films or fibers, containing a nonlinear-optical and photomechanical material -- thus simultaneously having the ability to manipulate light, sense stress, and apply anisotropic stress to its surroundings. These PODs would simultaneously respond to optical and mechanical stimuli to yield mechanical/optical multistability, logic, stress/temperature sensing, optical/mechanical memory, positioning, and more. A network of PODs, interconnected by light signals along an integrated waveguide device, would be scalable to an ultra-smart material/system with functionality that goes well beyond present materials/systems paradigms. In contrast to a neural network, in which each neuron is connected to a small number of neighbors, each POD in a linear array along an optical fiber would interact with all the others, processing information, reacting to stress, and responding by selectively passing light and stressing the surroundings.
The development of this new technology may impact many future applications that have not yet been invented; conversely, the novel materials concept may motivate new ideas for such applications. I have submitted this idea to the National Science Foundation as part of a solicitation that seeks suggestions for new programs in areas that have the potential for transforming society. If NSF chooses not to start a major program in ultra-smart morphing materials, I am hoping that this kind of research will someday be supported - even if I am not around to participate.
Wednesday, September 22, 2010
I am reminded of the many quick work trips that I take to all kinds of neat places. But usually, all that I see is my hotel room, the conference room, and the road between the hotel and the airport. Each geographic location offers unique vistas and local culture (or lack thereof). But to me, sightseeing and travel provide only the shallowest returns when it comes to addressing the deepest questions.
An itinerary that highlights places that spawned ideas that changed civilization would be much more interesting to me. It would be awe-inspiring to see the simple objects used by Cavendish to do his famous experiments or to view Galileo's crude telescope in the natural setting of the Italian countryside, where his observations yielded insights into the true nature of the universe, overturning centuries of dogma and ignorance.
I would like to revisit the meeting room of the First Continental Congress, where the new paradigm of self-governance took root and changed the world. Having recently read biographies of James Madison, Thomas Jefferson, and John Adams, I would now find a visit to the birthplace of their great ideas more meaningful.
While the ruins in Rome were deeply moving for the human ingenuity and appreciation of beauty that they exemplify, the experience would have been more fulfilling had there been a broader intellectual context. Similarly, the Vatican and Notre Dame Cathedral in Paris are magnificent, and the fact that all those people labored for so many years to express their faith may be admirable, but I found the structures to be akin to beautiful facades harboring emptiness.
Events and places are meaningful to me only when there is an intellectual context. Contrast the zombie feeding coins into a slot machine for hours on end with the experience of understanding a deep idea for the first time. When I first took Quantum Field Theory, I recall the thrill of seeing the connection between fields and particles and how so many laws of physics arise from a variational approach (i.e., nature acts in a way that minimizes or optimizes a quantity). When struggling with my attempts to understand the fundamental limits of the nonlinear susceptibility, I recall the final stretch of my frenzied derivations with fondness: a beautifully simple equation crystallized into its final elegant form out of a huge mess of mathematics. In that moment, I understood something new to the world for the first time.
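The variational idea mentioned above can be stated compactly. As a sketch, for a field phi with Lagrangian density L, demanding that the action be stationary yields the equations of motion:

```latex
% Action principle (sketch): the laws of motion follow from requiring
% that the action S be stationary under variations of the field \phi.
S[\phi] = \int \mathcal{L}\left(\phi, \partial_\mu \phi\right)\, d^4x,
\qquad
\delta S = 0
\;\Longrightarrow\;
\partial_\mu \frac{\partial \mathcal{L}}{\partial(\partial_\mu \phi)}
- \frac{\partial \mathcal{L}}{\partial \phi} = 0 .
```

The Euler-Lagrange equation on the right is the field equation; choosing different Lagrangian densities reproduces, for example, the Klein-Gordon equation or Maxwell's equations, which is the sense in which so many laws of physics arise from a single optimization principle.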
Learning is characterized by long periods of drudgery, punctuated with the occasional elation of understanding. Similarly, humanity evolves through long cycles of stasis that are interrupted with abrupt paradigm shifts, which change the course of civilization. If I am forced to travel, my preference is to visit those places that produced singular moments that led to revolutions in thought.
Sunday, September 12, 2010
This 1999 film chronicles the whistle-blower Jeffrey Wigand, a former VP of research at the tobacco giant Brown & Williamson. Dr. Wigand was fired for voicing to his management his concerns about the presence of carcinogens in cigarettes. His severance package was contingent on his honoring a confidentiality agreement. Such agreements protect intellectual property, but Brown & Williamson used one to prevent Wigand from disclosing shady corporate practices.
At the time, tobacco industry executives were fully aware of the presence of certain carcinogens in cigarettes, but the industry was reluctant to remove them because of their effect on a cigarette's flavor. In testimony before Congress, the CEOs of the seven largest tobacco companies claimed that they did not believe nicotine was an addictive drug. Internally, however, these same executives specifically instructed their scientists to develop means to enhance the addictive effects of nicotine. In his testimony, Wigand made it clear that it was common knowledge in the industry that "a cigarette is a scientifically designed drug delivery device that is intentionally engineered to deliver nicotine to the brain in seconds."
It took a great deal of courage for Dr. Wigand to expose the facts. To discredit him, the tobacco companies orchestrated a smear campaign attacking his integrity. Other tactics included death threats as well as financial blackmail. Before he could testify in a suit filed in Mississippi against the tobacco industry, his tobacco-friendly home state of Kentucky issued a gag order preventing him from testifying. By ignoring the order, he risked incarceration upon returning to Kentucky. This ordeal cost him his marriage.
It was clear that the tobacco companies' behavior was despicable. Motivated by profit, they formulated methods to optimize the delivery of nicotine to addict users into a life-threatening habit. This brings to mind allegations that the oil companies are using similar tactics to discredit global warming research.
The stakes of global climate change are much higher. If the oil companies are discrediting serious research in public forums for the deliberate purpose of gaining political advantage at the expense of the environment, then the decision makers should be held legally culpable for their actions.
The Insider left me with a great respect for professional journalists, who are doggedly determined to expose the truth. I am concerned that this type of professionalism is being lost as news organizations pander to readers and viewers who are more interested in entertainment than information - consumers who would rather have support for their ideology than the truth.
Journalistic market pressures have resulted in news outlets that cater to ideology. Fox News is a glaring example of ideology taken to the extreme. Jon Stewart of The Daily Show does a great job of exposing the gross lies championed by Fox. Fox News takes video clips out of context, makes statements without evidence, uses repetition to give the sensation of truth, and fires up viewers into misplaced anger. This irresponsible behavior borders on the criminal.
Some cases in point:
1. Repeating over and over again that President Obama was foreign-born even after his birth certificate was made available to the public.
2. Glenn Beck asking why the other networks did not show the videos of Israeli servicemen being beaten as they boarded the Turkish flotilla. (Jon Stewart showed date-stamped videos of several other networks that were airing the footage at the same time as, or prior to, Fox.)
3. According to Mediamatters.com, "Saudi Prince Al-Waleed bin Talal -- the second-largest shareholder of Fox News’ parent company News Corp. -- has deep funding ties to Imam Feisal Abdul Rauf, the “principal planner” of the Islamic community center in lower Manhattan." It is interesting that Fox News associates Imam Feisal Abdul Rauf with terrorist groups, but fails to mention his ties with their #2 stockholder. Ironically, following the logic presented on Fox News, Fox News is directly contributing to terrorists.
4. Fox News showed a short video clip of President Obama stating that he was enacting the biggest tax increase ever on everyone. Jon Stewart played the full clip, in which Obama stated that if the Bush tax cuts were allowed to expire, the result would be the largest tax increase ever on everyone. Obama was justifying his decision not to allow the tax cuts to expire.
The list goes on.
In conclusion, corporations and the news media should be held to the highest standards of honesty and integrity, which are the pillars of truth. Executives who lie to increase profit at the expense of humanity should be held legally liable, as should members of the media who intentionally distort the truth to increase ratings. If we as a nation are to make decisions - even wrong ones - let them be made solely on the basis of the facts.
Saturday, September 4, 2010
An article in the Wall Street Journal by Hawking and Mlodinow on Why God Did Not Create the Universe begins with:
"According to Viking mythology, eclipses occur when two wolves, Skoll and Hati, catch the sun or moon. At the onset of an eclipse people would make lots of noise, hoping to scare the wolves away. After some time, people must have noticed that the eclipses ended regardless of whether they ran around banging on pots.
"Ignorance of nature's ways led people in ancient times to postulate many myths in an effort to make sense of their world. But eventually, people turned to philosophy, that is, to the use of reason—with a good dose of intuition—to decipher their universe. Today we use reason, mathematics and experimental test—in other words, modern science. "
and ends with
"Our universe seems to be one of many, each with different laws. That multiverse idea is not a notion invented to account for the miracle of fine tuning. It is a consequence predicted by many theories in modern cosmology. If it is true it reduces the strong anthropic principle to the weak one, putting the fine tunings of physical law on the same footing as the environmental factors, for it means that our cosmic habitat—now the entire observable universe—is just one of many.
"Each universe has many possible histories and many possible states. Only a very few would allow creatures like us to exist. Although we are puny and insignificant on the scale of the cosmos, this makes us in a sense the lords of creation."
This article and the hundreds of comments it elicited bring up a point that I have wanted to discuss for some time: the idea of multiverses. The multiverse is a collection of universes predicted by certain cosmological theories. Many physicists object on the grounds that the existence of the other universes is undetectable; they would therefore argue that "believing" in a multiverse is akin to accepting God.
This objection is similar to objections that have been leveled against other new ideas in science that are now commonplace. For example, Ernst Mach objected to the idea that atoms were "real" even though chemists had had indirect evidence for a century. His gripe was based on his conviction that if something were real, it could be seen directly. Ludwig Boltzmann, Mach's contemporary, struggled for years to get physicists to accept atoms, a critical ingredient of his thermodynamic theories. As the stalwarts died out, and as theories based on atoms were experimentally verified from every possible angle, the new generation of physicists of the early 1900s accepted atoms as real.
These arguments hinge on the question of what is meant by "real." Mach's criterion was that one had to be able to see reality "directly" using, for example, a microscope. Ironically, Galileo's contemporaries did not believe that his views of the heavens were real because the telescope purportedly distorted reality. He demonstrated that the telescope depicted far-away objects with true fidelity by comparing what was observed through it with direct visual observation of terrestrial objects. Today, we have been to the moon and confirmed Galileo's observations of craters and other features.
A more modern view of what is real might be defined by its consequences. Though one cannot see an electron, its existence has astronomically many testable consequences that have been confirmed. Every time we use a computer, watch TV, or drive a car, the existence of the electron is confirmed and reconfirmed.
Einstein never believed in quantum mechanics because of its inherently probabilistic interpretation. He felt that someday an underlying theory with hidden variables would be found that would make quantum mechanics fully deterministic. In the 1960s, Bell showed very elegantly that local hidden-variable theories make predictions that differ from those of quantum mechanics, and subsequent experiments have sided with quantum mechanics. It would have been interesting to have gotten Einstein's impressions of Bell's work, had Einstein lived a few more years.
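Bell's result is most often quoted in the CHSH form (due to Clauser, Horne, Shimony, and Holt, building on Bell's 1964 paper). As a sketch: for correlations E between spin measurements along directions a, a' and b, b' on two separated particles,

```latex
% CHSH inequality: any local hidden-variable theory bounds the
% correlation combination S by 2, while quantum mechanics allows
% values up to 2*sqrt(2) for entangled states.
S = E(a,b) - E(a,b') + E(a',b) + E(a',b'),
\qquad
|S| \le 2 \;\; \text{(local hidden variables)},
\qquad
|S|_{\mathrm{QM}} \le 2\sqrt{2}.
```

Measured values of S exceeding 2 are what rule out the local hidden-variable theories that Einstein had hoped for.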
A deeper issue with quantum mechanics is the central role of the wavefunction, which cannot be directly measured. Only the consequences of the wavefunction are observable. Once again, because quantum mechanics makes so many accurate predictions in the realm of atoms, molecules, nuclei, and force fields, we accept that wavefunctions exist, not as an object that we can directly sense, but as proxy for some unmeasurable structure of reality.
Getting back to the multiverse, if a cosmological model of the universe predicts the existence of multiverses AND simultaneously predicts everything that we observe in our own universe, then accepting the existence of a multiverse is qualitatively no different than accepting much of what is standard physics of today. The key is that the theory must predict what is observed as well as making new testable predictions. The postulate of the existence of God, on the other hand, does not make any quantitative predictions that can be tested.
Are multiverses equivalent to theology? I would say not. Science seeks answers through hypothesis and experiment. Theology, according to dictionary.com is "the field of study and analysis that treats of god and of God's attributes and relations to the universe; study of divine things or religious truth; divinity." Theology presupposes the existence of God while science accepts a theory only when it is consistent with measurements. If an experiment falsifies a hypothesis, it is discarded. Theologians, on the other hand, take without proof the existence of God.
Monday, August 30, 2010
The Washington Post said it best,
"When Parsons won for his role as uber-serious physicist Sheldon Cooper on "The Big Bang Theory," voice-over man John Hodgman joked that nerds were "taking the streets" in fits of joy. He was half-right; they actually took to Twitter to celebrate this victory for geeks everywhere."
Sunday, August 29, 2010
I devote each Saturday and Tuesday to course development. Though I took a couple of breaks to eat and surf the web, I remained true to my schedule. After grading last week's homework assignment and a quiz on Saturday, I continued to plow through the textbook and organize the material for my lectures.
My serious teaching career (excluding my work as a teaching assistant in grad school and as an instructor at a community college) started 20 years ago at Washington State University. Those years have not dulled my enthusiasm for teaching. I continue to make adjustments based on my past successes and failures.
But how are improvements possible when good teaching is difficult to define? While there is mounting research on pedagogy, in my mind many of these studies are inherently flawed. For example, while courses based on peer instruction and conceptual problem solving certainly lead to better understanding, they make it difficult to cover the same amount of material. Admittedly, it is wasteful to teach lots of stuff that everyone forgets; but coverage is also important - especially if a course is a prerequisite in a sequence of courses. Such trade-offs must be carefully weighed in an undergraduate curriculum, especially for the student who will take only one or two physics classes in a lifetime.
Graduate programs, on the other hand, are populated by motivated students who have both an interest in and an aptitude for physics. PhD programs like ours are based on coursework and research. All students take a set of core classes in their first two years. To become a PhD candidate, they must pass the PhD qualifying exam, which is given at the end of the fourth semester.
The qualifier exam committee solicits two problems from each faculty member. A list of topic areas is used as the basis of the solicitation to ensure that the test is well-balanced. Thus, the qualifier exam reflects what the faculty as a whole believe is the core competency of a PhD in physics, and is not necessarily limited to the material covered in class. However, past exams are made available to the students as a study aid.
I find this system to be an ideal game-theoretic approach to ensuring buy-in by all parties. It would reflect poorly on me as a professor if the students did poorly on the part of the exam associated with my course. This makes me think very carefully about how to cover the material most effectively to strike the optimal balance between breadth and depth. The looming qualifier exam motivates the students to learn the material for long-term understanding, not just for a particular test. Everyone works harder as a result.
My approach is to stress the fundamentals and give lots of examples. I gloss over topics that the students can learn on their own. When preparing a lecture, I first read the material, sometimes many times, until I feel I have a good understanding of the concepts. I then identify what I think are the important points, and use them as anchors in my lectures. I post readings and homework assignments on the web well ahead of the lecture dates so that the students are well prepared for class.
In crafting my lectures, I select problems from the end of the chapter on the basis of how well they complement my notes and whether or not they challenge the student to think more deeply. Selecting appropriate problems requires me to first suffer through the calculations. This time-consuming task gives me ideas on how to fine-tune my notes to address potential misunderstandings.
The night before my class, I go over my notes to make sure that I stress the important issues. The morning before my class, I copy my notes to scrap paper, going over in my mind the flow of my presentation. I pay particular attention to the anchor points.
Finally, I reproduce my lecture on scrap paper without looking at my notes. My preparation is complete when I am able to navigate from one anchor point to the next based on general principles of physics and logic.
So to finally get back to my diary entry after this long-winded diversion, I spent most of Saturday reading the textbook, writing notes, and working physics problems. This reminded me of how learning physics requires an intense and prolonged effort. I am thankful to my employer that my teaching responsibilities force me to spend time thinking very deeply about the subject that I love so dearly. And then I get to share my enjoyment with a group of acolytes who share my enthusiasm for learning.
Thursday, August 26, 2010
At 9:15am I attended a meeting with a prospective candidate who was interviewing for the position of dean of the College of Sciences. Much of the discussion centered on the doom and gloom of the next round of budget cuts, which could be as high as 10%. This comes on top of a 30% cut last year, which was on top of smaller cuts in previous years.
It's all a matter of simple math. The cost of educating a student is X. Let's say that the state pays 50% and the tuition paid by the student (or scholarships) covers the remaining 50%. Because of the bad economy, the state can now afford to pay only 35%. If we were to cut 15% of the faculty, we would teach fewer courses, so the students would have less choice and more students would be packed into larger classes. If we bring in fewer students, revenue drops. If faculty teach more courses, research productivity and funding drop. The only real solution is to raise tuition to cover the difference; but, in this bad economy, increased tuition would shift the burden to the families of students, many of whom are also in a financial bind. These are tough times and there are no clear solutions.
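As a back-of-the-envelope check of the arithmetic above (a sketch using the illustrative 50% and 35% state-share figures from this post, with the cost X normalized to 1):

```python
# Budget arithmetic sketch: the state share of the cost X of educating
# a student falls from 50% to 35%; tuition must absorb the difference.
cost = 1.0                              # normalize the cost X to 1
state_before, state_after = 0.50, 0.35  # state share before and after the cut

tuition_before = cost - state_before    # 0.50 of X
tuition_after = cost - state_after      # 0.65 of X, if tuition covers the gap

shortfall = state_before - state_after  # 0.15 of X must come from somewhere
tuition_increase = (tuition_after - tuition_before) / tuition_before

print(f"Shortfall: {shortfall:.0%} of X")                  # Shortfall: 15% of X
print(f"Required tuition increase: {tuition_increase:.0%}")  # 30%
```

The striking part is the leverage: a 15-point drop in the state share translates into a 30% increase in what each student pays, because tuition starts from only half of the cost.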
After the meeting, I spent a couple hours working on our paper and going over my lecture notes. Subsequently, I had a quick lunch and attended a practice talk by a postdoc. After that, I spent a couple of hours in my office answering questions about a homework assignment that is due tomorrow in the graduate classical mechanics class that I am teaching.
In addition, Shoresh reported on the progress he is making in testing the sum rules in quantum wires, and Shiva showed me some interesting data from his new temperature-controlled chamber. We also talked about some new ideas for designing a better experiment for simultaneously measuring the linear absorption spectrum and the ASE signal.
When I got home, I worked some more on my class notes and wrote up solutions to one of the homework problems. Before we ate dinner, our son Skyped us to show off his cool new digs - an ultramodern house with a movie theater. He will be house-sitting for one of his professors for the academic year.
After dinner, I played floor hockey. I felt unusually fatigued and could do nothing right. I got home at 9:15pm, took a shower, and worked on my notes a bit. I really did not feel like working, so I decided to make an entry on my blog.
This post is incredibly boring; but, it captures my mood. To be happy, I need to spend more time thinking about physics. If I do not exceed a minimum threshold, I feel that my day has been wasted. I plan to do lots of work this weekend, and look forward to the enjoyment associated with the rush of firing neurons.
Monday, August 23, 2010
It is wiser to treat each issue on its own merits rather than parrot the party line, and to change one's mind as new evidence accumulates - qualities lacking in ideologues.