Wednesday, December 28, 2011

Annual Review of Faculty

I will once again be doing annual reviews of our faculty. Here I share my views on the process and describe the metrics I use to make evaluations.

There are three general categories: (1) teaching, (2) research, and (3) service to the university and the profession. The annual review is meant to give the faculty feedback about their performance for a particular year, and is one factor in determining raises. However, given budget cuts, there will be no raises in the foreseeable future.

While a faculty member may have a long and distinguished career, it is possible to have up and down years for the simple reason that the metrics fluctuate; one year a faculty member may have 10 excellent papers, followed by a year with none. However, the annual reviews will not fluctuate as much as the metrics, given that such fluctuations are inevitable and usually not a sign of a problem.

Teaching is evaluated based on several pieces of information: the syllabus, which shows how the course is organized; a narrative that faculty usually provide on new ideas they explored to enhance learning outcomes; and student feedback in the form of course evaluations.

I am strictly opposed to standardized teaching evaluation forms because they often lead to meaningless comparisons between professors in the form of numbers that students assign using very different criteria. I marvel at the well-reasoned forms that each of our professors produces and the unique information that they solicit. Since each faculty member may be using a different approach, a single number assigned by a student does not tell a meaningful story. Thus, I usually read every comment made by each student - even in classes with 250 students. It takes lots of time, but it provides quality information with which I can get a better sense of how well our courses are working.

As a case in point, several years ago I scanned through a bunch of evaluations of a young professor who was teaching a large section with hundreds of students and noticed two things. The mean was slightly below average (i.e. the typical student chose average or below average when comparing this professor to others at our university), and the distribution was bimodal, with a quarter of the students giving him the highest marks and three quarters giving him below-average ratings.

I gained great insights from the correlations between the numbers and the narratives. The typical student who gave a low ranking said the course was "unfair" because it was too hard. The students giving him higher marks stated that the course was challenging but the professor spent lots of time explaining complex concepts to the students during class as well as during office hours. In addition, this professor scheduled non-mandatory help sessions so that students could ask questions and get extra help.

A picture emerged of a dedicated teacher who expected excellence but was willing to expend a great deal of additional effort to make sure that the students learned. While his average course evaluation was lower than many of the other faculty in our department, I judged him to be a more dedicated teacher than those whose evaluations were high and the comments uniformly positive about the fact that the class was "fun" and "easy."

Research is more difficult to judge. The fact that a funding source is willing to pay top dollar to a given faculty member is a good indication that this individual and his/her work are considered useful or interesting. Similarly, papers that make it through the peer review process at good journals have convinced an editor and a few reviewers of their correctness and importance. If a piece of work gets many citations, it shows that people are reading the papers and finding them useful. Thus, a blend of these factors provides a good indicator of research productivity and quality.

However, these indicators can have the opposite meaning. For example, a paper may be cited many times as an example of bad science. Or, an individual with lots of citations may be a technician who provides specialized samples to many research groups. While such a person is contributing to the science by providing samples, the number of citations may not be a sign that the work is particularly interesting or creative.

Lots of funding is not always a sign of good science. A company may grant big bucks to a researcher to test a trivial property of a product. In contrast, a small grant from the National Science Foundation for work by a single PI who is challenging our perceptions of the nature of space-time would carry much more weight in my mind.

I try to consider all these factors together when evaluating the research productivity of a faculty member, which includes learning a little bit about their research. I may look at the h-index, the number of publications, citations per dollar spent, or other ratios to develop an impression of the research quality. Faculty members who attain Fellow status in a professional society earn such honors through significant lifetime contributions to their fields, so fellowship in a society factors into the annual review. In the end, I make a value judgement that may or may not be in line with what others think. I accept the fact that the process is far from perfect.

Finally, there is service. Every faculty member is expected to contribute to the operations of the department and the university. Faculty members sit on tenure committees, thesis examination committees, search committees, and do all kinds of thankless tasks such as recruiting students, writing reports for the administration, and doing self studies that support our claims that we are a high quality department relative to our peers. I look for not only quantity of service, but the impact that it has on our department and university.

Service to the profession takes the form of membership on program committees for conferences, being on editorial boards of journals, reviewing papers for journals or proposals for funding agencies, organizing conferences, acting as an external examiner at other universities, and serving on panels that take advantage of an individual's scientific expertise. An active scientist does this kind of service as a matter of daily routine. I thus expect to see a substantial professional service component from each faculty member.

To make a final assessment, all of these factors are taken into account. A positive annual review requires significant accomplishments in at least two of the three major areas, with emphasis on research and education. What makes my job especially difficult is that our department is very strong. All faculty members are doing high-quality research, are well known in their fields, are well funded and are advising undergraduate/graduate students.

I have a few weeks of break until I need to get around to this unpleasant task, which takes five full days to complete. For now, I am enjoying my break doing some new physics, including an interesting calculation that was motivated by my proofing of one of Shoresh's manuscripts. Before the intensity of the new semester begins, I need to proof and submit a few more manuscripts for publication, as well as review a few more papers.

This past year has been a local maximum with a record 12 publications in refereed journals (almost 10% of my lifetime 123 publications), not to mention a bunch of invited and contributed talks. Now I will need to get back to work to maintain this momentum. But not until I take a day off to enjoy family, friends, and this wonderful place we call home.

Saturday, December 24, 2011

The amazing world of numbers

The other day, I was trying to get straight the words for the numbers in Italian. While our counting system maintains a strict pattern, the words representing the numbers are irregular. The exceptions to the regularity got me thinking about how number systems developed in primitive times. As a good scientist, I will first propose my hypothesis without looking at the history books, though I admit having some knowledge of the topic.

Consider the necessity of counting to the shepherd who lets his flock out into the field and needs to know if any are missing at the end of the day. The shepherd added a stone to a pile for each sheep going out into the field and then removed a stone for each returning one. If a stone remained, the shepherd knew that a sheep had gone missing. In addition to being a practical counting technique, this procedure established the one-to-one correspondence between two sets of objects with the same number of elements. From a physicist's point of view, one could say that this also established the principle of conservation of stones and conservation of sheep: their numbers did not change; they just moved about from one location to another.

To solve the issue of proliferating sheep and shortages of stones, the shepherd recognized that he could use a different type of stone to represent, let's say, ten sheep. Thus, after the ninth stone was placed on the pile, the tenth sheep would be represented by a single ten-sheep stone while the previous nine were removed. An extra stone would again represent each additional sheep until the twentieth, at which point the next ten-sheep stone was used. Later, it became apparent that one need not use a special stone representing ten sheep; rather, one could place an ordinary stone in a different spot. Thus evolved the base 10 system, with separate symbols, or numerals, representing zero through nine, and with these same numerals representing tens when placed in the tens spot, etc.
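The shepherd's trading rule is, in modern terms, exactly the algorithm for writing a number in positional notation: keep trading groups of ten for a single stone in the next spot over. A minimal sketch in Python (my own illustration, with the base left adjustable):

```python
def to_digits(count, base=10):
    """Convert a tally into positional digits: each 'spot' holds the
    stones that remain after trading groups of `base` for one stone
    in the next spot over."""
    if count == 0:
        return [0]
    digits = []
    while count > 0:
        digits.append(count % base)  # stones left in the current spot
        count //= base               # trade groups of `base` upward
    return digits[::-1]              # most significant spot first

print(to_digits(247))     # [2, 4, 7]: two hundreds, four tens, seven ones
print(to_digits(247, 5))  # [1, 4, 4, 2]: the same count in base 5
```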

The "base" 10 number system most likely originated because of the ready availability of 10 fingers. I find it interesting that the word for finger and number shares the word "digit."

Other bases are also possible. When we were in Italy, I was at first puzzled by the Roman numerals engraved on the ancient buildings, which differ from what we commonly call Roman numerals today. For example, the modern form IV was represented as IIII, and what we would recognize as IX was written as VIIII. This even carried over to larger numbers: CD was written as CCCC. The original form of the Roman numeral system thus suggests a base 5 component, as one would expect from counting on the fingers of one hand.
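The additive scheme is simple enough to state as a rule: write the largest symbol that still fits, subtract its value, and repeat, with no subtractive shorthand like IV. A quick sketch of that rule:

```python
# Additive Roman numerals (IIII rather than IV), as on the ancient buildings.
SYMBOLS = [(1000, "M"), (500, "D"), (100, "C"), (50, "L"),
           (10, "X"), (5, "V"), (1, "I")]

def to_additive_roman(n):
    """Greedily append the largest symbol that fits, then repeat."""
    out = []
    for value, symbol in SYMBOLS:
        while n >= value:
            out.append(symbol)
            n -= value
    return "".join(out)

print(to_additive_roman(4))    # IIII
print(to_additive_roman(9))    # VIIII
print(to_additive_roman(400))  # CCCC
```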

Now back to Italian. The pattern from zero to nine is as one would expect for base ten, with unique names corresponding to each numeral. Above dieci (ten), the system becomes schizophrenic. Undici (eleven) seems to be saying one-and-ten, but quindici (fifteen) is irregular in the sense that it is not a compound of dieci and cinque (five). After sedici (sixteen), the pattern reverses: diciassette, diciotto, diciannove. Interestingly, the naming pattern for 20 and above follows a strict convention without exception. Happily, 50 is cinquanta, not quinquanta as I would have expected given the word for 15. This pattern suggests an original base 16, or perhaps 15 with origins in the Roman base 5 system, which was later patched to be consistent with the base 10 convention. Whatever the case, the words hint at a mixture of systems.

Words and grammar can carry secret messages from the past. Ideas that were once embedded in peoples' minds crept into language and became firmly rooted once they were formalized in written form. Thus, what remains today in any language provides a snapshot of common usage from the past, which reflects the understanding of those times.

I was excited the morning that these ideas raced through my mind. I then thought about numbers in different languages. The words for the numbers in English are distinct until thirteen, which takes the form of three-and-ten, etc. Could this be the remnant of a base 12? This seemed plausible, given that some units of time (12 hours making half a day and 12 months making a year) show a preference for 12. And don't forget 12 inches to a foot.

I then rattled off the numbers in Ukrainian. It was purely base ten. I had taken French in high school and some in college, but my French got totally erased when I started to learn Italian (except for the time French got mixed into my Italian, when I said to a French colleague in Italy "qualchechosa," a hybrid of "qualcosa" (Italian) and "quelque chose" (French)). So, I asked my wife to remind me how to count to twenty, and the pattern turns out to be the same as in Italian!

Stones may correspond to sheep, but how does one deal with fluids? This is an important quantity when bartering in liquids (as in ordering a beer). At some point, humans must have recognized that liquids are conserved. In other words, they can be moved around and split into smaller amounts, but when recombined, the quantity fills the original container in the same amount.

In the case of liquids and weights, there appears to be a preference for base 2, which makes sense given the ease with which we can split liquids in half over and over again. There are 2 cups to a pint, 2 pints to a quart, 2 quarts to a half gallon, and 2 half gallons to a gallon. Also, there are 8 ounces to a cup, and a pint is 16 ounces (base 16!). And don't forget 16 ounces to a pound. The relationship between weight (or more precisely, mass) and volume is clear: 16 fluid ounces of water or beer weighs 16 ounces, so a pint weighs a pound. No wonder!
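Expressed in fluid ounces, every one of these units is a power of two, which makes the base-2 pattern explicit:

```python
# US customary liquid measures in fluid ounces -- each one a power of two.
units = {"ounce": 1, "cup": 8, "pint": 16,
         "quart": 32, "half gallon": 64, "gallon": 128}

for name, fl_oz in units.items():
    print(f"1 {name} = {fl_oz} fl oz = 2**{fl_oz.bit_length() - 1}")
```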

The birth of the decimal Metric System (base ten) coincided with the French Revolution, when two platinum standards representing the meter and the kilogram were deposited in the Archives de la République in Paris, on 22 June 1799. This was the first step in the development of the present International System of Units, in which the basic units of distance, mass, time and current are the meter, kilogram, second and ampere, respectively.

For convenience, modern-day computers use binary, or base two: ones and zeros are easily represented by a transistor being on (1) or off (0). For programming convenience, the bits (binary digits) are combined into groups of 4, leading to a hexadecimal representation (base 16), where two hexadecimal "digits" can represent the numbers 0 through 255. As such, pairs of hexadecimal numerals are used to encode the alphabet (upper and lower case), the ten base 10 numerals, as well as a bunch of special symbols.
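The grouping is easy to see with an example: one byte is eight bits, which splits into two 4-bit nibbles, each of which is one hexadecimal digit. The ASCII letter "A", for instance:

```python
byte = ord("A")                              # ASCII code for 'A' is 65
print(f"{byte:08b}")                         # 01000001 -- eight bits
print(f"{byte >> 4:04b} {byte & 0xF:04b}")   # 0100 0001 -- two nibbles
print(f"{byte:02X}")                         # 41 -- the same byte in hex
```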

The simple act of counting eventually led to the transformation of primitive society into one that can understand the mysteries of nature, which since antiquity had been thought incomprehensible without deities. Our language, on the other hand, provides clues to the thought processes that went into the development of counting, which forms the basis of mathematics, physics, the sciences, engineering and technology - historically in approximately that order.

After writing this post, I searched Wikipedia for additional information on the bases used by various civilizations over the eras. In the third millennium BCE, the Sumerians used a base 60 numeral system, called the sexagesimal system. The symbols used are shown to the right. Incidentally, our system of 60 seconds to the minute and 60 minutes to the hour is -- you got it, base 60! But so are angular measurements. There are 60 arc minutes in a degree, and 60 arc seconds in an arc minute. This connection makes sense given that the sun's apparent motion in the sky is measured as a change in angle over a period of time.

It is obvious how the base 5, 10, and 20 systems follow from counting with fingers and toes. Thus, while the sexagesimal system is base 60, its symbols follow a base 10 pattern. However, an advantage of base 60 is that 60 has a large number of factors (1, 2, 3, 4, 5, ... you can determine the rest), so that fractions are easier to represent.
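The advantage is easy to check: 60 has twelve divisors, so halves, thirds, quarters, fifths, and sixths all come out as whole numbers of parts, something base 10 (with only four divisors) cannot match.

```python
def divisors(n):
    """List every whole number that divides n evenly."""
    return [d for d in range(1, n + 1) if n % d == 0]

print(divisors(60))  # [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60]
print(divisors(10))  # [1, 2, 5, 10]
```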

According to the Wikipedia article on base 10, peoples using other bases are as follows:

  • Pre-Columbian Mesoamerican cultures such as the Maya used a base-20 system (using all twenty fingers and toes).
  • The Babylonians used a combination of decimal with base 60.
  • Many or all of the Chumashan languages originally used a base-4 counting system, in which the names for numbers were structured according to multiples of 4 and 16.
  • Many languages use quinary number systems, including Gumatj, Nunggubuyu, Kuurn Kopan Noot and Saraveca. Of these, Gumatj is the only true 5–25 language known, in which 25 is the higher group of 5.
  • Some Nigerians use base-12 systems.
  • The Huli language of Papua New Guinea is reported to have base-15 numbers. Ngui means 15, ngui ki means 15×2 = 30, and ngui ngui means 15×15 = 225.
  • Umbu-Ungu, also known as Kakoli, is reported to have base-24 numbers. Tokapu means 24, tokapu talu means 24×2 = 48, and tokapu tokapu means 24×24 = 576.
  • Ngiti is reported to have a base-32 number system with base-4 cycles.

So, what began as an innocent language lesson on the numbers led to a train of thought that occupied my mind for a morning and gave me the satisfaction of understanding something that was new to me. The fact that I had not learned something new to the world did not bother me a bit.

With all this numerology and talk of ancient number systems, am I worried about 2012? What do you think?

I am finishing this post after our traditional zillion-course Ukrainian Christmas dinner, followed by a glass of eggnog and countless chocolate-covered pretzels, so errors in my logic are undoubtedly plentiful. No apologies!

I wish all of you the best that the holiday season has to offer. Given the international nature of my handful of regular readers, I would be interested in hearing how you form the words for the numbers in your native languages, and at what point, if any, they become irregular. In the meantime, I will be sipping on another eggnog. Good night!

Wednesday, December 21, 2011

A correspondence on the intrinsic hyperpolarizability

I often get correspondence from scientists from all over the world. One such message arrived a couple of days ago asking about the intrinsic hyperpolarizability and why it is a useful quantity for comparing molecules. Below is the original message and my response:

Email to me:

Dear Sir,

I have a doubt regarding the beta-intrinsic value. Which molecule is of greater practical importance, having a greater beta-intrinsic value or a greater beta-value? If a molecule has a greater beta-intrinsic and lesser beta-value as compared to its related counterpart, can it be regarded as a better molecule for practical applicability?
Thanks.

Kind regards,
Sincerely,
So-and-so

My response:

Dear Dr. So-and-so,

The intrinsic hyperpolarizability is used to understand the origin of the nonlinear response of a molecule. Making a molecule larger will yield a bigger value of beta, but the intrinsic hyperpolarizability tells you how effective it is given its size. This kind of understanding can lead to the rational design of better molecules by first identifying ones that have a large intrinsic hyperpolarizability and then making them larger using the same "shape" or theme.

Having a molecule with a large hyperpolarizability is not in itself technologically significant because that property alone will not necessarily make it useful in a device. It needs to be incorporated into a material with a large bulk response and then formed into a device component that is photochemically stable, etc. Thus, a molecule with a large beta is not of technological interest without lots of other work to determine its other properties; and a small intrinsic beta makes it less interesting from the point of view of science.

A large-beta molecule may be of interest to others if it has other unique properties, such as the ability to attach to microdots to enhance local electric fields, or if it acts as a charge sensitizer in a polymer, etc.


Best,
Mark G. Kuzyk
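For readers unfamiliar with the quantity: the intrinsic hyperpolarizability is usually defined as the zero-frequency hyperpolarizability divided by its fundamental limit, which depends only on the number of electrons N and the energy E_10 of the first excited state (a sketch of the standard form):

```latex
\beta_{\mathrm{int}} = \frac{\beta_0}{\beta_{\max}},
\qquad
\beta_{\max} = \sqrt[4]{3} \left( \frac{e \hbar}{\sqrt{m}} \right)^{3}
\frac{N^{3/2}}{E_{10}^{7/2}}
```

Because the limit grows with the number of electrons, dividing by it removes the trivial advantage of simply making a molecule bigger, which is precisely the point of the response above.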

In the near future, I plan to write a description of our research aimed at the non-expert so (s)he can gain an appreciation of our work, which is based on trying to understand complex properties of a system by looking at large-scale patterns. Stay tuned.

Tuesday, December 13, 2011

A video game from the past

When I was in high school (circa 1975), I built a computer from a kit. I spent days and nights soldering together components and inserting chips. The final box was similar in size to present-day workstations (click here for a photo and specs). That is where the similarity ends.

In those archaic computers, programs were written in machine language and entered in binary with toggle switches on the front panel, and the results were displayed on a couple dozen light-emitting diodes. I was excited just being able to write a program that made the row of lights simulate motion.

My dream was to someday acquire a teletype terminal so that I could type in programs and print them out rather than seeing one line of code at a time in glowing red dots. Worse yet, there was no external storage, so once the computer was turned off, the programs disappeared forever. I kept well-organized, hand-written sheets of paper in a binder with code so that I could quickly re-enter it with the toggle switches in the future.

A few years later, I was ecstatic that computers for the home were available for the low price of a thousand bucks. They still had no disk drive for storage (I would save up for that later), but they came with a built-in keyboard and a plug for a TV set that acted as the display. I paid extra to upgrade my RAM from 16 to 48 kB - today I have almost a million times that amount of memory. Click here for detailed specs of these computers.

When I first got my Atari 800, I was up for two days straight writing code, over huge protests from my parents. I also had to plead with my frugal parents not to turn off the computer so that my programs would not disappear.

It took me about an hour to duplicate a simple version of the classic computer game Pong, with an on-screen paddle controlled by a joystick and a ball that bounced around the screen, making a sound each time it was deflected. I can still see my mother playing the game, shouting out curses when she missed that little ball. The Atari 800 was great with sound and video, so it was ideal for video games.

After I got my floppy drive (I think it had a few hundred kilobytes of space - my drives now have a million times more), I decided to write a space-battle video game, which I originally called Star Trap but later had to change to Stun Trap because of trademark issues (another company had already trademarked the original name). Before writing the game, I got documentation on how to call special routines that would activate pixels on the screen for animation. I also got an assembler so that I could write my code in the ultrafast machine language of the MOS Technology 6502B chip.

I spent a bunch of time coding and eventually started a company named Affine Inc. to manufacture and market Stun Trap. After incorporating, I got family members and friends to invest real cash. With these funds, we produced the game, including artwork for the cover of the box (right) and advertisements that appeared in computer magazines. At that time, we also opened a computer store and ran a mail-order computer business to generate more cash flow.

The final touch on the software end was my own invention for copy protection. The game would only run if our disk was in the drive. We made a precise plastic fixture that held each disk in place so that we could puncture it in a specific spot (we used a common push pin). The program would then write to the location of the hole and check for a write error; the error indicated an official copy.
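In modern pseudocode, the check amounts to: try to write at the known damaged spot, and treat a write failure as proof of an original disk. A loose re-imagining of the logic (hypothetical API -- the real code was 6502 assembly talking to the Atari drive directly):

```python
def is_original_disk(drive, hole_sector):
    """Return True if the disk appears to be a factory original.

    A physical hole was punched at `hole_sector` at the factory, so a
    write there must fail on a genuine disk; a copy made on an
    undamaged disk accepts the write, which unmasks it.
    """
    try:
        drive.write_sector(hole_sector, b"\x00")  # hypothetical drive call
    except IOError:
        return True   # write error: the hole is present, disk is genuine
    return False      # write succeeded: no hole, so this is a copy
```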

Back then, to get a good unit price on the packaging, we had to order several thousand boxes at once. These filled our tiny condo, as did our inventory until we moved into our computer store, with a backroom for inventory and a small sales desk for the mail order business.

Things moved quickly. After our ad appeared, we were picked up by a couple of big chains in New York City, which bought enough games to stock each store. The games sold at a retail price of about $25 to $30 and cost us about $12 to manufacture.

The big break came when K-Mart was considering buying 50,000 games. The downside was that they expected terms of net 30, which meant they had 30 days from delivery to pay, so we would have needed to borrow $600,000 (50,000 units at our $12 manufacturing cost) to supply the product. And they had the option to return the games if they didn't sell. Luckily, the Atari game market flopped before we had to decide how to handle such a potentially large sale and its associated risks. Our game sales dried up to zero overnight, and in the end, I believe we sold fewer than a few hundred units.

The computer store and mail order business continued operations for quite some time, but I had had enough. My one-year hiatus from grad school was getting me itchy once again to do physics, which I did with a vengeance upon my return. The rest is history.

A few days ago, I chanced upon a website that had a screenshot of my game (shown here) as well as the code to run it on an Xbox (someone had taken the time to transcribe the code). Given the small number of units that were sold, I am curious where they found a copy.

It was fun writing the game and producing a product that lives on via the internet. Hopefully, my present activities will be more fruitful to the citizens who support my passions than this game, which was wonderful when viewed through my eyes but in the end not a very popular product.

I would be interested in hearing from anyone if they are able to run the game on their Xbox. http://www.atarimania.de/detail_soft.php?MENU=8&VERSION_ID=5117

Tuesday, December 6, 2011

The risks of producing energy


There are risks associated with everything we do. Children are killed playing sports and adults are killed pursuing activities in their leisure time. The risk that we are willing to accept in any activity depends on a cost/benefit analysis. Often, feelings cloud our judgement.

A recent power failure got me thinking about our reliance on energy in cooking, cleaning, preserving foods, controlling our environment, making products, transportation, entertainment, information technology, etc. Almost every activity depends on the consumption of energy. The benefits of energy are clear, so what are the costs? Death, for one.

Deaths of workers in dangerous energy production facilities, as well as deaths in the general population from the byproducts of energy production, must all be taken into account in determining the risks. It is estimated that energy production costs nearly one and a half million lives per year. Fossil fuels, especially coal, are the biggest offenders. However, these numbers may be large simply because fossil fuels are responsible for a major fraction of energy production.

A more equitable way to compare mortality rates associated with energy production is to divide the death rate by the amount of energy produced by that source. The column labelled "Fatalities/TWh" (deaths per terawatt-hour of energy produced) in the table below shows this ratio.

| Energy source     | Deaths/yr | Fatalities/TWh | TWh/yr | Notes                                          |
|-------------------|-----------|----------------|--------|------------------------------------------------|
| Coal – world avg. | 1,000,000 | 161.00         | 6,500  | 26% of world energy, 50% of electricity        |
| Coal – China      |           | 278.00         |        | Utilizing heavily-manual practices             |
| Coal – USA        |           | 15.00          |        | Mostly open-pit and underground machine mining |
| Oil               | 342,000   | 36.00          | 9,500  | 36% of world energy                            |
| Natural gas       | 23,000    | 4.00           | 5,750  | 21% of world energy                            |
| Solar (rooftop)   | 6         | 0.44           | 12     | Less than 0.1% of world energy                 |
| Wind              | 22        | 0.15           | 150    | Less than 1% of world energy                   |
| Hydro             | 290       | 0.10           | 2,897  | EU deaths; 2.2% of world energy                |
| Hydro + Banqiao   | 3,500     | 1.40           | 2,500  | Includes the 171,000 Banqiao dead              |
| Nuclear           | 104       | 0.04           | 2,600  | 5.9% of world energy                           |
| World             | 1,390,000 | 55.70          | 25,000 |                                                |
| Unaccounted for   | 83,500    | 55.69          | 1,500  | Fatalities prorated                            |

Source: nextbigfuture.com
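The ratio column is just the division described above; the rows with complete data can be checked directly:

```python
# Deaths per year and energy produced (TWh/yr), taken from the table above.
sources = {
    "Oil":         (342_000, 9_500),
    "Natural gas": (23_000, 5_750),
    "Wind":        (22, 150),
    "Hydro":       (290, 2_897),
    "Nuclear":     (104, 2_600),
}

for name, (deaths, twh) in sources.items():
    print(f"{name}: {deaths / twh:.2f} fatalities per TWh")
```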

To be evenhanded, the death rates must include all possible causes. For example, the wind power and rooftop solar statistics include deaths of installers falling from tall ladders. The nuclear numbers include deaths due to exposure to nuclear waste as well as direct radiation exposure of residents near plants.

Since some of the numbers are difficult to estimate, it would not be surprising if they were off by 30% or even more. Keeping this in mind, we can still make informed judgements.

Until researching this topic, I was unaware of the Banqiao dam in China, which failed in the mid-1970s, resulting in an estimated 171,000 fatalities. This dam and the nuclear power plant in Chernobyl share two important similarities: they were built with the help of engineers from the former Soviet Union, and their failures were catastrophic in their devastating effect on the environment. However, the nuclear disaster cost far fewer lives, and the deaths were spread out over decades, amounting to perhaps 100 deaths per year. For a description of the technical details of a nuclear meltdown, see the lecture by Richard A. Muller. Muller's book, Physics for Future Presidents, addresses the issue of nuclear waste - a must-read for anyone who wants to develop an informed opinion on the topic.

According to the numbers, hydroelectric power generation is far more dangerous than the use of nuclear reactors. Some people might argue that the Banqiao disaster should not count because it was a singular event. By similar arguments, why not remove the Fukushima and Chernobyl accidents as well? Removing inconvenient data is bad science but makes for good politics and feeds ideology.

From the perspective of the individual, the effect of various health risks on life expectancy is more visceral. The table below shows estimates that I gathered from the internet, as well as rough numbers that I calculated for Fukushima. Alcohol consumption is comparable in risk to exposure to what is considered a huge dose of radioactivity, while obesity and smoking carry even higher risks.

| Health risk                                                                       | Life expectancy lost |
|-----------------------------------------------------------------------------------|----------------------|
| Smoking 20 cigarettes a day                                                        | 6 years              |
| Overweight (15%)                                                                   | 2 years              |
| Alcohol (US average)                                                               | 1 year               |
| All accidents                                                                      | 207 days             |
| All natural hazards                                                                | 7 days               |
| 300 mrem/yr in addition to background                                              | 15 days              |
| 1,000 mrem/yr in addition to background                                            | 51 days              |
| Standing at the main gate of the Fukushima plant for 1 month after peak emissions  | 1 year               |
| At the 6,000 mrem peak at Fukushima                                                | 306 days             |


An inhaled particle of smoke that increases the likelihood of death by the same amount as a nucleus of strontium should be equally feared. I would greatly prefer working at a nuclear power plant to working in a coal mine, and would prefer living a few miles from a nuclear plant to living near an oil refinery.

So, before eating those sugary snacks, consider a tour of your local nuclear reactor. Your life expectancy will be greater as a result.