ChatGPT is a program that purportedly converses with humans like an equal. Professors have identified students who used this artificial intelligence to write term papers by noticing that “the writing was too good for college students.” I was intrigued by this machine's uncanny ability to mimic human interaction through language, so I gave it a spin.
I decided on a simple test – asking GPT to write an essay about me. This was not meant to stroke my ego, but rather was motivated by laziness, since it required little work to check for accuracy. Also, my online presence is modest compared to that of luminaries in the field, but substantial, since much of my work requires publications in journals, books, seminars, colloquia, and conference presentations. Why this hits a sweet spot is answered below.
GPT’s essay is attached below. The statements that are false, highlighted in red text, make up a large part of the document’s substance. On that count, GPT has failed. At the other extreme, it gets a high grade from MS Word, with the sole suggestion to remove a comma, which is highlighted in magenta. The parts of the essay that were nominally not wrong were devoid of content, offering clichéd praise without factual backup.
Philosophy professors were impressed with GPT’s writings on the greatest thinkers from antiquity. How could this be, given that the essay on me was downright poor? My take is that GPT has at its disposal mega volumes of writing on well-known historical figures, so stringing together snippets of good writing with perfect grammar comes across as impressive. This will not work when the writer needs to understand multiple factual sources before synthesizing a new thought on the topic.
The error that GPT made in my case was confusing which information belonged to which subject. Information about me often appears on university websites that list multiple faculty on one page, or in announcements of awards, which again usually name multiple recipients. When GPT attributed the wrong alma maters to my various degrees, it undoubtedly picked these items from a university web page, where it incorrectly mixed and matched faculty with their attributes. And the announcement of my becoming a Fellow of the OSA also included winners of other prizes. You get the point.
Finally, much of the remaining text might be confused with a human’s writing, but not that of a sophisticated one. For example, GPT praises my work with superlatives in the tone used when award citations are announced. GPT goes no further than to parrot these, without explanation. It would have been impressive if it had explained how a particular discovery could lead to an application, but the text fell far short.
There are a couple of exceptions where GPT seems to have composed hair-raisingly meaningful sentences. I shudder at the fact that there may be a little something there beyond my criticism. But for now, I have no reason to believe that this had anything to do with the emergence of intelligence.
In conclusion, it is not surprising that GPT does well when no thought is required to understand the meaning of a topic, but a large gulf still separates it from thinking humans. Under some special circumstances, GPT may seem incredibly intelligent and thoughtful, but I rest easy knowing that no machine will replace the human mind anytime soon.
GPT's Essay on Mark G. Kuzyk
Mark G. Kuzyk is a prominent physicist and materials scientist known for his research in nonlinear optics and photonics. He received his undergraduate degree in physics from the University of California, Berkeley, and his Ph.D. in physics from the University of California, Santa Barbara.