Granite Geek: New Hampshire faces ChatGPT, the new language of ‘artificial intelligence’
Published: 01-17-2023 6:51 AM
Hands up, anybody who is old enough to remember ELIZA.
That “natural language” program came out of MIT in the mid-1960s and caused a flurry of interest by carrying on typed conversations that fooled some people into thinking they were dealing with a human being. Its limitations quickly became apparent, however, and ELIZA retreated to the land of academic research, the first in a long string of debunked claims about software that communicated “just like a person.”
Bad news, Homo sapiens: The claim is debunked no longer.
ChatGPT, a program that statistically recombines material found online to produce prose, poetry and even computer code that is unbelievably good, has arrived.
This isn’t just a better ELIZA. Along with similar programs that generate visual art from simple cues, it’s an entirely new application of what is loosely called “artificial intelligence” or AI. And I have to say that it does feel intelligent even though it’s really just statistical pattern-matching on an enormous scale.
Things, to use the cliche, will never be the same. Just look at the nearest classroom.
“Some of the professors, adjuncts, are playing with it. One person generated a sonnet about me,” said Alan Lindsay, head of the English department at NHTI. “I agree that it’s different. You could be fooled by this if you’re not paying close attention.”
Colleges and high schools are trying to figure out how to deal with a free online program that in just a few seconds can produce very good essays, hundreds of words long, about complicated topics using only simple prompts (“compare Catcher in the Rye with Harry Potter” or “write a limerick about New Hampshire” or “explain how the power grid works”).
The immediate fear is that ChatGPT is the perfect tool for cheating, but more importantly it raises questions about how education should proceed. The well-established model of explaining things in class and having kids write about it for homework might be out the window.
“There’s a lot of concern from teachers wondering can we use it in school? How can we use it? … We’re watching and learning and waiting to see how things develop. It’s such a new technology and it is moving so quickly; it changes literally every day,” said Pam McLeod, director of technology for the Concord School District.
(ChatGPT can’t be used in public schools right now because it requires a personal phone number for authentication, which violates privacy policies. Concord blocks it from its in-school network, although I suspect that won’t slow down many teenagers.)
New Hampshire Education Commissioner Frank Edelblut, who gave a presentation about ChatGPT to the state Board of Education on Thursday, said he feels like it is the latest iteration of technology like spellcheck and Grammarly that takes over skills that once had to be laboriously learned in class.
“We need to update our models for the 21st century,” he said. “When we send that child home to write a 500-word essay, what is it we’re trying to instill in them and teach them? They need the concepts and logic models to be able to think through that stuff. The same learning objectives can exist with this new tool.”
As an example, he said that students could write an essay and then generate an essay on the same topic from ChatGPT to compare the results.
Educators aren’t the only ones who are uncertain about what ChatGPT will do to them. My profession is definitely getting nervous.
Software has been used for several years to generate simple types of journalism such as short game stories and company earnings reports, but ChatGPT lets news organizations go way beyond that.
CNET, a well-regarded website covering technology, has begun using AI to write some articles, and it’s very hard to tell that no ink-stained wretch was involved. Freelancers are complaining of losing business, with clients who once hired them to write material now doing the job with ChatGPT, perhaps paying a small part of the previous fee to double-check the result.
Computer programming jobs are similarly at risk. Give ChatGPT a few prompts and it can combine existing software code to create new code that is often, I am told, shockingly good. There are already reports of hackers having used it to create new types of malware. If you know what a “script kiddie” is, they’ve been handed a potent new tool.
Perhaps the group screaming bloody murder the loudest is visual artists, because AI programs driven by natural-language prompts have begun creating digital pictures that are intriguing, even artistic, in seconds. Right now, for example, I’m looking at a beautiful depiction of a castle that was produced in almost no time by feeding Coleridge’s poem “Kubla Khan” into an art AI program — so I guess it’s a stately pleasure dome rather than a castle.
I assume that music, spoken-word performances and movies will also become fair game.
This raises big questions about copyright and intellectual property, since these new artworks are really high-level mashups of existing artworks, but if history is any guide that concern won’t slow the technology much. Microsoft seems to agree: It is rumored to be investing $10 billion in OpenAI, the company that made ChatGPT.
So, does this mean we are finally seeing the start of that long-predicted apocalypse in which humans get replaced with software?
While it’s certainly true this technology will create vast repercussions impossible to predict, there are still real limits to the approach, because artificial intelligence isn’t actually intelligence.
What we’re seeing is a very deep way of recombining human creations, anything that can be found on the internet, by following the statistical patterns in that material. These programs aren’t dreaming up any new creations, and that results in some glaring shortcomings.
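If you’re curious what “following statistical patterns” actually means, here is a minimal sketch in Python. It is entirely my own toy illustration, nothing like ChatGPT’s real machinery: it counts which words follow which in a tiny scrap of text, then strings new sentences together by sampling from those counts. ChatGPT does something conceptually similar, predicting the next word from statistics, but with a neural network trained on a vast slice of the internet rather than a simple word-pair table.

    # A toy next-word generator: learn word-pair statistics from a tiny
    # sample of text, then produce new text by sampling what usually
    # comes next. This is a simple Markov chain, not ChatGPT's method,
    # but it shows how plausible-sounding text can emerge from pure statistics.
    import random
    from collections import defaultdict, Counter

    sample = ("the program sounds plausible because the program follows "
              "patterns and the patterns come from text people already wrote").split()

    # Count how often each word follows each other word.
    next_counts = defaultdict(Counter)
    for current, following in zip(sample, sample[1:]):
        next_counts[current][following] += 1

    # Start with a word and repeatedly pick a statistically likely successor.
    word = "the"
    generated = [word]
    for _ in range(12):
        choices = next_counts.get(word)
        if not choices:
            break
        word = random.choices(list(choices), weights=choices.values())[0]
        generated.append(word)

    print(" ".join(generated))

Run it a few times and you get different, grammatical-ish word salad each time, which is roughly the point: output that sounds plausible with no understanding behind it.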
For example, librarians are reporting people coming in and asking for books that don’t exist. The titles were recommended by ChatGPT, which combined information about existing books in ways that are statistically valid but meaningless. Since ChatGPT doesn’t have any knowledge of the world — it doesn’t have any knowledge at all — it didn’t realize the problem.
I’ve seen ChatGPT described as the ultimate BS artist, to use a family-newspaper-friendly euphemism. A BS artist, as you know, is somebody who always sounds plausible even when they’re making it up as they go along. That’s exactly how ChatGPT works. It’s using existing patterns so it sounds right even when it is producing complete nonsense.
In a way that makes it even more alarming, yet another quick and easy tool for misinformation. But it also gives hope that we carbon-based life forms will retain a competitive advantage.
In the meantime, here’s your homework. Sign into ChatGPT and give it the prompt “write a newspaper column about the effect of ChatGPT on New Hampshire” to see what it produces.
Let me know of any comically bad results. Results that are better than me — you can keep those to yourself.