Granite Geek: Parts of the state’s new Code of Ethics about A.I. will look familiar to science fiction fans

By DAVID BROOKS

Monitor staff

Published: 10-02-2023 4:37 PM

Take a look at the state’s brand-new Code of Ethics For Generative Artificial Intelligence. If you have certain literary preferences, which you probably do if you’re reading this column, something will look familiar.

“I’m so glad you mentioned that,” said Ken Weeks, the state’s chief information security officer, after I pointed to a line in the Code of Ethics saying “AI Systems should neither cause nor exacerbate harm or otherwise adversely affect human beings and the natural environment.” 

Was that inspired by the Three Laws of Robotics from science fiction writer Isaac Asimov? I asked.

“Of course it was!” he said, laughing. “Anybody who went into this and tells you they didn’t take into effect the Three Laws of Robotics, they’re not (admitting) it.” 

And that’s a good thing, he said. “Just because the application and the technology is new, it’s important to remember there has been thinking about this for decades, even if it was speculative.”

New Hampshire’s state government, like every other government, organization, business, entity or human being, is trying to figure out how to respond to the various systems known as generative AI. The Code of Ethics and the resulting executive branch policy are the first of what will undoubtedly be many such efforts.

“What sparked this for me was back in late April, I was attending a meeting of the National Association of County Information Officers and several discussions during those meetings about how to improve services for citizens, how to automate certain functions … and it almost all centered around using artificial intelligence setups,” said Denis Goulet, commissioner for the state’s Department of Information Technology. 

“Vermont has done pretty extensive work on AI with a task force – the good, the bad and the other – and I thought we should start thinking about something like this in New Hampshire, to get ahead of the problem before we have agencies just diving right in. We already had some (agencies) looking for use cases,” said Goulet. “We leveraged work extensively that Vermont had done.”

Most of the resulting three-page Code of Ethics (it is linked from the department’s home page at www.doit.nh.gov) could apply to any technology, saying things like it shouldn’t be biased against any individual or group, shouldn’t infringe on rights, and must be transparent and accountable.

And then there’s this: “Automated final decision systems should not be used by any government organization in the State of New Hampshire. … In New Hampshire, humans interacting with AI Systems must be able to keep full and effective self-determination over themselves and be able to partake in the democratic process.”

To me, this is the key passage, because it’s the point where generative AI differs from websites or chatbots or other similar systems. AI isn’t actually intelligent but it sure seems like it is, creating a strong temptation to hand it the reins and let the software make the decisions. It’s so much more efficient!

But so much more dangerous. 

Goulet returned to those dangers several times in our talk.

“We didn’t want any resident or New Hampshire business, or visitor for that matter – any human being interacting with the state of New Hampshire – to be put in a position where a software code was making a determination for them that affected their rights. A human always has to do that,” he said. As an example, he pointed to a request for unemployment insurance, which requires gathering and correlating a lot of information before it can be approved or denied.

“The case manager can actually set a bunch of tasks at 4:30 … allow those to run overnight, then come back in the morning and that information is gathered and presented in a way that enables very efficient decision making. You end up with the state working 24/7,” he said. “Theoretically you could let the machine decide the fate of this individual. … But in the end, according to our Code of Ethics, you have to have a human say yes or no, not the AI.”

That human also has the responsibility to know what datasets were used because generative AI is brilliant at presenting complete nonsense in a form that is very, very believable. The software is the ultimate bull artist. 

Goulet said handling these questions through executive policy is preferable for something as unsettled as AI. It is likely that this new executive policy, and possibly the Code of Ethics behind it, will change, perhaps often, as the technology matures.

New laws will also be needed, partly because policies from the Department of Information Technology cover only the executive branch, not the Legislature or the courts. Legislation, however, is slower to write and harder to tweak as the perils and promise of AI become clear. If I were writing an AI law right now, in fact, I’m not sure what I’d say beyond “don’t be evil,” and we know how well that has gone with Google.

So maybe Asimov’s Three Laws of Robotics – first, robots can’t hurt people; second, they must obey orders unless that would violate the first law; third, they can protect themselves only so long as that doesn’t conflict with the first two laws – aren’t a bad guide.

“I think he got it pretty right,” said Weeks. “They’re simple, easy to understand. It just kind of makes sense.”
