
ChatGPT Fabricated A Plausible, But Incorrect, Biography Of Me.

In December, science and mathematics journalist Charles Seife was disturbed to learn that, according to OpenAI’s new natural language processing interface ChatGPT, he had actually died a few years ago. When pressed to explain, the artificial intelligence doubled down on the factually incorrect account of the author’s imagined death, eventually even going so far as to provide him with a URL as a source. I asked Chat to describe me, and its response might have been believable– but everything it said was, well, completely wrong.

Accuracy in Information

My high school math teacher used to say that “close enough only counts in horseshoes, hand grenades, and nuclear weapons.” As a part-time journalist, accuracy is important to me, though I’d argue that deep clarity matters even more than absolute precision. I have, on a limited number of occasions, had to correct things I’ve written on this blog, but these are usually clarifications rather than retractions. A lot of my particular style of writing and analysis is a matter of finding a specific angle that represents a hypothesis and then looking to see to what degree, if any, it’s true. I write a lot about discourse and what one might call media theory– a sort of meta-analysis of information (having been schooled in the finest of liberal arts nonsense, this probably makes sense).

But if I can’t find the answer to something, I’m not terribly likely to draw a conclusion based on a hunch. I often write things like “I wouldn’t be surprised if [x].” ChatGPT, in contrast, has occasionally been observed simply making up information. I noticed this when researching infrastructure disasters last month (why not?), and Chat told me about a train derailment in China that killed hundreds of people. That derailment never happened. There was a derailment that occurred after an earthquake, but it didn’t kill hundreds of people. Chat conflated the two incidents and presented me with a combined, ahistorical event. It’s a curious problem to have, because it’s not the kind of mistake humans would make, as often as we do get befuddled or confused. ArtNet noted this, too, in an article earlier this month (shoutout to cuz Rebecca Zorach in that one!). Chat was just fabricating stuff, and it wasn’t clear why.

Asked about me, then, Chat said that I am:

an urban planner, writer, and community activist based in the United States.

Nat Zorach has a Master’s degree in Urban Planning from the University of Michigan and has worked on a variety of urban planning and community development projects in the Detroit area. He is also a writer and has contributed articles to various publications, including Model D, Next City, and Belt Magazine, on topics related to urban planning, community development, and social justice.

In addition to his work in urban planning and writing, Nat Zorach is also involved in various community organizations and initiatives. He has been a member of the Detroit Food Policy Council and has worked with the Detroit Community Wealth Fund, which supports worker-owned businesses and other community-led economic development projects.

If you knew me in Detroit but didn’t know specifics, you might believe this. It’s highly plausible. I am very familiar with the Detroit Community Wealth Fund, but I have never worked with them. I am also very familiar with the Detroit Food Policy Council, but I have never been a member. Interestingly, I have direct connections to a couple of people who are members of the board of the DFPC, but I haven’t spoken to any of them in years, and I’m not even sure the folks I’m thinking of would actually remember me. I also hang out with a lot of people in the University of Michigan scene, but I am surely not an alumnus, as I am reminded when I apply for jobs and people ask me, “where the hell is Grinnell College?”

So, it’s clear that ChatGPT was able to find some sort of publicly available information about me– it just didn’t quite know what to do with it. So it fabricated. This is essentially what Seife says is happening: because the model is tasked with responding to a prompt, it responds, even when it doesn’t actually have all of the information at its disposal.

Interesting, certainly! But problematic for the robot to start outright inventing things. Seife doesn’t really conclude why this happens, other than that the AI took instructions and is doing what it’s told– even if its response involves making something up, probably by parsing available information and creating a narrative that ties it all together.

An image from Midjourney AI, licensed under CC NC 4.0: a giant, evil supercomputer destroying the old city of Jerusalem. I mean, hey, whatever floats your boat, Yuval!

Understanding, And Respecting, The Limits

There are some things I’ve used ChatGPT for– even some of the content you’ve been reading recently! It’s useful as a research assistant, for example, if I know that something exists but don’t know what it’s called. Google is often helpful, of course. But sometimes not! I have an article coming out this week, for example, that looks at some technical HVAC issues. The research for it would have been much harder via Google, because ChatGPT can synthesize connections across different sources– comparisons that are, at the least, time-consuming to do by Googling.

It’s also great at quickly assembling information from a number of broad, disparate sources, even if that information isn’t always the most current. This is useful for comparing, for example, state-by-state data or regulatory information. Google might require several queries– Chat does it all in one. But the inaccuracies mentioned above have made me grateful that I did my due diligence and verified its answers.

Most of the people I know who have been using ChatGPT have been using it because they have to write stuff but aren’t strong writers. Chat gives, as Seife noted, formulaic but coherent responses to things like essay prompts– a C paper, he said, but a C paper by a robot! I guess this is valuable, although writing is really something you get better at by doing more of it. Am I worried about it taking over the world? Not particularly, with that commitment to accuracy. I’m sure there will be more written about this by people smarter than me in the coming months, especially since Chat’s new thing just dropped and everyone is already losing their minds over it. Glad I haven’t been preëmptively deep-sixed by the robot yet, though. I always make sure to greet it and thank it. You never know when, ya know, Skynet, or whatever.
