Don’t Outsource Your Perspective to an LLM

A person playing trumpet and a robot playing drums are on a stage in a jazz band. They are looking at each other. In the style of a 60s jazz album cover.
Credit: Aaron Gustafson × Designer

At A List Apart, I’m seeing a lot of article pitches that were clearly written by a Large Language Model (LLM) rather than a human being. In many cases, the people making the submission put a lot of thought into the prompts they used to get the output they desired, but had zero follow-through when it came to taking ownership of the output they were handed. To be clear, the issue I have here is not that they used an LLM as part of their process, but rather how they failed to wield such a powerful tool effectively.

Improv

A while back, I read a great piece from Jonathan May that discussed where LLMs fail us (factual knowledge) and where they excel (conversation). He highlighted how awful they are when it comes to retrieving facts (LLMs are not the reference librarian you think they are), but how useful they can be when factuality is not as important. Armed with this knowledge, he did some experimentation and discovered how well LLMs perform as an improv partner… someone to bounce ideas off of.

It was with that idea in mind that I decided to test the approach while working on my axe-con keynote last year. I already had an outline for the talk and knew what I wanted to say, but took some time to bounce my ideas off of ChatGPT and see how it responded to them. In most cases it responded with paragraphs of prose clearly demonstrating its lack of reasoning on any subject—not surprising, though; LLMs are word prediction engines. In other instances, however, it surprised me with how the words it predicted managed to “frame”1 particular concepts in novel ways, prompting me to think differently about how I might approach them in my talk.

I feel confident claiming that the talk was entirely a product of my own efforts. At the same time, I’m also appreciative of ChatGPT’s role as an improv partner in the process of creating it.

Hope

I guess I’m hopeful more people will embrace using LLMs in this way. We need to recognize what tools like these are good at and what they aren’t. Hammers are awesome at driving in nails, but they’re only reasonably good at removing them—from wood, yes; from drywall, no. Hammers are also entirely the wrong tool if you’re looking to drive or remove a screw.

LLMs are excellent at reworking stuff you’ve written—summarizing content, adjusting it to a particular reading level, helping you craft a suitable conclusion for an article or talk based on your existing content. They are not fact machines. Furthermore, they can’t replicate or replace your unique perspective on a subject, and that perspective is the very humanness that makes your writings, talks, and other forms of communication worthy of our time.

Parting words

If you find LLMs useful in your process, more power to you. If you don’t, that’s cool too. Just be aware of their limitations while embracing their power to help. And don’t ever try to pass off their writing as your own. Not only can we tell, but it robs us of the chance to hear what you think. And that’s what we really want to read.

P.S. - In case you’re wondering: No, I didn’t use an LLM to help me write this post 😉


  1. To be clear, I am not trying to anthropomorphize the LLM here. ↩︎
