One of a lawyer’s principal professional obligations is competence. Many commentators are now telling attorneys that adopting GenAI / LLMs for legal tasks is required to maintain that competence. Here is why that narrative is dumb and wrong.

I worked for six years drafting AI patent applications for IBM. I have a general understanding of AI — enough to get patents on inventions in the field. Beyond that, I’ve studied statistical methods and probability. I understand the principles underpinning LLMs.

An LLM is a machine that rolls weighted dice to pick plausible phrases from a somewhat-curated Mad Libs chart that might include some way-off vocabulary. It lacks the understanding of a kindergartner, a kitten, or a crustacean. Any LLM output that looks roughly right still requires scrupulous checking for veracity.
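To make the "weighted dice" image concrete, here is a minimal sketch of how next-token sampling works in principle. Everything in it is illustrative: the candidate tokens, the scores, and the temperature value are invented, and a real model weighs tens of thousands of tokens, not four.

```python
import math
import random

# Hypothetical next-token scores ("logits") for a prompt like
# "The court held that the statute ..."; tokens and numbers are invented.
logits = {
    "applies": 3.1,
    "controls": 2.4,
    "is unconstitutional": 1.7,
    "was enacted in 1952": 0.9,  # plausible-sounding; may be flatly false
}

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Roll the weighted dice: softmax the scores, then draw one token."""
    # Scale by temperature, subtract the max for numerical stability,
    # and exponentiate to turn scores into positive weights.
    scaled = [s / temperature for s in logits.values()]
    max_s = max(scaled)
    weights = [math.exp(s - max_s) for s in scaled]
    # random.choices draws proportionally to weight. Nothing in this step
    # consults the world: "sounds likely" and "is true" are never distinguished.
    return random.choices(list(logits.keys()), weights=weights, k=1)[0]

print(sample_next_token(logits, temperature=0.8))
```

The point is in the last comment: the dice are weighted by how plausible a continuation sounded in training data, not by whether it is true.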

A lawyer who cares about providing competent and accurate legal service must check an LLM’s output by replicating the work themselves. Any lesser effort is equivalent to handing a file to a drunken junior whom you know to hallucinate, prevaricate, and lack a conscience — and then signing off on that junior’s work product.

Would you continue to employ an associate who had LLM-level reliability?

Would you be willing to work with a partner who continued to employ that associate?

Why would you expose yourself to that liability?

Using LLMs for legal tasks is dumb and wrong.