The Folly of "Legal" AI Products
What can the nature of AI solutions for lawyers tell us about the clash of tech and the legal profession?
I recall the first time I played around with Harvey, an enterprise-level AI product for corporate lawyers. It was cool for a few days, but when I pushed it to handle different tasks, it couldn’t do much better than a paid version of ChatGPT. (My thoughts in more detail on Harvey from a private funds perspective here.)
I harbor significant skepticism about whether many of these highly valued companies, including Harvey’s competitors, can deliver the value they purportedly market. Many of these legal “solutions” are, in more confidential conversations I have had, mockingly referred to as “vaporware”: software that is promoted in public but, for one reason or another, is delayed in its launch or never launched at all.
Among many chief innovation and data officers at large US law firms, the story of Harvey’s first 18 months amounts to this: the company occasionally published press releases announcing a new round of venture financing but was otherwise extremely secretive about what it was doing.
When the promised product was released, it hardly justified the prices the company charged. And when you’ve got very particular investors (venture capital funds) whose primary motivation for backing a startup is its ability to scale at the speed of light, the fundamentals of a business can be compromised.
In my experience (and this applies to multiple legal AI products, some geared towards private funds and others more general), the fundamentals hardly existed to begin with. And the fundamentals didn’t drive the decision-making of these businesses because their founders did not know what those fundamentals were.
There were three reasons for this strategic (and largely unforgivable) mishap on the part of large AI companies and their investors.
The first is that many of these AI teams failed to anticipate the surprising resilience of the billable hour. The second is that legal data is, by both nature and degree, exceptionally different from other kinds of data. The third is the regulatory structure of the legal profession (which is very different from the regulatory structure of the law).
Number one. Many of the largest legal clients in the world are quite cost-insensitive on certain matters. They simply need to hire the most reputable firms as a cost of doing business and as a safety net in case they need to point fingers (“I hired the best attorneys, so you can’t blame me.”). So that more or less means law firms will continue to bill them at whatever the prevailing market rate is or, ideally and quite frequently, at higher rates.
So does the philosophy of “efficiency” really stick here? I am not sure that AI companies thoroughly interrogated whether speedy automation was the best fit for an industry that often behaves the way luxury goods do: people are willing to pay whatever the cost because the mere purchase of the thing itself signals to their stakeholders (whether jealous onlookers or, for lawyers, clients) that the appropriate box was checked.
As such, an AI product that promises quicker, and potentially better, work product flies in the face of financial logic. Billable hours for a certain gentry of clients are here to stay, even for the work that is most automatable and traditionally done by junior attorneys.
Put more bluntly, though the AI narrative is usually “we aren’t here to replace the bespoke work, just the automatable, low-hanging fruit,” it doesn’t make sense to make the latter more efficient if billions in legal revenue are largely agnostic to cost. That is not to say these clients don’t care, just that their financial sensitivity isn’t as prominent as we may like to believe.
Heck, for those of us in private funds, consider the fact that Kirkland charged Blackstone alone $100 million in legal fees last year. The figure made headlines for about two days. Then the world moved on. And that’s just Blackstone’s Kirkland bill (Simpson Thacher has long been Blackstone’s preferred law firm).
The main reason large law firms are buying expensive AI products is that their clients want them to. The partner ranks, while they may include the curious, largely couldn’t care less about the potential of a technology meant to disrupt a century-old revenue model. This helps explain why expensive subscription-based technology products (including non-AI products) have poor adoption and usage rates among attorneys.
Large law firms burn millions of dollars a year on this stuff because it serves as a marketing tool (“we use the latest tech solutions”) and because of, as noted, client pressure (“are you guys using AI to save time on X?”). But neither reason is sufficient to warrant the sincere and wholesale adoption that technology providers are wont to tout.
This is to say that the purchase of an AI product does not indicate an intention to adopt it or even a baseline of usage.
The attorneys who do make copious use of these tools are still few and far between. And the likes of Harvey are toys, not tools: playthings that are fun to tinker with.
Number two. Let’s assume everyone wants to use the AI product of choice. They are immediately and squarely faced with a serious problem: the ethics of usage and the confidentiality of information. For a profession whose principal value comes from the exchange, utilization, and formulation of information (kind of like the intelligence world), there is little room for error if that information is misused or misplaced (to put it lightly). In this respect, there are few parallels to the law. If information is the core language of AI technology, then the challenges of using it in the law become immediately apparent.
Moreover, there is hardly any guarantee that the output of an AI product will be correct even 1% of the time. The profession requires 100%.
It’s quite evident that most legal AI companies preferred speeding to launch over meditating on how to structure the technology around confidentiality, which, after many errors, became a second-order consideration. And what’s more surprising is that confidentiality sits at the heart of a uniquely American “culture” around the law: the principle of attorney-client privilege.
In missing the primacy of confidentiality, did the tech world also admit ignorance of base-level American cultural artifacts and practices?
Were the real-world cases taught eleven thousand times throughout standard K-12 schooling (Brown v. Board of Education, Roe v. Wade, the OJ trial), and Hollywood’s decades-long cultivation of procedurals and true crime as standalone genres, not enough to instill the awareness needed to build confidentiality into the foundations of next-generation technologies meant to serve the legal profession?
To be very clear, none of this required these AI founders to be technical experts of any kind. It just required simple cultural and historical literacy, much of which could be found in a run-of-the-mill Netflix series.
So then, what the hell? Isn’t it enough that the clients of these products (i.e., the lawyers) already have to navigate the minefield of confidentiality in the normal course of their daily work?
To then have a heap of products thrown at you, largely driven by the shortsighted but familiar hype of Silicon Valley, makes for a deeply frustrating world to work in.
First principles were ignored. Speed and efficiency were not among those principles. If only Silicon Valley had learned from its own priors. But the rap sheet continues to grow.
Number three. The inception of legal AI began in the wrong place, and that reflects many founders’ ignorance of the American legal profession. Every lawyer passes through the admission procedures of a state bar to become licensed. These organizations, which are not governmental entities, are among the profession’s most important gatekeepers (along with ABA-accredited law schools).
And every attorney featured in some headline about the misuse of AI has likely been put in front of an ethics committee. In many cases, the committee’s recommendation has led to the disbarment of the attorneys involved. Many bar associations are now hosting literacy sessions and partnering with law firms to provide greater insight into how AI is used.
But think about how much more impactful legal AI technology could have been if the founders had actually spoken to lawyers in the first instance, to understand the strictures and limitations of the people they want to build something for. One quickly realizes that a conversation with a state bar association is absolutely indispensable.
So what to do?
Well, don’t look at me. I have little idea as to what type of solution will stick. I prefer the perched seating of high-brow analytic disparagement.
I wish I were a bona fide literary critic. But I’ll settle for being a ne’er-do-well fund lawyer and occasional AI contrarian.
All of this is messy. Which is why it is largely ignored.
There’s a great quote by James Baldwin, “The purpose of art is to lay bare the questions that have been hidden by the answers.” I think that could equally apply to technology.
AI is presented as an answer to something. But it feels like, at least for the law, the technological products haven’t really figured out the right questions to ask. So there’s a constant reference to the answer, which always happens to be couched in efficiency, ease, or speed. All three are often the enemies of good legal work (though sometimes they can be virtues). One could only know that with greater attention to asking the right questions.
Hasn't Harvey been built to address your point 2 (confidentiality of client information)? I understand that Harvey claim that any law firm/client data uploaded into their system is not used/analysed beyond the law firm's instance in Harvey. Or is your concern that Harvey won’t be able to maintain such safeguards or that they might in future be breached or fail?
I agree with you that the efficiency gains of lawyers using AI (if realised) do run counter to the billable hour model (your point 1). But there's a lot of commoditised work out there which could be made more profitable if the relevant firms/practices leveraged efficiency gains offered by AI in such engagements.
Your point 3 (accuracy of AI output) is a serious concern with the use of AI in private practice. But isn't this ameliorated somewhat if we change our mindset from "the AI must be 100% accurate or we should never use it" to accepting that it is going to make mistakes / hallucinate but this shouldn't stop us using it provided we put in place robust processes to always carefully check its output including its citations and sources and, critically, to also consider what it may have missed or overlooked (not dissimilar to supervising and checking the work of junior inexperienced attorneys although I appreciate there are important differences, not least the accountability of a junior to his employer which is not shared by an AI tool)?