14 Comments

Two cents... I think you got most of the reasons right: exclusivity, openai relationship, valuation. But you may have missed some key context.

Lawyers at all levels have had a common experience: we trudged through law school, we jumped through the bar exam hoops, we started at the bottom and kept running up whatever hill we chose. Many dropped out, some "made it", almost all became jaded.

Legal tech is similar. For one, most legal tech companies have at least one lawyer-founder. All have the shared experience of building product for, and selling to, lawyers, which isn't always pleasant and is often a longer cycle than other verticals. And it hasn't typically made many unicorns or had a strong identification with Silicon Valley.

In both communities there is a sense that you succeed by being smart, working hard, making the right connections, and paying your dues. Then you are part of a semi-exclusive esoteric club, complete with its own language, culture, insecurities, heroes, and villains.

This is contrasted by the start-up/SV culture of winning with ideas, hustle, disruption, and VC $. Seniority can be a liability, the idea of a "ladder" of success is antithetical, and disrupting social norms is part of the fun.

AI itself is already unsettling to both legal and LT because, taken to its logical endpoint, it could render both irrelevant. In this context Harvey enters:

- The founders are a second year biglaw associate and a Google DM engineer, which to the legal/LT communities is coded as "baby lawyer, silver spoon, knows next to nothing" and "silicon valley genius, threat".

- Their website was empty, they did zero PR or marketing early on, they didn't engage with the legal tech community, and reportedly ignored efforts by the community to engage with them.

- Yet they raise a bunch of money, seemingly have an insiderish relationship with OpenAI, close early notable deals, and are given a high valuation.

So the takeaway for many is that these guys were undeservedly given an inside track, have not paid their dues, have an indifferent or even contemptuous attitude towards the LT community of "we don't need you or your path to success", and that they were going to win nonetheless.

Hence the schadenfreude when their product didn't seem to live up to the hype, especially for reasons like "not understanding the market that well" that induced a collective I-told-you-so. (No comment on the fairness of this judgment. You already point out that they have impressive achievements.)

The Casetext comparison is extremely relevant, but I think you missed why Casetext was different. Casetext had already put in 10+ years when GPT-3 was made available. They had built their own AI models trained for legal. They engaged with the legal tech community quite a bit. They struggled! They seemed to be treading water at best when GPT came along. But when GPT did come out, they were uniquely able to implement it (by swapping out their own models) because they had put in the time, they knew the customer, they had built the rest of the machine. My impression was that the LT community was already rooting hard for them when they were acquired and was happy for them even though the acquirer was culturally a villain.

People, especially in legal, like when other people seem to get what they deserve.

All fair points. Question: does this response seem reasonable to you? Is it productive thinking?

It depends on your perspective. Schadenfreude helps to enforce valuable community norms, e.g. that bad actors should not be rewarded. You may think those norms are important, e.g. that move fast and break things is not ideal for an industry charged with promoting the interests of justice.

Harvey did bring more attention to legal tech. Had it turned out to be a Theranos, the attention would have turned negative.

On the other hand you may think those norms are just another form of protectionism (in an already well-protected industry) and you don't particularly care if the winners paid someone's definition of their dues. It hampers innovation and investment and is counterproductive, on top of the sour grapes vibes.

If it's just animosity toward the new kid on the block, then it's interesting that other emerging companies like Leya (started after Harvey) haven't gotten the brunt of the hate.

I don't think the others have gotten the same publicity, financing, valuation, hype, etc. that created the feeling of an anointment early on. I'm not sure whether the others approached these communities in a more collaborative way.

But agreed the hate seems motivated by something more.

This is a good read @zach. Including the links to the fans and detractors. I met Winston Weinberg at a conference recently. Very impressive chap. Perhaps a lot of people feel like Jake Barnes: “I mistrust all frank and simple people, especially when their stories hold together.” I totally get that. Especially when it’s not simple.

Loved reading your take, Zach. How dare Harvey not bring you in at the start. Don't they know who you are?! Truly...I appreciated learning more about Harvey.ai. Thanks!

Maybe THAT is why everyone hates them lol

I hadn’t thought about the perspective that Harvey opens up the avenue for other tech companies to sell to law firms, thanks for that. Two other aspects I’m thinking of:

1 - most law firms and lawyers are small-to-medium brands; the tech should be sellable to them as well

2 - curious how the impact of Harvey cascades to the quality of results for the end client, which ultimately is who lawyers serve

I like "Harvey" https://archive.org/details/harvey-1950 but it made it difficult reading your article while thinking about Jimmy Stewart and his big rabbit. So, I'll have to take a break and come back and read the article again in a few hours.

What the what? How would that not make reading the article like 5x more entertaining?

Yes, after visiting Harvey.AI and re-reading the article it made much more sense.

I can see how the use of AI could make lawyers more efficient as long as they know what they are doing and don't let the AI lead them astray. I remember reading a story somewhere about a lawyer who had 100% trust in his AI program; it created documents he filed in a case, apparently without reading them before filing, and the judge was livid.

I wonder if the use of AI will make paralegals more efficient or extinct? I suspect that, as with most disruptive changes, those that do not learn the right skills will end up doing other things. Lawyers who learn new skills will probably learn to delegate differently.

Of course, this topic reminded me of the time back around 1986 when I was the Chief of Software Engineering on the B-1B Test Team. I was called in to look at why an AI system, built to download data from the B-1B computers after a flight and generate maintenance work orders, created so many work orders for problems the mechanics could not duplicate.

It turned out the program was not faulty. The problem was back then people were faster than computers. By the time the data was downloaded, processed, and the work order given to the maintenance airmen, the next shift was on duty. They went to work on many problems that were already fixed by the previous shift, which met the plane when it landed, talked to the crew, generated their own work orders, and started fixing problems.

As long as we can agree on the giant rabbit 🐰

Awesome work 😎 as usual
