Discussion about this post

Robert Sharp

There was a case I argued where the law report emphasised the wrong thing and glossed over what I considered to be the most important parts of the ratio. That report was picked up and amplified on ‘aggregator’ sites and now provides the definitive summary of the case. That happened with entirely human authors of the law reports, working from the judgment alone (which left a lot out) rather than from the case as argued by the barristers. So of the issues you identify above, I think AI *interpretation* is likely to be the trickiest.

I think the thing that will save or scupper us is whether these new technologies are deployed by an existing brand or editorial team with a reputation they wish to protect.

ChatGPT, Google and the other popular AIs come with no ‘warranty’, and the people who make them suffer no diminished reputation when the AI hallucinates.

However, I imagine anything put out by Westlaw, LexisNexis or ICLR will come with a stamp of approval from those companies. I don’t care whether they hand-write each report with a quill or deploy a bunch of AIs to generate 100% of the content: their need for quality control to protect their reputation will mean that they have to check their output before making it available to lawyers.
