AI does have a use in legal work, which is mostly churning through 100,000,000 pages of text from discovery so lawyers can find the 99,000 docs that may be relevant.
And stenography aids.
But generating text for actually being looked at? It’s garbage.
Except that when a law firm actually did this, it generated a completely fake precedent in which it accused (and "convicted") a real person of a crime they didn't commit, fabricating the details in its summary of the case.
LLMs are designed to always give you what you ask for. If one doesn't find any relevant information, it will fabricate it, because it can't respond with a negative.
That's what I said: generating text to be looked at. LLMs are bad at that.