Nothing messes up a nice research session like a Red Flag. The villain of Westlaw’s KeyCite system, the “Severe Negative Treatment” Red Flag haunts every associate who just found the exact language the partner demanded, only to look in that top left corner and feel their world crumble.
Alas, there’s a new flag coming to town for you sorry souls. Welcome the Red Stripe Flag.
The new addition to the Westlaw tagging arsenal aims to clear up the nether zone between the Red Flag and the “Negative Treatment” of a Yellow Flag by carving out opinions where some, but not all, of the holdings involved are overturned, allowing a lawyer to identify areas where an otherwise dodgy case may remain good law.
That a federal appellate court benchslapping the bejeezus out of some district judge and a decision vacating half an opinion while upholding the rest both earned the same flag did cause unnecessary panic. And forced lawyers to read through the case just to figure out that it’s not as bad as it looked. The new system will highlight the specific part of the case that’s no longer good law.
I asked for more specifics about where the system draws the line between the two, and it seems Red Flags will cover direct reversals and the like, while the sort of indirect negative treatment that leaves some constituent holding undisturbed will get the new flag. That says to me that, for example, Roe v. Wade would get a Red Stripe, given that the more mundane procedural holdings — like that criminal defendants cannot challenge state laws while their case is pending — were untouched by Dobbs. Which is still helpful, but… it seems weird to suggest that Roe isn’t the paradigmatic Red Flag at this point.
But the new flag is just part of this week’s slew of Westlaw announcements.
It’s all part of the unveiling of Westlaw Precision, a new premium offering from Thomson Reuters that reimagines the whole process of legal research in an effort to reduce the inevitable clutter of false positives that appear with a natural language search alone.
With the aid of an army of human attorney editors, Thomson Reuters is recoding the last 12 years of caselaw — plus some older leading cases — along several factors with more information coming over the next year.
Historically, Westlaw’s smart searching takes the query and brings up everything that matches, even if those terms appeared in a context that isn’t precisely (foreshadowing the name!) on point. With Westlaw Precision, the user can refine this search by these factors (or even begin the search directly from the precision template — though I suspect lawyers will gravitate to using these features as filters for customary natural language results).
Basically, if you’re searching for a legal standard specifically in the context of a motion to dismiss, you can just tell the platform and it will stop giving you a barrage of rambling appellate opinions that casually drop “motion to dismiss” in otherwise irrelevant accounts of the procedural history.
It’s a small example, but you can imagine how it expands out.
No more “reasonable reliance” hits about implied agency when you’re looking for contract law. The ability to place these search terms within the proper context eliminates a lot of clutter in the initial results browse. On that note, the browsing interface in Precision offers a potential efficiency boost, replacing the snippets of language in context with more direct conclusions:
Alongside new features allowing researchers to instantly jump to “More Like This” and “Cited With” (for those pesky cases that always come up together in opinions but don’t actually cite each other), Westlaw hopes to massively improve research efficiency.
The company’s early tests indicate it doubles attorney research speed.
From my brief introduction to the product, the biggest challenge is likely in convincing senior lawyers that a search spitting out 10 on-point cases is actually better than 200 cases that a human associate whittles down to… those 10 cases. It’s a form of “simplicity paradox” where getting cleaner, more focused results actually makes the user worry that something got overlooked.
This is why, if I were counseling lawyers trying out this product, I’d advise skipping the dedicated Precision search option and using the new factors as filters for natural language searches, at least at first. Assuage those anxieties by seeing how many hits a search would come up with cold, then apply the Precision filters and decide if the ratio of results makes sense (understanding that cases beyond the last 12 years won’t be included in the narrowed search). If 900 cases shrink to 5, there might be a problem (though probably a user problem). But if 200 cases shrink to 15 when narrowly targeted, that seems about right, and you can feel confident you at least gauged the expanded universe first.
Lawyers surveyed by Westlaw characterized roughly 35 percent of legal research as “difficult.” That figure jumped to 43 percent when isolating Biglaw attorneys. The billable hour is great and all, but so is getting a timely result to the client. When Biglaw reports an average of 22.4 hours on “difficult” research questions and there’s an option to cut it to 11… well, there aren’t any red flags there.
Get it? I’ll show myself out.
Joe Patrice is a senior editor at Above the Law and co-host of Thinking Like A Lawyer. Feel free to email any tips, questions, or comments. Follow him on Twitter if you’re interested in law, politics, and a healthy dose of college sports news. Joe also serves as a Managing Director at RPN Executive Search.