Breaking Legal News & Current Law Headlines | Daily Legal Briefing

OpenAI Wants To Help You Figure Out If Text Was Written By OpenAI; But What Happens When It’s Wrong?

by Daily Legal Briefing | February 4, 2023 | Legal Tech


With the rise of ChatGPT over the past few months, the inevitable moral panics have begun. We’ve seen a bunch of people freaking out about how ChatGPT will be used by students to do their homework, how it will replace certain jobs, and more. Most of these fears are totally overblown. While some cooler heads have prevailed and argued (correctly) that schools need to learn to teach with ChatGPT, rather than against it, the screaming about ChatGPT in schools is likely to continue.

To help try to cut off some of that, OpenAI (the maker of ChatGPT) has announced a classification tool that seeks to tell you whether something was written by an AI or a human.

We’ve trained a classifier to distinguish between text written by a human and text written by AIs from a variety of providers. While it is impossible to reliably detect all AI-written text, we believe good classifiers can inform mitigations for false claims that AI-generated text was written by a human: for example, running automated misinformation campaigns, using AI tools for academic dishonesty, and positioning an AI chatbot as a human.

And, to some extent, that’s great. Using the tech to deal with the problems created by that tech seems like a good start.

But… human nature raises questions about how this tool will be abused. OpenAI is pretty explicit that the tool is not that reliable:

Our classifier is not fully reliable. In our evaluations on a “challenge set” of English texts, our classifier correctly identifies 26% of AI-written text (true positives) as “likely AI-written,” while incorrectly labeling human-written text as AI-written 9% of the time (false positives). Our classifier’s reliability typically improves as the length of the input text increases. Compared to our previously released classifier, this new classifier is significantly more reliable on text from more recent AI systems.

That… is an extraordinarily high level of both Type I and Type II errors: the classifier misses nearly three-quarters of AI-written text, while wrongly flagging 9% of human-written text. And that’s likely to create real problems. Because no matter how much you say “our classifier is not fully reliable,” human nature says that people are going to treat the output as meaningful. That’s the nature of anything that kicks out some sort of answer: it’s hard for humans to wrap their heads around the spectrum of possible actual results. If the computer spits out a “possibly AI-generated,” or even an “unclear if” (the semi-neutral rating the classifier produces), it’s still going to cause people (teachers especially) to doubt the students.

And, yet, it’s going to be wrong an awful lot.
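Just how wrong depends on how many submissions are actually AI-written. A quick back-of-the-envelope Bayes calculation makes the point: the 26% true-positive and 9% false-positive rates come straight from OpenAI’s announcement, but the base rate of AI-written submissions is an assumption, and `flagged_precision` is just an illustrative helper:

```python
# Back-of-the-envelope math using OpenAI's reported numbers:
# a 26% true-positive rate (AI text correctly flagged) and a
# 9% false-positive rate (human text wrongly flagged).
# The prevalence of AI-written submissions is assumed, not reported.

def flagged_precision(prevalence: float, tpr: float = 0.26, fpr: float = 0.09) -> float:
    """Of all texts the classifier flags as AI-written, what fraction
    actually are? (Bayes' rule applied to the reported error rates.)"""
    flagged = prevalence * tpr + (1 - prevalence) * fpr
    return prevalence * tpr / flagged

for p in (0.1, 0.25, 0.5):
    print(f"prevalence {p:.0%}: {flagged_precision(p):.0%} of flags are correct")
```

If only 1 in 10 submissions is AI-written, roughly three out of every four “likely AI-written” flags point at an innocent human author.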

That seems incredibly risky. We’ve seen this in other areas as well. When computer algorithms are used to recommend criminal sentencing, judges tend to rely on the output as somehow “scientific” even though it’s often bullshit.

I appreciate that OpenAI is trying to provide the tools to respond to the concerns that some people (teachers and parents, mainly) are raising, but I worry about the backlash in the other direction: over-reliance on this highly unreliable technology. I mean, we already went through this nonsense with existing plagiarism-checking tools, which also run into problems with false positives that can have huge impacts on people’s lives.

That’s not to say there’s no place for this kind of technology, but it’s inevitable that teachers are going to rely on this beyond the level of reliability the tool provides.

Instead, one hopes that schools start figuring out how to use the technology productively. The NY Times article linked above has some good examples:

Cherie Shields, a high school English teacher in Oregon, told me that she had recently assigned students in one of her classes to use ChatGPT to create outlines for their essays comparing and contrasting two 19th-century short stories that touch on themes of gender and mental health: “The Story of an Hour,” by Kate Chopin, and “The Yellow Wallpaper,” by Charlotte Perkins Gilman. Once the outlines were generated, her students put their laptops away and wrote their essays longhand.

The process, she said, had not only deepened students’ understanding of the stories. It had also taught them about interacting with A.I. models, and how to coax a helpful response out of one.

“They have to understand, ‘I need this to produce an outline about X, Y and Z,’ and they have to think very carefully about it,” Ms. Shields said. “And if they don’t get the result that they want, they can always revise it.”

Over on Mastodon, I saw a professor explain how he is using ChatGPT: asking his students to create a prompt to generate an essay about the subject they’re studying, and then having them edit, correct, and rewrite the essay. They would then have to turn in their initial prompt, the initial output, and their revision. I actually think this is a more powerful learning tool than having someone just write an essay in the first place. I know that I learn a subject best when I’m forced to teach it to others (despite taking multiple levels of statistics in college, I didn’t fully feel I understood statistics until I had to teach a freshman stats class, and had to answer student questions all the time). ChatGPT presents a way of making students the “teacher” in this kind of manner, forcing them to more fully understand the issues, and even to correct ChatGPT when it gets stuff wrong.

All of that seems like a more valuable approach to education with AI than a semi-unreliable tool that tries to “catch” AI-generated text.

Oh, and in case you’re wondering, I ran this article through OpenAI’s classifier and it said:

The classifier considers the text to be very unlikely AI-generated.

Phew. But, knowing how unreliable it is, who can really say?

