Breaking Legal News & Current Law Headlines | Daily Legal Briefing

What The AI Industry Can Learn From The Media Industry

By Daily Legal Briefing
February 28, 2023
Legal Tech

News and media organizations have editorial policies and standards intended to define and guide the kind of content offered to their target audience. The Wall Street Journal targets a business audience with news, facts, and information. Disney has several properties that appeal to specific audiences in different media, but they all support a core mission of entertaining and inspiring people around the world through unparalleled storytelling.

For any media company, there are style guides, editorial policies, review processes, and editing. While some companies follow comprehensive policies and processes with rigor, others might not. Similarly, some companies are transparent about their policies, while others aren't.

Law Firms Should Consider How Artificial Intelligence (AI) Will Support Their Brand

With AI already a major factor in legal technology in 2023, law firms must assess the technology’s role in their business. Law firms are already posting job openings for “Legal Prompt Engineers,” and Allen & Overy is one of the first major law firms to deploy a firmwide GPT application.

In addition to broad-scale large language models (LLMs) like ChatGPT or the new Bing chat-assisted search, any organization that trains AI or that produces content with AI will likely want to consider developing a policy.

What techniques are used to query an LLM? The way a content creator queries an LLM matters. Newer techniques like chain prompting, in which a task is broken into a sequence of linked prompts, help LLMs explain their logic more transparently, which aids human review.
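As a rough illustration of prompt chaining (this sketch is not from the article, and `call_llm` is a hypothetical stand-in stubbed out so the flow runs without a live API), the key idea is that each intermediate step is kept for a human to review:

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call; stubbed for illustration.
    return f"[model response to: {prompt[:40]}...]"

def chained_query(question: str) -> dict:
    """Run a two-step prompt chain, keeping every intermediate step for review."""
    steps = {}
    # Step 1: ask the model to lay out its reasoning explicitly.
    steps["reasoning"] = call_llm(
        f"Explain, step by step, how you would answer: {question}"
    )
    # Step 2: feed that reasoning back and ask for a concise final answer.
    steps["answer"] = call_llm(
        f"Given this reasoning:\n{steps['reasoning']}\nAnswer concisely: {question}"
    )
    return steps

result = chained_query("Does this contract clause limit liability?")
```

Because the reasoning step is captured separately from the answer, a reviewer can inspect the chain rather than only the final output.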

Organizations that will create content using AI will want to consider how AI is trained, how AI is instructed, how AI formats output, and how AI-generated content is reviewed before publishing.

Just as large media companies stake their brands on the finished work product, so do law firms. An editor will review a news story while a partner will review an associate’s memo, whether generated by an AI or a human.
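The review step described above could be sketched as a simple pre-publication gate. This is a minimal illustration under assumed names (`Draft`, `ready_to_publish`, and the reviewer threshold are all hypothetical), not any firm's actual workflow:

```python
from dataclasses import dataclass, field

@dataclass
class Draft:
    text: str
    ai_generated: bool
    reviewed_by: list = field(default_factory=list)  # names of human reviewers

def ready_to_publish(draft: Draft, required_reviewers: int = 1) -> bool:
    """AI-generated drafts must clear at least one human review before publishing."""
    if draft.ai_generated:
        return len(draft.reviewed_by) >= required_reviewers
    return True
```

The design choice mirrors the editor/partner analogy: human-written work may follow a different path, but AI-generated work never ships without a named human signing off.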

Relating Editorial Policy To AI

Few people have heard of Meta’s Galactica LLM, even though it was released in demo form two weeks prior to OpenAI’s ChatGPT. Why? Because it was pulled down just three days later, after its responses exhibited bias and spewed nonsense. Galactica’s AI training was not as good as ChatGPT’s, and consumers of Galactica got to see the nonsense firsthand.

AI-generated content and LLMs are in their infancy. New policies, guidelines, and styles will need to be developed for AI-generated content. After all, content is content, whether generated by humans, machines, or humans and machines.

What is the role of AI training in creating consistent output? What is the role and responsibility of those who generate AI content to review it, similar to that of an editor? Galactica's output was analogous to a media company hiring trained journalists and untrained writers alike, and then publishing with little or no editorial review.

Part of the success of ChatGPT is that significant chunks of objectionable content were tagged as such so the AI training would recognize ugly content like hate speech, sexual abuse, torture, and worse.
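OpenAI's actual pipeline is more involved than this (the tagged examples were used to train safety systems, not merely dropped), but a minimal sketch of how such human-applied tags can screen a training set might look like the following; the tag names and data shape are illustrative assumptions:

```python
# Illustrative tag names; real moderation taxonomies are far more detailed.
OBJECTIONABLE = {"hate_speech", "sexual_abuse", "graphic_violence"}

def screen_training_set(examples):
    """Drop any example carrying a tag from the objectionable set."""
    return [
        ex for ex in examples
        if not set(ex.get("tags", [])) & OBJECTIONABLE
    ]

corpus = [
    {"text": "A neutral news summary.", "tags": []},
    {"text": "(flagged content)", "tags": ["hate_speech"]},
]
clean = screen_training_set(corpus)  # only the untagged example survives
```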

In the analogy above, ChatGPT’s AI training has standardized more of the writing. It has reduced the number of untrained writers in the analogy but still has a ways to go. ChatGPT output will require review, similar to an editor, in most use cases. That applies even if the content will only be consumed by the user who created it.

A Glimpse Into Training ChatGPT

A little-known fact is that a lot of the dirty work in finding objectionable content and removing it from the AI training sets used by ChatGPT was initially outsourced to workers in Kenya. The steps to create ethical AI and responsible AI in ChatGPT required some poor souls to review some horrific content. That may bother some readers as it bothers me. Just realize the concept isn’t new. Some people play similar roles in the media industry. Video editors watch some pretty horrific content during the editing process and then pixelate the images or cut away at just the right time to protect the broader viewing audience.

AI Editorial Policies Have Parallels To Traditional Content Creation Editorial Policies

There are parallels between AI content creation and traditional content creation. The decisions made by the Kenyan workers under the direction of OpenAI represent a de facto AI training policy. The de facto policy will evolve, and it is unknown if OpenAI will ever summarize or publish a training policy.

The GPT-3 model has more than 175 billion machine learning parameters, and GPT-4 will likely measure its parameters in the trillions, which should noticeably improve its ability to answer questions accurately.

When an AI service provider eventually publishes its AI training policy, it will help those creating content understand the output: what is intentional policy, and what is AI bias or error that needs to be corrected. AI training won't ever filter out everything.

Regulation Is Probably Inevitable

If self-regulation and disclosure don't occur, you can be sure legislative bodies will begin to require disclosures. In 2021, the European Union proposed a regulatory framework for AI.

The awareness of AI-related issues is ramping up faster than that of Internet-related issues. Similar events occurred in the early days of the Internet when privacy was a concern. The Electronic Frontier Foundation and TRUSTe (now TrustArc) advocated for personal liberties and voluntary disclosures to protect the privacy and rights of individuals. Eventually, privacy policies became standard fare on websites. Today, however, laws and regulations such as the GDPR and CCPA define much of what can be done with personally identifiable information.

We are quickly entering new territory here, and taking the necessary steps to prepare for what may come is essential. Let's learn from others who have already traveled a similar journey.


Ken Crutchfield is Vice President and General Manager of Legal Markets at Wolters Kluwer Legal & Regulatory U.S., a leading provider of information, business intelligence, regulatory and legal workflow solutions. Ken has more than three decades of experience as a leader in information and software solutions across industries. He can be reached at ken.crutchfield@wolterskluwer.com.

