Law firms moving quickly in the field of artificial intelligence weigh the benefits, risks, and the unknown


“If you think about their ability to collect, analyze and summarize a lot of data, it’s a great start to any legal project,” says Bennett B. Borden, partner at DLA Piper and data scientist. Image from Shutterstock.

Updated: In the fall of 2022, David Wakeling, head of the London law firm Allen & Overy’s Markets Innovation Group, got a glimpse of the future. Months before ChatGPT launched, he piloted Harvey, a platform built on OpenAI’s GPT technology and tailored to large law firms.

“As I was peeling the onion, I saw that it was very dangerous. I’ve been playing with technology for a long time. It’s the first time the hair has ever stood up on the back of my neck,” Wakeling says.

Allen & Overy soon became one of Harvey’s first adopters, announcing in February that 3,500 of its lawyers were using the platform across 43 offices. Then in March, the accounting firm PricewaterhouseCoopers announced a “strategic alliance” with the San Francisco-based startup, which recently raised $21 million in funding.

Other major law firms have been adopting generative AI products at a dizzying pace or developing in-house platforms. DLA Piper partner and data scientist Bennett B. Borden calls it “the most transformative technology” since the computer, saying it is well suited to lawyers because it can speed up mundane legal tasks, freeing them to focus on higher-value work.

“If you think about their ability to collect, analyze and summarize a lot of data, it’s a great start to any legal project,” says Borden, whose firm uses Casetext’s AI-powered paralegal, CoCounsel, for legal research, document review and contract analysis. (In June, Thomson Reuters announced that it had agreed to buy Casetext for $650 million.)

However, generative AI is also forcing firms to confront the risks of adopting a technology that remains largely unregulated. In May, Gary Marcus, a leading expert on artificial intelligence, warned the U.S. Senate Judiciary Subcommittee on Privacy, Technology, and the Law that even the makers of generative AI platforms “don’t fully understand how they work.”

Law firms and legal technology companies must grapple with the security and privacy challenges that come with using the software, as well as its tendency to produce inaccurate and biased answers.

These concerns were thrown into sharp relief after a lawyer was found to have relied on ChatGPT for citations in a brief filed in March in New York federal court. The problem? The cited cases did not exist. The chatbot had made them up.

Cautious and proactive

Harvey’s representatives did not respond to multiple interview requests. But to guard against inaccuracy and bias, Karen Buzzard, a partner at Allen & Overy in New York, says the firm has a robust program of training and verification, and lawyers are presented with “rules of use” before accessing the platform.

“Whatever your level, from the most junior to the most senior, if you use it, you have to validate the output or you might embarrass yourself,” Wakeling says. “It’s really disruptive, but isn’t every major technological change disruptive?”

Other law firms are more cautious, however. In April, a Thomson Reuters survey of midsize and large law firms’ attitudes toward generative AI suggested that the majority “take a cautious but pragmatic approach.” It found that 60% of respondents had no “current plans” to use the technology; only 3% said they were using it, and just 2% were “actively planning to use it.”

David Cunningham, chief innovation officer at Reed Smith, says his firm is being proactive as it evaluates generative AI. It is currently piloting Lexis+ AI and CoCounsel, and plans to try Harvey in the summer and BloombergGPT when it becomes available.

“I can’t say we’ve become more conservative,” Cunningham says. “I would say we are more serious about making sure that we do that through guidance and policies and training and really focus on the quality of the output.”

He says the firm’s pilot program is focused on commercial systems where the firm knows “the guardrails, we know security, we know retention policies” and “we know the governance issues.”

“The reason we tread carefully is because the products are immature. The products have not yet achieved the quality, reliability, transparency and consistency that we would expect a lawyer to rely on,” he says.

There’s a stark difference between “public chatbots” like ChatGPT and CoCounsel, which is built on OpenAI’s GPT-4 large language model but trained on law-focused datasets, and in which data is kept secure, monitored, encrypted and audited, says Pablo Arredondo, Casetext’s chief innovation officer.

He understands why some are taking a more cautious approach, but predicts the benefits will soon be “very tangible, undeniable, and I think you’re going to see an increase in adoption.”

New laws

Meanwhile, regulators are trying to catch up. In May, OpenAI CEO and co-founder Sam Altman urged lawmakers in Congress to regulate the technology. He later suggested OpenAI could withdraw from the European Union over its proposed artificial intelligence act, which includes requirements to block illegal content and to disclose any copyrighted works used to train generative AI platforms.

In October, the White House released its Blueprint for an AI Bill of Rights, which calls for protections against “unsafe or ineffective” AI systems, algorithmic discrimination and practices that violate data privacy, as well as notice so people know when an AI system is being used and how it affects them, and the ability to opt out of AI systems entirely.

In January, the National Institute of Standards and Technology released an AI Risk Management Framework to encourage innovation and help organizations build trustworthy AI systems by governing, mapping, measuring and managing risk.

But the public had to wait until June for Senate Majority Leader Chuck Schumer to outline his long-awaited strategy for regulating the technology. He offered a framework for regulation and said the Senate would hold a series of forums with AI experts before drafting policy proposals. Then, in July, The Washington Post reported that the Federal Trade Commission was investigating OpenAI’s data security practices and whether they have harmed consumers.

However, DLA Piper partner Danny Tobey says there is a risk of overregulation driven by panic and misconceptions about how advanced the technology really is.

“I worry about laws that are outdated even before they are enacted, or that stifle innovation and creativity,” he says.

Marcus, for his part, told lawmakers in May that AI systems must be free of bias, be transparent, protect privacy and, “above all, be safe.”

“Existing systems are not transparent, do not adequately protect our privacy, and continue to perpetuate bias,” Marcus said. “Most of all, we cannot remotely guarantee that they are safe.”

Others have called for a halt to the development of large language models until the risks are better understood. In March, the Center for AI and Digital Policy, a technology ethics group, filed a complaint with the FTC demanding that it halt further commercial releases of GPT-4. The complaint followed an open letter signed by thousands of technology experts, including SpaceX, Tesla and Twitter CEO Elon Musk, calling for a six-month pause on training generative AI language models more powerful than GPT-4.

Ernest Davis, a professor of computer science at New York University, was among those who signed the letter and believes a moratorium is “a very good idea.”

“They release software before it’s ready for public use, just because the competitive pressure is so overwhelming,” he says.

But Borden says there is “no global authority” or global governance for AI, so even if a freeze were a good idea, “it just isn’t possible.”

“Pausing AI is like pausing the weather,” Tobey adds. “We have an imperative to innovate because countries like China are doing it at the same time. But companies and industries have a role to play in shaping their internal governance to make sure these tools are adopted safely, just like any other.”

Updated July 20 at 11:20 a.m. to include additional reporting and information about the FTC’s investigation of OpenAI and Senate Majority Leader Chuck Schumer’s announcement on a framework for regulation. Updated Aug. 9 at 11:23 a.m. to reflect Allen & Overy’s announcement in February that 3,500 attorneys use Harvey across 43 offices.

