How Will AI Change Law & the Legal Industry? (Level 2)
The AI Act's risk-based approach to regulation and three conflicts between AI and law: the copyright issue, the black box issue, and the evaluation issue.
Let’s explore how AI will change law and the legal industry.
I propose we divide the question into three sub-categories.
Each one represents a level of abstraction, from the practical (Level 1) to the more speculative (Level 2), and finally to metaphysical, futuristic thinking (Level 3).
Level 1: How will lawyers use AI in their work?
Level 2: How will regulation affect AI?
Level 3: How will AI affect society?
This week, we have reached Level 2. We are no longer considering how law firms can use AI but rather how laws will affect AI's development.
Level 2: How will regulation affect AI?
It’s hard to regulate something we don’t understand. Ideally, before we make new laws to govern a technology, lawmakers and practitioners should understand at least:
How the technology works.
What it is good for.
Its capabilities, potential, and limits.
The risks and dangers involved in using it.
Unfortunately, we don’t understand any of these things when it comes to AI. Or rather, only very few people do, and only to some extent. Those who have a deep understanding of the technical side may not understand AI’s social implications, and vice versa. Overall, there is a lot to take in. And a year from now, even today’s expert knowledge will probably be obsolete.
The “knowledge gatekeepers” are Big Tech companies. They have “skin in the game” and can influence the opinion of a broad cross-section of the populace, both directly and indirectly through armies of influencers. In the media, tech leaders are often brought forward as crown witnesses to attest to AI’s development and future potential. It’s easy to forget that these tech leaders have financial stakes bundled up in the industry and are therefore, by default, not objective, trustworthy sources. We should expect them to exaggerate, if not downright lie, to downplay some aspects of the technology and overemphasize others.
As I see it, the major barrier to effective AI regulation is that objective and truthful information is hard to obtain. Governments around the world have to consult with independent experts and scientists to build a strong framework for regulating AI, while the bulk of AI talent flows to Big Tech companies and other places where cutting-edge knowledge goes to die. Yet, in my opinion, laws and regulations WILL play a strong role in the development and future use of AI.
In this post, I will give you a brief overview of the risk-based structure of the world’s most comprehensive AI law, the EU’s AI Act, and highlight some of its common criticisms. Then, I will go through three specific points of conflict between AI and law: the copyright issue, the black box issue, and the evaluation issue. How these conflicts are resolved may have a palpable impact on the future direction of AI.