Do we still need human judges in the age of Artificial Intelligence?

Credit: Pixabay/Geralt.

The Fourth Industrial Revolution is fusing disciplines across the digital and physical worlds, with legal technology the latest example of how improved automation is reaching further and further into service-oriented professions. Casetext, for example, a legal tech startup providing Artificial Intelligence (AI)-based research for lawyers, recently secured $12 million in one of the industry's largest funding rounds. But research is just one area where AI is being used to assist the legal profession.

Others include contract review and due diligence, analytics, prediction, the discovery
of evidence, and legal advice. Technology and the law are converging, and where
they meet new questions arise about the relative roles of artificial and human
agents—and the ethical issues involved in the shift from one to the other.
While legal technology has largely focused on the activities of the bar, it
challenges us to think about its application to the bench as well. In
particular, could AI replace human judges?

Before going any further, we should distinguish algorithms from Artificial Intelligence.
In simple terms, algorithms are self-contained instructions, and are already
being applied in judicial decision-making. In New Jersey, for example, the Public Safety Assessment algorithm supplements judges' bail decisions by using data to estimate the risk of granting bail to a defendant. The idea is to help judges decide more objectively, and to increase access to justice by reducing the costs associated with complicated manual bail assessments.
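To make the idea concrete, here is a minimal, hypothetical sketch in Python of how a point-based pretrial assessment might work. Every factor, weight, and threshold below is invented for illustration; these are not the actual Public Safety Assessment rules.

```python
# Hypothetical point-based pretrial risk score, loosely inspired by
# tools like the Public Safety Assessment. All factors, weights, and
# thresholds are invented for illustration -- NOT the actual PSA rules.

from dataclasses import dataclass


@dataclass
class Defendant:
    pending_charge: bool
    prior_convictions: int
    prior_failures_to_appear: int


def raw_risk_score(d: Defendant) -> int:
    """Sum weighted risk factors into a raw score (higher = riskier)."""
    score = 0
    if d.pending_charge:
        score += 1
    score += min(d.prior_convictions, 2)            # cap each factor's contribution
    score += 2 * min(d.prior_failures_to_appear, 2)
    return score


def risk_category(raw: int) -> str:
    """Map the raw score onto a coarse scale a judge might review."""
    if raw <= 1:
        return "low"
    if raw <= 4:
        return "moderate"
    return "high"


d = Defendant(pending_charge=True, prior_convictions=1,
              prior_failures_to_appear=0)
print(raw_risk_score(d), risk_category(raw_risk_score(d)))  # -> 2 moderate
```

The design point is that the output is advisory: the tool produces a score, and the judge remains the decision-maker.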

AI is more difficult to define. People often conflate it with machine learning, which is a machine's ability to learn patterns from data and then apply them to new data without being explicitly programmed. Deeper machine learning techniques can take in enormous amounts of data, using neural networks to simulate human decision-making. AI subsumes machine learning, but the term is also sometimes used to describe a futuristic machine super-intelligence far beyond our own.
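To see that distinction in code, here is a minimal sketch using Python and scikit-learn: the classifier is never given explicit rules, only labeled historical examples, and it learns a pattern it can then apply to a case it has never seen. The features and labels are synthetic and purely illustrative.

```python
# A minimal machine-learning sketch: rather than hand-coding rules,
# the model infers a pattern from labeled historical examples and then
# applies it to an unseen case. All data here is made up.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [prior_convictions, prior_failures_to_appear];
# label 1 = failed to appear, 0 = appeared (synthetic training data).
X_train = np.array([[0, 0], [1, 0], [0, 1], [3, 2], [2, 1], [4, 3]])
y_train = np.array([0, 0, 1, 1, 1, 1])

model = LogisticRegression()
model.fit(X_train, y_train)   # the "learning" step: no explicit rules written

new_case = np.array([[1, 1]])
print(model.predict(new_case))        # predicted label for the unseen case
print(model.predict_proba(new_case))  # the probability behind that prediction
```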

The idea of AI judges raises important ethical issues around bias and autonomy. AI programs may incorporate the biases of their programmers and the humans they interact with. For example, a Microsoft AI Twitter chatbot named Tay became racist, sexist, and anti-Semitic within 24 hours of interactive learning with its human audience. But while such…
