Artificial Intelligence article proofreading

Follow up to a05fd066c9
This commit is contained in:
vincent 2021-09-06 21:58:31 +02:00
parent a2f3b128cf
commit c3b2900eba
Signed by: vincent
GPG Key ID: 6CD601F050AC5A49
1 changed file with 94 additions and 88 deletions


@ -13,23 +13,23 @@
</h1>
<div id="introduction">
<p>
Technical improvements, the accumulation of large, detailed datasets, and
advances in computer hardware have led to an Artificial Intelligence (AI)
revolution. For example, breakthroughs in computer vision, the building of
large datasets, and advances in text analysis coupled with the gathering
of personal data have given birth to countless AI applications. These new
AI applications have brought many benefits to European Union citizens.
However, because of its inherent complexity and its requirements in
technical resources and knowledge, AI may undermine our ability to control
technology and put fundamental freedoms at risk. Therefore, introducing
new legislation on AI is a worthwhile objective.
</p>
<p>
In the context of new legislation, this article explains how releasing
AI applications under Free Software licences paves the way for more
accessibility, transparency, and fairness.
</p>
</div>
@ -54,74 +54,76 @@
<p>
These freedoms are granted by releasing software under a Free Software
licence, whose terms are compatible with the aforementioned freedoms. There
exist multiple Free Software licences with different goals, and software
may be licensed under more than one licence. Because an AI application
requires its training code and training data in order to be freely
modified, both need to be released under a Free Software licence for the
AI to be considered Free.
</p>
<h2 id="accessibility">Accessibility</h2>
<p>
Accessibility for AI means making it reusable, so that everyone may tinker
with it, improve it and use it for their own purposes. To make AI reusable,
it can be released under a Free Software licence. The advantages of this
approach are many. By having open legal grounds, Free AI fosters
innovation, because one does not have to deal with artificial restrictions
that prevent people from reusing work. Making AI Free therefore saves
everyone from having to reinvent the wheel, enabling researchers and
developers alike to focus on creating new, better AI software instead of
rebuilding blocks and reproducing previous work again and again. In
addition to improving efficiency by sharing expertise, Free AI lowers the
cost of development by saving time and removing licence fees. All of this
improves the accessibility of AI, which leads to better and more democratic
solutions, as everyone can participate.
</p>
<p>
Making AI reusable also makes it easier to base specialised AI models upon
more generic ones. If a generic AI model is released as Free Software,
rather than training a new model from scratch, one can leverage the
generic model as a starting point for a specific, downstream prediction
task. For example, one can use a generic computer vision model<a
href="#fn-1" id="ref-1" class="fn">1</a><span class="fn">,</span><a
href="#fn-2" id="ref-2" class="fn">2</a> as a starting point for managing
public infrastructure which requires specific image processing. Just as
with accessibility in general, this approach has a key advantage: generic
models with a lot of parameters and trained on large datasets may make the
downstream task easier to learn. This makes AI more accessible by lowering
the barrier to entry and making it easier to reuse existing work.
</p>
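<p>
The reuse workflow described above can be sketched in a few lines. This is
a minimal illustration under stated assumptions, not the article's own
code: the "generic model" is stood in by a fixed (frozen) random feature
extractor, and the downstream task is synthetic, but the structure — keep
the generic model frozen, train only a small task-specific head — is the
same as when fine-tuning a real pretrained vision model.
</p>
<pre><code>
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a generic, pretrained feature extractor released as Free
# Software: a fixed projection followed by a nonlinearity. In practice this
# would be a pretrained network; here it is a hypothetical placeholder.
W_generic = rng.normal(size=(2, 32))

def extract_features(x):
    return np.tanh(x @ W_generic)  # frozen: these weights are never retrained

# Synthetic downstream task: classify points by which side of a line they lie on.
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# "Fine-tuning" reduces to training a small linear head on frozen features
# (here via least squares instead of gradient descent, for brevity).
F = extract_features(X)
head, *_ = np.linalg.lstsq(F, 2 * y - 1, rcond=None)

preds = (extract_features(X) @ head > 0).astype(int)
accuracy = (preds == y).mean()
</code></pre>
<p>
Only the small head is trained, which is why starting from a generic model
lowers the barrier to entry: the expensive part — the generic features —
is reused rather than rebuilt.
</p>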
<p>
However, making both the source code used to train the AI application and
the corresponding data Free is sometimes not enough to make it accessible.
AI requires a huge amount of data in order to identify patterns and
correlations which lead to correct predictions. Conversely, not having
enough data reduces its ability to understand the world. Furthermore, big
datasets and their inherent complexity tend to make AI models large, making
their training time-consuming and resource-intensive. The complexity of
handling the data required to train AI models, coupled with the knowledge
required to develop them and to manage large computing capacity, demands a
lot of human resources. Therefore, it may be hard to exercise the freedoms
offered by Free AI, even though its training source code and data might be
released as Free Software. In those cases, releasing the trained AI models
as Free Software would greatly improve accessibility.
</p>
<p>
Finally, it should be noted that, just like any other technology, making AI
reusable by everyone can potentially be harmful. For example, reusing a face
detector released as Free Software as part of facial recognition software
can cause human rights issues. However, this holds true regardless of the
technology involved. If a software use case is deemed harmful, it should be
prohibited as such, without the need for an explicit ban on AI technology.
</p>
<h2 id="transparency">Transparency</h2>
<p>
AI transparency can be subdivided into openness and interpretability. In this
context, openness is defined as the right to be informed about the AI
software, and interpretability is defined as being able to understand how
the input is processed so that one can identify the factors taken into
@ -147,20 +149,20 @@
used and how it was processed by the AI should be made available. Moreover,
trust and adoption of AI would consequently be higher. Furthermore, modern
AI technologies such as deep learning are not meant to be transparent,
because they are composed of millions or billions of individual parameters<a
href="#fn-7" id="ref-7" class="fn">7</a>, making them very complex and hard
to understand. This calls for Free Software which can assist in analysing this
complexity.
</p>
<p>
Technologies released as Free Software to make AI more transparent already
exist. For example, Local Interpretable Model-Agnostic Explanations
(LIME)<a href="#fn-8" id="ref-8" class="fn">8</a> is a software package
which simplifies a complex prediction model by simulating it with a simpler,
more interpretable version, thus enabling users of the AI to understand the
parameters that played a role in the prediction. Figure 1 illustrates this
process by comparing predictions made by two different models. Captum<a
href="#fn-9" id="ref-9" class="fn">9</a> is a library released as Free
Software providing an attribution mechanism allowing one to understand the
relative importance of each input variable and each parameter of a deep
learning model. Making AI more transparent is therefore possible.
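<p>
The core idea behind LIME can be sketched by hand. The snippet below is an
illustration of the technique, not the API of the lime package itself: a
black-box model is probed with perturbed copies of one instance, and a
proximity-weighted linear surrogate fitted to its answers exposes which
features drove the prediction locally.
</p>
<pre><code>
import numpy as np

rng = np.random.default_rng(42)

# A black-box prediction model. Its internals are known here only so the
# surrogate's output can be checked; LIME itself never looks inside it.
def black_box(X):
    return 3.0 * X[:, 0] - 1.0 * X[:, 1] + 0.0 * X[:, 2]

instance = np.array([1.0, 2.0, 3.0])

# 1. Perturb the instance of interest with small random noise.
samples = instance + rng.normal(scale=0.5, size=(1000, 3))

# 2. Weight each perturbed sample by its proximity to the instance.
distances = np.linalg.norm(samples - instance, axis=1)
weights = np.exp(-(distances ** 2))

# 3. Fit a weighted linear surrogate to the black-box outputs.
sw = np.sqrt(weights)[:, None]
A = np.hstack([samples, np.ones((len(samples), 1))])  # add an intercept column
coef, *_ = np.linalg.lstsq(A * sw, black_box(samples) * sw[:, 0], rcond=None)

# coef[:3] approximates each feature's local influence on the prediction.
local_importance = coef[:3]
</code></pre>
<p>
Because the surrogate is linear, a user can read off that the first feature
pushes the prediction up and the second pushes it down, even without access
to the black box's internals.
</p>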
@ -170,8 +172,8 @@
<figcaption>Figure 1: example of prediction explanations by LIME<a href="#fn-8" id="ref-8" class="fn">8</a></figcaption>
</figure>
<p>
Although a proprietary AI model can be transparent, Free Software facilitates
transparency by making auditing and inspection easier. While some data might be
too sensitive to be released under a Free Software license, statistical
properties of the data can still be published. With Free Software, everyone
is able to run the AI to understand how it is made, and look up the data
@ -184,40 +186,41 @@
Another benefit of Free Software in this context is that by granting the
right to improve the AI software and share improvements with others, it
allows everybody to improve transparency, thereby preventing vendor lock-in
where one has to wait until the software provider makes the AI software more
transparent.
</p>
<h2 id="fairness">Fairness</h2>
<p>
In Artificial Intelligence (AI), fairness is defined as making it free of
harmful discrimination based on one's sensitive characteristics such as
gender, ethnicity, religion, disabilities, or sexual orientation. Because AI
models are trained on datasets containing human behaviours and activities
that can be unfair, and AI models are designed to recognise and reproduce
existing patterns, they can create harmful discrimination and human rights
violations. For example, COMPAS<a href="#fn-10" id="ref-10"
class="fn">10</a>, an algorithm attributing scores which indicate how
likely one is to re-offend, was found to be unfair towards African
Americans<a href="#fn-11" id="ref-11" class="fn">11</a>: for them, 44.9%
of cases were false positives. The algorithm attributed a high chance of
recidivism despite the defendants not
re-offending. Conversely, 47.7% of the cases for white people were labeled
as low risk of recidivism despite them re-offending. Suspected unfairness
has also been found in healthcare<a href="#fn-12" id="ref-12"
class="fn">12</a>, where an algorithm was used to attribute risk scores to
patients, thereby identifying those needing additional care resources. To
have the same risk scores as white people, black people needed to be in a
worse health situation, in terms of severity of hypertension, diabetes,
anaemia, bad cholesterol, or renal failure. Therefore, real fairness issues
may exist in AI algorithms. Moreover, from a legal perspective, checking for
fairness issues is required by Recital 71 of the GDPR, which requires one to
“<em>prevent, inter alia, discriminatory effects on natural persons on the
basis of racial or ethnic origin, political opinion, religion or beliefs,
trade union membership, genetic or health status or sexual orientation, or
processing that results in measures having such an effect</em>”. We thus
need solutions to detect potential fairness issues in the datasets on which
AI is trained and correct them when they occur.
</p>
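<p>
The unfairness reported for COMPAS above is a gap in false positive rates
between groups. Such a gap is straightforward to measure once predictions
and ground truth are available; the sketch below uses made-up data, not the
COMPAS dataset, to show the computation.
</p>
<pre><code>
import numpy as np

# Hypothetical data: 1 = predicted/actual re-offence, 0 = none.
# 'group' is the sensitive attribute (0 and 1 are two demographic groups).
group     = np.array([0, 0, 0, 0, 1, 1, 1, 1])
actual    = np.array([0, 0, 1, 1, 0, 0, 1, 1])
predicted = np.array([1, 0, 1, 1, 1, 1, 1, 0])

def false_positive_rate(pred, true):
    # Share of truly negative cases that were wrongly flagged positive.
    negatives = true == 0
    return (pred[negatives] == 1).sum() / negatives.sum()

fpr_by_group = {
    g: false_positive_rate(predicted[group == g], actual[group == g])
    for g in (0, 1)
}
disparity = abs(fpr_by_group[0] - fpr_by_group[1])
</code></pre>
<p>
A large disparity, as between the 44.9% and 47.7% error patterns cited
above, is exactly the kind of signal an audit of a Free AI application can
surface.
</p>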
<p>
To detect fairness, one needs to quantify it. There are many ways to
@ -237,11 +240,11 @@
<ol>
<li>
Remove the sensitive attribute (e.g. gender, ethnicity, religion, etc.)
from the dataset. This approach may not work in real-world scenarios
because removing the sensitive attribute might not be enough to completely
mask it, as the sensitive attribute is often correlated with other
attributes of the dataset. Removing it may therefore not be sufficient, and
removing all attributes correlated with it may lead to a lot of information
loss;
</li>
<li>
@ -249,7 +252,7 @@
by a sensitive characteristic;
</li>
<li>
Optimise the AI model for accuracy and fairness at the same time. While
the algorithm is trained on an existing dataset that contains unfair
discrimination, the training considers both its accuracy and its fairness<a
href="#fn-15" id="ref-15" class="fn">15</a>. In other words, add fairness
@ -258,30 +261,33 @@
</ol>
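<p>
The pitfall of the first method can be made concrete. In the hypothetical
sketch below (the "postcode area" attribute and the 90% correlation are
invented for illustration), the sensitive column is dropped from the
training data, yet a correlated proxy attribute still recovers it almost
perfectly.
</p>
<pre><code>
import numpy as np

rng = np.random.default_rng(7)
n = 1000

# Hypothetical dataset: a sensitive attribute and a strongly correlated
# proxy (e.g. postcode area mirrors demographic group 90% of the time).
sensitive = rng.integers(0, 2, size=n)
flipped = rng.random(n) < 0.1
postcode_area = np.where(flipped, 1 - sensitive, sensitive)

# The sensitive column is removed from the training data...
# ...but a trivial rule on the proxy still recovers it:
recovered = postcode_area  # predict group = postcode area
leakage_accuracy = (recovered == sensitive).mean()
</code></pre>
<p>
Because the proxy leaks roughly 90% of the sensitive information, a model
trained without the sensitive column can still discriminate through it,
which is why the second and third methods above are usually needed.
</p>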
<p>
If those methods are used, having a perfectly accurate and fair algorithm is
impossible<a href="#fn-14" id="ref-14" class="fn">14</a>, but if the accuracy
is defined on a dataset known to contain unfair treatment of a particular
group, having a less than perfect accuracy may be deemed acceptable.
</p>
<p>
Because an AI application released as Free Software may be used and
inspected by everyone, verifying whether it is free of potentially harmful
discrimination is easier than if it were proprietary. Moreover, this
synergises with AI transparency (see Section <a
href="#transparency">Transparency</a>), as a transparent AI application
facilitates the understanding of the factors considered for making
predictions. While necessary, releasing an AI application as Free Software
does not make it fair. However, it makes fairness easier to evaluate and
enforce.
</p>
<h2 id="conclusions">Conclusions</h2>
<p>
In this article, potential issues around the democratisation of Artificial
Intelligence (AI) and their implications for human rights are highlighted,
and potential Free Software solutions are presented to tackle them. In
particular, it is shown that AI needs to be accessible, transparent and
fair in order to be usable. While not a sufficient solution, releasing AI
under Free Software licences is necessary for its widespread use throughout
our information systems by making it more scrutable, trustworthy and safe
for everyone.
</p>
<h2 id="fn">References</h2>