AI Sovereignty


The 2023 AI Index report from the Stanford Institute for Human-Centered Artificial Intelligence (HAI) is now out.
Rather than bother you with the details (the full report is available here: https://aiindex.stanford.edu/report/), I want to share some key points with you.
Depending on where one comes from, AI sounds like different music.
Some elements of this report, though, should get the attention of a non-technical audience.


1. The private sector dominates research. This is generally not good news, since it means research is business-driven and no longer pursues pure curiosity and improvement. We saw this happen in pharmaceuticals decades ago. Do we want the same to happen with AI?


2. Issues linked to AI misuse, whether intentional or not, are on the rise.
Disinformation comes first, with generative AI being used to create fake content.
There are also biases in the data used to train the models. Some are now well known (race, gender); others remain to be discovered. Moreover, knowing about a bias does not mean action was taken to counter it. We are all subject to biases, whether we accept it or not; one simple way to put a number on such a bias is sketched after this list.

3. Legislation has increased. The use of AI will face more and more restrictions: from the origin of the data to the explainability of the model, rules will become increasingly constraining.

4. Unsurprisingly, China is the leading country in research output, with the USA not far behind.

5. And finally, the fear felt by AI users. In fact, regular AI users do not understand, or even know, what a model is. Worse still, they are wary of a trend that hides behind tricky mathematics.
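
To make the bias point above concrete, here is a minimal sketch of one well-known fairness metric, demographic parity difference, applied to a hypothetical loan-approval model. The column names and toy data are illustrative assumptions, not figures from the report.

```python
# Illustrative sketch: demographic parity difference, the gap in
# positive-outcome rates between groups. Column names ("gender",
# "approved") and the toy data below are hypothetical.
import pandas as pd

def demographic_parity_difference(df: pd.DataFrame,
                                  group_col: str,
                                  outcome_col: str) -> float:
    """Gap between the highest and lowest positive-outcome rates
    across groups; 0.0 means parity on this one metric."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical predictions from a loan-approval model.
predictions = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "M", "F", "M", "F"],
    "approved": [0,    1,   1,   1,   0,   0,   1,   1],
})

gap = demographic_parity_difference(predictions, "gender", "approved")
print(f"Demographic parity difference: {gap:.2f}")  # 0.25 on this toy data
```

A large gap flags a potential bias worth investigating; a small gap does not prove fairness, since other metrics (equalized odds, calibration) can still fail.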

As a final thought on the points highlighted from the report, I would first emphasise that AI is a tool, not an end in itself. With this in mind, one should take measures toward a safer use of AI.
Companies relying on it should be able to control their input data and explain their models.
Governments and legislators should consider the best interests of users and anticipate AI-related abuses.
AI-related topics should be included in education programs as early as possible.
In the same way that open source software allows a company to keep control of its information system, explainable AI models will allow companies to build trust and governance over their data use; a minimal illustration follows.
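
As one concrete example of what "explaining a model" can look like in practice, here is a minimal sketch using scikit-learn's permutation importance on a public dataset. The dataset and model are placeholders for whatever a company actually runs, and richer tools (SHAP, LIME) exist for deeper explanations.

```python
# Minimal explainability sketch: permutation importance with scikit-learn.
# Shuffling a feature and measuring the score drop reveals which features
# the model actually relies on. Dataset and model are placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five features the model depends on most.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: -pair[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

The question this answers, "which inputs drive the model's decisions?", is exactly the kind of question that the explainability rules mentioned in point 3 will ask companies to answer.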

