Strengthening Society with AI Literacy

As the European Union's landmark AI Act begins to take effect in public institutions, attention is turning to how society can foster AI literacy, a requirement set out explicitly in Article 4 of the Act.
International IDEA (the International Institute for Democracy and Electoral Assistance) has emerged as an important actor in this endeavor, developing practical strategies to ensure that individuals and organizations have a sufficient understanding of how to work with artificial intelligence.
The EU AI Act reflects the value of informed use of AI systems through its risk-based, tiered structure. Article 4 places a specific obligation on providers and deployers of AI systems to ensure a "sufficient level of AI literacy" among their staff and among others who operate these systems on their behalf.
This is not only about technical expertise but about a broader understanding of AI's opportunities, risks and potential societal impacts, as reflected in the definition of AI literacy in Article 3(56). The Act calls for a context-sensitive interpretation that takes into account technical knowledge, experience, education and the other circumstances in which AI systems are used.
International IDEA, known for its work promoting democracy and human rights, is already meeting this demand with programmes such as its AI for Electoral Actors training programme. The initiative demonstrates how broad-based AI literacy can be nurtured: participants acquire not only core AI knowledge but also explore its ethical, human rights, social and political dimensions.
This is exactly the kind of multi-dimensional training the EU AI Act calls for, looking beyond purely technical aspects to fairness, non-discrimination, accountability, transparency and human oversight.
International IDEA's approach aligns closely with the transparent and responsible AI development and use that the EU AI Act envisions. Empowering stakeholders to critically analyse AI proposals, to request evidence-based assessments of rights and ethics from vendors, and to set rules for transparency directly serves the Act's aim of building trust and making AI deployment safe.
As the EU's AI Office develops further guidance and builds a repository of best practices, efforts such as International IDEA's will serve as valuable templates for organizations working to fulfill their AI literacy obligations and to build a more informed and resilient digital society.