"The use of artificial intelligence in public services has many advantages but also unintended negative consequences. We need to ensure that the technology works equally for everyone," says Pouria Akbarighatar.
He is a doctoral candidate at the University of Agder (UiA) and recently published a study on how artificial intelligence can help secure equal rights for all users.
Guide for Equal Treatment
"The study proposes a framework for use by organisations that want to implement services that involve artificial intelligence. The framework will serve as a template or guide that minimizes discrimination and strengthens users' rights," says Akbarighatar.
According to the study, individuals should retain control over how artificial intelligence affects their lives throughout the delivery of services.
"There are already numerous guidelines for the responsible use of artificial intelligence, published by academia, business, and government. However, these guidelines tend to be presented as long, confusing lists. While each item may be correct on its own, they lack a clear and comprehensive context," says the researcher.
Built on Moral Philosophy
The framework is built on the moral philosopher John Rawls' theory of justice as fairness.
"Robot arms and AI-controlled systems imitate human abilities, but the human aspect is crucial to ensuring that the technology works for the benefit of individuals. This includes strengthening privacy, minimizing security risks, and ensuring transparency for different groups," according to the researcher.
Ilias O. Pappas and Polyxeni Vassilakopoulou, both from UiA, are co-authors of Pouria Akbarighatar's article.
The study will be presented at the European Conference on Information Systems (ECIS), hosted at UiA from 11 to 16 June.
UiA's Department of Information Systems is organizing the conference, which expects around 800 participants from around the world.