
Artificial intelligence (AI) was once a science-fiction concept, but it is becoming ever more widespread. It can be found in mobile phones and automobiles, and it powers tools in agriculture and healthcare.
However, concerns have been raised about the accountability of AI and related technologies such as machine learning. In December 2020, Timnit Gebru, a computer scientist on Google's Ethical AI team, was dismissed. She had previously raised the alarm about the social consequences of bias in AI technologies.
In a 2018 paper, Gebru and another researcher, Joy Buolamwini, had shown that facial recognition software was less accurate at identifying women and people of color than white men. Biases in training data can have far-reaching and unforeseen consequences.
There is already a significant body of research on AI ethics. It emphasizes the importance of principles in ensuring that technologies do not simply entrench existing biases or create new societal harms. As the UNESCO draft recommendation on the ethics of AI states:
We need international and national policies and regulatory frameworks to ensure that these emerging technologies benefit humanity as a whole.
Many frameworks and guidelines have been developed in recent years to identify the aims and priorities of ethical AI.
This is certainly a step in the right direction. But when it comes to concerns about bias and inclusivity, it is also necessary to look beyond technical solutions. Biases can creep in at the level of whoever sets the goals and balances the priorities.
In a recent paper, we argue that inclusivity and diversity also need to be considered when identifying values and defining frameworks for what counts as ethical AI. This is especially true given the rapid growth of AI and machine learning research across the African continent.
Artificial intelligence in the African context
Research and development of AI and machine learning technologies is growing in African countries. Groups such as Data Science Africa, Data Science Nigeria, and the Deep Learning Indaba with its satellite IndabaX events, which have been held in 27 African countries to date, demonstrate the interest and human investment in the fields.
A key motivator of this research is the potential of AI and related technologies to create opportunities for growth, development, and democratization in Africa.
Despite this, very few African voices have so far been heard in the global ethical frameworks that aim to guide such research. This might not be a problem if the principles and values in those frameworks had universal application. But it is not clear that they do.
The European AI4People framework, for example, offers a synthesis of six other ethical frameworks. It identifies respect for autonomy as one of its key principles. This principle has been criticized within the applied ethical field of bioethics, where it is seen as failing to do justice to the communitarian values common across Africa. These focus less on the individual and more on the community, even requiring that exceptions be made to such a principle in order to allow for effective interventions.
Such challenges, or even acknowledgement of the possibility of such challenges, are largely absent from discussions and frameworks for ethical AI.
Just as biased training data can entrench or exacerbate existing inequalities and injustices, so too can failing to recognize the possibility of diverse sets of values that vary across social, cultural, and political contexts.
Inclusive AI systems produce better results
Moreover, failing to consider social, cultural, and political contexts can mean that even a seemingly perfect ethical technical solution turns out to be ineffective or misguided once implemented.
Any learning system needs access to training data to be effective at making useful predictions. This means samples of the data of interest: inputs in the form of multiple features or measurements, and outputs in the form of the labels scientists want to predict. In most cases, both the features and the labels require human understanding of the problem, and failing to correctly account for the local context can result in underperforming systems.
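To make the features-and-labels setup concrete, here is a minimal sketch in Python using scikit-learn. The dataset, decision rule, and model are purely synthetic illustrative assumptions, not taken from any study discussed here.

```python
# Minimal sketch of supervised learning: inputs (features) and outputs (labels).
# All data is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Features: measurements describing each example (1,000 examples, 3 features each).
X = rng.normal(size=(1000, 3))
# Labels: the outcomes to be predicted, here derived from a simple rule plus noise.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Hold out a test set so performance is measured on unseen data.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

A model like this can only be as good as its training samples: if the features, labels, or sampling fail to reflect the local context, its predictions will fail there too.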
Mobile phone call records, for example, have been used to estimate population sizes before and after disasters. But vulnerable populations are less likely to have access to mobile devices, so this approach can produce misleading results.
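As a hypothetical illustration of this sampling bias, the short simulation below assumes two groups with different rates of phone ownership and shows how a naive estimate drawn from call records undercounts the group with less phone access. All figures are invented for the example.

```python
# Hypothetical simulation of sampling bias in phone-record population estimates.
# All numbers are illustrative assumptions, not real data.
import numpy as np

rng = np.random.default_rng(42)

# True (unknown) population sizes for two groups in an affected area.
true_pop = {"well_served": 80_000, "vulnerable": 20_000}

# Assumed phone-ownership rates: the vulnerable group has far less phone access.
phone_rate = {"well_served": 0.90, "vulnerable": 0.35}

# Phones observed in call records: a binomial draw per group.
observed = {g: rng.binomial(n, phone_rate[g]) for g, n in true_pop.items()}

# Naive estimate: assume everyone owns a phone at the well-served rate,
# overlooking the lower access among the vulnerable group.
naive = {g: obs / phone_rate["well_served"] for g, obs in observed.items()}

for g in true_pop:
    print(f"{g:12s} true={true_pop[g]:>7,}  estimated={naive[g]:>9,.0f}")
# The vulnerable group is badly undercounted -- exactly the people a
# disaster-response effort most needs to see.
```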
Similarly, computer vision technologies for identifying different kinds of structures in an area are likely to underperform where unfamiliar construction materials are used. In both of these cases, as we and other colleagues discuss in another recent paper, failing to account for regional differences can have a profound effect on everything from disaster relief to the performance of autonomous systems.
In the future
AI technologies must not simply incorporate and exacerbate the problematic aspects of contemporary human societies.
Being sensitive to and inclusive of different contexts is critical in designing effective technical solutions. It is equally critical not to assume that values are universal. Those developing AI must include people from diverse backgrounds, not only in the technical work of designing data sets and the like, but also in defining the values that are called upon to frame and set objectives and priorities.