Artificial Intelligence (AI) through the Lens of Islamic Economics – VI

We underline some data privacy-related guidelines and principles that do not violate the basic beliefs and culture of Islamic societies. They merit serious consideration by zakat and Islamic non-profit organizations, especially those that have gone digital, where the beneficiary is usually in a weak bargaining position.

The narrative that “data is the new oil” is frequently used to underline the fact that data has become the most important resource of our times. As machines continue to gain intelligence by devouring more and more data – in the form of numbers, text, sound, images and what-have-you – they raise serious ethical questions that should be of concern to an Islamic economist.

Before looking at the ethical issues, let us spend a minute on a now familiar concept: big data. Intuitively speaking, big data is about the massive volume of data. As our society becomes more and more digitalized, data grows bigger and bigger. But big data is not just about volume. It is also about the velocity of data. Unlike in the past, we now have social media apps that throw up massive amounts of data in real time. And our data-devouring machines (e.g. robo-traders, self-driving cars) require fast feedback loops for the system to work. They need to constantly sense what is coming in order to make real-time decisions. One more distinct feature of big data is its variety: it is largely unstructured text, images and sounds, and our machines look for hidden patterns and signals in it.


Now, the first ethical question: should big data be treated as a natural resource? Throughout human history, the privatization of natural resources – oil, coal, natural gas, forests and timber, minerals – has created large monopolies and contributed to massive wealth creation for a privileged few. A natural outcome of this is gross and ever-increasing economic inequality. The idea of a monopoly is alien to Islamic economic ethics. Yet, if we look around, a trend of cannibalizing small players and enhancing monopoly power is clearly discernible. A small number of tech giants have taken rapid strides in monopolizing this new resource. This demands legislative action. Further, unlike oil, which gets depleted, the use of data creates new data. It is, therefore, never too late to curb the rise of the tech monopolies.

Second, and we have already touched upon this issue briefly in Part III of this blog series, the use of alternative data may raise serious privacy issues. Consider the AI use case of default prediction. In an interesting 2019 publication, On the Rise of FinTechs: Credit Scoring using Digital Footprints, the authors find that easily accessible variables from the digital footprint – the device type, the operating system and the email provider – proxy for income, character and reputation and are highly valuable for default prediction. It is extremely desirable to scrutinize whether such alternative data proxy for variables (e.g. race, religion) that may lead to discriminatory behavior on the part of the lender. Regulators must also watch closely whether such alternative data violate individuals’ privacy rights. Both are of concern to an Islamic economist.
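The proxy concern above can be made concrete with a small sketch. The applicants, feature values and threshold below are entirely invented for illustration; the idea is simply to check whether a seemingly innocuous digital-footprint feature (here, device type) splits applicants along a protected attribute:

```python
# Illustrative sketch with hypothetical data: does an "innocuous" feature
# act as a proxy for a protected attribute?

# Each applicant: (device_type, protected_group) -- both values invented.
applicants = [
    ("premium", "A"), ("premium", "A"), ("premium", "A"), ("basic", "A"),
    ("basic", "B"), ("basic", "B"), ("basic", "B"), ("premium", "B"),
]

def group_share(device_type):
    """Share of group 'A' among applicants with the given device type."""
    subset = [g for d, g in applicants if d == device_type]
    return sum(1 for g in subset if g == "A") / len(subset)

share_premium = group_share("premium")  # 0.75
share_basic = group_share("basic")      # 0.25

# A large gap suggests the feature encodes group membership, so a scoring
# model could discriminate indirectly even if the protected attribute
# itself is never used as an input.
if abs(share_premium - share_basic) > 0.2:
    print("Warning: device type may be a proxy for the protected attribute")
```

A real audit would of course use proper statistical tests on actual application data, but the logic is the same: measure how well the "neutral" feature predicts the protected one.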

In conformity with the objective of Shariah to protect the dignity of the individual (hifdh al-nafs), Islamic societies have always stood for the absolute privacy of an individual. The purdah system is a testimony to the same social and ethical norm that seeks to protect the dignity of women in Islam. On the possibility of sharing relevant information about an individual borrower (as in the credit-scoring model), one needs to be careful about a set of major Islamic ethical norms – the right of an individual to be protected against gheebah, buhtan and nameemah – that govern information-sharing. Gheebah means sharing information about a Muslim who is neither present nor approving, even when the information is accurate. When the information is inaccurate, sharing it amounts to buhtan. Nameemah refers to the disclosure of information that may hurt the interests of the concerned party or lead to conflicts with a third party. Scholars assert that, in view of the above norms, extreme caution should be exercised while sharing data and information about others in general. Information-sharing may be undertaken only under specific conditions, e.g. when it is certain to bring some benefit to a Muslim or ward off some harm. Shariah, however, provides a window of permissibility for the sharing of personal data and information when it can potentially impact decisions relating to marriage, business, etc. At the same time, the data and information shared should be the minimum required to address the problem at hand.

Data privacy has been a subject of regulations that vary widely across countries. Perhaps instead of getting into the diverse range of regulations in practice, it will be useful to underline some privacy-related guidelines and principles[1] that do not violate the basic beliefs and culture of Islamic societies. They merit serious consideration by zakat and Islamic non-profit organizations, especially those that have gone digital, where the beneficiary is usually in a weak bargaining position.

  • Organizations should respect the privacy of beneficiaries and recognize that obtaining and processing their personal data represents a potential threat to that privacy.
  • Organizations should protect all personal data they obtain from beneficiaries either for their own use or for use by third parties.
  • Organizations should exercise extra care and sensitivity towards “purdah culture” among women in obtaining and processing their personal data.
  • Organizations should ensure the accuracy of the personal data and keep such information up to date.
  • Organizations should obtain consent or inform beneficiaries as to the use of their data.
  • Organizations should not hold beneficiary data for longer than is required.
  • Organizations should be accountable for holding the data and address any query/complaint by a beneficiary regarding his/her personal data.
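Two of the principles above, data minimization and limited retention, can be sketched in code. All field names and the one-year retention period below are assumptions made for illustration, not actual policy:

```python
# Minimal sketch of data minimization and retention for beneficiary records.
# Field names, records and the retention period are all hypothetical.

from datetime import date, timedelta

RETENTION = timedelta(days=365)                    # assumed policy period
NEEDED_FIELDS = {"id", "amount", "collected_on"}   # minimum for the task

records = [
    {"id": 1, "amount": 50, "phone": "...", "collected_on": date(2020, 1, 10)},
    {"id": 2, "amount": 75, "phone": "...", "collected_on": date(2018, 3, 5)},
]

def minimize(record):
    """Drop every field not strictly needed for the current purpose."""
    return {k: v for k, v in record.items() if k in NEEDED_FIELDS}

def purge_expired(records, today):
    """Discard records held longer than the retention period."""
    return [r for r in records if today - r["collected_on"] <= RETENTION]

today = date(2020, 6, 1)
kept = [minimize(r) for r in purge_expired(records, today)]
print(len(kept))        # 1 -- the 2018 record is past retention
print(sorted(kept[0]))  # ['amount', 'collected_on', 'id'] -- phone dropped
```

The point is not the specific code but the discipline it encodes: the organization decides up front which fields are needed and how long they may be held, and everything else is discarded automatically.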

Next, we have an ethical concern regarding possible bias in the data used for AI. If we use biased data to train our machine learning models, we get biased AI. Often the bias is unconscious or is based on deeply rooted societal norms. Shouldn't we notice a gender bias when we find that all the machine voices talking to us in lifts, in cars, or as virtual assistants on a portal are female by default? Notwithstanding the widely held notion (an example of another kind of bias) that there is a gender bias against working women in traditional Muslim societies, Islamic ideals stand for gender equality, and there is no room for perpetuating the gender-biased stereotype that assistants are usually women. To take another example, let us think in terms of the credit-scoring model again. Many Islamic MFIs feel that Grameen-type microfinance experiments perpetuated a kind of gender bias that began with the noble ideal of uplifting the condition of women in society but eventually led to the exclusion of male members. The notion that women, compared to men, make better microfinance clients was perpetuated for a fairly long period, until the likes of Akhuwat and IBBL disproved this myth with their approach of targeting the family as the smallest unit for economic and social empowerment. It is not hard to see how such biases in either direction can creep into an AI-based credit-scoring algorithm.
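How biased historical data reproduces itself in a model can be shown with a deliberately simple sketch. The records below are invented, and the "model" is nothing more than the historical approval rate per group, which is roughly what a naive scorer trained on such data would learn:

```python
# Illustrative sketch: a naive model trained on skewed historical decisions
# inherits the skew. All records below are invented.

# (income_band, group, approved) -- hypothetical past lending decisions,
# where identical income bands were approved at different rates by group.
history = [
    (2, "men", 1), (1, "men", 1), (2, "men", 1), (1, "men", 0),
    (2, "women", 0), (1, "women", 0), (2, "women", 1), (1, "women", 0),
]

def approval_rate(group):
    """Historical approval rate for a group -- what a naive model learns."""
    outcomes = [a for _, g, a in history if g == group]
    return sum(outcomes) / len(outcomes)

print(approval_rate("men"))    # 0.75
print(approval_rate("women"))  # 0.25
```

A scorer built on this history would hand otherwise identical applicants very different scores, which is exactly the "bias in, bias out" problem the paragraph above describes; the same mechanism works in the opposite direction if the historical data over-favors one group.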


During these days of Covid-19 isolation, many of us tend to spend more time on YouTube. We may have noticed that if we search for a specific kind of content, say waqf or machine learning, then YouTube continues to suggest similar content for a reasonable time, until we let it know of our changed preference (say, in favor of some political news). What is more, if we hold a particular kind of political view, the algorithm quickly learns how we feel about issues, and we then continue to receive content that is palatable to our tastes and preferences or in conformity with our beliefs and opinions. As succinctly put by an observer, the internet and AI have given us the freedom to live in information or filter bubbles. More and more people are living in such filter bubbles, which reinforce their own biases even when those biases are factually untrue. Those who are concerned with and seeking to fight against highly destructive and pervasive societal biases (e.g. Islamophobia) will find the going increasingly difficult.
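The feedback loop behind such filter bubbles can be simulated in a few lines. The click counts are invented, and the greedy "recommend the most-clicked category" rule is a caricature of real recommender systems, but it shows how a small initial preference gets locked in:

```python
# Toy filter-bubble simulation with invented numbers: a recommender that
# always suggests the most-clicked category reinforces the early leader.

from collections import Counter

clicks = Counter({"waqf": 2, "machine-learning": 1, "politics": 1})

def recommend(clicks):
    """Greedy strategy: recommend the single most-clicked category."""
    return clicks.most_common(1)[0][0]

# Simulate ten rounds in which the user accepts every recommendation.
for _ in range(10):
    clicks[recommend(clicks)] += 1

print(recommend(clicks))  # "waqf" -- the small initial lead is locked in
print(clicks["waqf"])     # 12
```

Real recommenders also explore and diversify, but the underlying dynamic is the same: what you clicked yesterday shapes what you are shown today, which shapes what you click tomorrow.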

To be continued


[1] These are very similar to the principles and Operational Standards that have been developed by the Cash Learning Partnership (CaLP) for organizations engaged in cash transfers.
