Artificial Intelligence (AI) through the Lens of Islamic Economics – V

Let us talk a bit about “donor attrition” risk, a major risk that every charity and non-profit organization faces and one that can significantly affect its ability to keep programs funded. A global report on Fundraising Effectiveness[1] underscored this risk in simple words: for every 100 new donors gained in a year, the non-profits lost 99 existing donors! Add to this the estimate that it costs a non-profit about ten times more to acquire a new donor than to retain an existing one, and you have a real problem staring at you. Fund-raising experts recommend a three-pronged action plan to tackle it – develop and use donor analytics, gather donor feedback, and reach out to lapsed donors.

Now, imagine yourself as an existing donor walking into the premises of an Islamic non-profit organization. A computer placed at the reception instantaneously recognizes you, cross-checks its database of your past contributions, and identifies you as an individual with a high propensity to donate. The next instant, it warmly welcomes you, offering information on the latest campaigns that match your interests or impact-feedback on past campaigns you contributed to. Won’t this gesture influence your decision to donate again to the same organization? If you are a zakat payer, the machine can even counsel you and estimate your zakat liability. A machine that sees, listens and talks can certainly help the non-profit retain you as a donor and a continuing supporter of its programs.
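
As an aside, the arithmetic behind such zakat counselling is simple enough to sketch. Below is a minimal, illustrative Python snippet, assuming the commonly cited rate of 2.5% on net zakatable wealth at or above the nisab threshold; the nisab figure and the asset categories are placeholders, not a fatwa-grade calculator.

```python
# Minimal sketch of a zakat-liability estimator (illustrative only).
# Assumes the common rule of 2.5% on net zakatable wealth at or above
# the nisab threshold; the nisab figure below is a placeholder that a
# real system would derive from current gold or silver prices.

ZAKAT_RATE = 0.025
NISAB = 5_000.0  # placeholder in local currency; varies with metal prices

def estimate_zakat(cash: float, gold_value: float,
                   trade_goods: float, short_term_debts: float) -> float:
    """Return the estimated zakat due, or 0.0 if wealth is below nisab."""
    net_wealth = cash + gold_value + trade_goods - short_term_debts
    if net_wealth < NISAB:
        return 0.0
    return net_wealth * ZAKAT_RATE

if __name__ == "__main__":
    due = estimate_zakat(cash=12_000, gold_value=3_000,
                         trade_goods=5_000, short_term_debts=2_000)
    print(f"Estimated zakat due: {due:.2f}")
```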

Let’s consider another scenario. Imagine yourself as someone seeking a qard loan or a micro-murabaha financing from an Islamic MFI. Normally, you would have to visit the nearest branch of the organization with a plethora of documents for KYC compliance to open an account. However, when you contact the branch for an appointment, you are told to spare yourself the inconvenience of a personal visit: visual authentication and KYC can now be done remotely with computer vision. All that is required is this. You send a photo of your ID card, and the computer picks out your facial image, your name and the other text printed on it. Next, you take a selfie with your mobile phone. The computer compares the facial features in your selfie with the photo on the card, and your authentication is complete.
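
For the curious, here is a minimal sketch of the face-matching step, assuming the open-source face_recognition library; the file names and the 0.6 distance threshold are illustrative, and a real KYC pipeline would add liveness detection and OCR of the printed text, neither of which is shown here.

```python
# Sketch of the remote face-match step in KYC, assuming the open-source
# `face_recognition` library (https://github.com/ageitgey/face_recognition).
# File names and the 0.6 threshold are illustrative defaults, not a
# production-grade pipeline (no liveness detection, no OCR shown).
import face_recognition

id_image = face_recognition.load_image_file("id_card.jpg")
selfie_image = face_recognition.load_image_file("selfie.jpg")

# Encode the first face found in each image as a 128-dimension vector.
id_encoding = face_recognition.face_encodings(id_image)[0]
selfie_encoding = face_recognition.face_encodings(selfie_image)[0]

# Smaller distance means more similar; 0.6 is the library's usual cut-off.
distance = face_recognition.face_distance([id_encoding], selfie_encoding)[0]
print("Authenticated" if distance < 0.6 else "Match failed")
```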

In the above two examples, enhanced customer satisfaction is made possible, most certainly, because of a hearing, talking and seeing machine. Before we move on, let us discuss very briefly how this is made possible. How does a machine deal with sounds and images? How is it able to hear, talk and see?

Just as recording and playing sound bites is not the same as natural language processing, capturing high-quality images and videos is not the same as computer vision. Just as NLP requires a computer to understand what we say and talk back to us intelligently, computer vision demands that a computer recognize us as well as the different objects around us.

In order to understand speech, a machine must match the sounds it hears to the basic units of sound on which our language is built. Every language is built from a small set of distinct, indivisible sounds called phonemes, and every sentence can be broken down into a sequence of them. Each phoneme can be “identified” by the machine, since it creates a distinct waveform based on how the intensity of the sound varies as it is pronounced. The machine also learns to separate words and sentences from the “pauses” between phonemes as you speak. Once it understands how you talk, the machine can always talk back.
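
To make the intensity-and-pause idea concrete, here is a hedged numpy/scipy sketch that splits a recorded waveform into word-like chunks wherever the short-time energy drops; the 20 ms window and the silence threshold are arbitrary demo values, and production speech recognizers use far more sophisticated acoustic models.

```python
# Illustrative sketch: split a waveform into word-like segments by
# detecting low-energy "pauses", echoing how intensity variation and
# silences carry the structure of speech. Window length and threshold
# are arbitrary choices for the demo, not tuned values.
import numpy as np
from scipy.io import wavfile

rate, samples = wavfile.read("speech.wav")      # mono 16-bit WAV assumed
samples = samples.astype(np.float64)

window = int(0.02 * rate)                       # 20 ms analysis frames
n_frames = len(samples) // window
frames = samples[: n_frames * window].reshape(n_frames, window)
energy = (frames ** 2).mean(axis=1)             # short-time intensity

threshold = 0.1 * energy.max()                  # below this = "pause"
voiced = energy > threshold

# Collect runs of consecutive voiced frames as candidate segments.
segments, start = [], None
for i, v in enumerate(voiced):
    if v and start is None:
        start = i
    elif not v and start is not None:
        segments.append((start * window / rate, i * window / rate))
        start = None
if start is not None:
    segments.append((start * window / rate, n_frames * window / rate))

print(f"Found {len(segments)} voiced segments (seconds):", segments)
```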

Coming to images, every picture is encoded as a grid of pixels, and each pixel numerically records the intensity of the light at that particular spot. Basically, the machine sees gradations of light intensity in any image. In black-and-white images each pixel carries a single gray-scale number, while in color each pixel is encoded on an RGB scale as a set of three values: the intensity of light in the three primary colors red, green and blue. A video, in turn, is simply a sequence of still images, that is, a recording of how each pixel’s light intensity changes over time. Moving from recording an image to interpreting one is a giant leap forward. It involves manipulating the pixel values and finding patterns in them, that is, identifying relationships between nearby pixels. The machine picks out objects by finding the patterns that represent their boundaries. Machine reading of text, images or faces is, at bottom, a matter of identifying pixel patterns.
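
As a small illustration of “finding patterns in pixel values”, the numpy sketch below builds a synthetic grayscale image of a bright square on a dark background and applies a crude hand-built gradient filter that fires exactly at the square’s boundary; real vision systems learn such filters automatically, but the mechanics are the same.

```python
# Sketch of boundary detection on a synthetic grayscale image: a bright
# square on a dark background. A simple horizontal-gradient filter
# responds strongly where neighbouring pixel intensities change, that is,
# at the object's edges: the "pixel pattern" the text describes.
import numpy as np

image = np.zeros((10, 10))          # dark background (intensity 0)
image[3:7, 3:7] = 255.0             # bright square (intensity 255)

# Difference between each pixel and its left neighbour: a crude
# hand-built version of the edge filters vision models learn.
gradient = np.abs(np.diff(image, axis=1))

edges = np.argwhere(gradient > 0)
print("Pixels where intensity jumps (row, col):")
print(edges)
```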

So, a machine can hear us talk, talk back to us, and see and recognize us. Add the internet-of-things, and it can do things as well. So, let us return to the question we raised in the first blog. Can such an “intelligent” machine enter into a contract in a legal sense? Can it be held “accountable” for its actions? For example,

  • You are riding in an AI-driven car and it hits other cars and pedestrians due to faulty “vision”.
  • You were diagnosed with diabetes by an AI diagnostic tool, and after some time on medication you realized it was a false-positive prediction.
  • You liquidated half of your equity portfolio on a signal from a robo-investment-advisor, only to see the markets bounce back and climb sharply.

Before we try to answer this question, let us differentiate between three levels of AI. First, there is artificial narrow intelligence (ANI), which does one thing at a time; for instance, an AI algorithm that converts speech to text. Second, there is artificial general intelligence (AGI), which can do everything we can, at the same level as our mental abilities. And finally, there is the dreaded zone of artificial super intelligence (ASI), which dominates and is far superior to human intellect; we would simply have no clue what the machine is thinking. We are currently at ANI, while experts disagree on whether and how soon AI will reach the higher levels. We can take comfort in the forecast that AI will stay firmly within human control and conveniently trash the forecast of some that machines will become incomprehensibly smarter than humans with ASI. However, in the event ASI materializes, it will be beyond our control (switch it off?). We can only hope that, before the inevitable happens, we would have inculcated in the machines Islamic ethical and moral norms and values.

Experts in the ethics of AI consider three levels of ethical behavior by a machine. At the first level, AI has ethical constraints programmed into it. At the second, AI weighs inputs within a given ethical framework to choose an action. At the highest level, AI makes ethical judgments and defends its reasoning. It is relatively easy to see the first level in action. Our Islamic-investments robo-advisor will not touch a pork-producing company with a ten-foot pole! It knows wine and pornography are haram and beyond its reach, because of the Shariah constraints programmed into it. It will never permit investment in any project that violates the conditions imposed by the rab-al-mal in a mudaraba. If it is into zakat advisory, it will never “clear” a list of beneficiaries that includes the non-poor (unless there are other defensible reasons to pay zakat to them). If it is to assess the performance of a nazir or mutawalli, it will raise a red flag over benefits flowing to projects that do not conform with the intentions of the waqif.
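
The first level is easy to caricature in code. The toy Python screen below hard-codes a list of prohibited sectors and a common one-third debt-to-market-capitalization cut-off; both are illustrative stand-ins, not a complete or authoritative Shariah-screening methodology.

```python
# Toy sketch of hard-coded Shariah constraints in a robo-advisor's
# stock screen. The excluded sectors and the one-third debt-ratio
# cut-off mirror commonly used screens but are illustrative only.
PROHIBITED_SECTORS = {"alcohol", "pork", "gambling", "pornography",
                      "conventional_banking"}
MAX_DEBT_RATIO = 1 / 3   # debt to market capitalization

def passes_screen(sector: str, debt: float, market_cap: float) -> bool:
    """Return True only if the stock clears both hard constraints."""
    if sector.lower() in PROHIBITED_SECTORS:
        return False
    return (debt / market_cap) <= MAX_DEBT_RATIO

candidates = [
    ("HalalFoods Co", "food", 10.0, 100.0),      # hypothetical names
    ("BrewCorp", "alcohol", 5.0, 200.0),
    ("LeveragedTech", "software", 60.0, 120.0),
]
for name, sector, debt, cap in candidates:
    verdict = "eligible" if passes_screen(sector, debt, cap) else "excluded"
    print(f"{name}: {verdict}")
```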

While we are still within the domain of ANI, the task seems easier. Machines obviously cannot be penalized for the consequences of their actions: they cannot be made to pay a penalty, and they cannot be jailed! A logically justifiable course, which many experts seem to favor, would be to treat them on par with pets. If I am a horse-trainer and the horse refuses to budge an inch just as the event for which it was trained moves into full swing, I perhaps deserve to be penalized for the loss of face, loss of money, loss in battle or any other adverse consequence, since I could have trained it better. I bear the costs, while the horse is retrained! In the case of AI, however, there may be additional complexities. Unlike a single horse-trainer, many different programmers may have contributed to the AI algorithm, and fixing responsibility for an error may not be practically possible. A suggested solution is to go for system accountability[2], even though this may encourage people not to exercise enough care and caution while creating the AI. System accountability may, however, yield good results if ensured through government regulations and industry standards requiring developer companies to subject their algorithms to rigorous scrutiny for the ethical questions that may be lurking around the corner.

(To be continued)


[1] 2018 Fundraising Effectiveness Survey Report, available at http://afpfep.org/wp-content/uploads/2018/04/2018-Fundraising-Effectiveness-Survey-Report.pdf

[2] Matthew Biggins, AI and the Future of Ethics, available at https://medium.com/s/ai-dirty-little-secret/ai-and-the-future-of-ethics-e4286567e742
