AI can certainly be the root of both good and evil. But can it fight corruption? The debate is up at the OECD Global Anti-Corruption & Integrity Forum later this month. Photo: Runa Aarset
7 Mar 2019

Is Artificial Intelligence the future tool for anti-corruption?

Artificial Intelligence (AI) can be an effective tool in anti-corruption work. Its potential for handling big data is unique, and its ability to detect anomalies or patterns, for example in financial transaction data, is unparalleled. Yet some of the ways AI is applied in society also raise sceptical voices: critics fear an ever more surveilled society where privacy and individual freedom are at risk.
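To illustrate the kind of anomaly detection the article refers to, here is a minimal sketch using a simple z-score rule on transaction amounts. The data, function name, and threshold are all invented for illustration; real systems use far richer features and models.

```python
# Hypothetical sketch: flag transactions whose amount deviates strongly
# from the rest, using a z-score rule over a list of amounts.
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.5):
    """Return indices of transactions more than `threshold` sample
    standard deviations away from the mean amount."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    return [i for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > threshold]

# Mostly routine payments, plus one suspiciously large transfer.
transactions = [120, 95, 110, 105, 98, 102, 99, 101, 97, 5000]
print(flag_anomalies(transactions))  # → [9], the outlier's index
```

A real audit pipeline would of course not stop at flagging: the point is only that once transactions are digitized, statistical screening of this kind becomes cheap and routine.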

The risks and opportunities of new technologies for anti-corruption are up for discussion at the OECD Global Anti-Corruption & Integrity Forum conference in Paris on March 20-21. On the agenda are the ethical dilemmas, perils and promises of AI as a tool for anti-corruption in development programming. In a breakfast roundtable facilitated by CMI's U4 Anti-Corruption Resource Centre, conference participants are invited to bring their insights, concerns and experiences to the table.

U4 affiliate Per Aarvik will present preliminary findings from a thorough study of the literature and of cases involving AI in anti-corruption efforts. The study, initiated and funded by Sida, explores opportunities and challenges, risks and ethical considerations: before applying new technology, its possibilities and risks have to be understood.

-U4 Anti-Corruption Resource Centre and Sida share a common interest in exploring AI as an anti-corruption tool. AI's capability to identify patterns, classify information and predict outcomes from large and complex data makes it a potential game-changer. But the current buzz around AI may have led to misconceptions and overly optimistic predictions about its potential benefits, says Aarvik.

Data is the new gold
There is a wide range of examples of effective, cost-efficient and clever uses of AI, from tax authorities exposing tax fraud to public offices automating the repetitive, time-consuming tasks of formal procedures. But Aarvik has yet to come across examples of AI applied specifically in anti-corruption projects in development programming. What he has found, though, are indications that new technology and AI can be used to design processes in society that, intentionally or not, reduce the risk of fraud or corruption.

-Within cash-based aid or wage systems for civil servants, the transition from physical cash to digital funds, whether through credit cards or mobile money, has increased security and reduced fraud. When the flow of transactions is digitized, it also becomes possible to track.

-IBM researchers are working with the Kenyan government to climb the "Ease of Doing Business" ranking. One measure has been to reduce the number of interactions with the government needed to start a business; during the project, it was cut "from 11 to just three simplified steps". The researchers further plan to investigate how AI and blockchain technology could improve government service delivery. The word "corruption" is never mentioned, yet this may turn out to be an alternative path to using technology to fight fraud or reduce the risk of corruption. And yes, since the project started, the country has climbed from 92nd to 61st on the ranking, says Aarvik.

Mobile banking secures digital transactions that are not only easier to track, but also moves people from an invisible to a visible sphere. Even in Kenya, home of the well-known mobile money service M-Pesa, cash is still king, handling 8 out of 10 transactions. The lack of digitized data is a major obstacle to introducing AI in anti-corruption work in developing countries.

A viable option for increasing access to digital data might be to create it as you go. In India, the authorities have saved recordings of 8 million calls to the Kisan Call Center, a phone-based help service for farmers. In a research project with IBM as a partner, the recordings have been used to train a Hindi-speaking AI chatbot. The aim is to see whether AI can help farmers increase their crop yields and income by giving advice in their own spoken language. Similar concepts could be built on everyday corruption reports from ordinary citizens.

-We are also looking at other sectors to identify concepts that might be relevant for future anti-corruption projects.

The price we pay
When anything seems possible, it is easy to be swept away by the possibilities. But could AI be more of a Pandora's box than we like to think?

-The sheer power and efficiency of the technology makes it tempting to put it to use in many ways and for many purposes, both good and bad, depending on one's point of view. The more powerful the tool, the more important it is to examine its flip side. China recently unveiled a social experiment so massive that George Orwell's 1984 comes to mind: collecting big data about citizens enables a system where each citizen is punished or rewarded based on their tracked behaviour. In the US, AI has partly substituted for public servants handling social security and welfare benefits. In Norway, the tax authorities have introduced AI to help identify companies or individuals who should be subject to closer inspection. Even the immigration authorities are investigating whether AI can assist in handling asylum applications, says Aarvik.

It is even possible to imagine an AI-based anti-corruption application turned against, say, political opponents in a given country.

-Decisions based on AI may seem unassailable and hard to contest. The algorithms behind an AI's decisions can be so complicated that even their creators are sometimes unable to explain the rationale for the outcomes. This is known as the "black box problem", and multiple methods for solving it are being researched. Transparency in how algorithms work is one track; another suggestion is to probe the AI counterfactually: alter the inputs to the "black box" and see which changes flip the decision outcome, thereby revealing which inputs the algorithm actually responds to. Contesting decisions made by an AI application may be challenging, and this is one of the serious ethical considerations to research before implementing such technologies in governance, even if the argument is to reduce fraud or corruption, says Aarvik.
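The counterfactual probing described above can be sketched in a few lines. Everything here is invented for illustration: the `black_box` scoring rule stands in for an opaque trained model, and the feature names and nudge sizes are hypothetical.

```python
# Hypothetical sketch of counterfactual probing: nudge each input to a
# "black box" decision function and record which nudges flip its output.
def black_box(applicant):
    # Opaque decision rule standing in for a trained model (invented here).
    score = (2 * applicant["income"]
             + 3 * applicant["years_employed"]
             - 4 * applicant["prior_flags"])
    return "approve" if score > 10 else "reject"

def counterfactual_probe(model, inputs, deltas):
    """For each feature, alter its value by the given delta and report
    whether the model's decision changes relative to the baseline."""
    baseline = model(inputs)
    flipped = {}
    for feature, delta in deltas.items():
        altered = dict(inputs, **{feature: inputs[feature] + delta})
        flipped[feature] = model(altered) != baseline
    return baseline, flipped

applicant = {"income": 3, "years_employed": 2, "prior_flags": 1}
print(counterfactual_probe(black_box, applicant,
                           {"income": 2, "years_employed": 2, "prior_flags": 2}))
```

Here the probe shows that raising income or employment history flips a rejection to an approval, while adding prior flags does not, giving an outside observer a handle on what drives the decision without opening the box.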

Is it simply the price we have to pay for anti-corruption efforts to be effective? Perhaps the discussions between practitioners, policy makers, activists and researchers at the OECD Global Anti-Corruption & Integrity Forum in March will bring us closer to the answer.