By Dr. Jun-E Tan
(Dr. Jun-E Tan is an independent researcher based in Kuala Lumpur. Her research and advocacy interests are broadly anchored in the areas of digital communication, human rights, and sustainable development. Jun-E’s newest academic paper, “Digital Rights in Southeast Asia: Conceptual Framework and Movement Building” was published in December 2019 by SHAPE-SEA in an open access book titled “Exploring the Nexus Between Technologies and Human Rights: Opportunities and Challenges in Southeast Asia.”)
This is the second in a series of articles on the human rights implications of artificial intelligence (AI) in the context of Southeast Asia, targeted at raising awareness and engagement of civil society on the topic.
In the previous article, we looked at the definitions of AI and machine learning, and discussed some considerations of their applications in the Southeast Asian context. In this article and the next, we will continue the discussion on potential human rights impacts, from the angles of 1) economic, social, and cultural rights (ESCR), and 2) civil and political rights (CPR). To provide adequate space to unpack the ideas, this article will focus on the first group of rights.
What are economic, social, and cultural rights (ESCR)?
Drawing from the International Covenant on Economic, Social and Cultural Rights (ICESCR), these rights include the rights to health, education, social security, proper labour conditions, quality of life, and participation in cultural life and creative activities. These rights are often considered positive rights, which require action to fulfil (such as providing opportunities for decent work), as opposed to civil and political rights, which require inaction (such as not restricting freedom of expression).
It is important to note that the ESCR implications of AI are not a binary of “good” or “bad”. Even within the same application, outcomes may differ for different people: some may be affected positively and some negatively. For example, relying on AI to assess creditworthiness from a large pool of data points may benefit poorer applicants with thin credit files, who make fewer large purchases and therefore cannot demonstrate trustworthiness through credit history alone. However, using data points beyond credit history may discriminate against others on the basis of unrelated attributes, sometimes in arbitrary ways. One reported example is an AI system that scored applicants lower if they typed in all caps, a behaviour apparently correlated with a higher risk of default.
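To make this concrete, the sketch below is a purely hypothetical scoring function: the feature names, weights, and threshold are invented for illustration and are not taken from any real credit-scoring system. It shows how a behavioural signal such as typing in all caps, once it enters the model, can push an applicant below an approval threshold even when their repayment behaviour is identical to that of an approved applicant.

```python
# Hypothetical illustration only: feature names, weights, and threshold are invented,
# not taken from any real credit-scoring system.

def credit_score(applicant: dict) -> float:
    """Toy linear score mixing credit-history features with an unrelated behavioural one."""
    weights = {
        "years_of_repayment_history": 0.30,      # a thin file contributes little here
        "on_time_payment_rate": 0.50,            # traditional creditworthiness signal
        "typed_application_in_all_caps": -0.25,  # behavioural proxy, unrelated to repayment
    }
    return sum(weights[f] * applicant.get(f, 0) for f in weights)

APPROVAL_THRESHOLD = 0.40

# Two applicants with identical repayment behaviour; only the typing style differs.
applicant_a = {"years_of_repayment_history": 0.2, "on_time_payment_rate": 1.0,
               "typed_application_in_all_caps": 0}
applicant_b = {"years_of_repayment_history": 0.2, "on_time_payment_rate": 1.0,
               "typed_application_in_all_caps": 1}

for name, applicant in [("A", applicant_a), ("B", applicant_b)]:
    score = credit_score(applicant)
    print(name, round(score, 2), "approved" if score >= APPROVAL_THRESHOLD else "rejected")
# A scores 0.56 (approved); B scores 0.31 (rejected) purely because of the all-caps feature.
```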
Another point to consider is that “traditional” ways of making decisions (relying mostly on human judgement) can be fraught with biases and inconsistencies to begin with. Are we improving on that baseline, or making things worse? Are AI systems amplifying these biases, or keeping them in check? There is no clear-cut answer; it depends on the implementation.
To structure our discussion, we can look at the implications of AI on ESCR from two angles: 1) the cost of not implementing AI for development, and 2) the cost of implementing it badly.
Developmental benefits of AI
AI, when used strategically and appropriately, can provide immense developmental benefits. Economic growth is a much-touted benefit, but possibilities of AI to improve lives extend much further. Here are some examples of what the technologies can already achieve in Southeast Asia:
- Healthcare: In Singapore, a local startup Kronikare worked with AI Singapore to develop a system to capture, analyse, and diagnose chronic wound conditions. This system was then scaled up and is currently deployed in some hospitals and nursing homes in Singapore.
- Traffic: Malaysia City Brain, a collaboration between Alibaba, Malaysia Digital Economy Corporation, and the city council of Kuala Lumpur, aims to reduce traffic in the congested city. City Brain in Hangzhou has seen traffic speed up by 15% in some locations.
- Education: Ruangguru, an online education platform in Indonesia, connects students and teachers for online tutoring and provides other services such as video content on a wide range of subjects. It uses AI to personalise education for its 15 million students, 80% of whom live outside urban areas.
- Food security: In Vietnam, startups are using AI and IoT sensors to increase agricultural productivity and save on water and fertiliser use. Sero, a Vietnamese startup, claims an accuracy rate of 70-90% in identifying 20 types of crop diseases, thus lowering the rate of crop failure.
However, across the eleven countries of Southeast Asia, the implementation of (and capacity to implement) AI is uneven. This can be illustrated using the Government AI Readiness Index by Oxford Insights and the International Development Research Centre, which ranks governments according to their readiness to use AI in the administration and delivery of public services. Six Southeast Asian countries are within the top 100: Singapore, which tops the world ranking, followed by Malaysia (22), the Philippines (50), Thailand (56), Indonesia (57), and Vietnam (70).
| World Ranking | Country | Score |
| --- | --- | --- |
| 1 | Singapore | 9.186 |
| 22 | Malaysia | 7.108 |
| 50 | Philippines | 5.704 |
| 56 | Thailand | 5.458 |
| 57 | Indonesia | 5.420 |
| 70 | Vietnam | 5.081 |
| 121 | Brunei Darussalam | 3.143 |
| 125 | Cambodia | 2.810 |
| 137 | Laos | 2.314 |
| 159 | Myanmar | 1.385 |
| 173 | Timor Leste | 0.694 |
Indeed, countries higher up on the list have (or are building) national strategies that aim to capitalise on the advantages of the technology and to build enabling environments for homegrown AI. Singapore, with its National Artificial Intelligence Strategy, aims to be a leader in the field by 2030, strengthening its AI ecosystem and providing more than S$500 million in funding to drive AI initiatives. Other Southeast Asian countries developing overarching AI policies include Malaysia (with a National AI Framework due in 2020, and a National Data and AI Policy being proposed to the cabinet) and Indonesia (targeting completion of its AI strategy in 2020).
On the other hand, those on the lower end of the spectrum are still struggling with basic Internet access: only 30.3% of the population of Timor Leste is online, while Myanmar stands at 33.1% and Laos at 35.4%. Here we already see a divide between those who have access to the technologies and those who do not.
While some governments may lag behind in their readiness for AI, corporations are already gearing up to provide services. In general, there is great appetite in the region to jump on the “smart” bandwagon, which includes using AI to improve products and services. The ASEAN Smart Cities Network (ASCN), mooted in 2018, brings together 26 cities across Southeast Asia aiming to use technology as an enabler of city development. One of the key goals of the Network is to link these cities with private sector solution providers.
In general, the plans and visions look promising: the ASCN's focal areas for development are improving social and cultural cohesion, health and well-being, public safety, environmental protection, and built infrastructure, as well as industry and innovation.
Potential risks of AI affecting ESCR
The developmental benefits brought about by AI are contingent on implementation, and this is where many potential risks lie. Even though well-known cases of AI harms and safety failures have not yet surfaced in our region, where the technology is still nascent, we would do well to observe the problems already documented in other localities.
The “Automating Poverty” series from The Guardian, for instance, gives some chilling examples from India, the UK, the US, and Australia of how automated, AI-assisted social security systems can be dehumanising and further penalise the marginalised. The case from India, in particular, shows the devastating consequences of faulty implementation in the context of a developing country. The complete transition from a paper system to a digital one has left the poor vulnerable to technological glitches, ranging from electricity blackouts and unstable Internet connections to unexplained refusals by the system to disburse social welfare to deserving recipients. The system covers social protection and medical reimbursements for the poor, and errors have led to starvation-related deaths.
System bias and accessibility
Opaque AI-driven decision-making on social security can lead to dire consequences and human suffering with little recourse. Southeast Asia is weak in at least two respects required for better AI-powered decision-making. The first is good training data for machine learning, which the region lacks because parts of the population are not yet connected to the Internet and much of the data that does exist is of poor quality. The second is that most of the countries are importers of AI technologies, which means that the engineers designing the systems may not understand the local context. As mentioned in the earlier article, these are fundamental problems with repercussions for human rights.
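One practical way to surface this kind of problem is to compare a system's error rates across population groups, since a model trained mostly on data from well-connected urban users will typically perform worse for under-represented groups. The sketch below is a minimal illustration of such an audit, using invented group labels and decision records rather than any real system's output; it is not the methodology of any particular audit framework.

```python
# Minimal bias-audit sketch: the records below are invented for illustration.
# Each record is (group, model_prediction, actual_outcome); 1 = benefit approved/deserved.
records = [
    ("urban", 1, 1), ("urban", 0, 0), ("urban", 1, 1), ("urban", 1, 1), ("urban", 0, 0),
    ("rural", 0, 1), ("rural", 1, 1), ("rural", 0, 1), ("rural", 0, 0), ("rural", 0, 1),
]

def error_rate_by_group(records):
    """Share of wrong decisions per group, a first check for uneven performance."""
    totals, errors = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        errors[group] = errors.get(group, 0) + (predicted != actual)
    return {g: errors[g] / totals[g] for g in totals}

print(error_rate_by_group(records))
# {'urban': 0.0, 'rural': 0.6}: the under-represented group bears most of the errors,
# here mostly wrongful denials of a benefit its members were entitled to.
```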
When people depend on technology to access their economic, social, and political life, they are subject to the availability and stability of that technology. As pointed out earlier, there is an AI divide between the haves and the have-nots: those with limited ability to build their own technology must rely on technology that may not be built in a manner accessible to them. Accessibility can be considered from many angles. In a culturally rich region that speaks many languages, it is important to cater to all, but such localisation exercises are costly and may not be carried out. Accessibility can also be obstructed by physical or mental disabilities, low levels of education and digital literacy, or simply a lack of basic infrastructure.
These fundamental issues need to be considered seriously before one jumps into AI solutions.
Technology is not a cure-all
Not all problems can (or should) be solved by applying technology. As pointed out in The Guardian's report on India, the root of the inefficiencies in the previous system was corruption and poor management at higher levels, not the duplicate or fake cards that Aadhaar was designed to eliminate. When the underlying problem is structural, a technological solution may divert attention from other needed reforms and create further problems.
In Southeast Asia, the fervour for all things AI has led to statements by top leaders promising to apply AI to all sorts of contexts. For example, Indonesia’s President Jokowi announced that his administration would replace some higher-level civil servants with AI, while in Malaysia, the Education Minister announced that machines would provide schoolchildren with career guidance in the future. It is debatable whether these are the most appropriate solutions to problems faced, and any such moves should be preceded by multistakeholder consultations and human rights impact assessments.
Worsening inequality, optimised by AI
Lastly, when AI is discussed in the context of this region, it is usually seen from the angle of economic growth or displaced jobs. These are two sides of the same coin: corporations profit when they can replace human workers with machines. Even where workers have not (yet) been replaced, we see a trend towards the informalisation of work through the gig economy (Grab, Go-Jek, and other platforms for freelancers), which is largely unregulated in Southeast Asia and raises concerns about worker exploitation optimised by algorithms.
It has been noted at forums discussing AI in Asia that governments in the region tend to see AI as a vehicle for economic rather than social development. It is therefore a concern that AI will be used to optimise profit-making for the technology owners at the expense of people and the planet, a scenario not so different from what we have now, only at a faster rate.
In conclusion
In terms of AI impacts on economic, social, and cultural rights, the short answer to the question of whether AI is beneficial or harmful in the Southeast Asian context is “it depends”. Civil society in the region should deepen its understanding of, and debate, the potential benefits and harms, anchored in the challenges and particularities of local contexts.
Will we be uplifting the lives of vulnerable millions with the benefits of AI, or exposing them to systems that will further disempower them? What about their data and associated privacy? The last question will be discussed further in the next article, on AI and civil and political rights. (Originally published at https://coconet.social/2020/ai-impacts-economic-social-cultural-rights/)