Interview with Chinasa T. Okolo

This interview has been edited for clarity and brevity.
Brooks Tech Policy Institute (BTPI) Junior Fellow Kayla Schechter spoke with Chinasa T. Okolo, a Cornell alumna and current policy advisor on artificial intelligence (AI) governance, to discuss her research and insights into the promise of human-centered AI.
Chinasa T. Okolo, Ph.D., is a Fellow at The Brookings Institution and a recent Computer Science Ph.D. graduate from Cornell University. Her research focuses on AI governance for the Global Majority, datafication and algorithmic marginalization, and the socioeconomic impact of data work. Dr. Okolo has been recognized by TIME as one of the world’s most influential people in AI and honored in the inaugural Forbes 30 Under 30 AI list; she advises numerous multilateral institutions, national governments, corporations, and nonprofits.
Kayla: What drew you to pursue research in technology innovation and AI ethics? Did you always see yourself in this field?
Chinasa T. Okolo: On the AI ethics side of things, I was really interested in understanding how communities in the Global South perceive and interact with AI. That’s what really drove my dissertation research, where I worked with community healthcare workers in rural India to understand how they perceive and value AI, and how methodologies like AI explainability could be tailored to make AI more accessible to them. When we think about technology innovation in general, and even AI ethics, a lot of this work is tailored toward Western contexts, and it leaves out a significant part of the world’s population who are actively leveraging these technologies. It may not be at the same scale, but there’s a lot happening that can be very useful in understanding rural contexts in the US and other wealthier nations.
When it comes to seeing myself in this field, I would say yes, because I was fortunate to attend Pomona College, a liberal arts college, for undergrad. For me, being exposed to students with so many different backgrounds and being able to take classes outside of STEM informed my approach to understanding AI and technology. [This experience] also helped me understand that my voice was important in this field and that I would be able to contribute and grow as a researcher.
Kayla: Is there any experience that particularly resonated with you during your fieldwork?
Chinasa T. Okolo: The work I did in India was really important because, at the time, the community healthcare workers were already overburdened. There’s a lot of research showing this was happening within India and throughout the rest of the Global Majority. When the COVID-19 pandemic hit, extra duties were added to their work, and they remained underpaid. They were also some of the only healthcare professionals that many people in their respective communities had access to. I’m just really grateful I had the opportunity to get firsthand experience, because it’s very rare for computer scientists to do fieldwork. It really shaped how I think about AI development, as well as the harms and risks of trying to impose AI on these vulnerable communities.
Kayla: Why would you say the development and adoption of AI in the Global South is so important, and why do you think the Global Majority is so overlooked?
Chinasa T. Okolo: When it comes to understanding development and adoption, I think it’s important to understand that AI and technologies in general work very differently in these low-resource contexts, particularly in the Global Majority. This also shapes how these [AI] technologies negatively impact these populations.
Due to the introduction of tools like ChatGPT, we’ve seen very wide interest; organizations, companies, institutions, and even governments around the world are trying to understand how they want to adopt AI. But the thing is, we know that a lot of these tools, which are being promoted as able to solve problems in healthcare, education, or agriculture, just don’t work. When we think about things like facial recognition, we see a very high incidence of bias within these respective technologies. In the Global South, or the Global Majority in particular, we know that these populations are more likely to be marginalized by these technologies. Unfortunately, we don’t see equitable participation from countries in this region in developing technologies. However, when we do see this development happen, we see [AI] actually being able to address problems with more specificity and more efficiency, in ways that actually meet the needs of these local communities. This is something that I like to encourage in my work: ensuring that the development of AI is led and driven by those who are local to these respective contexts.
In general, I think that development and adoption are often overlooked because, when we think about the Global Majority, a lot of companies don’t necessarily see consumers in these regions as financially important to their respective bottom lines. We know that consumers in Africa, for example, have much less buying power than those in North America and Europe. This is a big factor in why we see these AI technologies, and other technologies in general, not serving [Global South] communities. We’ve also seen neocolonial links and existing colonial ties play out, where a lot of resources, particularly labor and natural resources such as minerals, are extracted from these regions. Resource extraction also impedes the ability of Global Majority countries to actively participate and engage in AI development and adoption.
Kayla: What do you think responsible AI development looks like? How do you think we can overcome biases stemming from datafication and algorithmic marginalization?
Chinasa T. Okolo: I think responsible AI development [means] considering how and where you should apply AI. Currently, we have this “AI hype” that’s overtaking many industries; there are so many people essentially selling snake oil. There’s actually a book called AI Snake Oil that came out recently, which is really important in distilling some of these issues. This is something that all governments have to be aware of. We see people trying to apply AI [in contexts] where it should not be applied or where we know that AI technologies are unlikely to work, particularly in things like emotion recognition and facial recognition. Some of these domains are built on pseudoscience, so it’s hard to automate them or even try to develop tools because the underlying science itself is so bad. Recognizing that should be part of responsible development. Tools for IQ estimation, emotion recognition, or even trying to recognize someone’s gender or sexual identity: these are very hard things to do even as a human, so I think it becomes even harder for AI systems to do them.
When it comes to the ethical side of it, it’s really about considering how you engage communities and ensure that there is equitable representation in the actual design and development process of the AI models and systems themselves. On the deployment side of things, I think we don’t necessarily see a lot of consumer or end-user participation, aside from them being subject to these AI technologies. Understanding the risks and dimensions of harm that arise when you actually implement AI technologies for real-world use cases: that’s what the ethical part of this is for me.
When it comes to overcoming these biases, I think there are so many things that can happen. Unfortunately, when we think about technical measures, it’s honestly a little bit hard, because you can increase the “diversity” of your respective datasets; [however], bias can also be encoded in the AI model architecture itself. It is not really clear how this happens. I did some work in AI explainability and interpretability; even with these methods, it’s so hard to unpack how AI models themselves function. Developers have to be very intentional in how they design these systems, along with ensuring participation and diversity in their respective datasets.
I think that when it comes to datafication and algorithmic marginalization, a lot of it happens because companies operate without regard for consumer rights. If there aren’t governance frameworks that can provide guardrails or limitations on their respective work, this is very likely to happen. I think that a big part of overcoming these biases, and the harm enacted on vulnerable populations as well as everyone who is subject to AI systems, is having governance frameworks in place that provide recourse for algorithmic harms as well as ways to help guide companies in developing AI responsibly.
Kayla: Lastly, do you have any book recommendations for people interested in AI ethics and global development?
Chinasa T. Okolo: Definitely. I read a really interesting book, Code-Dependent: Living in the Shadow of AI by Madhumita Murgia, at the end of last year, which talks about how AI is impacting various aspects of society. [The book] was very interesting because it got into some topics that I find a lot of other books don’t go into, particularly the labor that underpins AI systems; a lot of these workers are based in East Africa and throughout Southeast Asia and Latin America.
There’s another book I would recommend: The New Empire of AI by Rachel Adams. I have had the chance to interact with Rachel a bit, and her work is really interesting; she founded and leads the Global Center for AI Governance, based in South Africa. She’s been able to do a lot of empirical work to understand how governments across the world are advancing responsible AI. This work is through the African Observatory on Responsible AI, and they also formed the Global Index on Responsible AI. Her book discusses issues of AI and global inequality, which I think is really great, especially when we consider the global side of AI, because there are so many books that don’t talk about biases and are very Western-centric.
Lastly, one other book I would recommend is The Tech Coup by Marietje Schaake. She is a fellow at Stanford’s Institute for Human-Centered Artificial Intelligence (HAI). Her book is very relevant to what we see going on in government right now and to how the US government’s actions impact the rest of the world.