I am not an ethics specialist. I am a data scientist now, but an engineer by training, three times over. The third time was in Japan, and one September my father came to visit me. We had many conversations, but by far one of my favourites was around his theory of expansion. He doesn’t call it that, and may not even recall the conversation, but I do. The theory of expansion states: “Like the Universe’s overwhelming tendency to expand, so too does everything living in it seek to mimic that expansion” – Goloatshoene Moiloa.
He went on to say that humans are special because they expand not merely in their physical form but in their mental capacity as well. It is out of this desire to expand that AI continues to evolve to meet the limits of the human imagination and, inadvertently, to venture beyond them.
We have many examples of this. In 2020 alone, advanced technologies were behind: Britain delaying effective Covid-19 spread-prevention measures, a Telegram bot app that removed the clothing from pictures of women, “universal” technical tools like Twitter and Zoom perpetuating racial discrimination, and language models being praised for smashing academic benchmarks while their real-life applications continuously prove to be a threat to already discriminated-against, marginalised, oppressed and vulnerable communities (read: bot suggests suicide to an imitation mentally ill patient).
These kinds of things make people angry and motivate a call to highlight the problems with the algorithms that produce these results (and, ever so occasionally, to point fingers at the institutions that allow these algorithms to persist). Three points of intervention are commonly identified:
- The Machine Conceptualisation (MC) phase, which deals with defining the problems we try to solve
- The Machine Development (MD) phase, which deals with conducting the analysis
- The Machine Release (MR) phase, which deals with best practices for releasing what is developed, as well as policy development
In the MC phase, the primary concerns are the ethics of the problems we are trying to solve and the metrics we deem fit to represent them. Also under scrutiny at this stage are the resources that motivate the pursuit of a particular agenda; this includes sourcing the right data and understanding under what circumstances that data is true and relevant. Concerns within the MD phase include the secretive nature of most of the algorithms we utilise to make decisions, and how certain assumptions and biases regarding particular problems are hidden within these secrets.
In the MR phase, concerns are two-fold: the release of datasets and models into the open-source world, to be used and re-used in different contexts, and the development of best practice, policy and other regulatory strategies to better manage AI’s unintended effects on the world. The idea is that if we cover our bases in these three areas, then we will have dodged any major threats of pending doom by AI.
Though important, these techniques fail to recognise that any mitigation strategy exists within the realm of the hegemony, and that any machine developed within the hegemony, attentive to its rules and practices, is a tool that serves the status quo, most likely at the expense of someone or something else. Mind you, this hegemony is established in an AI history of commodifying people and their information; a history of war, with AI tools birthed of statistics originating from eugenics; and scientific principles founded upon rationality and functionalism, which seek to frame the observed phenomena and mechanisms of the world as independent and modular, as well as logical and reasonable, in a manner that violates, generally speaking, indigenous ways of knowing but, more importantly, indigenous ways of being.
When we understand this, we understand that the results highlighted earlier come as no surprise. When a technology, by virtue of the origins of its birth, has harmful points of view embedded in it, that technology is incapable of tending to the diversity of paradigms it is meant to serve. We also realise that if we want AI to be good, then the people developing it need to be good too. A large part of being “good” is being held accountable for the points of view we hold and pass on to our machines, by expanding our understanding of the realities of the different communities to whom our machines will apply.
Building “good” AI means expanding its founding principles: developing practices that do not centre AI development on the people who already benefit from the status quo, but instead centre those who stand to be harmed by it. It means expanding the function of AI beyond individual profit and capital gain, and allowing it the space to function in a world that is not always logical or reasonable.
We cannot afford chaos
The push for machine-driven decision making is motivated by a desire for insight into the world: how to navigate it, and how to better understand the opportunities for getting out of it what we want. Our imagination for wanting has expanded beyond anything we might ever have guessed. But increased opportunities for procuring newly imagined wants, and the excitement surrounding the possibility of materialising them as quickly as possible, mean that understanding whether these wants and their methods of realisation are good or bad is a complicated process across different contexts. It is a process that requires us not only to consider the careful steps to take with these technologies on our journey into a human-machine hybrid future, but also to consider how the history of the technology itself has shaped the way we think about what it is that we want and what we are willing to lose to get it.
The universe expands in a manner that is chaotic, let us not be tempted to replicate its chaos…we cannot afford chaos.
Written by Pelonomi Moiloa (Data Scientist at Nedbank)