The Ethics of AI

Vimarsh · Published in STEMATIX · 8 min read · May 20, 2021


The growth of AI over the past decade or so has given us hope that we might solve some of the most challenging problems the world faces today, from climate change to cancer and other diseases. Artificial intelligence (AI) and robotics are having, and will continue to have, a significant impact on humanity in the near future. This raises fundamental questions: what should we do with these systems, what should the systems themselves do, what risks do they involve, and how can we control them?

What exactly is AI?

Broadly, AI is the effort to build computational systems that try to mimic the capabilities of the human mind.

Computers have beaten world champions at popular games including chess, Jeopardy! and Go. In the famous Go championship between DeepMind's AlphaGo and Lee Sedol (considered one of the best players in the world), AlphaGo secured a 4–1 win in what is regarded as one of the most abstract board games. OpenAI built a simulated hide-and-seek game and let AI agents play it, and the results were surprising: the agents figured out quirks in the environment, learned how to trap their opponents and even built their own shelters, none of which they were explicitly trained to do. During the AlphaGo matches, the researchers observed that the AI cared only about winning the match, not about the margin of victory. All of this tells us that AI is intelligent in a way that sets it apart from other programs: it can discover intricacies and small advantages that humans miss, or that it was never programmed for at all.

This was just a matter of games, but what happens when AI approaches genuinely human-level intelligence? If Moore's law holds, computers will keep getting more powerful, perhaps at some point more powerful than our own minds. What will we do then? Could a general AI, to which we hand control over our weather systems, medicine and more, betray us?

The modern age more or less started with the industrial revolution. People built machines, productivity increased, and so did quality of life (though not for everyone). But it also increased carbon emissions and contributed to rising global temperatures. Plastic was another such innovation. It revolutionised packaging and manufacturing, but we later learned how bad it is for the environment, taking hundreds of years or more to degrade. It is now finding its way into our food in the form of microplastics.

Then came the internet in the 1990s. People saw it as a revolution: you could connect with people across the globe, share your thoughts, gain knowledge, exchange history and culture. People thought that by knowing each other better, hatred would decrease, but we now know how that turned out. It brought even more hatred, racism, data harvesting and social-media-driven anxiety, along with a great loss of productivity. It also gave giant corporations enormous control over our daily lives and how we communicate.

There is no denying, then, that the advent of AI, with facial recognition, voice assistants and remarkably capable language models like GPT-3, could also bring horrible things.

Surveillance

Facial recognition systems, once built for automatic attendance and identity verification, are now capable of spying on people and enabling mass surveillance, especially by authoritarian governments (such as China's), and are even being used to track and crack down on ethnic minorities. Data collection is now digital, and with AI it can track many people accurately, and many at once. Systems that were introduced to catch traffic violations are now being used for spying and repression. There is a thin line between surveillance for good and surveillance for pushing an agenda.

Manipulation

We have already seen with text-generation models and conversational AIs that it has become extremely easy to write convincing fake news articles and post them on social media. Most humans, and most automated fact-checking systems, would fail to catch them. Furthermore, social media is now the prime venue for political propaganda. This influence can be used to steer voting behaviour, as in the Facebook-Cambridge Analytica "scandal". And at the scale of the internet, it is easy to amplify such news and perhaps even topple governments in the future. Who decides what counts as fake news? Are opinions considered manipulation?

An AI model needs to be fed information, i.e. data. It either finds patterns on its own (unsupervised learning) or learns patterns from the labelled examples you provide (supervised learning). The data you feed the learning algorithm may contain biases. It might not be intentional, but for a particular kind of behaviour, or for certain groups of people, data points may simply be missing. The outcome will then contain biases too. The model might represent the world it was trained on reasonably well, yet still be unfair to users whose situations were never properly captured, which is especially true for people of colour and people from under-represented countries. Because of this lack of data, recommendations carry biases, and so do social media feed algorithms. There are ways to counter this by artificially adding specific data, but some bias still remains.
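To make this concrete, here is a minimal, hypothetical sketch (entirely synthetic data, using scikit-learn's LogisticRegression) of how under-representation in training data shows up as worse accuracy for the missing group:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_group(n, shift):
        # Two groups whose feature-to-label relationship differs slightly.
        X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
        y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
        return X, y

    # Group A dominates the training set; group B is barely represented.
    X_a, y_a = make_group(1000, shift=0.0)
    X_b, y_b = make_group(30, shift=1.5)
    model = LogisticRegression().fit(np.vstack([X_a, X_b]),
                                     np.concatenate([y_a, y_b]))

    # Evaluate on fresh samples from each group.
    X_a_test, y_a_test = make_group(500, shift=0.0)
    X_b_test, y_b_test = make_group(500, shift=1.5)
    print("accuracy, well-represented group:", model.score(X_a_test, y_a_test))
    print("accuracy, under-represented group:", model.score(X_b_test, y_b_test))

Because the minority group barely influences the learned decision boundary, the model serves it noticeably worse, even though nothing in the code is explicitly unfair.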

AI systems also create a confirmation bias for many users: the system keeps feeding you information you already agree with, often strengthening your existing viewpoint. This raises the question: should AI have such a cognitive bias? Should it feed users what they want, or should it remain neutral whatever the case may be? And who decides what counts as neutral?

Autonomous Systems

AI is not just software in a box; it is actively used in warehouses, shipping, cars, our devices, and even internet traffic routing. Who controls these autonomous systems? There is clear potential for hacking, but handing over control of critical systems is even scarier and requires careful monitoring. It also raises the question of jobs and what will happen to them. I once heard this example: in the 1950s, when cranes arrived at ports, people wondered what would happen to the workers who carried bags of material on and off ships. Most of those people found new jobs; they went from manual labour to operating cranes, and the system became hyper-efficient. With the standardisation of container sizes, fully automated systems are now being tested to load and unload containers from ships. But they will still need a human in the loop, someone with the judgement to weigh all the factors and intervene when needed.

The Curtain

Much of the time, we do not know what a trained model has actually learned until we feed it input data and look at the result. Everything can seem perfect across the inputs we try, with predictable and desirable outcomes, but what about the things we do not know the AI "thinks"? In some situations, especially in self-driving vehicles, the system may encounter something it has never seen and get completely confused; there will always be such edge cases. Google just launched a system that uses your phone camera to detect common skin conditions. Even if it has accounted for skin-colour bias and can identify 288 skin conditions, there will always be a case where it does not give a proper answer, and if the user is not warned about that uncertainty, wrong treatment could cause serious harm. The AI remains opaque until you feed it the data whose result you want.

What can be done?

Thinking about all this, I recently read an article by MIT Technology Review on how AI ethics teams could take inspiration from Buddhism. On further reading, I realised these are not just principles from one religion, but principles that should have been fundamentally ingrained in our society.

Buddhism proposes a way of thinking about ethics based on the assumption that all sentient beings want to avoid pain. Thus, the Buddha teaches that an action is good if it leads to freedom from suffering.

In my view, AI systems should be built with the primary purpose of not harming anyone: not another human, not a group of people, not another robot. The goal should be to help everyone and reduce their suffering, not only those who develop the AI but also those who will use it and those who will not. It should not adversely affect someone who does not use the technology. People, corporations and countries should be accountable. There are so many places where AI is being used to harm people without any significant condemnation. And merely talking about accountability is not enough; acting on it is what matters. Only if we, as a global community, set common goals, and developers internalise deep principles of humanity and respect for diversity, can these problems be solved.

Not every technology has led to doom. Vaccines are among the most effective developments ever made, from eradicating smallpox and nearly wiping out polio to making the spread of diseases like measles negligible. AI can certainly help massively in sectors like healthcare, where developing nations face a shortage of doctors, and in self-driving technology, to reduce accidents and make supply chains hyper-efficient. After all, if COVID has taught enterprises anything, it is to move towards hyper-efficiency.

We just need to make sure that technology built to help does not fall into the wrong hands (CRISPR gene editing comes to mind first) or get used to wrong others. Regulation is necessary, and people should not be disadvantaged in the benefits they receive based on whether or not they use some form of AI; it should not directly matter. I am also not too concerned about a superhuman level of intelligence emerging: we as a species have gone above and beyond to make sure we dominate the planet, and if, after exploring it, it looks unsafe for our survival, we are not going to build it. Or will we? Only time will tell, but I do believe we are not going to let go of control over that intelligence.

Thanks to Aryan and Krish for reading the draft, giving inputs and helping with edits.

Update on 21st May 2021: Researchers found a bias in Google's skin-condition identifier app. The algorithm was developed on training data with less than 4% dark skin types. (tweet)

Originally published at https://www.vimarsh.info on May 20, 2021.
