A few months ago, when the Covid pandemic started, many governments published guidelines for physical distancing. They said that people should stay at least 1.5 meters away from each other. The problem for a typical person was that they did not know what 1.5 meters looks like in day-to-day life, and they had no tools to figure it out quickly. So, in government advertisements, they showed how people could apply a rule of thumb. The ads said that if two people extend their arms side by side, their arms should not touch each other. That simple technique helped people follow the guidelines quickly and helped reduce the spread significantly.
Had it not been explained, it would have been challenging for people to follow these guidelines, and that would have aggravated the situation further.
I am telling this story because it holds a lesson. It tells us that unless there is something practical involved, a tool or technique that helps people follow the guidelines, mere guidelines can be rendered completely useless.
It is the same with AI ethics and its guidelines. Guidelines do not address the root causes of the issues that arise from AI; they merely provide "do's and don'ts" at a very high level. Because of that high-level view, they are enjoyable to listen to and debate about, but are nowhere near practical to implement.
But let us first understand what the problem is. Why are people concerned about AI? Once we understand the key issues, we can find the probable root causes and then see what really needs fixing, all through first-principles reasoning.
So, what are the top ethical concerns related to AI? Job losses, misuse of AI, biased solutions, increasing inequality and the uneven distribution of wealth, threats to humanity and human values, the legal status of machines, and mass surveillance and manipulation.
However, if you look carefully, you will realize these concerns are not mutually exclusive. We can boil them down to three mutually exclusive categories:
- Amplification of wrong: we are concerned that whatever is already wrong will become a million times worse with AI.
- Profit motives of corporations: we are concerned that powerful technology, which only the wealthy can access, will intensify several adverse effects.
- Philosophical questions about human life and humanity: we are concerned that increasing automation will amplify the existential crisis at a mass scale.
Now, keeping these three categories in mind, let's see what can possibly be done.
Amplification of wrong calls for reducing and eliminating the wrong itself; merely reducing the amplification factor is pointless. To curb the profit motive of corporations, we need social pressure. We need to change who and what we celebrate. If we celebrate affluence, everyone will chase it, and in the process the profit motive will grow. And finally, for the philosophical concerns too, our social constructs need to change so that what we value changes.
Well, then how do we solve these issues practically?
Social changes need social movements. You don't need government or policy help here; it is all about social and peer pressure, and policy will follow.
However, when it comes to the amplification of wrong, we need effective mechanisms to make fundamental design changes. Guess who is responsible for these designs: not governments or the law, but the creators of AI. These creators go by what they feel is right and do not necessarily know how to anticipate consequences or handle them beforehand. We need practical and usable frameworks and tools to teach these things. Doing so can enable creators to deal with risks and consequences during the design process, not post-facto.
Another complementary way to drive this behavior is through industry standards and best-practice norms: not recommendations but scores. Better scores can signal more responsible behavior, which can then be applauded. Government policy may formalize this later, but real execution will rest with industry bodies and ultimately with the creators.
For several years, businesses have been seen less as a force for good and more as tools for money or wealth creation. That cannot change through policy, but it can through social pressure and internal change within the business. When more employees act conscientiously and start questioning every move, things can change significantly.
We have several ways to make AI ethics practical. There is an easy way, and there is the right way. The government-and-policy route seems more comfortable, but it is not only lengthy but also less potent and less practical. Businesses taking ownership seems more plausible, but ultimately, creators and teams will have to drive the change. Industry bodies can help by facilitating all of this.
Ultimately, we need to influence and empower real people, not pseudo-entities. To influence them and enable them to be considerate, we need tools and techniques they can use to build better systems. Here are a few things that can help practically at a grassroots level:
- Provide creators with usable tools for their AI design and development work.
- Teach, enable, and empower them to use those tools effectively, and mobilize the necessary resources and programs to do so.
- Work with industry bodies, policymakers, and media to promote and popularize these tools.
Remember, tools are practical; policies and guidelines are not. Therefore, it is essential to focus on things that can actually be applied and to prioritize accordingly.
We also need the media on our side to amplify our voice and create much-needed social pressure. Governments and corporations come last; with enough pressure, they will change. If we waste our time on them, we will find ourselves helpless later, with no time to spare.
Do you really want to make AI better and a force for good?
Then start fixing the root causes!
You cannot solve problems by patching over them with quick solutions; fix the root causes instead!