So, how do we deal with these issues? There are three possible approaches:
1. Create a thorough process for collecting and managing the data used to train machine learning models.
2. Provide tools to analyze and diagnose how a model is behaving.
3. Provide training and education for those deploying products that use machine learning.
These problems existed before, but what’s new is that humans no longer do the statistical analysis themselves. It is done by machines, and the models those machines produce are too large and complex for humans to analyze.
While “model transparency” is often raised as a concern about bias, the issue is not simply that a bias exists. More than that, it’s the fact that it is difficult to discern whether there is a bias at all.
This is fundamentally different from earlier organizational decision-making processes and automation, which had clear, logical steps that could be audited.
The idea that you can understand or audit how decision-making works in an existing system or organization may be right in theory, but it is wrong in practice.
It’s not easy to audit how decisions are made in a large organization. There may be a formal process for decision making, but it’s unlikely that the people inside actually follow that process every day. We don’t make decisions in a clear, logical, or even systematic way.
Even individual humans are black boxes. Put thousands of them together in an organization, and the problem is compounded and multiplied.
It’s easy to say that a system, whether an organization or a human being, follows rules backed by clear logic that you can audit, understand, and change. But we know from history that this is not the case. This was the premise of Gosplan (the State Planning Commission of the former Soviet Union, which drew up the Five-Year Plans for the national economy).
On a simpler level, it’s the same problem as the endless stream of drivers who follow outdated car navigation maps and end up in a river. Of course, the maps should be kept constantly up to date. But to what extent should TomTom (a maker of car navigation systems) shoulder the responsibility for your car floating in the river?
The problem of AI bias, as often raised by top researchers at universities and research institutions, is relatively easy to understand.
The biggest threat, however, is that technology consulting firms and software vendors will take components, libraries, and tools built in the open source world and use them without understanding them; sell them to unsophisticated buyers who will accept anything labeled “AI” without question; and then hand them to minimum-wage employees who are told to do whatever the system says.
This isn’t a problem unique to AI. This kind of problem existed before, even in the era of database-based systems. And it isn’t really a software problem at all. It is a human problem.