Wallace Soares
Disclaimer: some of the information provided in this article is drawn from a paper that can be found here.

We've been seeing an increase in the use of Artificial Intelligence and Machine Learning across multiple areas. Pattern recognition, result prediction: the machine is becoming an active participant in our lives, influencing decision-making in many parts of our society. Take, for example, the use of AI in hiring: companies now rely on machine learning models to speed up the hiring process by cutting candidates who don't match their needs. This may initially be effective and unbiased, depending on the initial model. But after years of development, the model naturally adapts itself to select the patterns that, historically, demonstrated better results. There is a trap here. Numbers, most of the time, do not represent us completely; mathematicians know better than anyone that numbers cannot fully capture our society. There are social contexts and dilemmas that we, as developers, need to face.

Another famous example is the surveillance system implemented in a US city. The police wanted to prevent crimes before they happened by using ML to predict where crime was most likely to occur. Since the historical data showed more records in poor areas, the decision-making was socially biased and directed more policing to those areas. This model didn't prevent crime; in fact, the areas left with less police support experienced a rise in violence. Not to mention the number of people arrested merely for seeming suspicious.
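To make that feedback loop concrete, here is a minimal toy simulation in Python. Everything in it is hypothetical (the district names, rates, and counts are invented for illustration, not taken from the real case): two districts have exactly the same true crime rate, but because patrols are allocated in proportion to historical records, the initial bias in the data never corrects itself.

```python
import random

random.seed(42)

# All numbers below are hypothetical, chosen only to illustrate the loop.
TRUE_CRIME_RATE = 0.3                    # identical in BOTH districts
TOTAL_PATROLS = 100
records = {"district_a": 10, "district_b": 5}   # biased historical data

for year in range(10):
    total_records = sum(records.values())
    snapshot = list(records.items())     # allocate based on past data only
    for district, past in snapshot:
        # The "predictive" model: send patrols where records are highest.
        patrols = round(TOTAL_PATROLS * past / total_records)
        # Each patrol observes a crime with the same true probability,
        # so more patrols mean more recorded crime, regardless of reality.
        observed = sum(random.random() < TRUE_CRIME_RATE for _ in range(patrols))
        records[district] += observed

# district_a keeps roughly twice district_b's record, and the absolute
# gap widens every year, even though the true crime rates are equal.
print(records)
```

The point of the sketch is that the model is never "wrong" numerically; it faithfully reproduces the bias that was baked into its training data.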

The authors define the social dilemma as a 'collective action problem': a decision-making problem that arises when the interests of the collective conflict with the interests of the individual making a decision. In essence, the social dilemma is a conflict between the AI developer, society, and the businesses. The ones deepest inside this question are the AI developers: they need to deliver better code and results so that the businesses make their profits and can, in turn, pay the AI developers. But this conflicts with society's desires. If the outcome is racist as a consequence, then despite the better results it must be confronted and recalibrated. No matter what. And don't think for a second that I'm overreacting: this is happening today. As I said, there are HR companies racing to make their algorithms socially unbiased. It's already here.

This leads to a subject that is not often faced by developers: ethics. Ethics is not an area that we, as developers, often cross into. Since computers are purely mathematical machines, they deliver only the "truth", and what I mean by truth is the most correct numerical outcome. But, as we said before, numbers do not tell stories. They are pure mathematics, free of social context. As humans, we cannot be free of that. We have to deal with the social side, considering the possible outcomes. It should not only be about performance. We need to have an ethical code.