The Good, the Bad and the Ugly: The Tale of Information Technologies in Our Everyday Life
During World War II, one of the first things the Nazis did when they invaded a city was seize the town's registry. Many European governments had been collecting and archiving personal information about their citizens for years before the war. Using these records, the Nazis could access detailed information about Jewish residents, track them down, and ultimately persecute them. The Netherlands suffered the most, with the largest number of Jewish victims in both percentage and absolute terms. A study conducted years later found that one reason for the vulnerability of Jewish people in the Netherlands was the detailed census the Dutch government had collected, which included names, addresses, religious affiliations, ancestry, and other sensitive personal data. France, in contrast, had maintained a more restricted data collection system since 1872, a conscious decision made by the French government at the time. As a result, it was far harder for the Nazis to gather personal information about the Jewish population in France than in the Netherlands. This example clearly illustrates that control over one's privacy is not only a human right; its violation can cost millions of lives. In this final reaction paper, I review the harms, the benefits, and the ethics of today's practices in information and algorithmic technologies, data gathering, and internet law.
Humans have not learned this lesson from history. As technology advances, the ways we collect, store, and process personal data change as well. Giant tech companies and online hackers have become increasingly aggressive in collecting users' data. Many governments have tried to combat this trend by passing legislation, imposing restrictions, and prosecuting wrongdoers. Mutual Legal Assistance Treaties (MLATs), content moderation regimes, and Germany's Network Enforcement Act are examples of how governments regulate the online world. Although these governing models are somewhat successful, the underlying systems are not only vulnerable to cyber-attacks such as zero-day exploits and backdoor attacks, but may also embed inherent prejudice, known as algorithmic bias. These systems are not perfect and never will be, yet humans tend to forget or ignore their flaws.
Some of the most detrimental effects of algorithmic bias appear in facial recognition and in the risk assessment systems used in courtrooms. Northpointe, one of the leading companies in this field, developed a system that forecasts whether criminal defendants are likely to reoffend if released. In 2016, ProPublica published an analysis finding that inherent bias in Northpointe's system led to inaccurate predictions by its artificial intelligence (AI) and machine learning models: roughly 40 percent of its predictions were incorrect, and the errors showed significant racial disparity. These biases cost individuals their reputation, freedom, and dignity. It is therefore important to see such systems for what they are: human-made tools that can be deeply flawed.
One of the biggest concerns raised in discussions of information technologies and data privacy is the undeniable power of giant tech companies. Personal data is extremely valuable and profitable: users' information is widely used for advertising and sold to third-party companies, and personalized advertising feeds on exactly this kind of information. Many feel this approach has led to the manipulation of their free will. In 2018, it was revealed that Facebook had exposed its users' data, without their consent, to a third party for political advertising. This scandal is a clear example of users' information being exploited to manipulate their judgment, and it has left many questioning the health of our democracy in an age of monopolization by giant tech companies.
I will leave you with one question at the end of this section. In the aftermath of 9/11, many found the government's aggressive surveillance controversial and problematic; to some extent, it was a violation of citizens' privacy and human rights. We remain constantly concerned about how much personal information we provide to the government. Why, then, are we not as sensitive when it comes to sharing the same data with private companies?
AI and machine learning have come under scrutiny over the last few years. While much of the focus in this reflection paper has been on the negative consequences of data collection and artificial intelligence, here I want to share a personal story about how AI helped me find what I needed.
Growing up in a culture that celebrates silky, smooth, straight hair, I never learned how to care for my curly hair. Neither did my mother, as we both began straightening our hair at a very young age. It was only after I immigrated to Canada that I felt inspired to wear my hair naturally and keep it healthy. Two years ago, I began the journey of celebrating my healthy, natural hair. After much trial and error, and plenty of damage along the way, I have improved significantly with the help of Instagram, YouTube, and more recently TikTok. Personalized posts connected me with products addressing my specific concerns, introducing me to suitable routines and to other people with the same tastes. Lately, I have noticed more and more small businesses learning to use these platforms, TikTok in particular, to introduce their products and connect with the right consumers.
This is just one simple example of how personalized algorithms have turned into customized search engines. They have made it much easier to access off-mainstream products and materials tailored to the tastes of minority groups.
In closing, humans have always managed to adapt to new technologies and inventions. We are reaching a point in our history, however, at which these developments progress faster than individuals can adapt. It is time to approach these changes from a public policy perspective, to mitigate the harms and promote human rights.
Jedidiah Bracy, "Carissa Véliz on privacy, AI ethics and democracy" (4 December 2020) at 00h:26m:26s, online (podcast): The Privacy Advisor Podcast <https://iapp.org/news/a/podcast-carissa-veliz-on-privacy-ai-ethics-and-democracy/>.