When is AI Bad for Your Business – The Privacy Perspective

AI is (or should be) at the top of every manager’s mind these days. As ChatGPT becomes mainstream and students and employees alike use it to help them in different ways, from writing essays to writing ads or follow-up emails, it is time to discuss the implications of using AI tools to improve, automate and change the way we do business.

First of all, although AI should be at the top of every manager’s mind, not every manager should implement AI tools in the organisation he or she leads. Not every organisation needs AI, just as not every organisation needs automation. Remember that automation often only accelerates the failure of bad processes, so managers should first understand what processes take place in the organisations they lead before talking about automation.

Let’s look at some definitions.

  • Per the EU AI Act, ‘artificial intelligence system’ (AI system) means software that is developed with one or more of the following techniques and approaches: machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning; logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; and statistical approaches, Bayesian estimation, and search and optimisation methods. Such software, for a given set of human-defined objectives, generates outputs such as content, predictions, recommendations, or decisions influencing the environments it interacts with.
  • Personal data, according to the UK & EU GDPR, means any information that can lead to the direct or indirect identification of an individual, including user IDs or descriptions like “the man in a black suit entering the store on 1 Windstreet, X city, at noon”.
  • Processing of personal data, according to the UK & EU GDPR, means anything you do with personal data. Collection, storage, modification, access, deletion, input – if it involves personal data, it’s processing.

If you input personal data to an AI algorithm, there is a significant risk of discrimination against the people involved. This discrimination comes from the way an AI algorithm works – it deals with uncertainties, and it addresses them with probabilities. To give you an example: if you present an AI algorithm with images of people of a certain ethnicity caught committing crimes, the algorithm will conclude that, with high probability, all people of that ethnicity commit crimes. Humans “learn” to factor in many other variables and the context; AI algorithms don’t do this by default.
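
To make this concrete, here is a minimal, purely illustrative sketch (synthetic data; scikit-learn and NumPy assumed available): a classifier trained on a sample where one group is over-represented among the recorded “positive” cases ends up leaning on group membership itself, even though the group carries no real signal.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic, biased training sample: group A is over-represented among
# the recorded positive cases purely because of how the data was
# collected, not because group membership predicts anything.
n = 10_000
group = rng.integers(0, 2, size=n)              # 0 = group A, 1 = group B
true_risk = rng.normal(size=n)                  # the only genuine signal
label = (true_risk + rng.normal(scale=0.5, size=n) > 1).astype(int)

# Biased sampling: keep all positive cases from group A,
# but drop most positive cases from group B.
keep = (label == 0) | (group == 0) | (rng.random(n) < 0.2)
X = np.column_stack([true_risk, group])[keep]
y = label[keep]

model = LogisticRegression().fit(X, y)
print("coefficient on real signal :", model.coef_[0][0])
print("coefficient on group label :", model.coef_[0][1])
# The group coefficient comes out clearly non-zero: the model has
# "learned" that group membership predicts the outcome, which is purely
# an artefact of the biased sampling.
```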

If you don’t provide accurate, relevant, contextual data to an AI algorithm, there is a very high risk that the algorithm will discriminate against people based on sensitive information such as gender, orientation, religion, ethnicity, nationality, age, etc. Just look at what happens when you search for “unprofessional hair” on Google – things have improved, but not so long ago the algorithm performed very badly from an ethical perspective. And the examples can continue: Amazon’s recruiting algorithm penalised women, and Uber used an algorithm for drivers that was simply racist.

With the explosion of AI algorithms, programs and services, the risks are increasing exponentially. If you plan to develop or use an AI solution, start asking yourself:

  • How do I make sure that the data that is inputted to the algorithm is correct, relevant and contextual?
  • How do I make sure that the input data is not biased? (A simple representativeness check is sketched after this list.)
  • What is the learning and correction mechanism that I am using?
  • How do I monitor the evolution of outputs?
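
As a starting point for the bias question, a simple representativeness check on the training data can already surface obvious imbalances. This is a sketch only; pandas is assumed available, and column names such as `ethnicity` and `label` are hypothetical placeholders for your own data.

```python
import pandas as pd

def bias_report(df: pd.DataFrame, sensitive_col: str, label_col: str) -> pd.DataFrame:
    """Compare group sizes and positive-label rates across a sensitive attribute."""
    report = df.groupby(sensitive_col).agg(
        count=(label_col, "size"),
        positive_rate=(label_col, "mean"),
    )
    report["share_of_data"] = report["count"] / report["count"].sum()
    return report

# Hypothetical usage: flag groups that are under-represented, or whose
# positive-label rate diverges strongly from the overall rate.
# print(bias_report(training_df, sensitive_col="ethnicity", label_col="label"))
```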

An AI algorithm, program or solution needs constant monitoring. It takes a lot of time and resources; you don’t simply throw an algorithm onto the market and let it do its thing – it doesn’t work like that at all. For example, if you want to use an AI algorithm to help recruiters judge during online interviews whether candidates are lying, based on their facial movements, make sure you take into consideration that the algorithm might deal badly with people from different cultures.
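
What that monitoring can look like in practice will differ per solution, but one minimal sketch is to track the rate of positive decisions per group over time and flag drift for human review. The class name and the alert threshold below are assumptions for illustration, not a prescribed mechanism.

```python
from collections import defaultdict

class OutputMonitor:
    """Track positive-decision rates per group and flag drift (illustrative only)."""

    def __init__(self, alert_threshold: float = 0.10):
        self.alert_threshold = alert_threshold
        self.counts = defaultdict(lambda: {"decisions": 0, "positives": 0})

    def record(self, group: str, positive: bool) -> None:
        self.counts[group]["decisions"] += 1
        self.counts[group]["positives"] += int(positive)

    def check(self) -> list[str]:
        rates = {
            g: c["positives"] / c["decisions"]
            for g, c in self.counts.items()
            if c["decisions"] > 0
        }
        if not rates:
            return []
        overall = sum(rates.values()) / len(rates)
        # Flag any group whose rate deviates from the average by more than
        # the threshold -- a prompt for human review, not a verdict.
        return [g for g, r in rates.items() if abs(r - overall) > self.alert_threshold]
```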

I worked with a company that automates hotel check-ins, to avoid queues, by using either a mobile check-in solution or POS devices at the hotels’ locations. Its solution uses an AI algorithm that compares the scan of the photo in the customer’s ID document with a photo of the customer’s face, taken at the hotel by the POS device or with the customer’s smartphone. The Data Protection Impact Assessment that we did showed a significant risk of failing to identify people of certain ethnicities in cases where the surrounding light wasn’t good. The algorithm was trained better, customers received instructions on how to scan their IDs and how to take selfies, and hotels were instructed on the recommended surrounding light.
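
One way to make that kind of DPIA finding measurable is to compare false non-match rates (genuine pairs wrongly rejected) across demographic and lighting segments. The sketch below assumes you already have labelled match attempts; the field names are illustrative, not the company’s actual data model.

```python
from dataclasses import dataclass

@dataclass
class MatchAttempt:
    group: str          # audited demographic segment
    lighting: str       # e.g. "good" / "poor"
    same_person: bool   # ground truth: ID photo and selfie belong to the same person
    matched: bool       # the algorithm's decision

def false_non_match_rates(attempts: list[MatchAttempt]) -> dict[tuple[str, str], float]:
    """False non-match rate per (group, lighting) segment."""
    totals: dict[tuple[str, str], list[int]] = {}
    for a in attempts:
        if not a.same_person:
            continue  # only genuine pairs count toward the false non-match rate
        key = (a.group, a.lighting)
        bucket = totals.setdefault(key, [0, 0])
        bucket[0] += 1                   # genuine attempts in this segment
        bucket[1] += int(not a.matched)  # genuine attempts wrongly rejected
    return {k: rejected / total for k, (total, rejected) in totals.items()}
```

Large gaps between segments (for example, “poor” lighting combined with one demographic group) are exactly the kind of finding that should trigger retraining, better user instructions, or both.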

Before developing, connecting to, or implementing AI-based solutions, companies should thoroughly check the discrimination risks brought by the algorithm. Sometimes it’s not worth taking the risks, despite the “glamour” of using an AI-based solution. Also, not every marketed AI solution is in fact an AI solution. Many times, companies brand their classic algorithms as AI just to ride the AI wave, without understanding that by doing so they are, or will be, subject to many international regulations they otherwise wouldn’t be.
