
AI’s race to the bottom: Why we can no longer ignore the exploitative practices in data labeling 

How would you feel if your AI solution was built on the back of exploitative working conditions and practices? Well, welcome to the world of data labeling and business process outsourcing, which has become a vital part of the rapidly growing AI and machine learning industry.

While AI has only recently exploded into the public consciousness, its development, especially over the past decade, has created a steady and ultimately insatiable demand for data. But it isn't just raw data that AI development depends on: it is annotated, or labeled, data, which requires vast amounts of human labor. This is where the data labeling industry and business process outsourcing companies have emerged as such important cogs in the AI machine, providing the labeled datasets that machine learning models learn from.

We've seen firms such as Scale AI and Sama, to name just two, achieve huge valuations by providing labeled data quickly and cheaply. But AI's secret sauce, in reality not so secret, is becoming increasingly dependent on unethical and exploitative working conditions and practices.

Time recently reported that OpenAI paid workers in Kenya less than $2 an hour, while other firms pay workers in the Philippines, Vietnam, and Venezuela even less, barely 90 cents an hour. These workers labor in atrocious conditions, with intrusive CCTV systems monitoring their performance. One damning report found that "a timer ticked away at the top left of the screen, without a clear deadline or apparent way to pause it to go to the bathroom." There is also a great deal of unpaid work that many large companies refuse to address: training on labeling platforms, learning new tasks, fixing mistakes, and providing samples for large customers often go uncompensated. These dynamics are commonplace in what are now being called click farms.

The sector must take an active stand against such practices now, before this race to the bottom goes any further. Society can undeniably realize huge benefits through the continued development of AI, but achieving them on the backs of undervalued and poorly treated workers is simply wrong. And make no mistake: under the current state of play, certain individuals are set to make billions of dollars off the back of these unethical practices.

At Kognic, we require all workforce partners we engage to adhere to a strong set of ethical guidelines: a higher threshold of minimum pay; better working conditions for both training and production; timelines aligned with standard business schedules and calendars (e.g. a 40-hour week with holiday time off); and other important expectations such as high-speed internet connections. With thousands of data labelers working on our customers' AI efforts through our platform, our goal has been to elevate all stakeholders in AI with fair pay, fair conditions, fair contracts, fair representation, and fair management.

The rapid rise of AI over recent months has led many to raise ethical questions and concerns about its advancement, focused on its potential to pose significant risks to humanity and society, from threatening people's jobs and livelihoods to spreading misinformation. However, the ethical concerns around data labeling haven't received adequate attention. The issue goes right to the heart of whether we use AI as a tool to improve society, just as much as misinformation or threats to jobs do.

If we are to create a future where AI contributes positively to society, we have to always adopt a human-first approach. This technology has to be designed, developed, and implemented with humans always at the forefront, yet currently, we’re falling at the first hurdle. How can we trust AI to be used for the betterment of the human experience if its development is reliant on the exploitation of people it’s supposed to benefit? 

Leaders within the AI industry are failing in their duties and responsibilities, hiding behind claims that these roles are voluntary or that the current state of the market makes such practices impossible to avoid. This is not the case: we have the capability to enact change for the better by refusing to work with click farms and by engaging only with business process outsourcing companies that don't exploit workers. In fact, the notion that we can change the world around us for the better and benefit wider society is at the core of positive technological developments. If AI is to be one of those developments, we have to leave these exploitative practices behind us.

Author

Daniel Langkilde, CEO and co-founder of Kognic, is an experienced machine learning expert who leads a team of data scientists, developers, and industry experts providing the leading data platform empowering the global automotive industry's Advanced Driver Assistance and Autonomous Driving (ADAS/AD) systems. Daniel is passionate about the challenge of making machine learning useful for safety-critical applications such as autonomous driving. Prior to founding Kognic, Daniel was Team Lead for Collection & Analysis at Recorded Future, the world's largest intelligence company, which maintains the widest holdings of interlinked threat data sets. At Recorded Future, Daniel gained extensive experience delivering machine learning solutions at global scale.
