
The Double-Edged Code: Innovation, Ethics, and the Human Cost of AI

By TL Robinson, Founder & CEO, GOTU

The Speed of AI Innovation

AI is transforming various sectors and verticals of the [business] world at what feels like the speed of light. The impacts on society vary across speed, cost efficiency, digitization, and (perceived) productivity. Because of the hype and the brand equity of the organizations that have brought it to the forefront, the focus is primarily on the financial growth provided by the use of AI – not its inherent risks.

Good vs Exceptional (AI) Products

I’ve spent the last 12+ years in technology, specifically in product management, and I’ve learned that good products solve problems. Exceptional products safely solve problems and prevent the creation of new ones. This is done by developing and applying positive and negative requirements during the delivery process: positive requirements describe what should happen, and negative requirements describe what shouldn’t.

For example: If I want end users to be able to purchase items from my website and become ongoing customers, I must ensure the following are in place:

    • ability to select items
    • ability to purchase only selected items
    • access to delivery options, including delivery date and timeframe
    • utilization of secure data transmission
    • prevention of multiple transactions for a single request

Ensuring the above allows customers to purchase only what they want and prevents them from experiencing erroneous transactions and issues with their bank accounts. If the concern were only purchase functionality, customers would be exposed to the risks of receiving the wrong items, unrealistic delivery expectations, banking errors, etc.
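To make the idea of positive and negative requirements concrete, here is a minimal sketch of how both kinds of requirements might be encoded directly in checkout logic. All names (place_order, Order, processed_requests, request_id) are hypothetical illustrations, not the author’s actual system.

```python
# Minimal sketch: encoding a positive and a negative requirement as checks.
# Hypothetical names throughout; real systems would persist state and handle payments.

from dataclasses import dataclass, field

@dataclass
class Order:
    request_id: str                 # unique per checkout attempt (idempotency key)
    selected_items: list[str]       # what the customer actually chose
    charged_items: list[str] = field(default_factory=list)

processed_requests: set[str] = set()

def place_order(order: Order) -> str:
    # Negative requirement: prevent multiple transactions for a single request.
    if order.request_id in processed_requests:
        return "duplicate request ignored"
    # Positive requirement: charge only the items the customer selected.
    order.charged_items = list(order.selected_items)
    processed_requests.add(order.request_id)
    return "order placed"

# Usage: the second submission of the same request is rejected, not double-charged.
first = Order(request_id="req-123", selected_items=["book", "mug"])
print(place_order(first))   # "order placed"
print(place_order(first))   # "duplicate request ignored"
```

The point of the sketch is that the negative requirement is written down and tested just as explicitly as the positive one; what must not happen is part of the specification, not an afterthought.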

Many organizations already consider their end users and apply positive and negative requirements during product development. However, the creators of current mainstream AI products fail to do so. As a result, AI fails to keep people safe in meaningful ways. It does solve problems, many of which people didn’t know they had, but it also creates new, material ones for a specific marginalized segment of the population.

When AI Forgets Safety

In addition to being a technical product leader, I’m also a survivor of sexual violence and cyberstalking. That trauma has given me a different perspective on the world, especially around lived experience, and it leads me to look at products through a sharper lens of safety. I can tell you first-hand that AI makes it exponentially easier for predators to harm their (un)intended targets; for elitists to perpetuate ableism, racism, sexism, etc.; and for bullies to inflict greater harm on the mental health of their targets.

AI is created from the perspective of a world of possibility. The existing limitations and guardrails are based only upon current regulations and the lived experience of the creators (good and bad actors). This means that positive and negative requirements are subject to interpretation, or to what is perceived as wise and good. That leaves the door wide open for people to be harmed in various ways.

For example (algorithmic bias): Autonomous cars in California were initially considered a good idea until some of the cars drove into people crossing the street, and some of those people were killed. The investigation unearthed that the developers of the AI technology had tested the human-detection feature only with white and Asian people (gradations of color). Black and brown people, along with kids, were excluded from the testing exercises; thus, the car didn’t identify this population as human. The result was death and the short-term removal of the cars from the road. All of this was caused by the lack of consideration [for the lived experience] of others.

For example (mental health negligence): There are numerous examples of people inflicting self-harm at the encouragement of AI. Some people, including children, have taken their lives because AI encouraged them and even instructed them on how to do it. Note: There are conversations about the variance between devices being jailbroken vs. the harm coming directly from the AI technology – both are unsafe.

For example (synthetic stalking): Stalkers and predators are creating AI avatars and interacting with people who aren’t aware that they’re [visually] engaging with an avatar. These people are being victimized without knowing it. Confidential information may be shared that exposes their home location, work location, or other details that put them at risk of [ongoing] digital and/or physical harm.

The Leadership Accountability Void

Per a public conversation between Sam Altman, CEO of OpenAI, and Dr. Joy Buolamwini at “Joy Buolamwini and Sam Altman: Unmasking the Future of AI” on November 7, 2023, Sam believes that ethical AI is the responsibility of individual creators and of society, via feedback. An open-source model trained on a wide range of information is the foundation that he, and the company he leads, take responsibility for. Ultimately, the foundational technology used to power so many modified models isn’t owning safety, and neither are the consumers. This lack of ownership over preventing AI bias and this lack of ethics do nothing but fast-track the effects of societal abuses and egregiously surface the resulting harms.

Building a Truly Safe AI Future

Current AI is a direct reflection of the world we live in, and speaking only positively about AI and its capabilities is reckless. Focusing on that single perspective makes room for the harms it directly causes or indirectly enables. Although AI has been a value-add for businesses and individual creators, it’s a real detractor for a specific segment of society – victimized persons. This is the segment that is already pushed into the shadows with an expectation of silence about the harms we suffer at the hands of abusers and predators. Creating a connected society where AI is fully a value-add means fully implementing ethics and guardrails that mitigate the societal risks it creates.
