
Ethical AI and Roe v. Wade

Michelle, 30 years old and living in Alabama, is facing a difficult decision. For personal reasons, she needs to seek an abortion. Michelle has lost access to legal abortion in her state, but fortunately, she has a sister in Colorado and the means to travel there to have the procedure. 

Despite this, an alarming thought arises. Michelle uses an app to track her periods. Paranoia ensues. The app knows she’s missed her period — who else is on the other side of that data? 

While this example is hypothetical, it represents a real concern for women across the US now that the Supreme Court has overturned Roe v. Wade. Many states have criminalized abortion or are moving quickly to do so. Now, many of us may worry: have I lost my personal privacy, too?

I’ve built a career developing software solutions. I currently head up a team of machine learning scientists and engineers who build conversational AI, and even I have to admit that this new infringement on my personal rights gives me pause. Do I, too, need to be more thoughtful about how my data can be used?

Where AI shows up in the everyday 

AI leverages computing capabilities to perform tasks that normally require human intelligence: visual perception, speech recognition, decision-making, and translation between languages. It’s inextricably linked to data; it’s data, often in copious amounts, that determines the quality of the AI models we use today.

I believe it’s reasonable to say that consumer technology, data, and AI are designed with the intent of enhancing our lives. Advances in AI are benefiting all kinds of industries, from transit and safety to education and healthcare, providing knowledge and services that make life easier to navigate.

AI can provide crucial educational content to remote areas, help farmers maximize their earnings, positively influence gender equality, and inform our health decisions. An increasing number of the apps we use take our personal information in exchange for more valuable, AI-powered insights and more personal experiences with brands, including apps that track female health cycles such as menstruation and pregnancy.

What could misuse of data captured by AI look like? 

Working in the field of AI, I know that a company’s good intentions for managing customer data don’t always equate to safekeeping. While poor data stewardship doesn’t necessarily affect an AI model’s performance, it does pose a threat to personal privacy.

One particularly overreaching way to obtain this data is a geofence warrant. A geofence warrant draws a geographic boundary on a map and requires a company to identify every digital device inside that boundary during a given time period, without naming or even identifying any individual person. If state governments decide to aggressively prosecute people for seeking illegal abortions, this data could become evidence against people like Michelle.
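To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of bulk query a geofence warrant compels a company to run against its location logs. The record schema, field names, and bounding-box representation are all hypothetical, invented for illustration; real location stores are vastly larger and messier.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class LocationPing:
    device_id: str      # pseudonymous identifier tied to one device
    lat: float
    lon: float
    timestamp: datetime

def devices_in_geofence(pings, min_lat, max_lat, min_lon, max_lon,
                        start, end):
    """Return every device seen inside the bounding box during the window.

    Note what is missing: no individual is named up front. The warrant
    sweeps in everyone whose device happened to be in the area.
    """
    return {
        p.device_id
        for p in pings
        if min_lat <= p.lat <= max_lat
        and min_lon <= p.lon <= max_lon
        and start <= p.timestamp <= end
    }
```

The point of the sketch is the shape of the request: it starts from a place and a time, not a suspect, and works backward to people.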

This may sound like a stretch. Don’t state governments have better things to do than prosecute these “crimes”? Early data suggests no.

States that have implemented total abortion bans are generally focusing on charging abortion providers with felonies, but experts warn that prosecutions for pregnancy loss will increase.  

To support their cases, prosecutors will rely on intimate data: text messages, internet search history, and app data. One of the most private decisions a woman might make could become public through the misapplication of what she believed was personal and private information.

What is the responsibility of technology, data, and AI providers?

It’s become a common expectation that our data will be protected. Often, we “accept terms” without reading the fine print. But 79% of Americans aren’t confident that companies won’t misuse personal information.

The Center for Democracy and Technology is calling on tech companies to “step up and play a crucial role in protecting women’s digital privacy and access to online information.” Most companies haven’t said directly how they will handle demands for data in future abortion-related cases, but they could push back.

This risk in the application of AI is moving quickly from theoretical to practical. So the question may become: how do we regulate the application of AI without limiting AI research and advancement?

The future of AI and protecting personal privacy

In my opinion, we have largely shied away from data transparency out of fear that being too transparent will scare people away. The more likely result is a loss of consumer trust.

Tech companies have an ethical responsibility to address the privacy repercussions (or lack thereof) as the overturning of Roe v. Wade plays out. Instead of waiting for laws around the personal privacy of data to catch up, we need to invest in ways to protect our customers’ data: how we store it, anonymize it, and let it expire as part of everyday business practice.
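As one illustration, here is a minimal sketch, in Python, of two of those practices: letting records expire after a retention window and pseudonymizing identifiers before analysis. The record schema, field names, and 90-day window are assumptions for the example, not a prescription.

```python
import hashlib
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=90)  # hypothetical retention window

def expire_old_records(records, now=None):
    """Drop records older than the retention window.

    Assumes each record is a dict with a timezone-aware
    'created_at' datetime.
    """
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["created_at"] <= RETENTION]

def pseudonymize(record, salt: bytes):
    """Replace the user identifier with a salted one-way hash,
    so analysts can correlate a user's events without ever
    seeing who the user is."""
    digest = hashlib.sha256(salt + record["user_id"].encode()).hexdigest()
    return {**record, "user_id": digest}
```

Neither practice is exotic; the point is that data a company never retains, or never stores in identifiable form, is data it can never be compelled to hand over.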

Companies can improve their practices around securing customer data by scrutinizing the data they license for analysis, providing clear privacy policies in plain language that customers can understand, and protecting data from misuse and hacking.

Advances in AI demonstrate the ability to rival, and even exceed, the intelligence and outcomes individual humans can provide. But, as with any new technology, there are hurdles. For this era, the hurdle is the proper use of data.

I expect underground apps that connect women with abortion resources to appear. Think of the end-to-end encryption that secures messaging apps like WhatsApp and Signal, protecting women from having their data read by the company, the phone provider, or ultimately the government. These apps won’t be an opportunity for marketing, for collecting data, or for building a business. They will be about serving a community where privacy is as important as access, advocacy, and support.
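To illustrate the guarantee, here is a minimal sketch using PyNaCl, a Python binding to libsodium. This is not what WhatsApp or Signal actually ship (they use the Signal Protocol, which adds ratcheting keys and forward secrecy); it only shows the core idea: private keys stay on the two devices, so whoever relays the ciphertext cannot read it. The names and message are invented for the example.

```python
from nacl.public import PrivateKey, Box

# Each party generates a keypair on her own device. Private keys
# never leave the device; only public keys are exchanged.
alice_key = PrivateKey.generate()
bella_key = PrivateKey.generate()

# Alice encrypts for Bella with her private key and Bella's public key.
sending_box = Box(alice_key, bella_key.public_key)
ciphertext = sending_box.encrypt(b"Appointment confirmed for Tuesday.")

# A server in the middle stores and forwards only this ciphertext.

# Bella decrypts with her private key and Alice's public key.
receiving_box = Box(bella_key, alice_key.public_key)
plaintext = receiving_box.decrypt(ciphertext)
assert plaintext == b"Appointment confirmed for Tuesday."
```

Under this design, even a subpoena served on the relay operator yields only ciphertext, because the operator was never in a position to decrypt it.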

Author bio: Jessica Popp has over 25 years of experience in software solutions and development. She now leads the strategic vision at Ada as CTO, building VIP experiences for customers by leveraging the power of machine learning. Jessica lives in Colorado with her husband, and enjoys hiking, running, and sewing modern art quilts.
