
How AI In Star Trek Can Help Us Address Real-World Issues

When it comes to artificial intelligence (AI), countless conference sessions and seminars have dedicated an inconceivable number of hours to asking what-if questions, with terrifying examples from across science fiction serving as the bleak backdrop.

Terminator’s Skynet, the Agents in The Matrix, and Ava in Ex Machina are just some of the fictional antagonists that stem from humanity’s own creations.

But one franchise has spent over 50 years diving deeper than its contemporaries, depicting scenarios in which AI enhances life – and, in some cases, does quite the opposite. That franchise is Star Trek.

Gene Roddenberry’s utopian vision of the future has produced some of the most thought-provoking media ever brought to the screen. Race and discrimination, death, and morality are some of the cornerstone themes that have kept it relevant across multiple iterations for so long.

Star Trek is the epitome of a science fiction franchise depicting AI for better and for worse, and within its hundreds of hours of television and film lie answers to some of the biggest AI-related issues we have yet to face.

AI as property

Star Trek: The Next Generation season two, episode nine, ‘The Measure of a Man’

Story

Among the USS Enterprise-D’s crew is Lieutenant Commander Data, a rare android created by renowned cyberneticist Dr Noonien Soong. During a routine stopover, Data is asked to leave the crew and accompany Commander Bruce Maddox, a Starfleet scientist who plans to disassemble Data to learn more about him and potentially create more androids. When Data refuses, Maddox orders him to comply, arguing that because the android officer is considered Starfleet property, he cannot refuse the order. Data’s commanding officer, Captain Jean-Luc Picard, opts to defend the android, prompting arbitration of the matter in a court of law.

Outcome

Maddox argues that Data is the creation of man and therefore simply a machine. Despite this, Picard successfully persuades the arbitrator that all beings are created, and that being a creator does not guarantee property rights over one’s creation.

Real-world Implications

The importance of this example is that Data is deemed to be sentient. While proving sentience is understandably difficult and bound up with one’s sense of morality, Picard references René Descartes’ proposition: Cogito, ergo sum – I think, therefore I am. AI creations do not currently come anywhere close to the sophistication of the sentience depicted in Star Trek, but the concept of Cogito, ergo sum is one real-world courts will likely have to grapple with, especially since, as individuals, we cannot be absolutely certain of anything in technology.

Authorship rights

Star Trek: Voyager season seven, episode 20, ‘Author, Author’

Story

The USS Voyager’s chief medical officer, an Emergency Medical Hologram (EMH) known as The Doctor, wrote a holonovel entitled Photons Be Free. This fictional work is about the USS Vortex and a crew closely based on Voyager’s own. Because of that close depiction, chief helmsman Tom Paris implored the EMH to alter the novel. Broht & Forrester, a publisher, then released the altered version without The Doctor’s consent, which the EMH contested. The publisher argued that because an EMH is not a person, it holds no authorship rights. A Federation tribunal is called.

Outcome

During the tribunal, The Doctor’s captain, Kathryn Janeway, argues that because The Doctor has the capacity to disobey orders despite being programmed only to follow them, the EMH is capable of thinking for itself and should therefore be considered a person. While the arbitrator was unwilling to declare an EMH a person at the time, he noted that this would likely change, and ruled that the EMH is more than just a hologram and therefore falls within the legal definition of ‘artist’ – meaning he can bring proceedings against the publisher who infringed his authorship rights.

Real-World Implications

Authorship rights for AI are heavily debated today. IBM’s AI Watson produced a film trailer for the 2016 film Morgan. A Microsoft-backed team used AI to create a painting in the style of Dutch painter Rembrandt. And Springer Nature published a machine-generated book about chemistry (let’s hope it was released with authorisation).

Should those who feed the data into an AI system be granted authorship of the created content, or should it belong to the system itself? While this Voyager episode dealt with a sentient hologram, we will be forced to confront the question at some point in the near future – but at what point? And how will we deal with it when it comes? Only time will tell.

Malevolent AI

The entirety of Star Trek: Discovery season two and Star Trek: Picard season one

Stories

Star Trek: Discovery’s season two arc centres on stopping an AI system from the future from reaching back into the past to manifest, grow, and wipe out all life in the galaxy, while Star Trek: Picard’s season one arc revolves around a secret Romulan society hellbent on preventing artificial life from summoning an ancient race of AI entities that would return to wipe out life in the galaxy. For reference, these two seasons of television came out within a few years of each other – is this an issue we’ll likely face in the future, or just lazy writing?

Real-World Implications

Every conference, event, or think piece on AI has touched on the idea that AI will one day surpass and subjugate humanity. While it is unlikely that an AI entity will come back from the future, or arrive through a wormhole in space, to achieve this, the fear is a very real one – likely because of the impact of science fiction. Most of the time, the idea of a malevolent AI stems from a system gaining consciousness. While we are nowhere near achieving that level of programming, the prospect of an individual or entity creating an AI system programmed to kill a human is very real, regardless of worldwide conventions intended to prevent such an occurrence. Beyond the wiping-out-all-life issue, both shows also touch, however briefly, on the concept that AI will surpass humans in both design and performance. This is something we will likely face sooner rather than later.

AIs creating life – parental rights?

Star Trek: The Next Generation season three, episode 16 ‘The Offspring’

Story

The aforementioned Lieutenant Commander Data features yet again – this time creating an android whom he considers to be his child. Data’s child, named Lal, develops beyond even Data’s capabilities, including emotions, and Starfleet seeks to separate parent from child in order to study her. The story ends with Lal effectively dying, her brain unable to cope with what she is.

Real-World Implications

This is yet another situation we will likely have to face at some point, though it could be argued that Data in fact invented Lal rather than fathered her. Such distinctions are yet to be settled in our reality.

Summary

We are currently standing amid the advent of AI. We have passed the point of inception and find ourselves at a moment when AI’s uses and functions are growing at an exponential pace, yet nowhere near the point depicted in Gene Roddenberry’s idyllic conception of the future.

While we have yet to emulate Star Trek and its uses of AI, we have the chance to build on the show’s AI successes, and learn from its failures, in order to better our own AI uses and inventions. And while AI characters like The Doctor and Data are superior in some ways, they both teach their fellow shipmates, and the audience, that all life should be respected and appreciated, no matter how alien – and whether or not it is organic.

Author

  • Ben Wodecki

    Ben Wodecki is a reporter for Intellectual Property Magazine. He has previously written for IPPro Magazine, Captive Insurance Times, The Telegraph India, and t2 Magazine.
