
Ethical Tech is Good Business: In a world filled with AI, trust and safety matter more than ever.

By Robert Levitan, Board Co-Chair, Ethical Tech Project; and Nishant Bhajaria, Board Advisor, Ethical Tech Project

The modern tech industry has been shaped by several waves of technology. The first was the shift from memory to processing in the 1980s. The second was the mainstreaming of the internet, which fed the dot-com era. The third was the adoption of social media, which turned online identities into multi-layer networks. The fourth was companies migrating to the cloud for efficiency and redundancy. Taken together, these four waves created new customers for online services very rapidly, but even more than that, they enabled the free movement of ideas, goods, and services. This dynamic gave us the success stories of Google, Facebook, Amazon, and LinkedIn, all of which rely on connectivity, engagement, and growth.

The current AI wave is the next cycle of this phenomenon, promising to change our work and our lives, powered by vast amounts of data and immense processing power.

The good news is that the lessons and gains of the last four decades promise to amplify AI’s benefits. The bad news is that all the mistakes of the past, including data breaches, inappropriate access, disinformation, and non-consensual use cases, will have a much bigger impact on consumer trust, regulatory compliance, and the quality of AI-powered services.

Ethics are not merely an altruistic “do the right thing” imperative; they are a critical factor in determining whether AI will, in fact, deliver the financial benefits that investors and current markets have priced into AI company valuations.

The way forward is clear: building a solid governance model for AI will not just make for more ethical services but will also lead to better business outcomes resulting from improved data quality, more reliable models, and higher levels of consumer adoption.

The Trust Gap Is Real

The numbers paint a striking picture of how much work remains to be done in building AI confidence:

“Only 6% of companies fully trust AI agents to autonomously run their core business processes.” (Harvard Business Review, July 2025)

“Security and privacy worries loom largest as barriers to wider adoption.” (Harvard Business Review)

“Only 26% of consumers trust brands to use AI responsibly.” (Statista, 2024)

“An 82% failure rate means AI projects are not getting into production because people are scared.” (Trustwise, 2025)

And yet, amidst this uncertainty, one headline stands out: “Anthropic’s Safety First Approach Won Over Big Business” (Fortune). This is not a coincidence.

Safety as Competitive Advantage: The Market Proves It

Market data confirms what the trust gap predicts. According to Menlo Ventures’ 2025 State of Generative AI in the Enterprise report, Anthropic now commands 40% of the enterprise LLM API market share, more than triple its 12% share in 2023. Google climbed to 21% (a 3x increase from 7%). Meanwhile, OpenAI’s share fell from 50% to 18%. The companies gaining ground are those most associated with responsible, safety-first development.

The World Economic Forum’s AI Governance Alliance put it plainly at Davos 2026: “Increasingly, trust is the true limit of AI innovation.” The ability to demonstrate resilient, responsible AI practices is becoming a key differentiator. As Cognizant’s Chief Responsible AI Officer noted, “By operationalizing responsible AI and demonstrating it with evidence, organizations can scale faster, meet cross-border requirements and convert trust into competitive advantage.”

Responsible AI Is a Strategic and Financial Imperative

The business case for ethical tech is no longer theoretical. IBM’s Cost of a Data Breach 2025 report found that the average cost per data breach is $4.44 million, with 97% of organizations reporting AI-related incidents lacking adequate access controls. What’s more, “shadow AI” (employees using unauthorized AI tools) increases breach costs by an additional $670,000 on average. In an environment where reputational risk is existential, responsible governance is the surest form of insurance.

At the same time, McKinsey estimates that AI offers a long-term productivity growth potential of $4.4 trillion from corporate use cases. Companies that embed fairness, transparency, privacy, and accountability into their AI systems are better positioned to capture that value: attracting investment, retaining customers, and navigating the complex and rapidly evolving regulatory landscape, including the EU AI Act.

As one analysis put it: trust is the new currency. A trusting consumer is more willing to share data, make purchases, and remain loyal. Brands that invest in mitigating bias, ensuring algorithmic transparency, and protecting privacy not only avoid costly penalties but also build a competitive advantage that compounds over time.

Despite increased awareness of the measurable benefits of responsible AI, the World Economic Forum found that 81% of companies remain in the nascent stages of implementation. This governance gap puts AI investments at risk, but it also means the companies that act now will have a decisive first-mover advantage.

Leading Rather Than Following

Whether you are a company building AI systems or integrating them into your operations, you are more likely to succeed when trust, transparency, and security are at the core of your AI strategy.

In an uncertain world, responsible AI isn’t just the right thing to do; it’s the best business strategy.
