
In the rush to integrate AI into the workplace, many organizations have overlooked a key principle: sustainability.
For internal communications, sustainable AI means systems that uphold ethical standards, minimize environmental impact, and generate long-term business value. This includes:
- Trustworthy design that protects data and empowers teams.
- Tools customized to address organizational needs.
- Infrastructure that conserves resources and promotes environmental goals.
Sustainable AI is essential for maintaining a competitive edge in today’s market. Agentic AI has massive potential to streamline how we connect across an organization. However, to unlock that value, it must be designed with intention and architected responsibly.
Rightsizing AI Efforts
Despite billions in AI investment, nearly half of enterprise AI pilots never make it into production. In 2025, 46% of AI proofs of concept were abandoned, and 42% of companies reported scrapping most of their AI initiatives (up from just 17% the year before).
What’s behind this high abandonment rate? In my opinion, it’s because one-size-fits-all models often fail to solve real problems.
Defaulting to a public LLM is neither responsible nor sustainable when it is applied to simple tasks that do not require resources of that magnitude. It's like using a cannon to kill a mosquito.
The smarter, more sustainable approach is to use smaller, task-specific models aligned with high-impact use cases, such as summarizing feedback, interpreting engagement data, or generating structured outputs.
The good news is that a rightsized approach isn’t just more effective and secure—it’s more sustainable and responsible. Smaller, targeted models typically use less energy, introduce less organizational friction, and are easier to govern, scale, and replace responsibly.
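The rightsizing idea above can be made concrete as a routing layer: instead of sending every request to one general-purpose LLM, each task type maps to the smallest handler that can do the job. This is a minimal illustrative sketch, not a real API; the task names, handlers, and cost estimates are all hypothetical stand-ins for small fine-tuned models.

```python
# Hypothetical sketch of a "rightsized" task router: each task type maps
# to the smallest model (here, simple stand-in functions) that can handle
# it, with a rough relative cost recorded for governance and reporting.
from dataclasses import dataclass
from typing import Callable

@dataclass
class TaskSpec:
    name: str
    handler: Callable[[str], str]  # stand-in for a small task-specific model
    est_energy_units: float        # rough relative cost vs. a frontier LLM

def summarize_feedback(text: str) -> str:
    # Stand-in for a small fine-tuned summarizer: keep the first sentence.
    return text.split(".")[0].strip() + "."

def classify_sentiment(text: str) -> str:
    # Stand-in for a lightweight sentiment classifier.
    negatives = {"slow", "confusing", "broken"}
    return "negative" if any(w in text.lower() for w in negatives) else "positive"

REGISTRY = {
    "summarize": TaskSpec("summarize", summarize_feedback, 1.0),
    "sentiment": TaskSpec("sentiment", classify_sentiment, 0.2),
}

def route(task: str, text: str) -> str:
    """Dispatch a request to its rightsized handler, or fail loudly."""
    spec = REGISTRY.get(task)
    if spec is None:
        raise ValueError(f"No rightsized handler for task: {task}")
    return spec.handler(text)
```

Because each handler is small and isolated, it can be measured, governed, and replaced independently, which is exactly the property the rightsized approach trades on.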
It Starts with Building Trust
Employees want to trust AI, but trust requires transparency. While 71% of workers trust their employer more than governments to manage AI ethically, only about half believe their company has clear guidelines for the use of AI.
The 2024 Edelman Trust Barometer found that 79% of global respondents expect CEOs to publicly address the ethical use of technology.
The takeaway is that security and governance can’t be afterthoughts. According to SailPoint’s 2025 report:
- 96% of tech professionals view AI agents as a growing security risk.
- Only 44% of organizations have formal AI policies in place.
- 82% already use AI agents, and 98% plan to expand adoption within a year.
Greener AI for a Smarter Enterprise
You can’t discuss sustainable AI without addressing its environmental impact. According to the World Economic Forum, training one large model can consume as much energy as 130 U.S. homes do in a year. By 2027, AI data centers are expected to consume over 6 billion cubic meters of water annually.
How are enterprise organizations responding? IBM’s State of Sustainability Readiness Report 2024 showed:
- 88% of business leaders planned to increase investment in sustainability-focused IT over the next 12 months.
- 90% believed AI will positively impact sustainability goals.
- But 56% said their organization was not yet actively using AI for sustainability initiatives.
The costs are clear, but AI can also be a force for good. Forward-looking enterprises are embracing task-specific models and multi-agent systems to reduce computational waste, and are utilizing edge computing and optimized cloud workloads to lower energy consumption. Virtual events and AI-enabled hybrid work setups can help companies reduce their carbon footprint.
Responsible AI Fuels Growth
I agree with the assessment from the 2025 DataIQ panel: “Done well, responsible AI becomes a catalyst rather than a constraint.”
Sustainable AI is beneficial for both the planet and business. According to Accenture’s survey of 850 C-suite executives:
- “AI Achievers” who embed responsibility throughout the AI lifecycle see 50% higher revenue growth and stronger ESG performance.
- These firms are 53% more likely to “apply Responsible AI practices from the start, and at scale.”
- 36% view regulatory readiness as a strategic differentiator.
That’s the ROI of doing it right: trust, adoption, and performance.
Design with and for People
Too often, AI tools are designed in isolation from the teams who use them. Successful deployments are co-created with communicators, built with guardrails, and scaled with feedback.
Responsible design also means AI must adapt to organizational needs, operate in well-governed environments, and prioritize relevance, trust, and long-term value.
Turning Compliance into Competitive Edge
The EU AI Act entered into force in August 2024 and applies in phases, with most high-risk obligations taking effect by August 2026. It is the first (but certainly not the last) comprehensive legal framework to require risk assessments, transparency, and documentation for high-risk systems.
Leading organizations aren’t treating this as a burden. They’re using it to strengthen their systems and differentiate in the market. As DataIQ notes, smart governance turns ethics into an advantage.
Sustainability is the Path Forward
AI that works is AI with purpose. In internal communications, that means strengthening clarity, cutting through noise, and empowering people. But none of that is possible without transparency, human-centered design, and systems that don't trade trust for efficiency.
Sustainability isn’t a constraint. It’s the clearest path to creating real business value. When AI is designed to reflect how people think, work, and connect, it can enhance both businesses and lives.