Generative AI means different things to different people. It’s changing everything from the way we work to the way we live. But just like other major tech shifts before it, including the rise of social media and its hyper-personalized content delivery systems, the impact hasn’t always been positive.
And while it may be the world’s latest obsession, implementing GenAI-based tools and solutions without a strategic framework and a set of guiding principles that address its potential downsides does not bring meaningful change or help drive an organization’s mission forward.
Important questions to ask are: What exactly is GenAI doing for your organization? How is it integrated into your strategic priorities? What are your guardrails? And how does it impact your customers?
According to a recent Harvard Business Review article, “AI-Generated ‘Workslop’ Is Destroying Productivity,” organizations should develop their own policies and recommendations on best practices, top tools, and norms. As leaders, it’s our job to develop guidance that helps team members use this new technology in ways that best align with the organization’s strategy, values, and mission. This is what ultimately leads to real value.
At Givelify, as we started piloting GenAI-based tools several years ago and exploring how to incorporate AI-generated content into our products, we saw the quality of work decline and operational inefficiencies creep in. Across code generation, meeting summaries, ideation, and content creation, the impact was clear.
We moved quickly to address this by creating a set of guiding principles and training, rooted in our Four Keys — Integrity, Heart, Simplicity, and WOW — to support responsible use of these tools across the team. Because we implemented this almost immediately, we began to see the positive impact of GenAI in our business and also set the tone for other machine learning systems and products that we build. Ultimately, these tools should provide operational efficiencies, enable teams in positive ways, and deliver meaningful products or services that delight your customers. And for us, that touches a giving community of nearly 2 million donors and close to 75,000 organizations.
How Core Values Can Shape Your Business Guardrails
Every business has a set of core values or principles that serve as a framework for decision-making and a barometer for its culture. Whether it’s the quality of products and services, people and culture policies, or process controls, these values define what is acceptable and what is not.
For example, at Givelify, integrity is one of our Four Keys, and it’s non-negotiable: a standard we never compromise on, no matter the situation. One way we bring integrity to life is by ensuring our systems remain secure and operate with 100% reliability.
We applied this guiding principle to AI adoption by requiring a collective human review whenever AI-generated work could impact our integrity, regardless of the extent of AI’s contribution. This encourages the person using the AI system to take ownership of the output before it’s reviewed. For example, instead of letting coding agents auto-deploy code to production, teams could conduct peer reviews of AI-generated code in virtual or in-person meetings.
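To make that review gate concrete, here is a minimal sketch in Python of how a deployment pipeline could block AI-assisted changes that lack a recorded human sign-off. The ChangeRequest fields, the may_deploy check, and the reviewer threshold are illustrative assumptions, not our actual tooling.

```python
# Hypothetical pre-merge gate: block AI-assisted changes that lack a
# recorded human review. Field names and the approval rule are
# illustrative, not Givelify's actual pipeline.
from dataclasses import dataclass, field


@dataclass
class ChangeRequest:
    title: str
    ai_assisted: bool                      # author flags any AI contribution
    human_approvals: list[str] = field(default_factory=list)


def may_deploy(change: ChangeRequest, min_reviewers: int = 1) -> bool:
    """AI-assisted work needs at least one human sign-off before deploy."""
    if change.ai_assisted and len(change.human_approvals) < min_reviewers:
        return False
    return True


pr = ChangeRequest(title="Refactor payout batching", ai_assisted=True)
assert not may_deploy(pr)                  # blocked until a person reviews it
pr.human_approvals.append("reviewer@example.com")
assert may_deploy(pr)                      # ownership taken, review recorded
```

The design point is that the flag travels with the work: the author declares AI involvement up front, and the gate makes human ownership a precondition for shipping.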
Another way we uphold integrity is by ensuring our products do not create adverse outcomes for our users, our giving community. Therefore, the AI systems, recommender systems, and machine learning models that Givelify builds are designed from the ground up to prevent negative reinforcement loops. Safeguards include thresholds in our software, scrutiny of our training data, simulations, user tests, and continuous quality-control checks of outputs to prevent harm. This is also why we are intentional about identifying which support issues can use AI chatbots and which should simply surface recommended content for our support team.
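As a simplified illustration of those thresholds and quality-control checks, the sketch below filters a recommender’s output with a confidence floor and caps how often the same suggestion can reach the same user — one basic way to keep a feedback loop from reinforcing itself. The threshold values, function names, and data shapes are hypothetical, not our production settings.

```python
# Minimal guardrail sketch, assuming a recommender that scores candidate
# items per user. MIN_CONFIDENCE and MAX_REPEATS are illustrative values.
from collections import Counter

MIN_CONFIDENCE = 0.6   # discard low-confidence model output
MAX_REPEATS = 3        # cap repeats to avoid a reinforcement loop

shown_counts: Counter[tuple[str, str]] = Counter()  # (user_id, item_id)


def guarded_recommendations(user_id: str, scored: dict[str, float]) -> list[str]:
    """Keep only confident suggestions the user hasn't been saturated with."""
    safe = []
    for item_id, confidence in sorted(scored.items(), key=lambda kv: -kv[1]):
        if confidence < MIN_CONFIDENCE:
            continue                       # below the quality floor
        if shown_counts[(user_id, item_id)] >= MAX_REPEATS:
            continue                       # already shown too often
        shown_counts[(user_id, item_id)] += 1
        safe.append(item_id)
    return safe


print(guarded_recommendations("donor-42", {"org-a": 0.91, "org-b": 0.40}))
# ['org-a']  (org-b is filtered out by the confidence threshold)
```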
Tools that summarize meetings or generate content can be transformative. But minor inaccuracies or “hallucinations” can quietly undermine sound decision-making. Integrity demands that we verify before we trust, keeping accountability and transparency at the forefront so technology serves truth, not convenience. If a team member uses AI as a thought partner, they share that with the team upfront.
Finally, for us, integrity also means asking tough questions before chasing easy gains. It means understanding AI’s limits, biases, and risks, and then putting in the right human and automated checks and balances as we collaborate to ensure we’re creating joyful experiences for our giving community and positive ones for our team.
How to Prevent Skill Degradation from Over-Reliance on GenAI
As GenAI becomes increasingly capable, the temptation to rely on it grows as well. But over-reliance can lead to skill erosion. When team members outsource writing, coding, communication, or judgment to machines, they risk dulling their own edge. There is no easy way to prevent this since every individual has to make their own choices. However, as an organization that leads with heart, there are things we can do:
- Home in on the net value add. Create opportunities for each person to focus on their next tier of contribution. For example, writers can focus on developing strategic content as well as thoughtful research and actionable insights that resonate with our giving community. Engineers can deepen their expertise in system design, core mathematics, and computer science fundamentals to strengthen overall architecture and solve deeper computational problems.
- Create frameworks that require people to review key deliverables generated by AI. Define which types of outputs require human verification, outline the review steps, and assign clear ownership (see the sketch after this list). This keeps individuals and teams accountable for accuracy, tone, and alignment with our key principles.
- Give people time to build things themselves (first-principles training). Create dedicated time and space, such as one day a month, for team members to work on passion projects without the use of AI. This allows them to write or build independently, essentially training and strengthening their skills.
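As promised above, here is a minimal sketch of what such a review framework could look like as a policy map: deliverable types tied to an owner and the human-verification steps required before the work ships. The categories, owners, and steps are hypothetical placeholders, not our actual policy.

```python
# Illustrative review framework: map deliverable types to the human
# checks required before the work ships. All entries are placeholders.
REVIEW_POLICY = {
    "ai_generated_code": {"owner": "eng-lead", "steps": ["peer review", "tests pass"]},
    "meeting_summary": {"owner": "organizer", "steps": ["verify decisions and owners"]},
    "donor_facing_copy": {"owner": "content-lead", "steps": ["fact check", "tone check"]},
}


def required_checks(deliverable_type: str) -> list[str]:
    """Look up the human-verification steps a deliverable must clear."""
    policy = REVIEW_POLICY.get(deliverable_type)
    if policy is None:
        raise ValueError(f"No review policy defined for {deliverable_type!r}")
    return policy["steps"]


print(required_checks("ai_generated_code"))  # ['peer review', 'tests pass']
```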
How to Manage AI Hallucinations and Limitations with Simple, Repeatable Practices
Rather than focusing on the most complex transformations, processes, or deliverables — which often lead to hallucinations or implementation overruns — we anchored our approach in our third guiding principle: simplicity.
By starting with simpler tasks — summarizing customer support issues rather than generating replies, and supporting copy creation instead of automating notifications — we set a strong foundation. This allowed us to hone expertise, enforce guardrails, ensure quality, and establish repeatable workflows.
Any tool must be easy for the operator to use and set up. For example, if a documentation-summarization tool is inconsistent or inaccurate, we don’t implement it. But if it reliably finds relevant information quickly and clearly, then it adds value.
This approach helps prevent content overload. The last thing an organization needs is a scenario where AI must be used to manage the very output created by other AI systems. This runaway effect adds complexity and clutter, rather than simplicity and clarity.
If AI tools don’t reduce clutter, improve efficiency, cut costs, and remain simple to use, it’s worth asking why they are needed at all. AI is not limitless; it has fundamental constraints. Strip away the human qualities that bring depth and meaning, and you end up with products or services that may look polished but lack substance.
Create Better Experiences Through Intentionality and Design
GenAI should do more than just get the job done — it should create moments that delight our team and giving community. That’s exactly what our fourth key principle, WOW, is about: going above and beyond to deliver a five-star experience.
When used responsibly and guided by integrity, AI has the power to enhance the experiences we deliver to our customers, helping us create moments that truly WOW. Every decision should be intentional — finding ways to inspire and delight our team and community at every interaction.
This requires carefully evaluating the AI tools we choose. What problem are we actually solving? What framework ensures alignment with our values? AI can make tasks easier, but if it puts essential skills, perspective, or quality at risk, the trade-off isn’t worth it. We must be deliberate in what we adopt, how we enhance it, and how we enable our teams, always ensuring that GenAI delivers meaningful results and creates experiences that delight our customers.
Looking Ahead: Align AI with Principles for Lasting Impact
If AI tools and systems aren’t aligned with a company’s values or principles, they will contribute to the chaos they were intended to solve. That can have ripple effects, extending beyond operational inefficiencies. When AI tools are adopted without thoughtful integration into an organization’s values, the results can range from subtle erosion of trust — internally or with customers — to significant disruptions in work.
GenAI is changing the way we work, but how it changes us depends on the principles we bring to it. Your organization’s values and philosophies must anchor how you adopt new tools, ensuring innovation strengthens, not undermines, your mission.
Hari Krishna is the Chief Technology Officer at Givelify, an online and mobile giving platform. As CTO, he leads the company’s technology and fintech strategies, focusing on innovative and emerging technologies.

