The Freeland AI Ethics Framework
Six named principles that govern how we build, deploy, and evaluate every AI solution we create. Published publicly. Held to publicly.
Ethics that live on a webpage are easy. Ethics that shape every decision are different.
Most AI companies claim to care about ethics. Few publish the specific principles that actually guide their work – the ones they’d be held accountable to if they violated them.
We publish ours because we believe accountability requires specificity. “We use AI responsibly” is a slogan. “Here are the six specific things we commit to, and here’s what each one means in practice” is a standard you can actually hold us to.
Who this framework applies to
- Every solution we build for clients
- All AI tools we use internally
- Our own marketing, content, and music production
- Any AI recommendation we make to a client
“If you wouldn’t deploy this for your own family, don’t deploy it for a client.” This is the gut-check we apply to every project before it goes live.
– Tom Freeland, Founder
Our named ethical commitments
Each principle has a name, a plain-language definition, and specific implications for how we work. None of them are optional.
Human Dignity First
Every person affected by an AI system we build has inherent dignity that no efficiency gain, cost reduction, or technical achievement can override. If a solution compromises the dignity of any person it touches – a patient, a congregation member, a job applicant – we won’t build it.
Non-negotiable
Radical Transparency
The people interacting with AI-assisted content, communications, or decisions deserve to know. We commit to clear disclosure when AI plays a meaningful role – in what we build for clients and in our own operations. We don’t hide the machine behind the message.
Always disclosed
Human Judgment Retained
AI assists, advises, and accelerates. It does not decide. For every solution involving significant decisions about people – hiring, clinical support, resource allocation, pastoral guidance – a qualified human being retains final authority and full accountability.
No autonomous decisions
Honest Assessment Always
We tell clients the truth about what AI can and can’t do – even when the honest answer means less work for us. We don’t oversell capabilities, hide limitations, or build solutions we don’t believe in. Our reputation is built on being the consultants who tell you what you need to hear.
Truth before sales
Community-Rooted Impact
We measure success by what changes in the community – not what runs on the server. Every AI implementation we build must produce tangible, explainable benefit for the people it serves. If we can’t explain who benefits and how, we haven’t built the right thing yet.
Measurable benefit required
Ongoing Accountability
Ethical AI is not a checkbox – it’s a practice. We review our own work for bias, drift, and unintended harm. We update this framework when better standards emerge. We invite our clients and community to hold us to it. This document has a version date and will be updated.
Living commitment
What these principles look like day to day
Abstract principles mean nothing without concrete behavior. Here’s how they actually shape our work.
Before every project
We run a brief ethics check – who is affected, how, and whether any of our six principles require modifications to scope or approach.
In every proposal
We include explicit notes on human oversight, disclosure requirements, and any ethical considerations specific to the client’s context.
During implementation
We flag potential bias, fairness, or transparency concerns before they become live problems – not after a client discovers them.
After deployment
We ask: is it working as intended? Are the right people benefiting? Are there unintended effects? We stay engaged past delivery.
When we’re wrong
We acknowledge mistakes, explain what happened, fix what we can, and update our practices. Accountability means owning it when we fall short.
When clients push back
If a client wants us to build something that violates these principles, we decline – and we explain why in plain terms.
Hold us to this. Publicly.
We publish this framework because we want to be held accountable to it. If you work with us and believe we’ve violated one of these principles, we want to hear it – directly, honestly, and with the specific concern named.
This page carries a version date. We will update it when our practices improve or when better ethical standards emerge in the field. The version history will be public.
Framework version: April 2026. Next scheduled review: October 2026.
Concerns or feedback?
If you believe we’ve fallen short of this framework – or if you have suggestions for how to strengthen it – reach out directly. Ethics feedback gets a personal response from Tom within 2 business days.
Contact Us Directly
Generate your own ethics pledge
Use our Pledge Generator to create a public AI ethics commitment for your own organization. It's free and takes about three minutes.
Use the Pledge Generator