Can Algorithms Earn Trust?

I would love to dive into a question that blends my experience in federal government contracting and software development with my metaphysical curiosity.

Can algorithms truly earn trust?

Let’s start with the basics.

What Is an Algorithm?

At its core, an algorithm is just a set of instructions—a digital recipe for how systems process data to make decisions. But here’s the catch: algorithms aren’t neutral. They inherit the perspectives of their creators, the biases in their training data, and the values embedded (or omitted) in their design.

It’s like formulating a shampoo bar: the ingredients, measurements, and intentions shape the final product. Similarly, algorithms carry intention in code. And when those intentions lack compassion or clarity? Trust begins to erode.

Think of an algorithm like karma in code. What you feed it shapes what it becomes. ~ Linda Rawson
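
To make that "karma in code" idea concrete, here is a minimal Python sketch. The zip codes, credit scores, and screening rule are invented for illustration, not drawn from any real system: a toy rule learns its approval threshold from past decisions, so whatever pattern sits in that history becomes part of the recipe.

```python
# A minimal sketch (hypothetical data and rule) of how an algorithm
# inherits whatever is baked into the data it is fed.

# Historical decisions we "feed" the algorithm. If past reviewers
# favored one zip code, that preference is now part of the recipe.
historical_approvals = [
    {"zip": "84041", "credit": 610, "approved": True},
    {"zip": "84041", "credit": 605, "approved": True},
    {"zip": "84401", "credit": 700, "approved": False},
    {"zip": "84401", "credit": 695, "approved": False},
]

def learn_threshold(history, zip_code):
    """Derive a per-zip approval threshold from past outcomes."""
    scores = [h["credit"] for h in history
              if h["zip"] == zip_code and h["approved"]]
    # If no one from this zip was ever approved, the bar is effectively infinite.
    return min(scores) if scores else float("inf")

def decide(applicant, history):
    """The 'recipe': approve if the applicant clears the learned threshold."""
    return applicant["credit"] >= learn_threshold(history, applicant["zip"])

# Same credit score, different zip codes, different outcomes.
# The algorithm simply replays the pattern it was fed.
print(decide({"zip": "84041", "credit": 650}, historical_approvals))  # True
print(decide({"zip": "84401", "credit": 650}, historical_approvals))  # False
```

Nothing in that code is malicious. It follows its instructions perfectly, and that is exactly the point: the harm, when it comes, arrives quietly through the data.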

What Is Trusted AI?

Trusted AI is a movement aimed at ensuring that artificial intelligence systems embody ethics, transparency, and accountability. It goes beyond minimizing bias—it’s about building systems with soul.

It challenges us to ask:

  • Can we explain how this decision was made?
  • Is it fair across contexts?
  • Does it reflect our highest values—not just performance goals?

Trusted AI is about earning, not demanding, confidence from the people it serves. ~ Linda Rawson
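
One way to start answering the second question above, "Is it fair across contexts?", is simply to measure it. Here is a minimal Python sketch with invented data and field names that compares approval rates across two groups. A gap by itself does not prove unfairness, but it is exactly the kind of signal a trusted system should surface and be able to explain.

```python
# A minimal sketch (hypothetical decision log and field names) of one way to
# ask "is it fair across contexts?": compare outcome rates across groups.

from collections import defaultdict

decisions = [
    {"group": "urban", "approved": True},
    {"group": "urban", "approved": True},
    {"group": "urban", "approved": False},
    {"group": "rural", "approved": True},
    {"group": "rural", "approved": False},
    {"group": "rural", "approved": False},
]

def approval_rates(log):
    """Approval rate per group from a simple decision log."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for d in log:
        totals[d["group"]] += 1
        approvals[d["group"]] += d["approved"]
    return {g: approvals[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())

for group, rate in rates.items():
    print(f"{group}: {rate:.0%} approved")   # urban: 67%, rural: 33%
print(f"parity gap: {gap:.0%}")              # 33%, a gap worth explaining
```

A check like this is not the whole of Trusted AI, but it turns a values question into something a reviewer can see, question, and hold someone accountable for.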

Why Are Algorithms Untrustworthy?

Many AI systems today fall short. They’re trained on biased data, operate as opaque black boxes, and often lack meaningful oversight. In federal contracting, we’ve seen automation used to deny benefits or predict outcomes—sometimes without explanation or recourse.

Take TennCare Connect in Tennessee: A $400 million system built to streamline Medicaid eligibility ended up wrongfully denying thousands due to programming errors. It misassigned households, lost critical data, and terminated coverage with little accountability. In 2024, a federal judge ruled it violated due process, a stark reminder of what happens when technology lacks empathy and rigor.

Without soul, algorithms become tools of harm, replicating bias, erasing nuance, and overwhelming oversight. ~ Linda Rawson

Is There Hope?

Absolutely. Governments and agencies are adopting new frameworks and regulations, such as the EU AI Act, the NIST AI Risk Management Framework, and U.S. federal guidance for responsible AI. The Department of the Air Force, for instance, has issued ethics guidance around generative AI in contracting.

Change is happening. And small, values-driven contractors have a unique opportunity to lead with integrity. We can infuse our proposals, tools, and systems with conscience.

You Are Not Alone

If you care about these issues but don't know where to begin, start here. Trusted AI isn't just for coders. It's for strategists, creators, visionaries, and every professional shaping the future of tech.

This is a conversation, and your voice matters. Together, we shape the ethics of automation. ~ Linda Rawson

Final Thoughts

So, can algorithms earn trust?

Yes—if we build them with clarity, empathy, intention, accountability, and wisdom.

Yes—if we center human values in every line of code and every contracting decision.

Let’s build the future with conscience, code, and community.

Linda Rawson

Hi, I’m Linda Rawson, founder of GovConBiz.

I help entrepreneurs build a business and lifestyle they love!

I am personally responsible for my company, DynaGrace Enterprises, winning millions in federal government contracts.

I can help you do the same.

Work with me