Dr. Heather

Trust in Technology, Integrity in Innovation

I’ve always been captivated by questions.

Questions that challenge the status quo. Questions that demand answers from the systems we trust. Questions that ask what’s possible—and what’s at risk.

I grew up with technology. From clunky desktops to the dot-com bubble and the first rollouts of “Artificial Intelligence,” I watched it evolve rapidly, transforming how we live, work, and connect. But as technology grew smarter and more powerful, so did the questions. Could we trust it? Who controlled it? What were the consequences?

I saw technology as more than just code and circuits

I saw it as a reflection of human values, of our hopes and fears, our biases and beliefs.

This curiosity led me to explore the intersection of technology and trust. I wrote my Master’s thesis on Radio Frequency Identification (RFID), a technology that sparked widespread concern over privacy and control. People worried about being tracked, about losing autonomy. I became fascinated by the psychology of trust—what makes people feel safe adopting new technology? What inspires confidence, and what erodes it?

This fascination with trust shaped my career

My career began at IBM, a company influential in early AI innovation and in defining key milestones for AI ethics. I started in engineering, optimizing supply chains with data-driven systems. But as AI evolved, I saw the scale of its impact, not just in predicting behavior but in influencing outcomes.

I helped build one of the first Corporate AI Ethics Boards at IBM

At IBM, I helped build many of the foundational AI ethics and Responsible AI practices. We were pioneering a new field with no rulebook, no regulations, and no safety net. We had to navigate uncharted waters, balancing innovation with integrity. Our mission was clear: to build systems people could trust, systems that respected human values, and systems that enhanced lives without compromising ethics.

Over time, I learned how to bridge worlds

I spoke the language of engineers and policymakers alike, translating complex responsible AI challenges into practical solutions. My work expanded into global AI governance, contributing to key working groups at the World Economic Forum (WEF), the National Institute of Standards and Technology (NIST), the IAPP, and XPRIZE.

I saw firsthand the power of AI to scale decisions—decisions that could impact lives in ways we hadn’t fully imagined. Whether it was hiring algorithms, medical diagnostics, or financial risk assessments, the potential for both progress and harm was staggering. And the difference between the two came down to one thing: trust.

Now, I’m taking that mission even further

I joined as employee number one to build the Office of Responsible AI and Governance from scratch. I came to create a culture, to unite diverse groups who believe that ethics in AI is not just essential but a competitive advantage. My work has earned me recognition as one of the “100 Brilliant Women in AI Ethics,” but beyond accolades, it’s about impact.

I’ve always been captivated by technology’s potential, but its impact on people drives me. That impact is measured not just by what technology can do but by the trust it builds—and the lives it improves.

This isn’t just my career. It’s my mission:

To make AI not just intelligent but responsible. To build systems that empower without exploiting. To bridge the gap between innovation and integrity. And to do it all while keeping humanity at the heart of every decision.

That’s the world I’m working to create. One where technology moves us forward. And one where trust is built—one line of code at a time.

Values

Proactive Transparency

I believe in transparency both now and for the future. In a world where AI decisions can feel like black boxes, I’m committed to making technology’s impact clear, comprehensible, and accountable.

Value-centered Technology

Technology should serve and enrich people, not the other way around. I lead with a user-centric mindset, designing AI systems that prioritize human dignity, autonomy, and well-being.

Pragmatic Ethics

AI ethics should be grounded in real-world challenges and clearly defined solutions.

Asking Questions

I believe that growth happens at the edge of discomfort. We must challenge assumptions and address gray areas head-on to push the industry forward.

Long-Term Sustainability

It’s not about the launch, it’s about the legacy. AI systems don’t exist in a vacuum; they shape societies, cultures, and futures. I’m committed to building technology with long-term accountability.

Join the Conversation

Connect with me on LinkedIn to explore the latest news, discussions, and insights on Responsible AI, AI governance, and more.