Dan Taber, PhD
AI, Safety, & Strategy
Research-driven work on how AI systems shape behavior, risk, and trust
I’m a research and strategy leader focused on AI, trust & safety, and the responsible use of emerging technologies. My work sits at the intersection of research, policy, and product decision-making, with an emphasis on evidence-based approaches to risk and impact.
Over the past several years, I’ve led and built research programs in responsible AI and online safety, working closely with product, policy, and leadership teams. I’m particularly interested in how organizations translate abstract principles — fairness, accountability, safety — into real-world systems and decisions.
About Me
I'm Dan Taber, a researcher and strategist helping shape the responsible development of AI. I’m also exploring how individuals can lead meaningful, adaptive careers in a time of rapid technological change.
I began my career in academia before transitioning into tech, where I've led Responsible AI and Trust & Safety programs at Spotify and Indeed. Throughout my work, I’ve bridged technical and human perspectives, collaborating across disciplines and speaking widely about how organizations and individuals can approach AI development with thoughtfulness and integrity.
My own path wasn't linear: I started as a humanities major and grew into tech leadership through curiosity, experimentation, and lifelong learning. That journey continues to shape my research and writing, especially as I explore how professionals navigate uncertainty, growth, and reinvention in the AI era.
Interested in similar questions? I’d love to hear from you.
RESEARCH & STRATEGIC INSIGHTS
Long before Harvard Business Review declared data scientist the sexiest job of the 21st century, I thought data science was ‘sexy’. With expertise in machine learning, experimental design, and causal inference, I help organizations turn complex data into strategic action. My work ranges from assessing algorithmic fairness to modeling the impacts of complex systems, always with a focus on translating rigorous analysis into actionable insights.
RESPONSIBLE AI
I’ve led the development of Responsible AI programs at global tech companies like Spotify and Indeed, creating scalable frameworks for ethical AI implementation. My expertise goes beyond the corporate sector; as a Berkman Klein Fellow, I co-founded AI Blindspot to help organizations identify and address structural inequities in AI systems. This blend of corporate and civil society experience shapes my holistic approach to responsible AI strategy.
STRATEGIC COMMUNICATION
No matter your role, telling your story effectively is key. I specialize in translating complex AI and technology concepts into clear, actionable insights for business leaders, bridging the gap between technical capabilities and strategic implementation. Through speaking engagements, workshops, and advisory work, I help organizations not only understand AI’s potential but also turn it into real-world opportunities.
“Ideas not coupled with action never become bigger than the brain cells they occupied.”