Responsible AI is often framed in terms of ethical models and fair data, but the foundation for responsibility lies in infrastructure. In this talk, we’ll explore how platform-level capabilities like environment isolation, auditability, workload reproducibility, and resource-aware orchestration are essential to delivering AI that’s not just performant but trustworthy. We’ll also examine how infrastructure decisions directly impact the quality and reliability of model evaluations, enabling teams to catch bias, ensure compliance, and meet evolving governance standards. If you’re building or scaling AI systems, this session will show how infrastructure becomes the enabler of responsible AI at every stage.

Taylor Smith
Taylor Smith is a Senior AI Advocate at Red Hat, where she champions open source innovation and the responsible adoption of AI at scale. With a background in software development, Kubernetes, Linux, and technical partnerships, she focuses on helping organizations build and operationalize AI using modern infrastructure. Taylor is passionate about making AI more accessible, trustworthy, and grounded in real-world use cases.