I co-host the Computer Architecture Podcast with Lisa Hsu. Check out our episodes on your favorite podcast player (iTunes, Spotify, etc.). We wrote a blog post for SIGARCH with key takeaways across our episodes.
Dr. Tushar Krishna is an Associate Professor at Georgia Tech and a member of the ISCA, MICRO, and HPCA Halls of Fame. We discuss cross-stack design and tooling for large-scale distributed AI systems, the role of interconnects, and how to make AI systems more programmable and scalable.
Dr. Babak Falsafi is a Professor at EPFL and founding president of the Swiss Data Center Efficiency Association. We discuss measuring datacenter efficiency, sustainability metrics for compute, and envisioning the future of computer architecture.
Dr. Caroline Trippel is an Assistant Professor in CS and EE at Stanford. We discuss high-assurance computer architectures, hardware-software co-verification, and how formal techniques can scale to modern systems.
Dr. Ricardo Bianchini is a Technical Fellow and Corporate Vice President at Microsoft Azure, leading compute workloads, server capacity, and datacenter infrastructure. In this five-year anniversary episode, we discuss the tech transfer playbook for bridging research to production at hyperscale.
Dr. Arkaprava Basu is an Associate Professor at the Indian Institute of Science. We discuss memory management and software reliability for CPUs and GPUs, optimizing memory systems for chiplet-based GPUs, and ensuring software reliability across heterogeneous systems.
Dr. Dan Sorin is a Professor of ECE at Duke University and co-founder of Realtime Robotics. We discuss codesign for industrial robotics, accelerating motion planning in hardware, and the lessons learned from pivoting a hardware startup.
Vijay Janapa Reddi is an Associate Professor at Harvard and VP and co-founder of MLCommons. We discuss Architecture 2.0 — using AI to design and verify computer systems — and the broader role of benchmarking and ML for systems.
Dr. Carole-Jean Wu is Director of AI Research at Meta and a founding member and VP of MLCommons. We discuss sustainability in a post-AI world, the carbon footprint of large-scale AI, and how computer architects can drive measurable environmental impact.
Karu Sankaralingam is a Professor at the University of Wisconsin-Madison and a principal research scientist at Nvidia, having founded a hardware startup along the way. We discuss the hardware startup journey from business case to software, dataflow computing, and what it takes to commercialize a new architecture.
Dr. Gabriel Loh is a Senior Fellow at AMD Research and Advanced Development, and previously a professor at Georgia Tech. We discuss system design for exascale computing, the rise of advanced memory technologies and 3D stacking, and the architectural implications of HBM, chiplets, and disaggregated memory.
Dr. Vivienne Sze is an Associate Professor in EECS at MIT. We discuss energy-efficient algorithm-hardware codesign, the role of hardware in enabling video, ML, robotics, and digital health applications, and the importance of energy as a first-class architectural constraint.
A special episode marking the 50th anniversary of ISCA, with Dr. David Patterson (Berkeley/Google), Dr. Norm Jouppi (Google), and Dr. Natalie Enright Jerger (Toronto). We discuss the past, present, and future of computer architecture — the field's evolution, industry-academia collaboration, and the impact of domain-specific architectures.
Jim Keller is the CTO of Tenstorrent, with prior senior architect roles at Intel, Tesla (Autopilot), AMD, Apple, and DEC. We discuss the future of AI computing, what makes AI workloads architecturally different, and how to build and nurture great hardware teams.
Professor Brandon Lucia is at Carnegie Mellon University, where he leads work on physically-constrained computing systems. We discuss intermittent computing on energy-harvesting devices, programming abstractions for unreliable execution, and the unique challenges of computing at the extreme edge.
Professor Yungang Bao is a professor and Deputy Director at the Institute of Computing Technology, Chinese Academy of Sciences. We discuss hyperscale cloud trends in China, agile open-source RISC-V hardware design, and how to grow a healthy hardware ecosystem.
Professor Todd Austin is at the University of Michigan, known for his work on robust and secure system design. We discuss durable security and privacy-enhanced computing, threats like Spectre and Meltdown, and the Morpheus architecture for self-encrypting computers.
Professor Sarita Adve is the Richard T. Cheng Professor of Computer Science at UIUC. We discuss domain-specific systems for AR/VR and extended reality, the ILLIXR benchmarking infrastructure, and the unique constraints of always-on, energy-bounded immersive computing.
Professor Fred Chong is the Seymour Goodman Professor at the University of Chicago and Chief Scientist of Super.tech. We discuss quantum computing architectures, the path from physical qubits to fault-tolerant systems, and the role of compilers and abstractions in early-stage technology.
Professor Christina Delimitrou is an Assistant Professor at Cornell, recipient of the 2020 IEEE TCCA Young Architect Award. We discuss datacenter architectures for cloud microservices, the QoS challenges of microservices vs. monoliths, and ML-driven cluster scheduling.
Professor Mark D. Hill is Professor Emeritus at the University of Wisconsin-Madison and Partner Hardware Architect at Microsoft. We discuss cross-layer hardware-software optimization in the post-Moore era, the playbook for impactful collaborations, and what makes for a healthy academic research portfolio.
Professor Jim Larus is Dean of the School of Computer and Communication Sciences at EPFL. We discuss the design of the DP3T privacy-preserving COVID contact-tracing protocol, the power dynamics between national governments and Big Tech, and the differences between academic and industrial research.
Dr. Bill Dally is Chief Scientist and SVP of Research at Nvidia and Professor of Computer Science at Stanford. We discuss the future of computing innovation in the post-Moore era, the role of domain-specific accelerators, and the design philosophy behind hardware-software codesign.
Dr. Kim Hazelwood is West Coast Head of Engineering at Facebook AI Research (FAIR), and previously a tenured associate professor at the University of Virginia. We discuss systems for ML at scale, workload diversity beyond CNNs and RNNs, and reframing 'reliability' for long-running ML training jobs.
Copyright Suvinay Subramanian © 2016 - Present