Tutorial: AI-Driven Accelerator Programming with LLMLift and Autocomp

๐Ÿ›๏ธ ASPLOS 2026
๐Ÿ“ Clemente Room, The Landing Hotel (Pittsburgh, PA, USA)
๐Ÿ“… Monday, March 23, 2026 (8:30 AM โ€“ 12:00 PM)
Stop hand-tuning accelerator code: let AI do it!
Learn how we use LLMLift and Autocomp to turn weeks of manual tuning into a few hours of automation.
Hands-on demo with Trainium compute and LLM credits provided by AWS!

Overview

Programming and optimizing hardware accelerators is notoriously difficult, requiring deep knowledge of both hardware and domain-specific languages (DSLs). Recent advances in AI, in particular large language models (LLMs), offer a new paradigm: leveraging pretrained models for verified code translation and hardware-aware optimization. This tutorial introduces LLMLift + Autocomp, two complementary frameworks that automate accelerator programming through LLM-driven compilation.

LLMLift arXiv GitHub

LLMLift (NeurIPS 2024) demonstrates how LLMs can perform verified lifting: translating general-purpose code into DSLs (including accelerator DSLs) while also generating formal proofs of correctness.
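To make the verified-lifting idea concrete, here is a minimal Python sketch of the workflow. All names are illustrative, not LLMLift's actual API, and the equivalence check is a stand-in: LLMLift generates proof annotations discharged by a solver, whereas this sketch simply tests the two programs against each other.

```python
# Hedged sketch of verified lifting (illustrative names, not LLMLift's API).
# An LLM proposes a DSL-level summary of a source program; a checker then
# verifies that the two programs agree.

def source_program(xs, ys):
    """General-purpose Python: an elementwise multiply-accumulate loop."""
    acc = 0
    for x, y in zip(xs, ys):
        acc += x * y
    return acc

def lifted_candidate(xs, ys):
    """LLM-proposed lifting into a tensor-DSL-style reduction primitive
    (modeled here with a Python built-in for illustration)."""
    return sum(x * y for x, y in zip(xs, ys))

def check_equivalence(f, g, test_cases):
    """Stand-in for the proof step: LLMLift emits formal proof annotations
    checked by a solver; this sketch just tests on sample inputs."""
    return all(f(xs, ys) == g(xs, ys) for xs, ys in test_cases)

cases = [([1, 2, 3], [4, 5, 6]), ([], []), ([-1, 7], [2, 0])]
assert check_equivalence(source_program, lifted_candidate, cases)
```

Once a candidate passes the verification step, the DSL-level program can be compiled to the accelerator backend with correctness guaranteed relative to the original code.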

Autocomp arXiv GitHub

Autocomp (MLArchSys 2025 Outstanding Paper) shows how LLMs can drive automated optimization. Autocomp uses structured prompting, hardware feedback, and iterative search to generate high-performance accelerator code that surpasses expert hand-tuned implementations across diverse backends such as Gemmini, AWS Trainium, and NVIDIA GPUs.
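The optimize-measure-iterate loop at the core of this approach can be sketched as follows. Everything here is a toy stand-in with hypothetical names: `propose_candidates` models the LLM's structured rewrite proposals, and `measure_latency` models hardware (or cost-model) feedback; neither is Autocomp's real interface.

```python
# Hedged sketch of an Autocomp-style search loop (illustrative names only).
# Each round, an LLM proposes candidate rewrites of the current best kernel;
# measurements on hardware (or a cost model) pick the winner, which then
# seeds the next round of proposals.

def propose_candidates(kernel, n):
    """Stand-in for structured LLM prompting: derive n kernel variants."""
    return [f"{kernel}+opt{i}" for i in range(n)]

def measure_latency(kernel):
    """Stand-in for a hardware measurement; in this toy model, kernels
    with more applied optimizations score lower (better) latency."""
    return 100.0 / (1 + kernel.count("+opt"))

def optimize(kernel, rounds=3, beam=4):
    """Iterative search: keep the fastest candidate found each round."""
    best, best_lat = kernel, measure_latency(kernel)
    for _ in range(rounds):
        for cand in propose_candidates(best, beam):
            lat = measure_latency(cand)
            if lat < best_lat:
                best, best_lat = cand, lat
    return best, best_lat

best, lat = optimize("matmul")  # 3 rounds, one rewrite kept per round
```

The design point worth noting is that the loop is portable: only `propose_candidates` (the prompt) and `measure_latency` (the backend harness) are hardware-specific, which is what lets the same search run on Gemmini, Trainium, or GPUs.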

Through a mix of lectures and hands-on sessions, participants will learn how to use LLMs as reasoning engines for verified code transformation and as autonomous optimization agents for new hardware. The tutorial will cover principles of prompt engineering for compiler tasks, integration of LLMs with theorem provers and performance feedback loops, and practical workflows for applying these techniques to custom accelerators and DSLs. We hope attendees will leave with a better understanding of how LLMs can bridge the gap between hardware design and software optimization, as well as the ability to apply LLMLift and Autocomp to their own projects!

Schedule

8:30–8:40
Introduction
  • Motivation: The growing complexity and importance of accelerator programming
  • Challenges in DSL design, compiler optimization, and hardware specialization
  • The emerging role of AI in systems research
8:40–10:00
LLMLift
8:40–9:00 Talk: LLMLift: Verified Code Transpilation with LLMs
9:00–10:00 Hands-on: Translating Python to Accelerator DSL
10:00–10:30 Coffee Break ☕
10:30–11:50
Autocomp
10:30–10:50 Talk: Autocomp: A Portable Code Optimizer for Tensor Accelerators
10:50–11:50 Hands-on: Building a Trainium Code Optimizer
11:50–12:00
Q&A and Future Directions
  • Extending LLM-driven synthesis for new hardware platforms
  • Open research challenges: verification, LLM post-training, compiler integration, context engineering, etc.

Organizers

Charles Hong
Ph.D. Candidate
UC Berkeley

Sahil Bhatia
Ph.D. Candidate
UC Berkeley

Alvin Cheung
Associate Professor
UC Berkeley

Yakun Sophia Shao
Associate Professor
UC Berkeley

Expected Audience and Outcomes

This tutorial is designed for researchers and practitioners of all levels in computer architecture, programming languages, and machine learning systems. Participants should have basic familiarity with AI accelerators and accelerator programming, but no prior experience with LLM-based code generation is required.

By the end of the session, attendees will:

  • Understand how LLMs can be integrated into ML code generation/optimization and accelerator bringup
  • Learn the design principles behind LLMLift and Autocomp
  • Gain hands-on experience using LLMs for verified code translation and optimization
  • Explore open research directions at the intersection of AI, code generation, and hardware design


Acknowledgments

We thank Amazon Web Services (AWS) for generously sponsoring this tutorial with Trainium compute resources, Bedrock credits, and engineer support for the hands-on sessions.