Tutorial: AI-Driven Accelerator Programming with LLMLift and Autocomp

📍 ASPLOS 2026, Pittsburgh, PA, USA
📅 Monday, March 23, 2026 (Morning Session)
⏰ Half-day tutorial (3.5 hours)

Overview

Programming and optimizing hardware accelerators is notoriously difficult, requiring deep knowledge of both hardware and domain-specific languages (DSLs). Recent advances in AI, particularly large language models (LLMs), offer a new paradigm: leveraging pretrained models for verified code translation and hardware-aware optimization. This tutorial introduces LLMLift and Autocomp, two complementary frameworks that automate accelerator programming through LLM-driven compilation.

LLMLift

LLMLift (NeurIPS 2024) demonstrates how LLMs can perform verified lifting: translating general-purpose code into DSLs (including accelerator DSLs) while also generating formal proofs of correctness.
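
To make the verification step concrete, here is a minimal sketch in the spirit of verified lifting, using the Z3 SMT solver (`pip install z3-solver`). The toy DSL, its interpreter, and the candidate lift are illustrative assumptions rather than LLMLift's actual Python IR or API.

```python
# Minimal verified-lifting sketch (illustrative; not LLMLift's actual API).
# An LLM proposes a DSL term for a source function; an SMT solver (Z3)
# then checks that the two agree on all inputs.
from z3 import Ints, prove

def source(a, b):
    # Original general-purpose code.
    return a * b + a

# Toy DSL: terms are variables, constants, or ("op", lhs, rhs) tuples.
def interpret(term, env):
    """Interpret a DSL term over Z3 integer variables."""
    if isinstance(term, str):
        return env[term]
    if isinstance(term, int):
        return term
    op, lhs, rhs = term
    l, r = interpret(lhs, env), interpret(rhs, env)
    return l * r if op == "mul" else l + r

# Candidate lift an LLM might propose: a * (b + 1).
candidate = ("mul", "a", ("add", "b", 1))

a, b = Ints("a b")
env = {"a": a, "b": b}
# Z3 proves source and lifted term are equivalent for all integer inputs.
prove(source(a, b) == interpret(candidate, env))  # prints "proved"
```

In LLMLift's setting, the lifted term would be a program in a real accelerator DSL and the equivalence check a full verification condition, but the shape of the loop is the same: propose with an LLM, then check with a prover.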

Autocomp

Autocomp (MLArchSys 2025 Outstanding Paper) shows how LLMs can drive automated optimization. It uses structured prompting, hardware feedback, and iterative search to generate high-performance accelerator code that surpasses expert hand-tuned implementations across diverse backends, including Gemmini, AWS Trainium, and NVIDIA GPUs.
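
As a rough illustration of this approach, the sketch below implements a plan-then-implement beam search with hardware feedback. The function names (`llm_plan`, `llm_implement`, `measure`) are hypothetical stand-ins for LLM queries and hardware profiling, not Autocomp's real interface; see the paper and repository for the actual system.

```python
# Illustrative sketch of an Autocomp-style optimization loop
# (hypothetical names throughout).

def llm_plan(code, feedback, n):
    """Ask the LLM for n candidate optimization plans (stub)."""
    return [f"plan-{i}" for i in range(n)]

def llm_implement(code, plan):
    """Ask the LLM to rewrite the code according to a plan (stub)."""
    return code  # a real implementation returns transformed code

def measure(code):
    """Compile and run on hardware; return (is_correct, latency) (stub)."""
    return True, 1000.0

def optimize(kernel, iterations=5, beam_width=4, plans_per_candidate=3):
    # Beam of (latency, code) pairs, best first.
    beam = [(measure(kernel)[1], kernel)]
    for _ in range(iterations):
        candidates = []
        for latency, code in beam:
            feedback = f"current latency: {latency} cycles"
            # Phase 1: propose high-level plans (tiling, unrolling, ...).
            for plan in llm_plan(code, feedback, plans_per_candidate):
                # Phase 2: implement each plan as concrete code.
                new_code = llm_implement(code, plan)
                ok, new_latency = measure(new_code)
                if ok:  # keep only functionally correct rewrites
                    candidates.append((new_latency, new_code))
        # Keep the fastest beam_width candidates for the next round.
        beam = sorted(beam + candidates)[:beam_width]
    return beam[0][1]
```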

Through a mix of lectures and hands-on sessions, participants will learn how to use LLMs as reasoning engines for verified code transformation and as autonomous optimization agents for new hardware. The tutorial will cover principles of prompt engineering for compiler tasks, integration of LLMs with theorem provers and performance feedback loops, and practical workflows for applying these techniques to custom accelerators and DSLs. We hope attendees will leave with a better understanding of how LLMs can bridge the gap between hardware design and software optimization, as well as the ability to apply LLMLift and Autocomp to their own projects!

Organizers

Charles Hong
Ph.D. Candidate
UC Berkeley

Sahil Bhatia
Ph.D. Candidate
UC Berkeley

Alvin Cheung
Associate Professor
UC Berkeley

Yakun Sophia Shao
Associate Professor
UC Berkeley

Tentative Outline

Note that this is a tentative outline and is likely to change as we get closer to the date of the tutorial.

1. Introduction (20 min)

  • Motivation: The growing complexity of accelerator programming
  • Challenges in DSL design, compiler optimization, and hardware specialization
  • The emerging role of LLMs in systems research

2. LLMs for Code Translation and Verification: LLMLift (30 min)

  • Overview of verified lifting and its traditional symbolic approaches
  • LLMLift framework: Python as an intermediate representation (IR)
  • Integrating LLMs with theorem provers for verified code transpilation
  • Demonstration: Translating and verifying tensor programs
  • Discussion: Extending to new DSLs and verification backends

3. LLMs for Accelerator Code Optimization: Autocomp (30 min)

  • Background: Optimization challenges in tensor accelerator programming
  • Autocomp framework:
    • Two-phase (plan + implement) prompting and beam search
    • Hardware-in-the-loop optimization using feedback (performance and correctness); a sketch of this feedback step follows this list
    • Methods for increasing diversity of generated responses
  • Demonstration: Optimizing Gemmini, Trainium, and CUDA kernels using Autocomp
  • Discussion: Portability, promising optimization targets, and future improvements to Autocomp
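
To give a flavor of the hardware-in-the-loop step mentioned above, the snippet below sketches how measured results might be folded into the next optimization prompt. The `ProfileResult` fields and the prompt wording are illustrative assumptions, not Autocomp's actual feedback format.

```python
# Illustrative sketch of turning hardware feedback into prompt text
# (field names and wording are assumptions, not Autocomp's format).
from dataclasses import dataclass

@dataclass
class ProfileResult:
    correct: bool          # did the kernel match the reference output?
    cycles: int            # measured latency on the accelerator
    utilization: float     # fraction of peak compute achieved

def feedback_prompt(code: str, result: ProfileResult) -> str:
    if not result.correct:
        status = "The previous rewrite produced INCORRECT results; fix it."
    else:
        status = (f"The previous rewrite was correct and took "
                  f"{result.cycles} cycles "
                  f"({result.utilization:.0%} of peak utilization).")
    return (f"{status}\n\nCurrent kernel:\n{code}\n\n"
            "Propose a plan to further reduce latency.")

# Example use with made-up numbers:
print(feedback_prompt("/* kernel source */", ProfileResult(True, 42000, 0.61)))
```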

4. Coffee Break (30 min)

5. Hands-on Session (80 min)

  • Interactive walk-through:
    • Translating Python to accelerator DSL using LLMLift
    • Iteratively optimizing a Trainium kernel with Autocomp's workflow
  • Example exercises and open-source repository walkthrough

6. Open Discussion and Future Directions (20 min)

  • Integrating LLM-driven synthesis into existing compiler stacks
  • Opportunities for combining LLMLift and Autocomp (end-to-end verified optimization)
  • Q&A on open research challenges: verification, LLM post-training, hardware feedback, etc.

Expected Audience and Outcomes

This tutorial is designed for researchers and practitioners of all levels in computer architecture, programming languages, and machine learning systems. Participants should have basic familiarity with compilers or accelerator programming, but no prior experience with LLM-based code generation is required.

By the end of the session, attendees will:

  • Understand how LLMs can be integrated into compiler and synthesis workflows
  • Learn the design principles behind LLMLift and Autocomp
  • Gain hands-on experience using LLMs for verified code translation and optimization
  • Explore open research directions at the intersection of AI, compilers, and hardware design
