Research

My research sits at the intersection of AI/ML and hardware design, with an emphasis on automating hardware design and AI compilers (most recently with LLM-driven techniques).

Tutorial: Automating Accelerator Programming with LLM-Driven Code Translation and Optimization

📍 ASPLOS 2026, Pittsburgh, PA 📅 Monday, March 23, 2026 ⏰ Half-day tutorial

Programming and optimizing hardware accelerators is notoriously difficult, requiring deep knowledge of both the hardware and its domain-specific languages (DSLs). This tutorial introduces LLMLift and Autocomp, two complementary frameworks that automate accelerator programming through LLM-driven compilation.

LLMLift demonstrates verified code lifting: translating general-purpose code into accelerator DSLs with machine-checkable correctness proofs. Autocomp shows how LLMs can drive automated optimization, using structured prompting, hardware feedback, and iterative search to generate high-performance code that surpasses expert implementations on Gemmini, AWS Trainium, and NVIDIA GPUs.
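To give a flavor of the optimization side, here is a minimal sketch of the kind of feedback loop Autocomp performs: propose candidate rewrites, measure them on hardware, and feed the results back into the next round of prompts. All names here are illustrative stand-ins, not the actual Autocomp API, and the LLM call and hardware timer are stubbed so the loop is self-contained.

```python
import random

def propose_candidates(code, feedback, n=4):
    """Stub for an LLM call: in the real system, a structured prompt built
    from `code` and the hardware `feedback` would ask for n rewrites."""
    return [f"{code} /* variant {random.randint(0, 999)} */" for _ in range(n)]

def measure_latency(code):
    """Stub for compiling and timing a candidate on the target accelerator."""
    return 100.0 / (1 + len(code) % 7)  # arbitrary deterministic score

def optimize(seed_code, iterations=3):
    """Iterative search: keep the fastest candidate seen so far."""
    best_code, best_latency = seed_code, measure_latency(seed_code)
    for _ in range(iterations):
        feedback = f"latency={best_latency:.2f}"  # hardware feedback for prompt
        for cand in propose_candidates(best_code, feedback):
            lat = measure_latency(cand)
            if lat < best_latency:
                best_code, best_latency = cand, lat
    return best_code, best_latency

code, latency = optimize("matmul_kernel()")
```

Because the loop only ever replaces the incumbent with a strictly faster candidate, the search never regresses below the seed program's measured performance; the real system adds richer prompts, profiling data, and verification on top of this skeleton.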