
Our Vision

Our vision is to build a future where machine learning and compiler technologies work in unison to unlock the full potential of heterogeneous computing. We aim to create intelligent, adaptive compilation flows that not only accelerate real-world DSP and ML workloads, but also evolve alongside emerging hardware architectures. By embedding machine learning into every layer of the compilation process—from high-level code transformations to low-level scheduling—we strive to make compilers smarter, more efficient, and deeply aware of both application semantics and hardware intricacies. We envision a world where ML-driven compilers can dynamically optimize for latency, energy, and accuracy across CPUs, DSPs, NPUs, and beyond. Through DSP-MLIR and its integration with MLIR, we are laying the groundwork for this vision—enabling domain-specific optimizations, high-level abstractions, and seamless deployment across diverse platforms. Our ultimate goal is to make real-time, high-performance computing accessible and efficient for all applications, from audio and radar to edge AI and communications.

Introduction

Within a few years of the MLIR framework's emergence, MLIR dialects have appeared for domains such as deep learning (ONNX-MLIR, TPU-MLIR, Torch-MLIR, HDNN, etc.) and quantum computing (the Quantum MLIR dialect). Since MLIR is already being used to build domain-specific languages and compilers through its dialect abstraction, and it promises seamless integration of these dialects through a reusable infrastructure, it is a foundational step toward unifying diverse domains under one framework.


Some of Our Work

DSP-MLIR: A Domain-Specific Language and MLIR Dialect for Digital Signal Processing

Challenge

Digital Signal Processing (DSP) is foundational to applications in telecommunications, audio processing, medical imaging, and more. However, compiling high-performance DSP code remains a challenge due to limitations in traditional compilers, which operate on low-level representations and lack awareness of domain-specific patterns and optimizations. Existing DSP libraries and vendor-specific compilers are hardware-locked and fail to expose high-level abstractions necessary for cross-kernel optimizations. This limits both performance and developer productivity, especially as real-time DSP increasingly overlaps with deep learning in modern edge and embedded platforms.

Solution

DSP-MLIR is an open-source compiler framework built on MLIR, designed to streamline the development and improve the performance of DSP applications. It introduces:

  • DSP-DSL – a high-level, Python-like domain-specific language that simplifies DSP programming and reduces code size by up to 5× compared to C.
  • DSP-dialect – an MLIR dialect with 90+ operations tailored for DSP, including filters, FFTs, windowing, and more.
  • Domain-specific optimizations – 16 high-level transformations based on DSP theorems and operation fusion, unlocking performance gains unavailable to traditional compilers (see the sketch after this list).
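
To give a flavor of what such a fusion rewrite looks like when expressed with MLIR's C++ pattern API, here is a minimal sketch that folds two cascaded gain (scalar-multiply) operations into one, so the signal buffer is traversed only once. It is illustrative only: the dsp::GainOp class and its accessors are hypothetical stand-ins rather than the actual DSP-dialect API; only the upstream MLIR pieces (OpRewritePattern, PatternRewriter, arith::MulFOp) are real.

    // Hypothetical rewrite pattern: gain(gain(x, a), b) -> gain(x, a * b).
    // dsp::GainOp and its accessors are illustrative stand-ins, not the real
    // DSP-dialect API; header paths follow recent upstream MLIR releases.
    #include "mlir/Dialect/Arith/IR/Arith.h"
    #include "mlir/IR/PatternMatch.h"

    namespace {
    struct FuseCascadedGain : public mlir::OpRewritePattern<dsp::GainOp> {
      using OpRewritePattern::OpRewritePattern;

      mlir::LogicalResult
      matchAndRewrite(dsp::GainOp outer,
                      mlir::PatternRewriter &rewriter) const override {
        // Only fire when the input of this gain is itself produced by a gain.
        auto inner = outer.getInput().getDefiningOp<dsp::GainOp>();
        if (!inner)
          return mlir::failure();

        // Multiply the two gain factors once, at rewrite time.
        mlir::Value fusedGain = rewriter.create<mlir::arith::MulFOp>(
            outer.getLoc(), inner.getGain(), outer.getGain());

        // Replace the outer op with a single gain applied to the original input.
        rewriter.replaceOpWithNewOp<dsp::GainOp>(outer, outer.getType(),
                                                 inner.getInput(), fusedGain);
        return mlir::success();
      }
    };
    } // namespace

Patterns of this kind are normally collected into a RewritePatternSet and applied greedily by an optimization pass, so the fusion happens without any change to the programmer's DSP-DSL source.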

DSP-MLIR integrates seamlessly with the MLIR ecosystem and supports progressive lowering to standard and affine dialects, eventually compiling down to efficient LLVM IR via Clang.
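
As a rough sketch of how such a progressive-lowering pipeline can be assembled from upstream MLIR passes when driven from C++: the DSP-specific entry point createLowerDspToAffinePass is a hypothetical placeholder, and the upstream pass-creation functions shown exist in recent MLIR releases but occasionally move or get renamed.

    // Sketch of a progressive-lowering pipeline using upstream MLIR passes.
    // createLowerDspToAffinePass is hypothetical; the other passes are upstream,
    // but their names and headers shift between MLIR releases.
    #include "mlir/Conversion/Passes.h"
    #include "mlir/IR/BuiltinOps.h"
    #include "mlir/IR/MLIRContext.h"
    #include "mlir/Pass/PassManager.h"

    mlir::LogicalResult lowerToLLVMDialect(mlir::ModuleOp module,
                                           mlir::MLIRContext &context) {
      mlir::PassManager pm(&context);

      // Hypothetical first step: DSP ops -> affine loops over memrefs.
      // pm.addPass(createLowerDspToAffinePass());

      // Upstream lowerings: affine -> SCF/CF -> LLVM dialect.
      pm.addPass(mlir::createLowerAffinePass());
      pm.addPass(mlir::createConvertSCFToCFPass());
      pm.addPass(mlir::createArithToLLVMConversionPass());
      pm.addPass(mlir::createConvertFuncToLLVMPass());
      pm.addPass(mlir::createReconcileUnrealizedCastsPass());

      return pm.run(module);
    }

Once the module reaches the LLVM dialect, the standard MLIR-to-LLVM-IR translation and the LLVM backend take over to produce object code for the target CPU or DSP.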

Impact

DSP-MLIR outperforms state-of-the-art compiler flows (GCC, Clang, Hexagon-Clang) with:

  • 12% average speedup on CPUs and DSPs.
  • 10% reduction in binary size.
  • No increase in compilation time.
  • 4×–5× reduction in development effort through DSP-DSL.

These benefits are achieved without hardware-specific tuning, enabling DSP-MLIR to complement existing low-level libraries like QHL, with future support planned for automatic lowering to such libraries.

DSP-MLIR is part of ongoing MLIR-based research to unify DSP and deep learning compilation in next-generation edge processors.

Repository: https://github.com/MPSLab-ASU/DSP_MLIR