Advanced Optimizations with LLVM

Posted in programming


Welcome back, fellow developers! In our previous articles, we explored the core components of LLVM, guided you through its installation, and demonstrated how to write a simple compiler using LLVM. Now it's time to take our knowledge to the next level by delving into advanced optimizations. Optimizations are a crucial aspect of compiler development, and LLVM provides a robust set of tools and techniques to improve the performance of your code. In this article, we'll explore how to leverage LLVM's powerful optimization passes to make your programs faster and more efficient. So, open up Vim (or your preferred IDE), and let's dive into the world of code optimization!

Understanding LLVM Optimization Passes

What are Optimization Passes?

Optimization passes are modules that perform various transformations and optimizations on the Intermediate Representation (IR) code to improve performance and reduce code size. These optimizations can be categorized into several types, including:

  • Code Simplification: Reduces complexity by removing redundant instructions.
  • Loop Optimization: Enhances performance by optimizing loop structures.
  • Inlining: Replaces function calls with the function body to eliminate call overhead.
  • Vectorization: Converts scalar operations to vector operations to exploit data-level parallelism.
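To make the first category concrete, here is an illustrative sketch of what code simplification does at the IR level (the function name is made up for this example). Adding zero is redundant, and a simplification pass such as instcombine folds it away:

    define i32 @square(i32 %x) {
    entry:
      %mul = mul i32 %x, %x
      %add = add i32 %mul, 0    ; redundant: adding zero changes nothing
      ret i32 %add
    }

After simplification, the redundant add is gone and the multiply feeds the return directly:

    define i32 @square(i32 %x) {
    entry:
      %mul = mul i32 %x, %x
      ret i32 %mul
    }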

How Optimization Passes Work

LLVM applies optimization passes in a sequence, known as a pass pipeline. Each pass transforms the IR code, and the output of one pass becomes the input for the next. This modular approach allows developers to fine-tune the optimization process by selecting and configuring specific passes.
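Because passes are specified explicitly, you can assemble your own pipeline on the command line. For example, the following invocation (assuming an IR file named input.ll) runs mem2reg, instcombine, and gvn in that order, with each pass consuming the IR the previous one produced:

    opt -passes='mem2reg,instcombine,gvn' -S input.ll -o output.ll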

Using Built-in Optimization Passes

LLVM provides a comprehensive set of built-in optimization passes. Let's explore how to apply these passes to your IR code.

Applying Standard Passes

LLVM's opt tool allows you to apply optimization passes to IR files. For example, to apply the -O3 optimization level, which includes aggressive optimizations, use the following command:

opt -O3 -S input.ll -o output.ll

This command reads the IR file input.ll, applies the -O3 pipeline, and writes the optimized IR to output.ll. The -S flag tells opt to emit human-readable IR; without it, opt writes bitcode.

Command Line Examples

Here are a few examples of applying specific optimization passes using the opt tool:

  1. Function Inlining:

    opt -passes=inline -S input.ll -o output.ll
  2. Loop Unrolling:

    opt -passes=loop-unroll -S input.ll -o output.ll
  3. Dead Code Elimination:

    opt -passes=dce -S input.ll -o output.ll

On older LLVM releases that still default to the legacy pass manager, the equivalent flags are -inline, -loop-unroll, and -dce.
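To see dead code elimination in action, consider an illustrative function containing a computation whose result is never used:

    define i32 @example(i32 %a, i32 %b) {
    entry:
      %sum = add i32 %a, %b
      %unused = mul i32 %a, 42    ; result is never read
      ret i32 %sum
    }

Because the multiply has no uses and no side effects, running the dce pass removes it, leaving only the add and the return.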

Developing Custom Optimization Passes

While LLVM provides a rich set of built-in passes, there may be scenarios where you need custom optimizations tailored to your specific requirements. Let's explore how to develop and integrate custom passes with LLVM.

Writing a Custom Pass

To write a custom pass, you need to create a new C++ file and implement the pass logic. Here is a basic example of a custom pass that counts the number of instructions in each function:

#include "llvm/Pass.h"
#include "llvm/IR/Function.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

namespace {
  struct InstructionCountPass : public FunctionPass {
    static char ID;
    InstructionCountPass() : FunctionPass(ID) {}

    bool runOnFunction(Function &F) override {
      unsigned int instructionCount = 0;
      // Sum the instruction counts of all basic blocks in the function.
      for (auto &BB : F) {
        instructionCount += BB.size();
      }
      errs() << "Function " << F.getName() << " has " << instructionCount
             << " instructions.\n";
      // Return false because this pass only inspects the IR; it never
      // modifies it.
      return false;
    }
  };
} // end anonymous namespace

char InstructionCountPass::ID = 0;
static RegisterPass<InstructionCountPass> X("instr-count", "Instruction Count Pass", false, false);

Integrating Custom Passes with LLVM

To integrate your custom pass with LLVM, you need to build it as a shared library and load it using the opt tool. Here are the steps to do this:

  1. Create a CMakeLists.txt File:

    cmake_minimum_required(VERSION 3.10)
    project(CustomPass)
    find_package(LLVM REQUIRED CONFIG)
    include_directories(${LLVM_INCLUDE_DIRS})
    add_definitions(${LLVM_DEFINITIONS})
    add_library(CustomPass MODULE InstructionCountPass.cpp)
  2. Build the Custom Pass:

    mkdir build
    cd build
    cmake ..
    make
  3. Load the Custom Pass with opt:

    opt -load ./build/libCustomPass.so -instr-count input.ll -o output.ll

The shared library is not linked against LLVM itself; it resolves LLVM symbols from the opt binary that loads it. The file name follows CMake's default naming for the CustomPass target (libCustomPass.so on Linux; the extension differs on macOS and Windows). Because RegisterPass targets the legacy pass manager, recent LLVM releases may also require the -enable-new-pm=0 flag.
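With the pass loaded, opt runs it over every function in the module and prints its report to standard error. For a module with two functions, the output looks something like this (the function names and counts here are illustrative):

    Function main has 12 instructions.
    Function helper has 4 instructions.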

Profiling and Performance Tuning

Optimization is an iterative process that involves profiling and tuning. Let's explore some tools and techniques for profiling and tuning your LLVM-based projects.

Tools for Profiling

  1. LLVM Profiler (llvm-profdata): Merges and summarizes profiling data emitted by instrumented binaries to guide optimizations.
  2. Perf: A powerful Linux profiling tool that provides detailed performance metrics.
  3. Valgrind: A suite of tools for dynamic analysis, including profiling and memory debugging.

Interpreting Profiling Results

Profiling provides insights into the performance characteristics of your code. Key metrics to analyze include:

  • Execution Time: Identify hotspots and optimize the most time-consuming functions.
  • Memory Usage: Detect memory leaks and optimize memory-intensive operations.
  • Instruction Count: Reduce the number of instructions to improve performance.
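To illustrate how llvm-profdata fits into this loop, here is a typical instrumentation-based profile-guided optimization (PGO) cycle with Clang; the file names are placeholders:

    # 1. Build with profile instrumentation.
    clang -O2 -fprofile-instr-generate program.c -o program

    # 2. Run the instrumented binary; it writes default.profraw on exit.
    ./program

    # 3. Merge the raw profile into an indexed profile.
    llvm-profdata merge -output=program.profdata default.profraw

    # 4. Rebuild, using the profile to guide optimizations.
    clang -O2 -fprofile-instr-use=program.profdata program.c -o program-optimized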


Conclusion

Advanced optimizations are where a compiler earns its keep, and LLVM gives you a deep toolbox for the job. In this article, we've explored how to use LLVM's built-in optimization passes, develop custom passes, and profile your projects for performance tuning.

In the next part of this series, we'll delve into extending LLVM with custom passes and backend development. Stay tuned to our blog for more in-depth tutorials and insights into LLVM and other modern software development practices. If you have any questions or need further assistance, feel free to reach out. And remember, whether you're optimizing your code or cracking a dad joke, always strive for excellence. Happy coding!

Part 4 of the Exploring LLVM series

Slaptijack's Koding Kraken