Explaining Bazel Build Failures with OpenAI: Automating Log Summarization

Posted in programming

Bazel is fast, reproducible, and battle-tested at scale — but when something breaks, good luck deciphering its logs. Between action cache messages, output groups, and 500-line stack traces, figuring out why a build failed often feels like solving a riddle wrapped in a C++ binary.

In this article, we’ll build a Python-based tool that parses Bazel logs and uses OpenAI to generate plain-English summaries of what failed and how to fix it. Whether you're debugging in CI or working locally, this will help you and your team go from “What just happened?” to “Ah, got it” — in seconds.

Why Bazel Logs Are Tough to Parse

Bazel’s architecture is designed for scale and reproducibility, not readability. Common pain points include:

  • Giant unified logs for multiple targets
  • Redundant error messages for nested rules
  • Unhelpful messages like `FAILED: Build did NOT complete successfully`
  • Error messages buried deep in third-party dependency output

With a bit of log preprocessing and a smart prompt, we can turn that into something developers can actually use.

What We'll Build

A Python CLI tool that:

  1. Runs a Bazel build and captures the logs
  2. Cleans and filters the logs
  3. Sends them to OpenAI for summarization
  4. Outputs a clean explanation and suggested fix

We’ll also show how to integrate it into your bazel build workflow and optionally trigger it in CI (e.g. Buildkite, GitHub Actions, or Jenkins).
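To see how the pieces fit together, here's a minimal sketch of the whole flow. It assumes the `summarize_bazel_log` and `filter_bazel_log` helpers we'll write in Steps 1 and 4 both end up in bazel_log_summary.py, and the target label is just a placeholder:

import subprocess
import sys

from bazel_log_summary import summarize_bazel_log, filter_bazel_log

def build_and_explain(target="//your/target/..."):
    # Run the Bazel build and capture stdout and stderr together.
    result = subprocess.run(
        ["bazel", "build", target],
        capture_output=True,
        text=True,
    )
    if result.returncode == 0:
        print("Build succeeded; nothing to explain.")
        return 0

    # On failure, trim the log and ask the model what went wrong.
    summary = summarize_bazel_log(filter_bazel_log(result.stdout + result.stderr))
    print(summary)
    return result.returncode

if __name__ == "__main__":
    sys.exit(build_and_explain())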

Step 1: Create the Bazel Summarizer Script

Let’s create a module: bazel_log_summary.py

import os

from openai import OpenAI

# The client reads OPENAI_API_KEY from the environment; passing it
# explicitly just makes the dependency obvious.
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

def summarize_bazel_log(log_text):
    prompt = f"""
You are a senior build engineer. Analyze the following Bazel build output and
summarize the root cause of the failure. Include helpful details such as which
target failed, what rule caused the issue, and suggest a fix.

Build log:
{log_text}
"""

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()
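One caveat worth handling up front: a full Bazel log can easily blow past the model's context window. A crude but effective guard is to keep only the tail of the log before building the prompt; the character budget below is an arbitrary assumption, not a tuned value:

MAX_LOG_CHARS = 20_000  # rough budget; adjust for your model's context window

def truncate_log(log_text, max_chars=MAX_LOG_CHARS):
    # Keep the tail of the log: Bazel prints the decisive errors
    # near the end of its output.
    if len(log_text) <= max_chars:
        return log_text
    return log_text[-max_chars:]

Call truncate_log() on the text before handing it to summarize_bazel_log().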

Step 2: Add CLI Tooling

import click

from bazel_log_summary import summarize_bazel_log

@click.command()
@click.argument('logfile', type=click.Path(exists=True))
def cli(logfile):
    """Summarize a Bazel build log with OpenAI."""
    with open(logfile, 'r') as f:
        log_text = f.read()

    summary = summarize_bazel_log(log_text)
    click.echo("\n--- Bazel Build Failure Summary ---\n")
    click.echo(summary)

if __name__ == '__main__':
    cli()

Save it as bazel_summarize.py and run with:

python bazel_summarize.py bazel.log

Step 3: Capture Bazel Logs Automatically

Bazel prints its output to the terminal and doesn't leave an easy-to-find log file in your workspace, so let's capture one ourselves.

Run your build like this:

bazel build //your/target/... 2>&1 | tee bazel.log

If you want to always log output, use a small shell function (an alias can't splice its arguments into the middle of the command):

b() { bazel build "$@" 2>&1 | tee bazel.log; }

Or wrap it in a build.sh script for local devs.
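Here's roughly what that build.sh could look like, assuming the summarizer script from Step 2 sits in the repository root:

#!/usr/bin/env bash
set -o pipefail  # keep Bazel's exit code across the pipe to tee

bazel build "$@" 2>&1 | tee bazel.log
status=$?

# On failure, explain the log before propagating the exit code.
if [ "$status" -ne 0 ]; then
  python bazel_summarize.py bazel.log
fi

exit "$status"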

Step 4: Filter Out the Noise (Optional but Recommended)

import re

def filter_bazel_log(raw_text):
    lines = raw_text.splitlines()
    errors = []

    for line in lines:
        # Keep obvious error markers and the overall failure banners;
        # drop everything else to cut the prompt down to size.
        if ("ERROR:" in line or "FAILED:" in line or "no such package" in line
                or re.search(r"(BUILD|TEST) FAILED", line)):
            errors.append(line)

    return "\n".join(errors[-300:])  # last 300 high-signal error lines

Then use:

clean_log = filter_bazel_log(log_text)
summary = summarize_bazel_log(clean_log)
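If you adopt the filter, the Step 2 CLI only needs a small tweak. Here's one possible version, assuming filter_bazel_log lives next to summarize_bazel_log in bazel_log_summary.py; the --no-filter flag is just an idea for keeping the raw-log path available:

import click

from bazel_log_summary import summarize_bazel_log, filter_bazel_log

@click.command()
@click.argument('logfile', type=click.Path(exists=True))
@click.option('--no-filter', is_flag=True, help="Send the raw log instead of the filtered one.")
def cli(logfile, no_filter):
    with open(logfile, 'r') as f:
        log_text = f.read()

    # Filter by default; fall back to the raw log on request.
    if not no_filter:
        log_text = filter_bazel_log(log_text)

    click.echo("\n--- Bazel Build Failure Summary ---\n")
    click.echo(summarize_bazel_log(log_text))

if __name__ == '__main__':
    cli()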

Step 5: Optional CI Integration

In your GitHub Actions or Jenkins job:

set -o pipefail  # make the pipeline's exit code reflect Bazel, not tee

bazel build //... 2>&1 | tee bazel.log
EXIT_CODE=$?

if [ $EXIT_CODE -ne 0 ]; then
  python bazel_summarize.py bazel.log
  exit $EXIT_CODE
fi

With OpenAI configured via secrets, you’ll get an AI explanation in the CI logs alongside the failure.
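For GitHub Actions specifically, the same idea looks roughly like this. The job layout is only a sketch: it assumes Bazel is already available on the runner and that OPENAI_API_KEY has been added as a repository secret:

jobs:
  build:
    runs-on: ubuntu-latest
    env:
      OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
    steps:
      - uses: actions/checkout@v4
      - name: Install summarizer dependencies
        run: pip install openai click
      - name: Bazel build
        run: |
          set -o pipefail
          bazel build //... 2>&1 | tee bazel.log
      - name: Explain the failure
        if: failure()
        run: python bazel_summarize.py bazel.log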

What the Output Looks Like

Here's a real example (sanitized for brevity):

--- Bazel Build Failure Summary ---

The build failed because of a missing dependency in the target `//services/api:main`. The error was caused by a missing Go import in `handler.go`.

Suggested fix:
- Run `go get github.com/pkg/errors`
- Rebuild the target with `bazel build //services/api:main`

Note: Ensure that `go_module` is declared correctly in the BUILD file.

Clear. Actionable. Unblocked in seconds.

Conclusion

Bazel may be one of the most powerful build tools in modern engineering — but its logs? Not so much. With a bit of Python and OpenAI, you can build your own AI-powered debugger that turns walls of terminal output into useful insight.

Looking to go further? You could:

  • Add Slack or email notifications with summaries
  • Automatically open GitHub issues for repeated failures
  • Train a custom model on your build output patterns

Want more Bazel + AI ideas? Explore them all on Slaptijack.
