AI-Assisted Log Analysis: Building a Git Hook That Explains Your Build Failures

Posted in programming

CI/CD build failures can be brutal — especially when the logs are long, noisy, and cryptic. Developers often waste precious time parsing through thousands of lines of output just to find the root cause. What if your tools could summarize the problem in plain English?

In this article, we’ll build a Git-based automation system that captures build failure logs and runs them through OpenAI to produce intelligent summaries. Whether you use it as a post-push hook, part of your CI pipeline, or even in a local dev loop, this is your AI-powered shortcut to faster debugging.

Why Build Failure Summaries?

Let’s be real: most build logs are a chaotic mess.

  • Redundant output
  • Stack traces mixed with noise
  • Errors buried under 200 lines of test output
  • Build tools that report “exit 1” with no explanation

AI is shockingly good at reading this mess and surfacing the “what went wrong” and “what you can do about it.”

With a minimal integration effort, you can:

  • Help new developers get unblocked faster
  • Accelerate root cause detection
  • Reduce Slack messages like “hey, anyone seen this error before?”

What We’ll Build

A tool that:

  1. Detects a failed build (triggered by Git or CI)
  2. Extracts relevant logs from a local or CI run
  3. Sends logs to OpenAI for summarization
  4. Outputs a clean explanation of the failure

We’ll start with a local prototype and then show how to integrate it into CI, using GitHub Actions as the example.

Step 1: Create the Log Summarizer Script

Inside your ai_git_hooks/ CLI project, add log_summary.py:

import os

from openai import OpenAI

# The v1 SDK uses a client object; it reads OPENAI_API_KEY from the
# environment by default, but we pass it explicitly for clarity.
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

def summarize_logs(log_text):
    """Ask the model to explain a build failure in plain English."""
    prompt = f"""
You are a build engineer. Analyze the following build logs and summarize what 
caused the failure. Be concise and helpful. Suggest a fix if possible.

Logs:
{log_text}
"""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

Step 2: Add a CLI Entry Point

In cli.py:

import click

from ai_git_hooks.log_summary import summarize_logs

# `cli` is the click group defined earlier in this project
@cli.command()
@click.argument('logfile', type=click.Path(exists=True))
def summarize_log(logfile):
    """Summarize a build failure log"""
    with open(logfile, 'r') as f:
        log_text = f.read()

    summary = summarize_logs(log_text)
    click.echo("\n--- Build Failure Summary ---")
    click.echo(summary)

Now you can run:

ai-git-hooks summarize-log build.log

Step 3: Capture Logs Automatically After a Build

Here’s a pattern you can use with local test builds:

In .git/hooks/post-commit:

#!/bin/sh

echo "Running local build check..."
./scripts/build.sh > build.log 2>&1

if [ $? -ne 0 ]; then
  echo "Build failed. Analyzing logs..."
  ai-git-hooks summarize-log build.log
fi

Make it executable:

chmod +x .git/hooks/post-commit

This hook runs your build script after every commit and, if the build fails, prints an AI-generated summary of the failure.

Step 4: Use in CI (GitHub Actions Example)

In your workflow YAML:

- name: Run build
  run: |
    # Let the step fail on a non-zero exit code; swallowing the error
    # with `|| echo ...` would mark the step as passed, and the
    # `if: failure()` condition on the next step would never trigger.
    ./scripts/build.sh > build.log 2>&1

- name: Summarize logs with OpenAI
  if: failure()
  run: |
    pip install .  # installs ai_git_hooks along with its openai dependency
    python .github/scripts/summarize_log.py build.log
  env:
    OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}

Create .github/scripts/summarize_log.py:

import sys
from ai_git_hooks.log_summary import summarize_logs

with open(sys.argv[1], 'r') as f:
    logs = f.read()

print("\n--- Build Log Summary ---\n")
print(summarize_logs(logs))

This will summarize logs directly in the GitHub Actions output.
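You can also surface the result in the run's summary panel rather than burying it in step output. A hedged sketch (the $GITHUB_STEP_SUMMARY file is a standard Actions feature; the script path matches the step above):

```yaml
- name: Add summary to the job summary panel
  if: failure()
  run: |
    python .github/scripts/summarize_log.py build.log >> "$GITHUB_STEP_SUMMARY"
  env:
    OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```

Note this calls the model a second time; in practice you would tee the first step's output to a file and append that instead.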

Step 5: Filter the Noise (Optional)

You can pre-clean logs before sending them to OpenAI:

  • Remove ANSI color codes
  • Truncate to last 200 lines
  • Drop non-error lines using simple heuristics (grep -i error)

Example:

import re

# Matches simple ANSI color escape sequences like "\x1b[31m"
ANSI_ESCAPE = re.compile(r"\x1b\[[0-9;]*m")

def clean_log_text(log_text):
    """Strip color codes, keep error-ish lines, cap at the last 200."""
    lines = ANSI_ESCAPE.sub("", log_text).splitlines()
    filtered = [line for line in lines if "error" in line.lower()]
    return "\n".join(filtered[-200:])
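To see the filter in action, here is a self-contained check (the helper is restated, with ANSI stripping added, so the snippet runs on its own):

```python
import re

# Matches simple ANSI color escape sequences like "\x1b[31m"
ANSI_ESCAPE = re.compile(r"\x1b\[[0-9;]*m")

def clean_log_text(log_text):
    """Strip color codes, keep error-ish lines, cap at the last 200."""
    lines = ANSI_ESCAPE.sub("", log_text).splitlines()
    filtered = [line for line in lines if "error" in line.lower()]
    return "\n".join(filtered[-200:])

raw = "\x1b[32mcompiling...\x1b[0m\nstep ok\n\x1b[31mERROR: undefined symbol foo\x1b[0m\nstep done"
print(clean_log_text(raw))  # -> ERROR: undefined symbol foo
```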

Limitations

  • Cost: Sending large logs to GPT-4 can use a lot of tokens
  • Latency: Responses may take 5–10 seconds
  • Truncation: You may need to chunk logs or downsample noisy output
  • Accuracy: LLMs aren’t always perfect — use with judgment
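The truncation point deserves a sketch. One simple approach, assuming the summarize_logs function from Step 1: split the log into fixed-size chunks on line boundaries, summarize each, then summarize the combined summaries (the helper names here are illustrative):

```python
def chunk_text(text, max_chars=12000):
    """Split text into chunks of roughly max_chars, breaking on line boundaries."""
    chunks, current, size = [], [], 0
    for line in text.splitlines(keepends=True):
        if size + len(line) > max_chars and current:
            chunks.append("".join(current))
            current, size = [], 0
        current.append(line)
        size += len(line)
    if current:
        chunks.append("".join(current))
    return chunks

def summarize_large_log(log_text, summarize, max_chars=12000):
    """Map-reduce over chunks: summarize each piece, then summarize the
    combined partial summaries. `summarize` is passed in (e.g. the
    summarize_logs function from Step 1) so this helper stays testable."""
    chunks = chunk_text(log_text, max_chars)
    if len(chunks) == 1:
        return summarize(chunks[0])
    partials = [summarize(chunk) for chunk in chunks]
    return summarize("\n".join(partials))
```

In the CLI you would call summarize_large_log(log_text, summarize=summarize_logs). The map-reduce pass loses some cross-chunk context, but it keeps each request under the model's token limit.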

Bonus: Suggest Commands to Fix

Extend the prompt to include this:

If you detect a common build tool error, suggest a shell command to fix it.

Examples:

  • npm install for missing deps
  • pytest --maxfail=1 to shorten test failure feedback
  • brew install or apt-get for missing compilers
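Concretely, the extension is just an extra instruction appended to the prompt from Step 1. A sketch (the wording and the build_prompt name are illustrative):

```python
FIX_INSTRUCTION = (
    "If you detect a common build tool error, suggest a shell command "
    "to fix it (for example, `npm install` for missing dependencies)."
)

def build_prompt(log_text):
    """Same prompt as Step 1, with the fix-suggestion instruction appended."""
    return (
        "You are a build engineer. Analyze the following build logs and "
        "summarize what caused the failure. Be concise and helpful. "
        f"{FIX_INSTRUCTION}\n\nLogs:\n{log_text}"
    )
```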

Conclusion

With a few lines of Python and a little help from GPT, we’ve taken the pain out of reading build logs. Whether you use this locally, in CI, or both, it’s a powerful way to reclaim time, reduce frustration, and help your team move faster.

Next in our AI dev workflow series: how to apply LLMs to internal developer portals — and make Backstage, OpsLevel, or your in-house portal smarter.

Want the full toolkit? Explore our AI + GitOps dev stack at Slaptijack
