Building an AI-Powered Pre-Push Policy Validator with OpenAI

Posted in programming

Pre-push hooks are your last line of defense before questionable code hits the remote repo. Traditionally, they’re used to enforce tests or linting, but they can be brittle and overly rigid. What if, instead, your push triggered a context-aware AI that reviewed your code against team policies, security best practices, or even stylistic conventions?

In this article, we’ll build an AI-powered pre-push Git hook using Python and OpenAI. This intelligent hook will inspect your changes, flag risky patterns, and either warn or block the push based on semantic understanding — not just regex.

Why a Pre-Push Hook?

Git’s pre-push hook runs after local commits are made but before they’re sent to a remote repo. It’s ideal for:

  • Final code validation
  • Compliance and security checks
  • Preventing common footguns (debug=true, hardcoded tokens, open S3 buckets)
  • Team-enforced policies (e.g., test coverage, naming conventions)

Unlike a pre-commit hook, which fires on every commit, pre-push runs once per push, giving it more room for complex operations without slowing down everyday dev work.

What We'll Build

A Python-powered Git hook that:

  1. Captures all commits in the push range
  2. Collects the combined diffs
  3. Sends the diffs to OpenAI with a custom policy enforcement prompt
  4. Blocks the push if violations are detected

We'll add this functionality to our ai-git-hooks CLI tool and install it as .git/hooks/pre-push.

Step 1: Capture the Push Range

When Git triggers a pre-push hook, it writes one line per ref being pushed to the hook's stdin. We'll read that input and determine what's about to go remote.
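Each line has four whitespace-separated fields: local ref, local SHA, remote ref, remote SHA. For a push of main it looks like this (placeholder SHAs):

refs/heads/main <local-sha> refs/heads/main <remote-sha>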

In policy_validator.py:

import os
import sys

import openai
from git import Repo

openai.api_key = os.getenv("OPENAI_API_KEY")

def get_push_range():
    # Git writes one line per ref: <local ref> <local sha> <remote ref> <remote sha>.
    # We validate the first pushed ref; a multi-ref push would repeat this per line.
    line = sys.stdin.readline().strip()
    if not line:
        return None

    parts = line.split()
    if len(parts) < 4:
        return None

    local_sha = parts[1]
    remote_sha = parts[3]
    return local_sha, remote_sha

def get_push_diff(local_sha, remote_sha):
    repo = Repo(".")
    # An all-zero remote SHA means the remote ref doesn't exist yet (e.g., a
    # brand-new branch); diff against Git's well-known empty tree instead.
    if set(remote_sha) == {"0"}:
        empty_tree = "4b825dc642cb6eb9a060e54bf8d69288fbee4904"
        return repo.git.diff(empty_tree, local_sha)
    return repo.git.diff(remote_sha, local_sha)
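A quick way to smoke-test get_push_range without actually pushing is to feed it a synthetic hook line on stdin (the SHAs below are placeholders, not real commits):

import io, sys
from ai_git_hooks.policy_validator import get_push_range

sys.stdin = io.StringIO(
    "refs/heads/main 1111111111111111111111111111111111111111 "
    "refs/heads/main 2222222222222222222222222222222222222222\n"
)
print(get_push_range())  # prints the (local_sha, remote_sha) tuple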

Step 2: Enforce AI Policy Checks

Still in policy_validator.py:

def validate_with_ai(diff: str) -> str:
    policy_prompt = f"""
You are a code reviewer and security expert. Analyze the following Git diff and
identify any code that violates best practices, leaks secrets, or breaks team
policies such as logging sensitive data or skipping tests.

Respond with either:
1. "All clear ✅" — if no issues are found
2. A list of issues with brief explanations

{diff}
"""

    # Pre-1.0 openai SDK call; newer SDK versions use client.chat.completions.create.
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": policy_prompt}]
    )
    return response['choices'][0]['message']['content'].strip()
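One caveat: a large push can produce a diff that exceeds the model's context window. A minimal guard is to truncate before sending; the 12,000-character cap below is an arbitrary starting point, not a measured limit:

MAX_DIFF_CHARS = 12_000  # arbitrary cap; tune to your model's context window

def truncate_diff(diff: str, limit: int = MAX_DIFF_CHARS) -> str:
    # Trim oversized diffs so the policy prompt still fits in one request.
    if len(diff) <= limit:
        return diff
    return diff[:limit] + "\n... [diff truncated] ..."

Wrap the call as validate_with_ai(truncate_diff(diff)) if you start hitting context-length errors.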

Step 3: Wire It Into the CLI

In cli.py:

import sys

import click

from ai_git_hooks.policy_validator import get_push_range, get_push_diff, validate_with_ai

@cli.command()  # `cli` is the click group our ai-git-hooks tool already defines
def pre_push():
    """Run AI-based policy validation before push"""
    range_info = get_push_range()
    if not range_info:
        click.echo("Could not detect push range.")
        return

    local_sha, remote_sha = range_info
    diff = get_push_diff(local_sha, remote_sha)

    if not diff.strip():
        click.echo("No changes to validate.")
        return

    result = validate_with_ai(diff)
    click.echo("--- Policy Check Result ---")
    click.echo(result)

    # sys.exit sets the exit status Git checks to allow or block the push;
    # the builtin exit() is intended for interactive sessions only.
    if "all clear" in result.lower():
        sys.exit(0)
    else:
        click.echo("❌ Push blocked due to policy issues.")
        sys.exit(1)

Now you can run:

ai-git-hooks pre-push
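Note that the command reads the push range from stdin, so invoked bare it will wait for input. To exercise it end-to-end, pipe a synthetic hook line (this assumes an origin/main tracking branch and OPENAI_API_KEY in the environment):

echo "refs/heads/main $(git rev-parse HEAD) refs/heads/main $(git rev-parse origin/main)" | ai-git-hooks pre-push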

Step 4: Install the Hook

Create .git/hooks/pre-push:

#!/bin/sh
ai-git-hooks pre-push

Then:

chmod +x .git/hooks/pre-push

Now every push will go through a last-mile AI safety check. The shell script needs no extra plumbing: the ref lines Git writes to the hook's stdin are inherited by our CLI.
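If you'd rather keep the hook in version control than hidden inside .git/hooks, one common pattern (Git 2.9+) is a committed hooks directory plus core.hooksPath:

mkdir -p .githooks
cp .git/hooks/pre-push .githooks/pre-push
git config core.hooksPath .githooks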

Example Violations This Can Catch

  • Secrets in code: API_KEY = "sk-..."
  • Dangerous configs: allowPrivileged=true
  • Logging sensitive data: console.log(user.password)
  • Disabled tests or skipped validations
  • Insecure file uploads or open CORS settings

You can tune the prompt to reflect your team's internal policies or environment.

Optional: Per-Repo Config

Support a .ai-policy.toml file with toggles like:

[checks]
secrets = true
logging_sensitive_data = true
test_coverage = false

Then append specific rules to the LLM prompt dynamically.
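Here's a sketch of what that could look like, assuming Python 3.11+ for the stdlib tomllib module (older versions can use the third-party tomli package); the CHECK_RULES mapping is a hypothetical translation from toggles to prompt rules:

import tomllib  # stdlib in Python 3.11+; use the tomli package on older versions

# Hypothetical mapping from .ai-policy.toml toggles to extra prompt rules.
CHECK_RULES = {
    "secrets": "Flag hardcoded credentials, API keys, or tokens.",
    "logging_sensitive_data": "Flag logging of passwords, tokens, or other PII.",
    "test_coverage": "Flag deleted, disabled, or skipped tests.",
}

def load_policy_rules(path: str = ".ai-policy.toml") -> list[str]:
    try:
        with open(path, "rb") as f:
            config = tomllib.load(f)
    except FileNotFoundError:
        return []  # no config file: stick with the base prompt
    checks = config.get("checks", {})
    return [rule for name, rule in CHECK_RULES.items() if checks.get(name)]

Join the returned rules with newlines and append them to policy_prompt inside validate_with_ai.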

Conclusion

With a single Git hook and an OpenAI prompt, you've added a powerful layer of intelligent code review to your workflow — right at the edge of your CI/CD boundary. This is automation that actually understands your code, not just matches patterns.

Coming up next: combining everything we've built into a GitHub App that mirrors your local automation on every PR.

Want more tooling like this? Check out our AI+Git productivity stack on Slaptijack.
