Building Linting and Formatting Pipelines for Vibe-Coded Projects
You've probably felt it: that rush of productivity when an AI agent spits out 200 lines of perfectly functional code in seconds. It feels like magic until you realize the AI just hallucinated three imports, left five unused variables, and completely ignored your project's naming conventions. This is the reality of vibe coding, a development approach where AI coding agents generate code from high-level prompts and natural-language descriptions. When you're "vibing," you're focusing on the intent and the flow, but the actual output can be messy. If you don't have a strict safety net, your codebase will quickly become a graveyard of AI-generated anomalies that no human wants to maintain.
The solution isn't to stop using AI, but to build a ruthless automated gatekeeper. You need a pipeline that treats AI code with a healthy dose of suspicion. By the time a pull request hits a human reviewer, it should already be formatted, type-checked, and stripped of "dead code." This ensures the human is reviewing the logic, not arguing about where the curly braces go.
The Foundation: Why Standard Linting Isn't Enough
In a traditional project, a linting warning is a suggestion. In a vibe-coded project, a warning is a red flag. AI agents are notorious for producing "ghost code": variables that are declared but never used, or imports that look correct but don't actually exist in your dependency tree. If you let these slide, you're introducing technical debt at a speed humans can't match.
To stop this, you have to move from a "warning" mindset to a "failure" mindset. The most critical change is implementing the --max-warnings 0 flag in your CI pipeline. This tells the system that any single warning is a build failure. It forces the AI (or the human prompting it) to clean up the mess before the code can even be considered for merging.
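As a concrete sketch, assuming an ESLint-based setup, the CI lint script in `package.json` might look like this (the script name `lint:ci` is illustrative, not a convention):

```json
{
  "scripts": {
    "lint:ci": "eslint . --max-warnings 0"
  }
}
```

With this in place, a single unused variable exits non-zero and fails the build, so warnings can never silently accumulate.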
For those working in the JavaScript/TypeScript ecosystem, Biome is becoming the tool of choice. Unlike the traditional pairing of ESLint and Prettier, Biome acts as a unified linter and formatter. Using a single biome check command removes the "configuration drift" that happens when your formatter and linter disagree on a rule, which often confuses AI agents and leads to endless edit loops.
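A minimal `biome.json` along these lines enables both the linter and the formatter from one config; the specific rules shown are an assumed starting point, not a prescribed set:

```json
{
  "linter": {
    "enabled": true,
    "rules": {
      "recommended": true,
      "correctness": {
        "noUnusedVariables": "error",
        "noUnusedImports": "error"
      }
    }
  },
  "formatter": {
    "enabled": true
  }
}
```

A single `npx biome check .` then validates linting and formatting together, so the agent gets one consistent verdict instead of two conflicting ones.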
| Tool | Primary Role | Vibe-Coding Value | Speed/Overhead |
|---|---|---|---|
| Biome | Linter & Formatter | Zero config drift; high speed | Ultra-fast |
| TypeScript | Type Checking | Catches `any`-type hallucinations | Moderate |
| Semgrep | Security Scanning | Detects hardcoded AI secrets | Moderate |
| golangci-lint | Go Ecosystem Linting | Extreme granularity (30+ checkers) | Fast |
Layering Your Quality Gates
You can't just throw every tool at your code at once, or your feedback loop will be too slow. The key is a layered approach. Think of this as a filter: the fastest, cheapest checks run first, and the slowest, most expensive checks run last.
The first layer is always Linting and Formatting. This should run in under a minute. If the code is ugly or contains unused variables, there's no point in checking if the types are correct or if the security is tight. It's the most basic form of hygiene.
The second layer is Type Checking. For TypeScript users, this means enabling Strict Mode (the --strict flag). AI agents love the any type because it's a convenient escape hatch when they aren't sure about a data structure. Strict mode shuts that door. If you're retrofitting this into an old project, don't try to fix everything at once. Use a PR-level tsconfig to enforce strict typing only on the files that were actually changed in the current pull request.
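One way to sketch the PR-level approach (the file name and layout here are assumptions): keep a second tsconfig that extends the main one but flips on strict mode, and have CI feed it only the files touched by the pull request.

```json
{
  "extends": "./tsconfig.json",
  "compilerOptions": {
    "strict": true,
    "noEmit": true
  },
  "files": []
}
```

The CI job populates `files` with the output of something like `git diff --name-only origin/main...HEAD -- '*.ts' '*.tsx'`, so legacy files stay untouched while every new or edited file must pass strict checks.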
The third layer is Security Scanning. AI agents sometimes suggest insecure defaults or, worse, might accidentally include a placeholder API key they found in their training data. Tools like Semgrep or Gitleaks can catch these before they hit your main branch. This layer typically takes a few minutes and should run in parallel with your type checks to save time.
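A hedged sketch of a parallel secret-scan job using the official Gitleaks GitHub Action (the pinned versions are illustrative):

```yaml
secrets-scan:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
      with:
        fetch-depth: 0  # scan full history, not just the latest commit
    - uses: gitleaks/gitleaks-action@v2
      env:
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

Because this job has no dependency on the type-check job, GitHub Actions runs the two concurrently by default.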
Practical Implementation: The GitHub Actions Workflow
How do you actually wire this up? The goal is to keep the feedback loop under two minutes for the most common failures. If a developer (or an agent) has to wait ten minutes to find out they missed a semicolon, the "vibe" is dead.
- Pre-commit Hooks: Use a tool like `pre-commit` to handle basic formatting and import ordering locally. This catches the "dumb" mistakes before the code even leaves the machine.
- Fast-Path CI: In GitHub Actions, trigger the linting and TypeScript compilation immediately on every push. These jobs are the "required status checks."
- Parallel Security: While the types are being checked, trigger your security scanners (e.g., Snyk or npm audit) in a separate parallel job.
- Deep Testing: Only after the linting and security gates pass should you trigger the heavy lifting: Jest tests, Docker build validations, or agentic testing (where another AI agent tries to break the code).
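The layering above can be sketched as a GitHub Actions workflow; the job names and commands are assumptions for a Biome + TypeScript + Jest stack, and setup steps (Node install, `npm ci`) are elided for brevity:

```yaml
name: quality-gates
on: [push, pull_request]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # setup-node / npm ci steps omitted
      - run: npx biome check .
  typecheck:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npx tsc --noEmit
  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm audit --audit-level=high
  test:
    # deep testing only runs once the cheap gates pass
    needs: [lint, typecheck, security]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npx jest
```

The `lint`, `typecheck`, and `security` jobs have no `needs` clause, so they run in parallel; only the expensive `test` job waits for all three.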
For those using Go, an aggressive configuration is often necessary. Some high-performance vibe-coded projects use golangci-lint with a massive array of checkers, including errcheck for unhandled errors, staticcheck for general bugs, and gosec for security vulnerabilities. This level of granularity is a safeguard against the subtle logic errors AI agents frequently introduce.
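A corresponding `.golangci.yml` might start like this; the selection below is a sketch, not an exhaustive recommendation:

```yaml
# .golangci.yml — a starting point, not a complete ruleset
linters:
  enable:
    - errcheck     # unhandled errors the agent swallowed
    - staticcheck  # general correctness bugs
    - gosec        # security issues (hardcoded credentials, weak crypto)
    - unused       # dead code left behind after an edit loop
```

Running `golangci-lint run` in CI then applies all enabled checkers in one pass.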
The Human-AI Feedback Loop
Even with a perfect pipeline, you can't completely remove the human. The pipeline catches the technical errors, but a human is still needed to catch the conceptual errors. I call these "cringe things": code that technically works and passes all tests but is structured in a way that is confusing or redundant.
To make this work, establish a rules.md file in your project root. This isn't for the AI's eyes only; it's a contract for the project. Specify things like:
- Maximum file length (e.g., under 400 lines).
- Strict adherence to DRY (Don't Repeat Yourself) principles.
- Specific folder structures for components vs. hooks.
When the agent's output violates one of these conventions, reject it, point the agent back at the rules.md file, and tell it to try again. This trains the agent to align with your project's specific "vibe."
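A minimal rules.md might read as follows; the specific contents are illustrative:

```markdown
# Project Rules

- Keep every file under 400 lines; split modules that grow past this.
- Never duplicate logic; extract shared helpers (DRY).
- Components live in src/components/, hooks in src/hooks/.
- No `any` types; no unused variables or imports.
```

Keeping the file short matters: agents follow a dozen crisp rules far more reliably than a sprawling style guide.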
Common Pitfalls to Avoid
One of the biggest mistakes teams make is adding too many rules too fast. If you enable 50 new linting rules on a Monday, your developers will spend Tuesday ignoring the CI failures because there are too many of them. Implement your pipeline in phases:
- Week 1: Linting and Basic Type Checking. These are low-noise and high-value.
- Week 2: Security Scanning and Secret Detection.
- Week 3: Test Coverage requirements (specifically for new code via changed-files patterns).
Another trap is relying solely on the AI to fix its own linting errors. Sometimes, an AI gets stuck in a loop where it fixes a linting error but introduces a type error in the process. When this happens, a quick human intervention to manually format the code or clarify the type definition is faster than five more prompts.
Why is --max-warnings 0 so important for AI code?
AI coding agents tend to generate unused variables, obsolete imports, and minor style inconsistencies at a much higher rate than humans. If warnings are allowed, they accumulate quietly in the background, creating a "noisy" codebase where real issues are hidden. Setting max-warnings to 0 treats every minor slip-up as a failure, forcing the agent to produce clean, production-ready code.
Does strict mode in TypeScript slow down development?
Initially, yes, because it exposes a lot of hidden errors. However, for vibe-coding, it's a necessity. Without it, AI agents frequently use the 'any' type when they are uncertain, which effectively disables type safety. Strict mode prevents this and ensures the AI provides specific, valid types for every piece of data.
What is the best tool for a unified linting/formatting experience?
Biome is highly recommended for vibe-coded projects. Because it combines linting and formatting into a single tool with a unified configuration, it eliminates the conflict that often happens between ESLint and Prettier, which can confuse AI agents and cause repetitive, unnecessary edits.
How should I handle security scans in my pipeline?
Security scans should be the third layer of your quality gate. Use tools like Semgrep for pattern-based vulnerability detection and Gitleaks to ensure no secrets are committed. These should run in parallel with type checking to keep the total CI time low while ensuring no insecure AI defaults reach production.
What is a "rules.md" file?
A rules.md file is a standardization document that defines the project's conventions, such as file length limits, naming patterns, and folder structures. It serves as a reference point that you can provide to AI agents to ensure their generated code adheres to the specific architectural and stylistic needs of your project.
- Apr 30, 2026
- Collin Pace