Gemini Ollama CLI Bridge: Local-First Code Analysis with Optional Cloud Refinement
Gemini Ollama CLI Bridge is a Python CLI tool that chains a local Ollama model with Google's Gemini CLI into a two-stage code analysis pipeline. You point it at your codebase; it runs a first pass entirely on your machine via Ollama and then optionally forwards the results to Gemini for a second opinion. Output lands as Markdown, so it slots straight into docs or review workflows.
Why It's Useful
The main draw is the offline-first design. Most AI code-review tools require sending your source to a remote API. This bridge flips the default: the local Ollama pass handles the bulk of the work—scanning for bugs, security issues, or performance concerns—without any code leaving your machine. The Gemini refinement step is entirely opt-in, which makes it practical for proprietary codebases or air-gapped environments where you still want LLM-assisted review.
Technical Takeaway
The architecture is straightforward but worth noting. Ollama exposes a local HTTP API (default localhost:11434), and the bridge talks to it directly. For the Gemini leg, instead of using a REST client, it pipes the analysis through Gemini's CLI via stdin. This means you get the flexibility of custom Gemini commands and arguments without managing API keys or SDK versions for that stage—just a working gemini binary on your PATH. It also supports file-level include/exclude patterns so you can target specific directories or skip generated code.
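A minimal sketch of the local stage, assuming Ollama's standard /api/generate endpoint on its default port; the include/exclude patterns, model name, prompt wording, and function names here are illustrative, not the bridge's actual implementation:

import fnmatch
import json
import urllib.request
from pathlib import Path

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def collect_files(root, include=("*.py",), exclude=("*_pb2.py", "*/build/*")):
    """Yield files under root matching include patterns and not matching exclude patterns."""
    for path in Path(root).rglob("*"):
        rel = str(path)
        if (path.is_file()
                and any(fnmatch.fnmatch(rel, p) for p in include)
                and not any(fnmatch.fnmatch(rel, p) for p in exclude)):
            yield path

def local_analysis(root, model="qwen2.5-coder"):
    """First pass: send the collected source to a local Ollama model; nothing leaves the machine."""
    source = "\n\n".join(f"# {p}\n{p.read_text(errors='ignore')}" for p in collect_files(root))
    payload = {
        "model": model,  # any model already pulled into the local Ollama instance
        "prompt": ("Review this code for bugs, security issues, and performance concerns. "
                   "Answer in Markdown.\n\n" + source),
        "stream": False,  # ask for a single JSON response instead of a token stream
    }
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]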

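The optional refinement stage can be approximated the same way: hand the local report to the gemini binary over standard input and capture its stdout. This sketch assumes only what the write-up states, namely that a working gemini binary on PATH accepts a prompt piped via stdin; the prompt text and the gemini_refine helper are hypothetical.

import shutil
import subprocess

def gemini_refine(local_report: str, timeout: int = 120) -> str:
    """Optional second pass: pipe the local analysis into the Gemini CLI for refinement."""
    if shutil.which("gemini") is None:
        # No gemini binary on PATH: stay fully local and return the first-pass report unchanged.
        return local_report
    prompt = (
        "Refine the following code review. Flag anything that looks wrong "
        "and add issues the first pass may have missed.\n\n" + local_report
    )
    result = subprocess.run(
        ["gemini"],          # prompt travels over stdin, so this stage manages no API keys or SDKs
        input=prompt,
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    result.check_returncode()
    return result.stdout

Gating the call on whether the binary exists is what keeps the cloud step genuinely opt-in: if gemini is absent, the pipeline simply degrades to the local-only report.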