
Sandboxed Python in the Browser with Pydantic's Monty

2 min read

Recently, Simon Willison shared research on running Pydantic's Monty in WebAssembly. Monty is a minimal, secure Python interpreter written in Rust, designed specifically for safely executing code generated by LLMs.

The key breakthrough here is the ability to run Python code with microsecond latency in a strictly sandboxed environment, either on the server (via Rust/Python) or directly in the browser (via WASM).
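To make the server-side path concrete, here is a minimal sketch of what calling Monty from Python could look like. The `monty` module and its `run` function are assumed names used purely for illustration, not the library's confirmed API; see the repository for the real interface.

```python
# Minimal sketch only: the import path and run() are assumed names,
# not Monty's confirmed Python API.
from monty import run  # hypothetical binding

# Hand an untrusted snippet to the interpreter and read back its result.
result = run("sum(i * i for i in range(10))")
print(result)  # expected: 285
```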

I've put together a demo project that explores both the Python integration and the WebAssembly build.

View Code

What is Monty?

Monty is a subset of Python implemented in Rust. Unlike Pyodide or MicroPython, which aim for full or broad compatibility, Monty is built for speed and security. It provides:

  1. Restricted Environment: No access to the host file system or network by default (see the sketch after this list).
  2. Fast Startup: Ideal for "serverless" or "agentic" workflows where you need to run small snippets of code frequently.
  3. Rust Foundation: Built in Rust, so it inherits Rust's memory safety and performance.
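Here is a sketch of what the restricted environment in point 1 means in practice, again assuming the hypothetical `run` entry point from above; the concrete exception type Monty raises is also an assumption.

```python
# Sketch of the default restrictions, using the assumed run() binding.
from monty import run  # hypothetical binding

try:
    # Host file system access should be refused inside the sandbox.
    run('open("/etc/passwd").read()')
except Exception as exc:  # the exact exception class is an assumption
    print(f"blocked by the sandbox: {exc}")
```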

Running it in the Browser

By compiling Monty to WebAssembly, we can provide a Python REPL that runs entirely on the client side. This is perfect for interactive documentation, playground environments, or edge-side code execution.

In my demo, I've included the WASM assets and a simple HTML interface to try it out.

Why this matters for AI Agents

AI agents often need to execute code to solve problems (e.g., math, data processing). Traditional sandboxing (Docker, Firecracker) has significant overhead. Monty offers a "sandbox-in-a-sandbox" approach that is lightweight enough to be part of the inner loop of an LLM interaction.
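To illustrate that inner loop, here is a hedged sketch: `generate_code` is a stand-in for an LLM call, and `run` is the same assumed Monty binding used in the sketches above.

```python
# Illustrative agent inner loop. generate_code() stands in for an LLM
# call, and run() is the assumed Monty binding from the earlier sketches.
from monty import run  # hypothetical binding


def generate_code(task: str) -> str:
    """Pretend LLM: returns a Python snippet for the given task."""
    return "sum(range(1, 101))"


def solve(task: str):
    snippet = generate_code(task)
    # Microsecond startup makes it cheap to sandbox every snippet the
    # model produces, rather than spinning up a container per call.
    return run(snippet)


print(solve("add the integers from 1 to 100"))  # expected: 5050
```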

Check out the GitHub repository for the full source and instructions on how to run it yourself.