OPNsense VLANs via REST API: Building the Network Layer for Agentic DevOps

Why I migrated from pfSense to OPNsense, what broke along the way, and how to drive your entire network configuration from code so an AI agent can do it instead of you.

The Shift Toward Agentic Infrastructure

For a long time the homelab was just a homelab. Proxmox, some VMs, a firewall, the usual accumulation of servers you tell yourself you will properly organize someday. But the projects running here have been converging on something more deliberate: a platform where AI agents can autonomously provision and manage infrastructure end to end.

The idea looks like this in practice. A request comes in, from a Slack message or a task queue, asking for a new staging environment for a project. An agent processes it, creates a VLAN on the firewall, provisions a VM on Proxmox, wires up DNS and DHCP, configures firewall rules to isolate the environment, and posts back a confirmation with connection details. No human clicks through a UI. No one edits a YAML file and waits for a pipeline. The agent owns the full provisioning loop.

That model only works if every layer of the stack exposes a clean, programmable interface. Storage has MinIO with an S3 API. Compute has the Proxmox REST API. But networking was the weak link. pfSense, which had been running here for years, has no real REST API. Configuration lives in a monolithic XML file and the only real automation paths are PHP console scripts or manipulating that XML directly. That is not something you can build reliable agentic workflows on top of.

OPNsense has a proper REST API built in since its early releases. VLANs, firewall rules, DHCP ranges, DNS entries, gateway configuration: all reachable via standard HTTP calls. That single fact is the reason for the migration, and the reason this post is long.

Why the network layer matters for agentic systems

An agent that can provision compute but not configure networking can only spin up isolated VMs. Real environments need network placement: which VLAN does this service live on, what can reach it, does it get a public-facing address. Without programmatic network control, a human is still in the loop for every environment. The firewall API is what closes that gap.

Hardware: The Dedicated Firewall Build

Running your firewall on dedicated hardware matters more than it might seem. Shared compute means firewall processing competes with VM workloads for CPU time, and more importantly, a hypervisor host reboot takes your gateway offline with it. A dedicated machine eliminates both problems.

For this build the platform is a Dell PowerEdge R230 pulled from the used server market. It is a 1U single-socket machine well-suited for a firewall role: low power draw, ECC memory, iDRAC Enterprise for out-of-band management, and enough PCIe bandwidth for a 10GbE NIC.

Component | Spec | Notes
--- | --- | ---
Platform | Dell PowerEdge R230 | 1U rack-mount, iDRAC Enterprise included
CPU | Intel Xeon E3-1220 v5 | 4C/4T, 3.0GHz, low TDP, hardware AES-NI for VPN throughput
RAM | 8GB ECC DDR4 | Plenty for a stateful firewall
Storage | 2x Intel S3500 160GB SATA SSD | Enterprise drives with power-loss protection, via PERC H330
Storage Controller | Dell PERC H330 | Onboard; requires mrsas driver fix for FreeBSD (see gotchas)
Built-in NICs | 2x 1GbE Broadcom BCM5720 | Fallback management and initial setup
Added NIC | Chelsio T540-CR 4x 10GbE SFP+ | Primary traffic interfaces, T5 generation
Management | iDRAC Enterprise | Out-of-band console, KVM over IP, remote power control

The Chelsio T540-CR is the important addition. Four SFP+ ports at 10GbE give enough capacity to run dual WAN connections, trunk to a managed switch, and keep a spare without pressing any port into double duty. The card is well supported in FreeBSD via the cxgbe driver (T5-generation interfaces appear as cxl0 through cxl3), which is worth checking before buying used 10GbE cards for OPNsense builds.

Why 10GbE matters on a firewall

All inter-VLAN traffic that crosses a security boundary passes through the firewall, but not all inter-VLAN traffic needs to. If you have a managed switch that supports Layer 3 routing, trusted internal VLAN-to-VLAN traffic can be routed at the switch at line rate. OPNsense then only inspects traffic crossing security boundaries: production to database, DMZ to internal, anything touching the WAN.

This architecture matters because it means the firewall does not become a 10GbE bottleneck for routine internal traffic. When a Proxmox node on the INFRA VLAN talks to a storage node on the same VLAN or a trusted adjacent VLAN, the switch handles it. The firewall only sees the traffic that actually needs policy enforcement.

That said, 10GbE uplinks on the firewall are still important. When traffic does cross the firewall (and a meaningful amount will), you do not want 1GbE ports creating a ceiling. Especially if you plan to upgrade to a multi-gigabit or 10Gbps WAN connection in the future, the WAN-facing interface needs to match.

iDRAC is not optional on a firewall build

iDRAC Enterprise is worth paying extra for on the used market. When you are changing network configuration on the machine that is also your gateway, you will eventually lose SSH access mid-change. iDRAC runs independently of the OS and the network stack, so you always have a console path back in regardless of the state of the network config. The entire OPNsense installation for this build was done remotely through iDRAC Virtual Console and Virtual Media, with no physical monitor or keyboard connected to the server.

Migrating from pfSense to OPNsense

The migration is conceptually straightforward: install OPNsense, assign interfaces, recreate rules and DHCP config. The interesting parts are in the details.

OPNsense 26.1 is the current release at time of writing. After first boot, FreeBSD assigns NIC names based on driver and index. The Broadcom BCM5720 onboard ports come up as bge0 and bge1. The Chelsio T5 ports come up as cxl0 through cxl3. Verifying that the right drivers loaded before touching anything else saves confusion later:

# Verify the Chelsio driver loaded
kldstat | grep cxgbe

# List all network interfaces
ifconfig -a | awk -F: '/flags=/ {print $1}'

In OPNsense 26.1 the cxgbe driver for Chelsio T5 cards loads automatically. On older releases it may need to be loaded explicitly by adding if_cxgbe_load="YES" to /boot/loader.conf.local.

Interface assignment lives at Interfaces → Assignments. The final port allocation for this build: WAN on bge0 (onboard 1GbE, sufficient for a typical 1Gbps ISP connection), LAN trunk on cxl0 (10GbE to the managed switch carrying all VLAN tags), with cxl1 reserved for a second WAN upstream or future 10Gbps ISP upgrade, and cxl2/cxl3 spare. The second onboard 1GbE port (bge1) serves as fallback management access.

Struggles and Gotchas

Several things did not go smoothly and are worth documenting because none of them are obvious from the official documentation.

The mrsas driver conflict on Dell R230

This is the gotcha that will save you an entire day of troubleshooting if you are on this hardware.

FreeBSD (which OPNsense is built on) uses an older driver called mfi for the PERC H330 by default. For modern PERC controllers with SATA drives, the mfi driver causes I/O errors, kernel stalls, and infinite reset loops. The correct driver is mrsas, but FreeBSD deliberately gives mfi higher priority for backward compatibility with old RAID arrays. The mrsas driver will never load on its own — you must force it.

Intel SATA SSDs are particularly sensitive to this. Standard SAS drives are more forgiving of the mfi driver’s quirks, but Intel SATA SSDs are strict about timing. When the mfi driver tries to talk to a SATA drive through the H330, it often gets out of sync, leading to infinite timeout/stall loops.

The fix must be applied at the OPNsense boot menu before the first install, and again on first boot after installation, before being made permanent:

# At the OPNsense boot menu, press 3 for loader prompt, then:
set hw.mfi.mrsas_enable="1"
boot

After installation completes and OPNsense boots successfully, make the fix permanent:

# From the OPNsense console shell (option 8):
echo 'hw.mfi.mrsas_enable="1"' >> /boot/loader.conf.local

Gotcha: The variable is mrsas_enable, not mrsas_disable

The setting tells FreeBSD to enable the mrsas driver so it takes priority over mfi. Getting this backwards means the fix does nothing and you will be staring at the same stall wondering why. Search results and forum posts sometimes get this wrong.

Gotcha: /boot/loader.conf is managed by OPNsense and gets overwritten on upgrades

OPNsense owns /boot/loader.conf and will overwrite it during firmware upgrades. Custom driver settings go in /boot/loader.conf.local, which is preserved across updates. Putting things in the wrong file means your fix silently disappears after the next update.

This is a well-documented issue across FreeBSD forums, Netgate forums, and TrueNAS communities for anyone running Dell PERC H330, H730, or H830 controllers. It affects pfSense, OPNsense, TrueNAS, and any FreeBSD-based OS. The fix is the same everywhere.

PERC H330 RAID setup: use Lifecycle Controller, not iDRAC

The S3500 SSDs connect through the PERC H330 RAID controller. The goal is two separate RAID 0 virtual disks (one per physical drive), then let OPNsense’s ZFS installer mirror them at the software layer. This gives PERC compatibility while ZFS handles the actual redundancy.

Create the virtual disks through Lifecycle Controller (press F10 during POST), not the iDRAC web UI. iDRAC’s web interface creates virtual disks but has a known issue on the R230/H330 where the configuration does not persist across reboots — the drives flip back to unconfigured state. The Lifecycle Controller writes directly to the controller’s NVRAM and holds the configuration reliably.

Gotcha: HBA mode does not work with SATA drives on the H330

If you are tempted to switch the H330 to HBA mode for direct passthrough: it works fine with SAS drives but does not pass SATA drives to the OS at all. They simply disappear. The RAID 0 virtual disk approach is the correct workaround for SATA SSDs on this controller.

VLAN parent interface must be the trunk port

VLAN configuration requires a parent interface: the physical NIC carrying 802.1Q tagged frames from the managed switch. This must be the same physical port actually cabled to the switch trunk. If you create VLANs with one parent and your switch trunk cable is in a different port, no VLAN traffic will flow and nothing in the logs will tell you why clearly. Sketch the physical cabling before touching the VLAN config.

Gotcha: changing the VLAN parent interface means starting over

You cannot edit the parent interface of an existing VLAN in place. If you need to move VLANs to a different parent, you have to delete all the VLAN definitions and recreate them with the new parent. The opt interface assignments that reference those VLANs will also need to be updated.
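Because the move is delete-and-recreate, it pays to compute the full plan before touching anything. A minimal sketch with hypothetical helper names; it assumes searchItem rows carry uuid, if, tag, and descr fields, and that deletion goes through a per-UUID delItem endpoint (verify both against your release):

```python
def plan_parent_migration(rows: list, old_parent: str, new_parent: str):
    """Split a parent-interface move into the two API phases:
    UUIDs to delete (delItem), then payloads to recreate (addItem)."""
    to_delete, to_create = [], []
    for row in rows:
        if row.get("if") != old_parent:
            continue
        to_delete.append(row["uuid"])
        to_create.append({"if": new_parent, "tag": str(row["tag"]),
                          "pcp": "0", "descr": row.get("descr", "")})
    return to_delete, to_create
```

Run the deletes, then the creates, then a single reconfigure. The opt assignments referencing the old VLANs still need updating by hand afterwards.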

Choosing a DHCP server: Dnsmasq vs Kea

OPNsense 26.1 ships with three DHCP server options. ISC DHCP has been moved to a legacy plugin and is not recommended for new setups. That leaves Kea DHCP and Dnsmasq.

Kea is the enterprise option: DHCP-only, a REST API for programmatic lease management, and HA with lease synchronization. Dnsmasq handles both DNS and DHCP in a single process and automatically registers DHCP hostnames into DNS, so every device gets name resolution for free without additional configuration.

For a homelab with a few hundred clients or fewer, Dnsmasq wins on simplicity and the automatic DNS registration benefit. Kea’s REST API is genuinely interesting for agentic use cases, but that is a future problem.

Gotcha: Dnsmasq DHCP is a plugin, not core OPNsense

The Dnsmasq DHCP API endpoints require the os-dnsmasq-dhcp plugin, installed separately from System → Firmware → Plugins. Without it, the /api/dnsmasq/ endpoints return 404 and you will wonder what you are doing wrong.

Gotcha: Dnsmasq interface list uses internal identifiers, not your friendly names

When configuring which interfaces Dnsmasq listens on, you specify internal OPNsense identifiers like lan, opt1, opt2 — not the descriptions you assigned like “MGMT” or “INFRA”. If you add a new VLAN interface and devices on it are not getting DHCP responses, check that you added the correct identifier to the Dnsmasq listen list and reloaded the service.

ISP gateway passthrough and MAC address binding

If your ISP provides a gateway device (common with fiber providers) and you are using IP Passthrough mode, the gateway typically identifies the downstream device by MAC address, not IP. Replacing your firewall means updating the passthrough configuration to point at the new WAN interface MAC. Until that happens, OPNsense WAN gets a private IP from the gateway instead of the public IP, resulting in double NAT with no obvious error in the OPNsense UI.

Find the WAN interface MAC at Interfaces → Diagnostics → Netstat → Interfaces, then update the passthrough setting in your ISP gateway’s admin panel.
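If you would rather grab the MAC from a shell than the UI, FreeBSD's ifconfig prints it on the "ether" line. A small sketch (hypothetical function name; bge0 as the WAN port is specific to this build):

```python
import re
import subprocess

def wan_mac(ifconfig_output: str) -> str:
    """Extract the MAC address from FreeBSD `ifconfig <iface>` output,
    which reports it on the 'ether' line."""
    m = re.search(r"ether\s+([0-9a-f:]{17})", ifconfig_output)
    if not m:
        raise ValueError("no ether line found in ifconfig output")
    return m.group(1)

# On the firewall itself (bge0 is this build's WAN port):
# out = subprocess.run(["ifconfig", "bge0"], capture_output=True, text=True)
# print(wan_mac(out.stdout))
```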

VLAN Design for a Segmented Homelab

The goal was a segmentation model that mirrors how a mid-size enterprise structures network tiers: each functional zone gets its own VLAN, and firewall rules enforce what can communicate with what. This limits blast radius if something gets compromised, creates clean network identities for logging and monitoring, and gives the agentic provisioning layer meaningful decisions to make about where to place new workloads.

The subnet scheme embeds the VLAN tag into the third octet for easy identification. VLAN 10 lives on 10.0.10.0/24, VLAN 20 on 10.0.20.0/24, and so on. This makes it trivial to identify which network segment a given IP belongs to from any log entry or packet capture.
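The scheme is mechanical enough to encode directly, which is exactly what an agent needs. A tiny sketch with hypothetical helper names:

```python
def vlan_subnet(tag: int) -> str:
    """Map a VLAN tag to its subnet under the 10.0.<tag>.0/24 scheme."""
    if not 1 <= tag <= 255:
        raise ValueError("VLAN tag must fit in the third octet")
    return f"10.0.{tag}.0/24"

def vlan_of(ip: str) -> int:
    """Recover the VLAN tag from any address in the scheme."""
    return int(ip.split(".")[2])
```

From any log line, vlan_of("10.0.60.17") immediately answers VLAN 60.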

VLAN | Tag | Subnet | Purpose | DHCP | Outbound Policy
--- | --- | --- | --- | --- | ---
MGMT | 10 | 10.0.10.0/24 | Out-of-band management: iDRAC, switch management, UPS | Yes | Restricted
INFRA | 20 | 10.0.20.0/24 | Hypervisors, storage, Proxmox cluster traffic | Yes | Yes
PROD | 30 | 10.0.30.0/24 | Production workloads and services | Yes | Yes
STAGING | 40 | 10.0.40.0/24 | Staging environments mirroring production | Yes | Yes
AI | 50 | 10.0.50.0/24 | GPU compute nodes, LLM inference, training jobs | Yes | Yes
DB | 60 | 10.0.60.0/24 | Database servers, static IPs only | No | No outbound; PROD and STAGING inbound only
DMZ | 70 | 10.0.70.0/24 | Public-facing services, reverse proxies, ingress | Yes | Yes
HOME | 80 | 10.0.80.0/24 | Personal devices, laptops, phones | Yes | Yes
VPN | 90 | 10.0.90.0/24 | VPN tunnel endpoints, remote access | Yes | Controlled

The DB VLAN deserves a note. Database servers should not be initiating outbound connections and should not be reachable from anything except the application tiers that need them. A dedicated VLAN makes this trivial to enforce at the firewall level. No DHCP means every database host has a static IP that is explicitly documented, which forces intentionality about what is running there.
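In pf terms, pass rules sit on the interface where traffic enters the firewall (the application tiers), and the no-outbound guarantee is a block on the DB interface itself. A sketch that builds the addRule payloads; the opt identifiers (opt3 PROD, opt4 STAGING, opt6 DB) and the PostgreSQL port are assumptions consistent with the examples later in this post:

```python
def db_policy_rules(prod_if: str = "opt3", staging_if: str = "opt4",
                    db_if: str = "opt6", port: str = "5432") -> list:
    """Build firewall rule payloads enforcing the DB VLAN policy:
    app tiers in on the database port, nothing initiated outbound by DB hosts."""
    db_net = "10.0.60.0/24"
    base = {"enabled": "1", "direction": "in", "ipprotocol": "inet"}
    rules = [
        # Pass rules live on the interface where traffic enters the firewall,
        # i.e. the application-tier interfaces, not the DB interface.
        {**base, "action": "pass", "interface": iface, "protocol": "tcp",
         "source_net": src, "destination_net": db_net,
         "destination_port": port, "description": f"{name} to DB"}
        for name, iface, src in (("PROD", prod_if, "10.0.30.0/24"),
                                 ("STAGING", staging_if, "10.0.40.0/24"))
    ]
    # Anything DB hosts try to initiate gets dropped on their own interface.
    # Protocol is omitted so it matches any protocol (assumed API default).
    rules.append({**base, "action": "block", "interface": db_if,
                  "source_net": db_net, "destination_net": "any",
                  "description": "DB no outbound"})
    return rules
```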

Configuring VLANs via the REST API

This is the part that makes the migration worthwhile. The same VLAN setup, DHCP ranges, and firewall rules you would normally click through in a UI can be driven entirely via HTTP calls. Here is how each step works.

Generating an API key

Navigate to System → Access → Users → (your user) → API keys → +. OPNsense generates a key and secret pair. Download the file immediately as the secret is only shown once. All API calls use HTTP Basic Auth with the key as username and secret as password, over HTTPS only. Self-signed certificates are the default, so pass -k to curl or disable cert verification in your HTTP client.

Verifying connectivity

KEY="your_api_key"
SECRET="your_api_secret"
HOST="your-opnsense-host"

curl -sk -u "$KEY:$SECRET" \
  "https://$HOST/api/core/firmware/running" | python3 -m json.tool

A JSON response with firmware version information means you are connected and credentials are valid.

Creating VLANs

# Create VLAN 10 (MGMT) on trunk interface cxl0
curl -sk -u "$KEY:$SECRET" \
  -X POST "https://$HOST/api/interfaces/vlan_settings/addItem" \
  -H "Content-Type: application/json" \
  -d '{
    "vlan": {
      "if":    "cxl0",
      "tag":   "10",
      "pcp":   "0",
      "descr": "MGMT"
    }
  }'

# Apply -- without this, VLANs exist in config but not on the system
curl -sk -u "$KEY:$SECRET" \
  -X POST "https://$HOST/api/interfaces/vlan_settings/reconfigure"

Gotcha: addItem does not affect the running config until reconfigure is called

Write operations in OPNsense’s API modify the configuration store but do not reload the live network config. Nothing you added will exist as a network interface until you call reconfigure on the relevant module. This is consistent across all OPNsense API modules and catches everyone at least once.

Checking what VLANs already exist

curl -sk -u "$KEY:$SECRET" \
  "https://$HOST/api/interfaces/vlan_settings/searchItem" | python3 -m json.tool

The response is a paginated list. Each entry has if (parent) and tag fields. Check these before calling addItem to avoid creating duplicates.

Assigning and configuring interfaces: the API gap

This is where honesty about the current state of the API matters. Creating a VLAN via the API produces a virtual interface like cxl0.10. For OPNsense to route traffic through it, the interface needs to be assigned and given an IP address.

Interface assignment is not available via the API as of OPNsense 26.1. This is a known gap tracked in the OPNsense GitHub repository. You cannot programmatically assign a VLAN interface to an opt slot or determine which opt identifier maps to which VLAN device. This means:

  1. Initial interface assignment requires the UI. Go to Interfaces → Assignments, find your new VLAN interface in the dropdown, and click Add. This assigns it as opt1, opt2, etc.
  2. Interface configuration (IP, enable/disable, description) is also done in the UI during initial setup. Navigate to the interface and set its IPv4 address, subnet mask, and description.
  3. Once assigned and configured, ongoing management of firewall rules, DHCP, and DNS is fully API-driven.

The practical impact: when bootstrapping a new OPNsense install with 9 VLANs, you will spend about 15 minutes clicking through the UI to assign and configure each interface. After that one-time setup, everything is programmable. This is the one remaining manual step in the provisioning workflow.

Note

For scripts that need to discover which opt identifier corresponds to which VLAN, parse the output of api/interfaces/vlan_settings/searchItem and cross-reference with the api/diagnostics/interface/getInterfaceNames endpoint. There is no clean single-call way to ask “which opt is my VLAN 40?”
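That cross-reference is easy enough to wrap in a helper. A sketch under assumptions about the response shapes (searchItem rows exposing the VLAN device name in a vlanif field, getInterfaceNames returning a device-to-identifier mapping; both worth verifying against your release):

```python
def opt_for_vlan(vlan_rows: list, iface_names: dict, tag: int,
                 parent: str = "cxl0"):
    """Answer 'which opt identifier is VLAN <tag> on <parent>?' by joining
    the two API responses described above. Returns None if unassigned."""
    for v in vlan_rows:
        if v.get("if") == parent and str(v.get("tag")) == str(tag):
            # Fall back to the conventional <parent>.<tag> device name
            # if the row does not carry an explicit vlanif field.
            device = v.get("vlanif") or f"{parent}.{tag}"
            return iface_names.get(device)
    return None
```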

Adding DHCP ranges (Dnsmasq)

# Add DHCP range for INFRA (opt2, assuming that's your INFRA VLAN assignment)
curl -sk -u "$KEY:$SECRET" \
  -X POST "https://$HOST/api/dnsmasq/settings/addDhcpRange" \
  -H "Content-Type: application/json" \
  -d '{
    "dhcp_range": {
      "interface":   "opt2",
      "start_addr":  "10.0.20.100",
      "end_addr":    "10.0.20.200",
      "subnet_mask": "255.255.255.0",
      "domain_type": "range",
      "description": "INFRA",
      "nosync":      "0"
    }
  }'

# Reload dnsmasq to apply
curl -sk -u "$KEY:$SECRET" \
  -X POST "https://$HOST/api/dnsmasq/service/reconfigure"

Updating the Dnsmasq listen interface list

curl -sk -u "$KEY:$SECRET" \
  -X POST "https://$HOST/api/dnsmasq/settings/setGeneral" \
  -H "Content-Type: application/json" \
  -d '{
    "dnsmasq": {
      "interface": "lan,opt1,opt2,opt3,opt4,opt5,opt6,opt7,opt8,opt9"
    }
  }'

curl -sk -u "$KEY:$SECRET" \
  -X POST "https://$HOST/api/dnsmasq/service/reconfigure"

Adding firewall rules

# Allow PROD to reach DB on PostgreSQL port
curl -sk -u "$KEY:$SECRET" \
  -X POST "https://$HOST/api/firewall/filter/addRule" \
  -H "Content-Type: application/json" \
  -d '{
    "rule": {
      "enabled":          "1",
      "action":           "pass",
      "interface":        "opt3",
      "direction":        "in",
      "ipprotocol":       "inet",
      "protocol":         "tcp",
      "source_net":       "10.0.30.0/24",
      "destination_net":  "10.0.60.0/24",
      "destination_port": "5432",
      "description":      "PROD to DB PostgreSQL"
    }
  }'

# Firewall rules use apply, not reconfigure
curl -sk -u "$KEY:$SECRET" \
  -X POST "https://$HOST/api/firewall/filter/apply"

Gotcha: firewall rules use apply, not reconfigure

VLAN and DHCP changes use reconfigure. Firewall rule changes use apply. Using the wrong one is a silent no-op that leaves you wondering why your change did not take effect. There is no error, no warning — it just does nothing.
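One way to make the distinction unmissable in scripts is to encode it as data. A hypothetical lookup table covering the modules used in this post:

```python
# Activation endpoint per config module. Calling the wrong one is a
# silent no-op, so scripts should look it up rather than guess.
ACTIVATE = {
    "interfaces/vlan_settings": "interfaces/vlan_settings/reconfigure",
    "dnsmasq":                  "dnsmasq/service/reconfigure",
    "firewall/filter":          "firewall/filter/apply",
}

def activation_endpoint(module: str) -> str:
    """Return the endpoint that makes a saved change live for this module."""
    if module not in ACTIVATE:
        raise KeyError(f"no known activation endpoint for {module!r}")
    return ACTIVATE[module]
```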

A Reusable Python Client

Curl is useful for testing individual endpoints but you want a proper client for automated workflows. The following wrapper uses only the Python standard library:

import json
import ssl
import base64
import urllib.request
import urllib.error


class OPNsenseClient:
    """
    Minimal OPNsense REST API client.
    No external dependencies -- stdlib only.
    """

    def __init__(self, host: str, key: str, secret: str, verify_ssl: bool = False):
        self.base = f"https://{host}/api"
        creds = base64.b64encode(f"{key}:{secret}".encode()).decode()
        self.headers = {
            "Authorization": f"Basic {creds}",
            "Content-Type":  "application/json",
        }
        self.ctx = ssl.create_default_context()
        if not verify_ssl:
            self.ctx.check_hostname = False
            self.ctx.verify_mode = ssl.CERT_NONE

    def _req(self, method: str, path: str, body: dict = None) -> dict:
        url  = f"{self.base}/{path}"
        data = json.dumps(body).encode() if body is not None else None
        req  = urllib.request.Request(url, data=data, headers=self.headers, method=method)
        with urllib.request.urlopen(req, context=self.ctx, timeout=15) as r:
            return json.loads(r.read())

    def get(self, path: str) -> dict:
        return self._req("GET", path)

    def post(self, path: str, body: dict = None) -> dict:
        return self._req("POST", path, body or {})

    # ── VLANs ──────────────────────────────────────────────────────────

    def get_vlans(self) -> list[dict]:
        return self.get("interfaces/vlan_settings/searchItem").get("rows", [])

    def create_vlan(self, parent: str, tag: int, description: str) -> str:
        """Create a VLAN and return its UUID. Call apply_vlans() to activate."""
        r = self.post("interfaces/vlan_settings/addItem", {
            "vlan": {"if": parent, "tag": str(tag), "pcp": "0", "descr": description}
        })
        if r.get("result") != "saved":
            raise RuntimeError(f"VLAN create failed: {r}")
        return r["uuid"]

    def apply_vlans(self):
        self.post("interfaces/vlan_settings/reconfigure")

    def ensure_vlan(self, parent: str, tag: int, description: str):
        """Idempotent VLAN creation -- no-op if the tag already exists on parent."""
        existing = {str(v["tag"]) for v in self.get_vlans() if v.get("if") == parent}
        if str(tag) in existing:
            return
        self.create_vlan(parent, tag, description)
        self.apply_vlans()

    # ── DHCP (Dnsmasq plugin) ─────────────────────────────────────────

    def add_dhcp_range(self, interface: str, start: str, end: str,
                       description: str = "", netmask: str = "255.255.255.0"):
        """Requires os-dnsmasq-dhcp plugin to be installed."""
        r = self.post("dnsmasq/settings/addDhcpRange", {
            "dhcp_range": {
                "interface":   interface,
                "start_addr":  start,
                "end_addr":    end,
                "subnet_mask": netmask,
                "domain_type": "range",
                "description": description,
                "nosync":      "0",
            }
        })
        if r.get("result") != "saved":
            raise RuntimeError(f"DHCP range create failed: {r}")
        self.post("dnsmasq/service/reconfigure")

    def set_dnsmasq_interfaces(self, interfaces: list[str]):
        self.post("dnsmasq/settings/setGeneral", {
            "dnsmasq": {"interface": ",".join(interfaces)}
        })
        self.post("dnsmasq/service/reconfigure")

    # ── Firewall ───────────────────────────────────────────────────────

    def add_firewall_rule(self, interface: str, source: str, destination: str,
                          dst_port: str = None, protocol: str = "tcp",
                          action: str = "pass", description: str = ""):
        rule = {
            "enabled":         "1",
            "action":          action,
            "interface":       interface,
            "direction":       "in",
            "ipprotocol":      "inet",
            "protocol":        protocol,
            "source_net":      source,
            "destination_net": destination,
            "description":     description,
        }
        if dst_port:
            rule["destination_port"] = dst_port
        r = self.post("firewall/filter/addRule", {"rule": rule})
        if r.get("result") != "saved":
            raise RuntimeError(f"Firewall rule create failed: {r}")
        self.post("firewall/filter/apply")

With that client, scripting a full VLAN environment into existence is a few readable calls:

opn = OPNsenseClient(host="your-opnsense-host", key=KEY, secret=SECRET)

# Idempotent -- safe to run multiple times
opn.ensure_vlan("cxl0", 40, "STAGING")

# Note: you still need to assign the VLAN interface in the UI before DHCP works.
# Once assigned as opt4, configure DHCP:
opn.add_dhcp_range(
    interface="opt4",
    start="10.0.40.100",
    end="10.0.40.200",
    description="STAGING"
)

# Allow PROD to reach STAGING
opn.add_firewall_rule(
    interface="opt3",
    source="10.0.30.0/24",
    destination="10.0.40.0/24",
    description="PROD to STAGING"
)

# Allow STAGING to reach DB on PostgreSQL
opn.add_firewall_rule(
    interface="opt4",
    source="10.0.40.0/24",
    destination="10.0.60.0/24",
    dst_port="5432",
    description="STAGING to DB PostgreSQL"
)
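The whole VLAN plan from the table above can also be treated as data and looped. A sketch building on the client; the opt identifiers are assumptions that must match the order of your manual UI assignments, and the DHCP pools follow the .100-.200 convention used in this post:

```python
# The VLAN plan as data: (tag, name, opt identifier after manual UI
# assignment, whether the segment gets a DHCP pool). The opt values here
# are assumptions -- they must match your actual assignment order.
PLAN = [
    (10, "MGMT",    "opt1", True),
    (20, "INFRA",   "opt2", True),
    (30, "PROD",    "opt3", True),
    (40, "STAGING", "opt4", True),
    (50, "AI",      "opt5", True),
    (60, "DB",      "opt6", False),   # static IPs only, no DHCP
    (70, "DMZ",     "opt7", True),
    (80, "HOME",    "opt8", True),
    (90, "VPN",     "opt9", True),
]

def dhcp_range(tag: int):
    """DHCP pool .100-.200 inside the segment's 10.0.<tag>.0/24 subnet."""
    return f"10.0.{tag}.100", f"10.0.{tag}.200"

def bootstrap(opn, trunk: str = "cxl0"):
    """Drive the whole plan through the OPNsenseClient defined above.
    Assumes the one-time UI interface assignment has already been done."""
    for tag, name, opt, wants_dhcp in PLAN:
        opn.ensure_vlan(trunk, tag, name)          # idempotent
        if wants_dhcp:
            start, end = dhcp_range(tag)
            opn.add_dhcp_range(opt, start, end, description=name)
```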

API coverage in OPNsense 26.1: an honest assessment

Most ongoing operations are fully API-driven: VLAN creation and deletion, firewall rules, NAT, DHCP ranges, DNS entries, gateway configuration, and service management. The gaps worth knowing about: interface assignment (not available via API — the single biggest gap), interface discovery (no clean way to map opt identifiers to VLAN devices), and Dnsmasq DHCP (requires a plugin, not core). The API surface has been growing steadily with each release, and the interface assignment gap is tracked as a known feature request. Plan for one manual UI session during initial setup, and everything after that is programmable.

What Comes Next

The network layer is now in a state where it can be driven programmatically. That was the precondition for the rest of what I want to build. The next layers to connect are compute (Proxmox REST API), container orchestration (k3s), and a GitOps layer via Argo CD to manage what runs inside the provisioned environments.

The actual agentic pieces I am working toward:

  • An MCP (Model Context Protocol) server wrapping the OPNsense client so LLMs can call network operations directly as tool use
  • Proxmox API integration for VM and LXC container provisioning
  • A unified provisioning agent that takes a service description and wires up network, compute, and DNS without human steps
  • A human-in-the-loop approval layer before destructive or expensive operations apply
  • HAProxy on OPNsense with Cloudflare Tunnels for zero-port-forward public exposure of internal services

The firewall migration was not really about the firewall. It was about getting the network layer into the same programmable shape as the rest of the stack, so that when an agent needs to provision an environment it does not have to stop and ask a human to go click something in a UI. That is the gap this work closes.


Hardware details, driver versions, and API endpoint paths are accurate as of OPNsense 26.1 on FreeBSD 14. If something is wrong or has changed, the comments are open.