Hacker News
Show HN: OneCLI – Vault for AI Agents in Rust
OneCLI is an open-source gateway that sits between your AI agents and the services they call. You store your real credentials once in OneCLI's encrypted vault, and give your agents placeholder keys. When an agent makes an HTTP call through the proxy, OneCLI matches the request by host/path, verifies the agent should have access, swaps the placeholder for the real credential, and forwards the request. The agent never touches the actual secret. It just uses CLI or MCP tools as normal.
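The match-and-swap step can be sketched in a few lines of Rust. This is an illustrative sketch only (the `Route` struct and `rewrite_auth` names are my invention, not OneCLI's actual types): a route table keyed by host maps each placeholder to the real credential, and the proxy rewrites the auth header before forwarding.

```rust
use std::collections::HashMap;

// Hypothetical route entry: the placeholder the agent holds and the
// real credential stored in the vault.
struct Route {
    placeholder: String,
    real_key: String,
}

// Rewrite the Authorization header for a known host, swapping the
// placeholder for the real key. Returns None if the host is unknown
// or the placeholder doesn't match, so nothing real ever leaks.
fn rewrite_auth(host: &str, auth_header: &str, routes: &HashMap<String, Route>) -> Option<String> {
    let route = routes.get(host)?;
    if auth_header.contains(&route.placeholder) {
        Some(auth_header.replace(&route.placeholder, &route.real_key))
    } else {
        None
    }
}
```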
Try it in one line: docker run --pull always -p 10254:10254 -p 10255:10255 -v onecli-data:/app/data ghcr.io/onecli/onecli
The proxy is written in Rust, the dashboard is Next.js, and secrets are AES-256-GCM encrypted at rest. Everything runs in a single Docker container with an embedded Postgres (PGlite), no external dependencies. Works with any agent framework (OpenClaw, NanoClaw, IronClaw, or anything that can set an HTTPS_PROXY).
We started with what felt most urgent: agents shouldn't be holding raw credentials. The next layer is access policies and audit, defining what each agent can call, logging everything, and requiring human approval before sensitive actions go through.
It's Apache-2.0 licensed. We'd love feedback on the approach, and we're especially curious how people are handling agent auth today.
GitHub: https://github.com/onecli/onecli Site: https://onecli.sh
Mooshux
The gap that gets teams eventually: this works great on one machine, but breaks at the team boundary. CI pipelines have no localhost. Multiple devs sharing agents need access control and audit trails, not just a local swap. A rogue sub-agent with the placeholder can still do damage if the proxy has no per-agent scoping.
We ran into the same thing building this out for OpenClaw setups. Ended up going vault-backed with group-based access control and HMAC-signed calls per request. Full breakdown on the production version: https://www.apistronghold.com/blog/phantom-token-pattern-pro...
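The per-agent scoping the parent comment asks for amounts to a policy lookup before any credential is injected. A minimal sketch (hypothetical names, not any project's real API):

```rust
use std::collections::{HashMap, HashSet};

// Illustrative policy: each agent identity maps to the set of hosts
// it may call. The proxy consults this before swapping in any real
// credential, so a rogue sub-agent with a placeholder gets nothing
// for out-of-scope hosts.
struct Policy {
    allowed_hosts: HashMap<String, HashSet<String>>,
}

impl Policy {
    fn permits(&self, agent_id: &str, host: &str) -> bool {
        self.allowed_hosts
            .get(agent_id)
            .map_or(false, |hosts| hosts.contains(host))
    }
}
```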
sathish316
vault_get.sh: https://gist.github.com/sathish316/1ca3fe1b124577d1354ee254a...
vault_set.sh: https://gist.github.com/sathish316/1f4e6549a8f85ac5c5ac8a088...
Blog about the full setup for OpenClaw: https://x.com/sathish316/status/2019496552419717390
sathish316
The agent sees the output of the service; it does not directly see the keys. In OpenClaw, it's possible to create the skill in a way that the agent doesn't directly know about or invoke the vault_get command.
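One way to read that design: the skill fetches the secret internally and hands the agent only the service's response. A toy sketch with stubbed I/O (all names are illustrative; in practice `fetch_secret` would shell out to something like vault_get.sh):

```rust
// Toy sketch: the skill fetches the key internally and returns only
// the service's response, so the caller (the agent) never sees it.
fn run_skill(
    endpoint: &str,
    fetch_secret: impl Fn() -> String,           // e.g. shells out to the vault
    call_service: impl Fn(&str, &str) -> String, // (endpoint, key) -> response body
) -> String {
    let key = fetch_secret(); // the key never leaves this function
    call_service(endpoint, &key)
}
```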
nonameiguess
We're going to see this reinvented thousands of times in the next few months by people whose understanding of security is far poorer than HashiCorp's, via implementations that are nowhere near as well-tested, if tested at all.
hardsnow
1) Not all systems respect HTTP_PROXY. Node in particular is very uncooperative in this regard.
2) AWS access keys can’t be handled by a simple credential swap; the requests need to be re-signed with the real keys. Replicating SigV4 and SigV4A exactly was a bit of a pain.
3) To be secure, this system needs to run outside of the execution sandbox so that the agent can’t just read the keys from the proxy process.
For Airut I settled on a transparent (mitm)proxy running in a separate container, and injecting the proxy's cert into the cert store of the container where the agent runs. This solved 1 and 3.
lancetipton
I essentially run a sidecar container that sets up iptables rules to redirect all requests through my mitm proxy. This was specifically required because of Node not respecting HTTP_PROXY.
Also had to inject a self-signed cert to ensure SSL could be proxied and terminated by the mitm proxy, which then injects the secrets and forwards the request on.
Have you run into any issues with this setup? I'm trying to figure out if there's anything I'm missing that might come back to bite me.
hardsnow
Another thing I did was to allow configuring which hosts each credential is scoped to. Replacement/re-signing doesn’t happen unless the host matches. That way it's not possible to leak keys by making requests to malicious hosts.
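That scoping rule is simple to state in code. A sketch with illustrative names (not Airut's actual implementation): the real key is attached only when the request host is on the credential's allow-list.

```rust
// Illustrative host scoping: the real key is attached only when the
// request host is on the credential's allow-list, so a request to an
// attacker-controlled host gets no secret at all.
struct Credential {
    real_key: String,
    allowed_hosts: Vec<String>,
}

fn inject_header(cred: &Credential, host: &str) -> Option<String> {
    if cred.allowed_hosts.iter().any(|h| h == host) {
        Some(format!("Authorization: Bearer {}", cred.real_key))
    } else {
        None // out-of-scope host: never attach the key
    }
}
```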
paxys
So how does that help exactly? The agent can still do exactly what it could have done if it had the real key.
atonse
I have a few questions:
- How can a proxy inject stuff if the traffic is TLS-encrypted? (Same question for IronClaw and the others.)
- Any adapters for existing secret stores? E.g., my fake credential could be a 1Password entry path (like 1Password:vault-name/entry/field) and it would pull from 1Password instead of me having yet another place to store secrets.
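An adapter like that could treat the fake credential as a reference string and resolve it lazily against the external store. A sketch of just the parsing, assuming a `store:vault/entry/field` shape as suggested above (this format is the commenter's idea, not a documented 1Password or OneCLI syntax):

```rust
// Parse "1Password:vault-name/entry/field" into (store, vault, entry,
// field); an adapter would then resolve the reference against the
// external store instead of OneCLI's own vault.
fn parse_secret_ref(s: &str) -> Option<(&str, &str, &str, &str)> {
    let (store, rest) = s.split_once(':')?;
    let mut parts = rest.splitn(3, '/');
    Some((store, parts.next()?, parts.next()?, parts.next()?))
}
```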
anthonyskipper
Otherwise this is cool; we need more competition here.
guyb3
https://github.com/onecli/onecli/blob/942cfc6c6fd6e184504e01...
Olshansky
If this is of interest, I also recommend looking into: https://github.com/loderunner/scrt.
To me, it's a complement to 1Password.
I use it to save every new secret/api key I get via the CLI.
It's intentionally very feature limited.
Haven't tried it with agents, but wouldn't be surprised if the CLI (as is) would be enough.
stevekemp
What are you suggesting? That the program makes a call to retrieve the secret from AWS, and then has full access to do with it whatever it wants? That's exactly the risk, and the problem that this and the related solutions mentioned in this thread are trying to solve.