Hacker News
Show HN: Mcp2cli – One CLI for every API, 96-99% fewer tokens than native MCP
mcp2cli turns any MCP server or OpenAPI spec into a CLI at runtime. The LLM discovers tools on demand:
mcp2cli --mcp https://mcp.example.com/sse --list # ~16 tokens/tool
mcp2cli --mcp https://mcp.example.com/sse create-task --help # ~120 tokens, once
mcp2cli --mcp https://mcp.example.com/sse create-task --title "Fix bug"
No codegen and no rebuild when the server changes. It works with any LLM, since it's just a CLI the model shells out to, and it handles OpenAPI specs (JSON/YAML, local or remote) with the same interface. The token savings are real, measured with cl100k_base: 96% for 30 tools over 15 turns, 99% for 120 tools over 25 turns.
It also ships as an installable skill for AI coding agents (Claude Code, Cursor, Codex): `npx skills add knowsuchagency/mcp2cli --skill mcp2cli`
Inspired by Kagan Yilmaz's CLI vs MCP analysis and CLIHub.
jancurn
- https://github.com/apify/mcpc
- https://github.com/chrishayuk/mcp-cli
- https://github.com/wong2/mcp-cli
- https://github.com/f/mcptools
- https://github.com/adhikasp/mcp-client-cli
- https://github.com/thellimist/clihub
- https://github.com/EstebanForge/mcp-cli-ent
- https://github.com/knowsuchagency/mcp2cli
- https://github.com/philschmid/mcp-cli
- https://github.com/steipete/mcporter
- https://github.com/mattzcarey/cloudflare-mcp
- https://github.com/assimelha/cmcp
re-thc
Next we'll wrap the CLIs into MCPs.
Charon77
Oh wait there's ssh. I guess it's because there's no way to tell AI agents what the tool does, or when to invoke it... Except that AI pretty much knows the syntax of all of the standard tools, even sed, jq, etc...
Yeah, ssh should've been the norm, but someone is getting promoted for inventing MCP
acchow
I consider this a bug. I'm sure the chat clients will fix this soon enough.
Something like: on each turn, a subagent searches available MCP tools for anything relevant. Usually, nothing helpful will be found and the regular chat continues without any MCP context added.
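That per-turn filtering idea can be sketched in a few lines. This is a hypothetical illustration, not any client's real implementation: the tool names and descriptions are made up, and a real client would pull them from the MCP server's tools/list response and would likely use embeddings rather than word overlap.

```python
# Hypothetical sketch of "a subagent searches available MCP tools for
# anything relevant" on each turn. Tools and descriptions are invented.
def relevant_tools(user_message, tools, min_overlap=3):
    """Return names of tools whose description shares enough words with the message."""
    msg_words = set(user_message.lower().split())
    hits = []
    for tool in tools:
        desc_words = set(tool["description"].lower().split())
        if len(msg_words & desc_words) >= min_overlap:
            hits.append(tool["name"])
    return hits

tools = [
    {"name": "create_task", "description": "create a task with a title in the tracker"},
    {"name": "get_weather", "description": "get the weather forecast for a city"},
]

# A turn about tasks surfaces only the task tool; small talk surfaces nothing,
# so the regular chat continues with no MCP context added.
print(relevant_tools("please create a task with the title Fix bug", tools))  # ['create_task']
print(relevant_tools("hello, how are you today?", tools))  # []
```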
phh
I'll add to your comment that it isn't a bug of MCP itself. MCP doesn't specify what the LLM sees. It's a bug of the MCP client.
In my toy chatbot, I implement MCP as pseudo-Python for the LLM, dropping the typing info and giving the tool info as tersely as possible, just one line per tool: function_name(mandatory arg1 name, mandatory arg2 name): Description
(I don't recommend doing that, it's largely obsolete; my point is simply that you feed the LLM whatever you want, MCP doesn't mandate anything. Tbh it doesn't even mandate that it feeds into an LLM, hence the MCP CLIs.)
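The one-line rendering described above can be sketched like this. The schema shape mirrors an MCP tools/list entry (name, description, inputSchema containing JSON Schema); the specific tool is an invented example.

```python
# Sketch of rendering an MCP tool as a single terse line for the LLM:
# drop all type info and keep only the required argument names.
def one_line(tool):
    required = tool["inputSchema"].get("required", [])
    return f"{tool['name']}({', '.join(required)}): {tool['description']}"

# Invented example tool, shaped like an MCP tools/list entry.
tool = {
    "name": "create_task",
    "description": "Create a task in the tracker",
    "inputSchema": {
        "type": "object",
        "properties": {
            "title": {"type": "string", "description": "Task title"},
            "due": {"type": "string", "format": "date"},
        },
        "required": ["title"],
    },
}

print(one_line(tool))  # create_task(title): Create a task in the tracker
```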
fennecbutt
I agree with the general idea that models are better trained on popular CLI tools like directory navigation, but outside of ls, ps, etc., the difference isn't really there: new CLIs are just as confusing to the model as new MCPs.
rakamotog
So I don't see why a typical productivity app would build a CLI rather than an MCP. Am I missing anything?
stephantul
As an aside: this is a cool idea but the prose in the readme and the above post seem to be fully generated, so who knows whether it is actually true.
hrmtst93837
Measure fidelity with exact diffs and embedding similarity, and include streaming behavior, schema-change resilience, and rate-limit fallbacks in the cases you care about. Check the repo for a runnable benchmark, archived fixtures captured with vcrpy or WireMock, and a clear test harness that reproduces the claimed 96 to 99 percent savings.
benvan
It works by schematising the upstream API and synchronising the data locally, with a common query language on top. So the longer-term goals are more about avoiding API limits and escaping the confines of the MCP query feature set, i.e. token savings on reading the data itself (in many cases upwards of thousands of times fewer tokens).
Looking forward to trying this out!
DieErde
Tell me the hottest day in Paris in the coming 7 days. You can find useful tools at www.weatherforadventurers.com/tools
And then the tools URL can simply return a list of URLs in plain text, like /tool/forecast?city=berlin&day=2026-03-09 (returns the highest temperature and rain probability for the given day in the given city), which return the data in plain text. What additional benefits does MCP bring to the table?
fennecbutt
Being able to have a verifiable input/output structure is key. I suppose you can do that with a regular HTTP API call (JSON), but where do you document the OpenAPI/schema stuff? Oh yeah... something like MCP.
I agree that MCP isn't as refined as it should be, but when used properly it's better than having it burn through tokens by scraping around web content.
Phlogistique
You could restrict where it can go with domain allowlists but that has insufficient granularity. The same URL can serve a legitimate request or exfiltrate data depending on what's in the headers or payload: see https://embracethered.com/blog/posts/2025/claude-abusing-net...
So you need to restrict not only where the agent can reach, but what operations it can perform, with the host controlling credentials and parameters. That brings us to an MCP-like solution.
rvz
MCP is just a worse version of the above, allowing lots of data exfiltration and manipulation by the LLM.
acchow
The classic "API key" flow requires you to go to the resource site, generate a key, copy it, then paste it where you want it to go.
Oauth automates this. It's like "give me an API key" on demand.
SyneRyder
MCP can provide validation & verification of the request before making the API call. Giving the model a /tool/forecast URL doesn't prevent the model from deciding to instead explore what other tools might be available on the remote server instead, like deciding to try running /tool/imagegenerator or /tool/globalthermonuclearwar. MCP can gatekeep what the AI does, check that parameters are valid, etc.
Also, MCP can be used to do local computation, work with local files etc, things that web access wouldn't give you. CLI will work for some of those use cases too, but there is a maximum command line length limit, so you might struggle to write more than 8kB to a file when using the command line, for example. It can be easier to get MCP to work with binary files as well.
I tend to think of local MCP servers like DLLs, except the function calls are over stdio and use tons of wasteful JSON instead of being a direct C-function call. But thinking of where you might use a DLL and where you might call out to a CLI can be a useful way of thinking about the difference.
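The gatekeeping point above can be sketched minimally. This is an invented illustration of validating a tool name and its parameters before any upstream API call is made; the allowlist, tool names, and checks are assumptions, not a real server's logic.

```python
# Sketch of MCP-style gatekeeping: validate the tool and its parameters
# before making the real API call. Everything here is illustrative.
ALLOWED_TOOLS = {"forecast"}

def handle_call(tool, params):
    # Refuse anything outside the allowlist, e.g. /tool/imagegenerator.
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"tool not allowed: {tool}")
    # Validate parameters before they reach the upstream service.
    city = params.get("city", "")
    if not (city.isalpha() and 1 <= len(city) <= 40):
        raise ValueError("invalid city parameter")
    # Only now would the server make the real upstream API call.
    return f"forecast requested for {city}"

print(handle_call("forecast", {"city": "Berlin"}))  # forecast requested for Berlin
```

The point is that the host, not the model, decides which operations exist and what arguments are acceptable.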
ewidar
Not all services provide good token definition or access control, and they often have an API key + CLI combo, which can be quite dangerous in some cases.
With an MCP even these bad interfaces can be fixed up on my side.
nwyin
anthropic mentions MCPs eating up context and solutions here: https://www.anthropic.com/engineering/code-execution-with-mc...
I built one specifically for Cognition's DeepWiki (https://crates.io/crates/dw2md) -- but it's rather narrow. Something more general like this clearly has more utility.
Intermernet
If the service is using more tokens to produce the same output from the same query, just over a different protocol, then the service is a scam.
rvz
You might as well directly create a CLI tool that works with the AI agents which does an API call to the service anyway.
liminal-dev
If you want humans to spend time reading your prose, then spend time actually writing it.