Hacker News

New Claude Code programmatic usage restrictions

49 points by martinald ago | 36 comments

SyneRyder |next [-]

Ouch. I've just been building a tool to go through my historical usage. I'm only on the Max 5x plan, and I only use about 40% of my weekly usage allowance. But it looks like even that usage would now cost me $1000/month in API usage under the new plan. That's a 10x price increase.

At least we've got clarity now? But a lot of my value comes from "claude -p" usage, either scheduled tasks while I'm asleep, or responding to incoming emails / voicetexts. Even the email replies will barely fit in $100/month. I'm not going to pay $1000 / month, so I guess it really is time for me to look at the competition and move my programmatic usage to them.

Man, I love the Claude models, and the whole idea of constitutional AI. We built a lot of tools & infrastructure together, but kept a lot of logs as well. I'll be really sad if I mostly have to move on now.

coldtea |root |parent [-]

>responding to incoming emails / voicetexts.

You need an AI for that?

SyneRyder |root |parent |next [-]

I'm sending the emails and voicetexts to Claude, they're incoming on my machine but from me.

When I'm away from my computer and out walking, I'll often think of a task for Claude, or I might bounce an idea back and forth with Claude via voice messages. I wrote a small Go program to watch my email and launch Claude via "claude -p" when it sees an email from myself addressed to it.

Claude also has a different "character" when collaborating over email, it feels more like a colleague. Hard to describe, but email almost feels like a better interaction UI than the chat window.

I had been starting to train Claude to see how it might go on customer service (e.g. maybe it could reply to my customers while I'm asleep), but at current Anthropic API costs I think that might still be too expensive.

genxy |root |parent [-]

claude -p loads a lot less garbage into the context.

spoiler |root |parent |next |previous [-]

I'm 99% sure my old boss was pasting Slack messages in and out of ChatGPT. Some people are feral with this AI bullshit

arm32 |root |parent |previous [-]

How else am I going to rapidly cognitively decline?

vova_hn2 |next |previous [-]

XCancel (alternative Twitter frontend) link: https://xcancel.com/ClaudeDevs/status/2054610152817619388

I think that this is much better than the previous situation with total lack of clarity on what is allowed and what isn't.

TomGarden |next |previous [-]

This sucks. I use `claude -p` over Tailscale to code by voice when I'm on the go, for accessibility reasons, and most of the time I do the same while at the computer. Running through $200 in API pricing takes no time. Oh well, time to switch providers I guess.

deaux |root |parent [-]

Out of curiosity, why do you use `claude -p` for that over remote control? I use that for similar work.

TomGarden |root |parent [-]

The biggest difference is that mine is audio-first: it reads everything out over Android TTS by default, and runs a computer-side Parakeet + Silero VAD server for transcription (my eyes struggle with small screens, though I use it text-only occasionally). It's like a voice assistant, but with Claude Code. I also made a custom GUI with shortcuts and such, so saying "end conversation" actually ends the conversation, etc.

Maybe something similar can be done with tmux still, I'm definitely going to explore it

deaux |root |parent [-]

Ah, so you use it because the STT you can run on your computer is a lot better than what you can run on your phone?

I use on-device STT with Claude Code's built-in remote control feature to do what you do without needing `claude -p`, but I guess I don't use it for large enough quantities of text that on-device STT quality becomes a big issue.

TomGarden |root |parent [-]

The big thing for me is the TTS, custom UI and persistent background mode! ie it switches turns automatically etc, no need to touch screen or keep screen on.

The STT on Gboard is very solid, so if that covers your use case you're good!

khoirul |next |previous [-]

Switched to Codex a few days ago and not regretting it. Claude Code with the $20 subscription has been bad lately. Burning through quota in no time, even when sticking to older Sonnet models.

rickdg |next |previous [-]

Guess we're not short of reasons to stick with Codex.

stusmall |next |previous [-]

Does anyone know if this will impact ACP invoking Claude? I.e. using Claude from Zed. I assume not, but I'm looking for confirmation.

bhu8 |root |parent [-]

It would unfortunately impact it. The ACP integration uses the Claude SDK and is developed by a third party.

a34729t |next |previous [-]

So basically local LLMs are rapidly improving to the point where they can handle many of the automation or local coding use cases on reasonable hardware (say $5k or less). What's the edge for frontier model providers here?

johntash |root |parent [-]

Frontier models are still way better than local models, from what I've seen. To get close to them with large context windows and decent performance, you need more than a reasonable machine, IMO.

I'm hoping local llms start rapidly improving even more though.


2001zhaozhao |next |previous [-]

Inb4 future Claude developer workflow (REQUIRED to save 90% of token $):

- The AI gives the human prompts to copy-paste into Claude Code

- The human copies the prompts into Claude Code

- The AI reads the output from Claude Code

nikolay |next |previous [-]

Goodbye! Codex is better anyway!

LoganDark |next |previous [-]

I use `claude -p` interactively -- I understand why they put it under this new umbrella, but having to open the fullscreen interface each time to not be counted as a programmatic tool is a little disappointing.

eagle10ne |next |previous [-]

When AOL was released, they marketed unlimited access; how times have changed with Claude's limits.

potsandpans |next |previous [-]

Just stop using Claude. It's easy. Grab pi and some provider with open weights, cheaper inference, or a more permissive subscription plan (OpenAI, Alibaba, DeepSeek, what have you) and never look back.

kreidema |next |previous [-]

This is annoying because tools like Conductor use the SDK. So this will either be the end of Conductor for me, or I switch to Codex. Interesting dilemma.

Kim_Bruning |next |previous [-]

They're definitely setting their sights on people who automate things. Which is to say: programmers.

Which is interesting, since you'd also think that programmers would be their primary customers.

coldtea |root |parent [-]

You're not a customer of a business when you cost them $2 for each $1 they make out of you. At best you're their VC subsidised target demographic.

Kim_Bruning |root |parent [-]

If so, then they don't actually have a product. Which, I guess, is what you're saying. I'm worried you might be right. Even though Claude is otherwise really good.

SaucyWrong |root |parent [-]

I’d say there is a product there; what remains to be seen, IMO, is whether the market will bear whatever the price of that product ends up being once Anthropic are finished changing their terms, pricing, and rules of engagement every few weeks…

Kim_Bruning |root |parent [-]

I'm definitely nervous to be a customer. Which is probably enough signal by itself, isn't it? :-/

harpooned |next |previous [-]

rip conductor :(

codex W

andrewstuart |next |previous [-]

Can someone explain in plain English please.

martinald |root |parent |next [-]

Currently, if you use `claude -p` (non-interactive mode) in, for example, CI/CD, you can use your included subscription tokens.

They are now changing it so that:

You get $20/$100/$200 of "credit" that can be used for `claude -p`. The problem is, once you are out of that, you pay normal API rates (outrageously expensive).

SaucyWrong |root |parent |previous [-]

“All of your favorite Claude harnesses will get dramatically more expensive starting on June 15”

StackTopherFlow |next |previous [-]

The Anthropic enshittification continues.

0xking |previous [-]

[flagged]