
Ask HN: Open Models are 9 months behind SOTA, how far behind are Local Models?

9 points by myk-e | 10 comments

magicalhippo

A local model is an open model you run locally, so I'm not entirely sure the distinction in the question makes sense.

That said, if you're talking about models you can actually use on a single regular computer that costs less than a new home, the current crop of open models is very capable but also has noticeable limitations.

Small models will always have limitations in terms of capability and especially knowledge. Improved training data and training regimens can squeeze more out of the same number of weights, but there is a limit.
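
As a rough back-of-envelope for that limit (a sketch with assumed numbers, nothing measured): weight memory is roughly parameters x bits-per-weight / 8, which bounds the model size an ordinary machine can even load.

    # Rough weight-memory estimate for a quantized model.
    # params_b is in billions, so params_b * bits / 8 is already in GB;
    # the 1.2x margin for KV cache and runtime overhead is an assumption.
    def weight_gb(params_b: float, bits: int, overhead: float = 1.2) -> float:
        return params_b * bits / 8 * overhead

    for params_b, bits in [(8, 4), (70, 4), (70, 16)]:
        print(f"{params_b}B @ {bits}-bit ~ {weight_gb(params_b, bits):.1f} GB")
    # 8B  @ 4-bit  ~   4.8 GB -> fits a 16GB laptop
    # 70B @ 4-bit  ~  42.0 GB -> workstation territory
    # 70B @ 16-bit ~ 168.0 GB -> out of reach for ordinary hardware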

So with that in mind, I think such a question only makes sense when talking about specific tasks, like creative writing, data extraction from text, answering knowledge questions, refactoring code, writing greenfield code, etc.

In some of these areas the smaller open models are very good and not that far behind. In other areas they are lagging much more, due to their inherent limitations.

myk-e

Yes, I meant ordinary hardware that you find at home, like a current MacBook Air or an equivalent Windows desktop. There must be a point in time when early SOTA LLMs were at the level of today's open models that can run on ordinary hardware. But it's more like years than months; my rough guess would be 2-3 years. Which would still be amazing if we could get Opus 4.5 quality on an ordinary computer within 2-3 years.

karmakaze

I don't know if you'd consider this ordinary, but a single Mac Studio M5 Ultra with 512GB (or even 256GB) of unified memory seems pretty sweet.

myk-e

I love the spec, but it costs like 5x or 10x a MacBook Air. I mean really ordinary: a personal computer in the broad sense, not a dedicated LLM kit.

hasperdi

Well, it depends on the hardware you have. If you have hardware locally that can run the best open models, then your local models are as capable as the open models.

That said, open models are not far behind SOTA; the gap is less than 9 months.

If what you're asking about is models that you can run on retail GPUs, then they're a couple of years behind. They're "hobby" grade.

myk-e

Thanks, yes, I meant even ordinary retail PCs, not specialized GPUs. At some point in history, SOTA closed models were at a level comparable to today's open models that can run on ordinary hardware.

hasperdi

Retail PCs will probably never catch up to even the open‑weight models (the full, non‑quantized versions). Unless there’s a breakthrough, they just don’t have enough parameters to hold all the information we expect SOTA models to contain.

That’s the conventional view. I think there’s another angle: train a local model to act as an information agent. It could “realize” that, yeah, it’s a small model with limited knowledge, but it knows how to fetch the right data. Then you hook it up to a database and let it do the heavy lifting.
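
A minimal sketch of that pattern, assuming an Ollama server exposing its OpenAI-compatible endpoint on localhost; the model name and the facts.db schema are hypothetical placeholders, not a real setup:

    # Small local model as an information agent: it answers from a local
    # database rather than from its own weights. Assumes `ollama serve`
    # is running; facts.db and its schema are hypothetical.
    import sqlite3
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

    def fetch_facts(query: str) -> str:
        # Naive keyword lookup; a real agent would use embeddings or FTS.
        con = sqlite3.connect("facts.db")
        rows = con.execute(
            "SELECT body FROM facts WHERE body LIKE ? LIMIT 5",
            (f"%{query}%",),
        ).fetchall()
        con.close()
        return "\n".join(r[0] for r in rows) or "no matching facts"

    def answer(question: str) -> str:
        context = fetch_facts(question)
        resp = client.chat.completions.create(
            model="llama3.2:3b",  # any small local model
            messages=[
                {"role": "system",
                 "content": "Answer only from the context; say if it's missing."},
                {"role": "user",
                 "content": f"Context:\n{context}\n\nQuestion: {question}"},
            ],
        )
        return resp.choices[0].message.content

The point is the division of labor: the weights only need to carry language and tool-use skill, while the knowledge lives in the database.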

myk-e

Maybe the industry adapts too, and the future PC is AI-ready out of the box, because people will demand that.

softwaredoug

A local model is a smaller open model, so as a base assumption I'd expect it to be 9 months behind a small (i.e. nano) closed model.

myk-e

Yes, a small open model that can run on today's hardware, compared against a historic SOTA closed model, all in. What time difference are we thinking?