Hacker News

Show HN: Create-LLM – Train your own LLM in 60 seconds

45 points by theaniketgiri ago | 34 comments

potamic |next [-]

How did you test this? Did you train something?

theaniketgiri |root |parent [-]

Yeah, I did! I trained a few small ones — mostly the “nano” and “tiny” templates (a few million params) on datasets like Shakespeare and Alpaca. The goal was to make sure the training loop, tokenizer, and evaluation all worked smoothly.

Didn’t go for massive models — more about making the whole setup process quick and reliable. You can actually train the nano one on CPU in a few minutes just to see it working.

theaniketgiri |next |previous [-]

Thanks everyone for the feedback and discussion. For those asking technical questions - happy to help! The tool works on Mac/Linux/Windows, check the README for setup. For those concerned about the architecture - it follows standard scaffolding patterns (create-next-app, etc). TypeScript CLI generates Python projects. 82+ stars in 24 hours - grateful for everyone trying it out. Keep the feedback coming!

3abiton |next |previous [-]

How does this differ from nanochat?

theaniketgiri |root |parent |next [-]

Good question! I think you mean nanoGPT (Karpathy's minimal GPT implementation)?

Key differences:

nanoGPT:
- Minimal reference implementation (~300 lines)
- Educational code for understanding transformers
- Requires manual setup and configuration
- Great for learning the internals

create-llm:
- Production-ready scaffolding tool (like create-next-app)
- One command: npx create-llm → complete project ready
- Multiple templates (nano/tiny/small/base)
- Built-in validation (warns about overfitting, vocab mismatches)
- Includes tokenizer training, evaluation, deployment tools
- Auto-detects issues before you waste GPU time

Think of it as: nanoGPT is the reference, create-llm is the framework.

nanoGPT teaches you HOW it works. create-llm lets you BUILD with what you learned.

You can actually use nanoGPT's architecture in create-llm templates - they're complementary tools!

Grimblewald |root |parent |previous [-]

Unlike nanochat this is purely vibe-coded, improving vibes by 110%, with 112x more emoji. A key innovation that gets to the heart of the problem is that this project stores Python files as strings in TypeScript files to help improve workflows. I imagine the author solved this engineering challenge to overcome existing limitations\emdash more efficient, interpretable, and maintainable code\emdash in existing projects.

3abiton |root |parent |next [-]

> Unlike nanochat this is purely vibe-coded, improving vibes by 110%

Karpathy clearly said that it wasn't vibe coded. Apparently fixing GPT's bugs was more time-consuming than doing it himself.

theaniketgiri |root |parent |previous [-]

The Python-in-TS bit made me smile. But to clarify, it's a standard TypeScript CLI: no such hacks involved, just template-based generation.

Grimblewald |root |parent [-]

Ok, but there is no reason to bake it into the TS scripts. You could write the python scripts and package them using standard tools. In my experience only an LLM would do that, since it makes sense to generate the code and templates to insert in one go. However, if a human were to do it, the python scripts would be their own files and they would be bundled / read in as strings when/as required. A gigantic lump of text in a string makes no sense in human paradigms, even if it makes perfect sense for an LLM to do it. For humans it is incredibly hostile to update and maintain.

As a side note, without looking it up, on your device, what is the process for typing an emdash?
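The file-based alternative Grimblewald describes can be sketched in a few lines. This is an illustrative sketch only (the `scaffold` function and the `{{name}}` placeholder syntax are made up, not create-llm's actual code), shown in Python since the generated projects are Python:

```python
# Sketch of "templates as real files": keep each template as a proper
# .py file (lintable, highlightable) and read it in at scaffold time,
# instead of hard-coding it as a string inside the CLI source.
from pathlib import Path
import tempfile

def scaffold(template_dir: Path, project_dir: Path, substitutions: dict) -> list[str]:
    """Copy every template file into the new project, filling placeholders."""
    project_dir.mkdir(parents=True, exist_ok=True)
    written = []
    for template in sorted(template_dir.glob("*.py")):
        source = template.read_text()
        for key, value in substitutions.items():
            source = source.replace("{{" + key + "}}", value)
        (project_dir / template.name).write_text(source)
        written.append(template.name)
    return written

# Demo: create a throwaway template directory and scaffold from it.
with tempfile.TemporaryDirectory() as tmp:
    templates = Path(tmp) / "templates"
    templates.mkdir()
    (templates / "train.py").write_text('MODEL_NAME = "{{name}}"\n')
    files = scaffold(templates, Path(tmp) / "my-llm", {"name": "nano"})
    print(files)  # ['train.py']
```

The templates stay editable as ordinary Python files; the CLI only touches them as strings at generation time.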

endofreach |root |parent |next [-]

While you are not wrong that this isn't the wisest choice, I have seen it done countless times, long before LLMs arrived. So I don't think it's as clear a sign of intense LLM usage as you make it out to be.

theaniketgiri |root |parent |previous [-]

Fair point. I agree embedding code as strings isn't ideal. I did it mainly to make npx create-llm portable without needing a Python setup during scaffolding. Definitely open to improving that; happy to refactor if you have suggestions.

Grimblewald |root |parent [-]

That makes zero sense. Am I even speaking with a natural person right now?!? Your comments sound like LLM bullshit, and everything about this project reeks of it as well, from code to README.

freakynit |root |parent [-]

Bro is angry about using an LLM to write code instead of being happy about working code that makes it extremely easy for anyone to build their own nano GPTs.

LLMs are just the next evolution of tools that assist you with coding tasks... similar to w3schools => blogs => stackoverflow, and now => LLMs.

There's absolutely nothing wrong with using them. The problem is people who use them without reviewing their outputs.

darepublic |next |previous [-]

I don't quite understand how you get from this:

> I wanted to understand how these things work by building one myself.

Directly to this:

> What if training an LLM was as easy as npx create-next-app?

I mean that the second thought seems to be the opposite of the first (what if the entirety of training an LLM was abstracted behind a simple command).

theaniketgiri |root |parent [-]

Great question - I should've been clearer.

When I started, I wanted to understand LLMs deeply. But I hit a wall: tutorials were either "hello world" toys or "here's 500 lines of setup before you start."

What I needed was: "give me working code quickly, THEN let me modify and learn."

That's what create-llm does. It scaffolds the boilerplate (like create-next-app), so you can spend time learning the interesting parts:

- Why does vocab size matter? (adjust config, see results)
- What causes overfitting? (train on small data, see it happen)
- How do different architectures perform? (swap templates, compare)

It's "easy to start, deep to master": the abstraction gets you running in 60 seconds, then you dig into the code.
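The overfitting point can be made concrete. A toy version of such a check, with a made-up function name (my sketch, not create-llm's actual validator), might look like:

```python
# Toy overfitting check: flag the epoch where validation loss starts
# climbing while training loss keeps falling. Not create-llm's code.
def overfitting_epoch(train_losses, val_losses, patience=2):
    """Return the first epoch where val loss began rising `patience`
    epochs in a row while train loss kept falling, or None."""
    streak = 0
    for epoch in range(1, len(val_losses)):
        rising_val = val_losses[epoch] > val_losses[epoch - 1]
        falling_train = train_losses[epoch] < train_losses[epoch - 1]
        if rising_val and falling_train:
            streak += 1
            if streak >= patience:
                return epoch - patience + 1
        else:
            streak = 0
    return None

# Classic small-dataset shape: train loss keeps dropping, val loss turns around.
train = [4.0, 3.1, 2.4, 1.8, 1.3, 0.9]
val = [4.1, 3.4, 3.0, 3.2, 3.5, 3.9]
print(overfitting_epoch(train, val))  # 3
```

Training the nano template on a few kilobytes of text shows exactly this divergence within minutes.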

seg_lol |next |previous [-]

The blogpost is some of the best LLM greentext I have seen for targeting the hn hivemind. Everything about this is :chefs kiss:

theaniketgiri |root |parent [-]

Thanks! The blog post is just my honest journey - spent way too much time trying to understand LLMs, figured others had the same frustration.

If you try create-llm, would love your feedback. Always looking to make it better.

efilife |next |previous [-]

Two questions: how much of this project is AI generated, and how much of just the README is AI generated?

theaniketgiri |root |parent [-]

Mostly the repetitive stuff like README generation and pushing code with meaningful commit messages was handled by AI. The actual work and logic were done by me.

joshribakoff |root |parent [-]

What about the commit that added tens of thousands of lines of markdown claiming to be an AI summary?

Or the meaningful commit message of “.”

And the commit editing 1,000s of lines of python code mislabeled as a docs change?

theaniketgiri |root |parent [-]

Totally fair question!

Docs / Markdown: AI handled repetitive stuff like READMEs and summaries.

Core logic / Python: fully written by me.

Commit messages: some minimal ones just for quick iterations — the real work is in the code.

AI helped with boilerplate so I could ship faster; all functionality is hand-crafted.

joshribakoff |root |parent |next [-]

If the AI did the boilerplate that implies it was not fully written by you.

The “meaningful commit messages”, again, include a single period as the message for a single commit covering the entire Python portion of the codebase.

My question was rhetorical. Whether the AI did it or a human did, it burns credibility to refer to things that don’t exist (like “meaningful commit messages”)

teruakohatu |root |parent [-]

Hacker News is a better place when we don’t attack people sharing their work. Your point was made.

Well done to the author for shipping code. I look forward to trying it out.

theaniketgiri |root |parent |next [-]

Thanks for the support!

And yeah, the commit history is messy - I was learning and shipping fast. Not perfect, but the tool works and people are using it.

Let me know if you have any questions when you try it!

Grimblewald |root |parent |previous [-]

> for sharing their work

If it was their work your point would hold.

theaniketgiri |root |parent [-]

To clarify the AI question once and for all:

What AI did:

- Generated README templates (boilerplate markdown)
- Suggested commit messages (I didn't always edit them)
- Helped with documentation structure

What I wrote:

- All Python training logic (train.py, trainer.py, callbacks)
- All model architectures (gpt.py, tiny.py, small.py, etc.)
- Tokenizer integration
- Data pipeline
- CLI scaffolding (

biinjo |root |parent |next [-]

Don’t feed the trolls. This was your idea and you made something that works. Who cares if it's (partially) done by AI? Whoever takes offense at people using AI for coding is just having a hard time adapting to the current state of affairs.

It’s here, it’s happening. Try the project; if you like it, that's great, and if you don't, then move on.

And if you don’t intend to try it for whatever reason, that’s fine as well, but don’t be salty to the OP for sharing their passion project.

Grimblewald |root |parent |next [-]

It isn't trolling if there is a genuine concern. It isn't about the fact that it is all AI generated; I don't care about that personally. I do, however, care if someone lies about provenance.

theaniketgiri |root |parent |previous [-]

Thanks for the support! Appreciate you trying it out. Let me know if you hit any issues or have ideas for improvements.

Grimblewald |root |parent |previous [-]

I find that hard to believe. You chose to put emojis into comments returned during script execution? You chose to store Python scripts as strings in a TypeScript file rather than as Python script files? You're even aware that there are no Python files in the project, just strings in TypeScript files that get written out as Python files, and you still refer to Python files by filename in your comment like you expect them to exist in "your" codebase? You're competent enough to put together a project like this, but then choose if-else for something solved better with match-case? (LLMs do that, since match-case is a recent addition to Python and so LLMs avoid using it, but humans rarely fail to use it.)

Your Medium article, your *.md files, and most of your code ALL look LLM generated, which isn't so much a problem in my book, but lying about it is a huge problem.

theaniketgiri |root |parent [-]

Fair points, and I get where you're coming from. I've been very open that AI helped with repetitive parts (docs, boilerplate, commit messages). The functional code (training logic, model architecture, CLI) was written and tested by me. Some design choices, like storing scripts as strings or using if-else, were just pragmatic decisions made while iterating fast, not signs of AI authorship. Either way, the project is open source; you can inspect, critique, or even improve any part of it. I'm happy to take constructive feedback. My goal is just to make LLM training more accessible.

Grimblewald |root |parent [-]

If you're iterating fast, then working in Python files, where you get the benefits of linting, syntax highlighting, etc., and then reading the files in as strings or bundling them as required is MUCH faster than blindly mucking around in strings. As a side note, how fast do you type? Some of your multi-hundred-line commits land rather rapidly back to back.

The only time it would be faster to iterate with your scripts hard-coded into your TS files would be if an LLM is doing your iterating for you.

Why would anyone invest time and effort in a project whose author lies through their teeth about provenance? Why use a project that contributes, as it appears, nothing an LLM can't just give me? Why use this when I could just ask an LLM for the same thing directly in Python, without dicking around with npm?

computerthings |root |parent |previous [-]

[dead]

kk58 |previous [-]

Does this work on a Mac?

theaniketgiri |root |parent [-]

Yep, works fine on a Mac. Try the nano or tiny templates if you want quicker training runs.