Hacker News
Retrofitting JIT Compilers into C Interpreters
vkazanov
Metatracing ones are kind of an interesting twist on the original idea.
> So it takes normal interpreted code and jits it somehow?
Anyway, they use a patched LLVM to JIT-compile not just interpreted code but the main loop of the bytecode interpreter. Like, the C implementation itself.
> But you have to modify the source code of your program in some way?
Generally speaking, no, that's not the goal. All JITs try to support as much of the target language as possible, though some do limit the subset of features they support.
moardiggin
    while (true) {
      __yk_tracebasicblock(0);
      Instruction i = code[pc];
      switch (GET_OPCODE(i)) {
        case OP_LOOKUP:
          __yk_tracebasicblock(1);
          push(lookup(GET_OPVAL(i)));
          pc++; break;
        ...
        case OP_INT:
          push(yk_promote(constant_pool[GET_OPVAL(i)]));
          pc++; break;
      }
    }
Knowledge of tracing compilers, LLVM and SSA is needed by the user.

> added about 400LoC to PUC Lua, and changed under 50LoC
Lua 5.5.0 has 32,106 lines of code, including comments and empty lines, so those changes amount to about 1.4% of the code base. And then there are the code changes in the yk LLVM fork that you'd have to maintain, which I'm guessing are a few orders of magnitude larger.
If this project could detect the interpreter's hotspots itself and completely automate the procedure, that would be great.
ltratt
I don't think that's realistic; or, at least, not if you want good performance. You need to use quite a bit of knowledge about your context to know when best to add optimisation hints. That said, it's not impossible to imagine an LLM working this out, if not today, then perhaps in the not-too-distant future! But that's above my pay grade.
i_don_t_know
It integrated with "function panels", which were our attempt at documenting our library functions (see the second link below). You could enter values, declare variables, etc., and then run the function panel. Behind the scenes, the code was inserted into the interactive window and run, and the results were added back to the function panel.
These also worked while suspended on a breakpoint in your project, so they were available while debugging.
My understanding was that these features were quite popular with customers. They also came in handy internally when we wrote examples and did manual testing.
https://www.ni.com/docs/de-DE/bundle/labwindows-cvi/page/cvi...
https://www.ni.com/docs/de-DE/bundle/labwindows-cvi/page/cvi...
https://irkr.fei.tuke.sk/PPpET/_materialy/CVI/Quick_manual.p...
pjmlp
|root
|parent
|next
[-]
Yeah, I find this valuable regardless of the programming language. Ideally the toolchain should be a mix of interpreter, JIT, and AOT, so you can cherry-pick depending on the deployment use case.
Naturally, for dynamic languages pure AOT is not really worth it, although a JIT cache is helpful as an alternative.
djwatson24
However, it's not without downsides. It sounds like average code is only 2x faster than PUC Lua, vs. LuaJIT, which is often 5-10x faster.
sgbeal
That's an abrasive question, but i dare say that we all do. It's our only constant point of reference.
> Have you written a parser or interpreter?
i have written many parsers, several parser generators, and a handful of programming languages. This article, however, covers a whole other level, way over my head (or well beyond any of my ambitions, in any case).
Pics or it didn't happen: fossil.wanderinghorse.net/r/cwal