What we learned building 100 API integrations with OpenCode
neya
Here is how I solved this problem:
1. There is already a knowledge base of almost all APIs (the ones useful to the average Joe, anyway) in either Swagger.json or Postman.json format. Which format you prefer is entirely up to you.
2. Write a generator (I use Elixir) that detects which of the formats from step 1 a spec uses and generates your API modules. There are plenty of code generators out there, or you can write your own with a simple File.write! (rough sketch after this list).
3. In the rare occurrence that you come across a shitty API with only scattered documentation on outdated static pages, only then use an LLM + browser to convert it into one of the formats listed in step 1 (Swagger.json or Postman.json).
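Rough sketch of step 2, assuming the spec is already in OpenAPI/Swagger JSON. Jason for JSON and Req for HTTP are my picks here; the module names are made up:

    defmodule ApiGen do
      # Read a Swagger/OpenAPI spec and emit one module per operation.
      def generate(spec_path, out_dir) do
        spec = spec_path |> File.read!() |> Jason.decode!()

        for {path, methods} <- spec["paths"], {verb, op} <- methods do
          mod = Macro.camelize(op["operationId"] || "operation")

          source = """
          defmodule MyApp.Api.#{mod} do
            @moduledoc "#{op["summary"]}"

            def request(client, params \\\\ %{}) do
              Req.request(client, method: :#{verb}, url: "#{path}", params: params)
            end
          end
          """

          File.write!(Path.join(out_dir, Macro.underscore(mod) <> ".ex"), source)
        end
      end
    end

A real generator would also handle path/query parameters and auth, but the whole thing stays deterministic and testable.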
Throwing an LLM at everything is just inefficient lazy work.
Falimonda
The post provides a lot of good food for thought based on experience, which is exactly what the title conveys.
gchamonlive
> We chose the second because we didn’t want to overfit our assumptions.
> Some of it went better than expected.
> But they also broke in very unexpected ways, sometimes spectacularly.
You clearly missed the whole point of the article, which is to experiment with agents and explore the limits of having them run wild.
Efficient token use and deciding which tasks to delegate are secondary to the experiment. Optimizing these is in any case premature if you don't understand the limits of the models.
neya
I think you completely missed the point - they built a product purely using agents and deployed it to production for others to use. Read what the product actually does first.
gchamonlive
neya
What evidence? There is zero evidence. It's deployed to production, but that doesn't mean it works correctly or is free of bugs - which is exactly my point, and why you use algorithms for these types of things. They're testable, repeatable, and scalable.
With LLM slop it's just that - slop.
groby_b
What is the value add of having the AI rebuild code over and over, individually for each project using it?
bilekas
I hope this isn't their business model.
j16sdiz
It takes a lot of reading and testing before integrating it into your project.
rguldener
The news here is the AI reading the API docs, assembling requests, and iterating on them until they work as expected.
This sounds simple, but it is time-consuming and error-prone for humans to do.
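Roughly, the loop is: send the request; if it fails, feed the error back to the model and retry. A sketch of the shape (llm_revise/2 is a hypothetical stand-in for the model call, not Nango's actual code; Req for HTTP):

    defmodule IterateUntilGreen do
      @max_attempts 5

      # Send the request; on failure, let the model revise it and retry.
      def run(request, attempt \\ 1) do
        case Req.request(request) do
          {:ok, %Req.Response{status: status} = resp} when status in 200..299 ->
            {:ok, resp}

          {_, failure} when attempt < @max_attempts ->
            request
            |> llm_revise(failure)
            |> run(attempt + 1)

          {_, failure} ->
            {:error, failure}
        end
      end

      # Hypothetical: ask the model to patch the request given the
      # error or the non-2xx response body.
      defp llm_revise(request, _failure), do: request
    end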
mellosouls
https://nango.dev/docs/guides/platform/free-self-hosting/con...
Of course that may well be my misreading, but it seems important in the context of the claim and the analysis using OpenCode.
Perhaps they could clarify and/or revisit the docs.
yojo
They claim the agents reliably generated a week’s worth of dev work for $20 in tokens, then go on to list all the failure modes and debugging they had to do to get it to work, and conclude with “Agents are not ready to autonomously ship every integration end-to-end.”
Generally a good write-up that matches my experience (experts can build systems that guide agents to do useful work, with review), but the first section is pretty misleading.
cpursley
Falimonda
The idea of assigning a code-owner agent per directory is really interesting. A2A (read: message passing and self-updating AGENTS.md files) might really shine there in some way.
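For example, routing a change to its owner could be as simple as walking up from the changed file to the nearest AGENTS.md (sketch; all names made up):

    defmodule OwnerAgent do
      # Resolve the owning agent for a file: the closest AGENTS.md wins.
      def find(path), do: nearest_agents_md(Path.dirname(path))

      defp nearest_agents_md(dir) do
        candidate = Path.join(dir, "AGENTS.md")

        cond do
          File.exists?(candidate) -> {:ok, candidate}
          dir == Path.dirname(dir) -> {:error, :no_owner}
          true -> nearest_agents_md(Path.dirname(dir))
        end
      end
    end

So OwnerAgent.find("lib/billing/invoice.ex") would hand the change to whatever lib/billing/AGENTS.md describes, and each agent could append what it learns back to its own file.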