Hacker News

Show HN: Rowboat – AI coworker that turns your work into a knowledge graph (OSS)

86 points by segmenta ago | 25 comments
Hi HN,

AI agents that can run tools on your machine are powerful for knowledge work, but they’re only as useful as the context they have. Rowboat is an open-source, local-first app that turns your work into a living knowledge graph (stored as plain Markdown with backlinks) and uses it to accomplish tasks on your computer.

For example, you can say "Build me a deck about our next quarter roadmap." Rowboat pulls priorities and commitments from your graph, loads a presentation skill, and exports a PDF.

Our repo is https://github.com/rowboatlabs/rowboat, and there’s a demo video here: https://www.youtube.com/watch?v=5AWoGo-L16I

Rowboat has two parts:

(1) A living context graph: Rowboat connects to sources like Gmail and meeting notes like Granola and Fireflies, extracts decisions, commitments, deadlines, and relationships, and writes them locally as linked and editable Markdown files (Obsidian-style), organized around people, projects, and topics. As new conversations happen (including voice memos), related notes update automatically. If a deadline changes in a standup, it links back to the original commitment and updates it.
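To make "linked and editable Markdown" concrete, here is a purely hypothetical sketch of what one such note could look like (the structure, names, and file titles are invented for illustration, not Rowboat's actual output):

```markdown
# Project: Q2 Launch

Deadline: **May 9** (moved from May 2, per [[2025-04-14 Standup]])

## Commitments
- [[Sarah Chen]] to deliver pricing copy by May 1 ([[Email - Pricing thread]])

## Related
[[Roadmap]] · [[Acme Corp]]
```

The `[[...]]` spans are Obsidian-style backlinks, so the file stays readable and editable in any Markdown editor.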

(2) A local assistant: On top of that graph, Rowboat includes an agent with local shell access and MCP support, so it can use your existing context to actually do work on your machine. It can act on demand or run scheduled background tasks. Example: “Prep me for my meeting with John and create a short voice brief.” It pulls relevant context from your graph and can generate an audio note via an MCP tool like ElevenLabs.

Why not just search transcripts? Passing gigabytes of email, docs, and calls directly to an AI agent is slow and lossy. And search only answers the questions you think to ask. A system that accumulates context over time can track decisions, commitments, and relationships across conversations, and surface patterns you didn't know to look for.

Rowboat is Apache-2.0 licensed, works with any LLM (including local ones), and stores all data locally as Markdown you can read, edit, or delete at any time.

Our previous startup was acquired by Coinbase, where part of my work involved graph neural networks. We're excited to be working with graph-based systems again. Work memory feels like the missing layer for agents.

We’d love to hear your thoughts and welcome contributions!

wyattjoh |next [-]

It would be fantastic if this supported email and calendar providers that aren't Google. Supporting protocols like IMAP or JMAP alongside CalDAV would be a great step, and support for open-source note-taking apps like Hyprnote would be neat as well.

ramnique |root |parent |next [-]

Agreed 100%, and we'll slot these into our roadmap. We started with Google because it was the fastest. Will definitely look into Hyprnote integration as well.

asciii |root |parent |previous [-]

I second this, as a big Fastmail user

mchusma |next |previous [-]

This is cool! A couple of pieces of feedback, as I'm looking for something in this family of things but haven't found the perfect fit:

1. I have multiple inboxes, and want to have it work across all of them.

2. I would really like skills and MCPs to be visible and understandable. Craft Agents does a nice job of segmenting by workspace and making skills and MCPs all visible, so I can understand exactly what my agent is set up to do (no black boxes).

3. I want scheduled runs. I don't need push; I actually kind of prefer the reliability of scheduled, but push would be fine too. In particular, I want to: a. After each Granola meeting, save it in Obsidian (I did this in Craft Code, for example, but I prefer your more built-in approach here; this is nice). b. On intervals, check my emails. I want to give it information on who/what is important to me, and have it ping me. E.g., billing on Anthropic failed: ping me. c. I also want it to email back and forth to schedule approved categories of things on request. Just get it on my calendar (share Calendly, send times, etc.). d. I want junk etc. archived. e. For important things, update my knowledge graph (ignore spam, etc.).

4. Tying into a to-do list that actually updates based on priorities, and suggests auto-archiving things, etc., would be good.

In practice, I connected Gmail and asked it: "can you archive emails that have an unsubscribe link in them (that are not currently archived)?" and it got stuck on "I'll check what MCP tools are available for email operations first." But I connected Gmail through your interface, and I don't see anything in settings about it also having configured the MCP. I also looked at the knowledge graph, and it had 20 entities, NONE of which I had any idea what they were. I'm guessing it's just putting people trying to spam me into the contacts? It didn't finish running, but I didn't want to burn endless tokens trying to see if it would find actual people I care about, so I shut it down. One "proxy" for "people I care about" might be "people I send emails to"? I can see how this is a hard problem. I also think that, regardless, I want things more transparent. So for the moment, I'm sticking with Craft Code for this, even though it is missing some major things, but at least it's clearer what it is: it's Claude Code, with a nice UI.

Hope this was helpful. I know there are multiple people working on things in this family, and it will probably be "largely solved" by the end of 2026, and then we will want it to do the next thing! Good luck, I will watch for updates, and these are some nice ideas!

segmenta |root |parent [-]

Really appreciate the detailed feedback. There are a bunch of great features you're pointing out that are on our roadmap (we'll add what's missing). The agent can set up tasks on a schedule and help manage them. You can try a prompt like 'Can you schedule a background task xyz to run every morning ...'. Background tasks show up in the UI once the assistant schedules them. However, you might have to connect the necessary MCP tools in your case.

On Gmail actions - we currently don't take write actions on inboxes, like archiving or categorizing emails. The Google connection is read-only and used purely to build the knowledge graph. We're working on adding write actions, but we're being careful about how we implement them. That's also probably why the agent was confused and went looking for an MCP to accomplish the same job.

On noise in the knowledge graph: this is something we're actively tuning. We currently have different note-strictness levels that are auto-inferred based on inbox volume (configurable in ~/.rowboat/config/note-creation.json); they control what qualifies as a new node. Higher strictness prevents most emails from creating new entities and instead only updates existing ones. That said, this needs to be surfaced in the product and better calibrated. Using "people I send emails to" as a proxy for importance is a really good idea.
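For readers curious what such a file might hold, here is a purely hypothetical example; the field names below are guesses for illustration, not Rowboat's actual schema, so check the file itself for the real keys:

```json
{
  "strictness": "high",
  "minMentionsForNewEntity": 3
}
```

Under the behavior described above, a higher strictness would mostly update existing entity notes rather than create new nodes.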

alansaber |next |previous [-]

Big fan of the idea. 1: is the context graph tweakable in any way 2: how does the user handle/approve background tasks? Otherwise cool and good job!

segmenta |root |parent [-]

Thanks!

All the knowledge is stored in Markdown files on disk. You can edit them through the Rowboat UI (including the backlinks) or with any editor of your choice. You can use the built-in AI to edit them as well.

On background tasks - there is an assistant skill that lets it schedule and manage them. For now, background tasks cannot execute shell commands on the system. They can execute built-in file-handling tools and MCP tools if connected. We are adding an approval system for background tasks as well.

There are three types of schedules: (a) cron, (b) schedule in a window (run at most once every morning between 8-10am), (c) run once at x-time. There is also a manual enable/disable (kill switch) in the UI.
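The three schedule types above can be sketched roughly as follows; the class and field names here are our guesses for illustration, not Rowboat's actual implementation:

```python
from dataclasses import dataclass
from datetime import datetime, time
from typing import Optional

@dataclass
class CronSchedule:
    expr: str  # standard cron expression, e.g. "0 8 * * *"

@dataclass
class WindowSchedule:
    start: time                          # earliest allowed start, e.g. 08:00
    end: time                            # latest allowed start, e.g. 10:00
    last_run_date: Optional[str] = None  # enforces "at most once" per day

    def should_run(self, now: datetime) -> bool:
        in_window = self.start <= now.time() <= self.end
        return in_window and self.last_run_date != now.date().isoformat()

@dataclass
class OnceSchedule:
    at: datetime
    done: bool = False

    def should_run(self, now: datetime) -> bool:
        return not self.done and now >= self.at
```

The window type differs from plain cron in that it tracks whether it has already fired today, so a run anywhere in the 8-10am window suppresses further runs until the next day.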

nkmnz |next |previous [-]

How does this differ from https://github.com/getzep/graphiti ?

segmenta |root |parent [-]

Graphiti is primarily focused on extracting and organizing structured facts into a knowledge graph. Rowboat is more focused on day-to-day work. We organize the graph around people, projects, organizations, and topics.

One design choice we made was to make each node human-readable and editable. For example, a project note contains a clear summary of its current state derived from conversations and tasks across tools like Gmail or Granola. It’s stored as plain Markdown with Obsidian-style backlinks so the user can read, understand, and edit it directly.

delichon |next |previous [-]

How do you handle entity clustering/deduplication?

segmenta |root |parent [-]

We use a two-layer approach.

The raw sync layer (Gmail, calendar, transcripts, etc.) is idempotent and file-based. Each thread, event, or transcript is stored as its own Markdown file keyed by the source ID, and we track sync state to avoid re-ingesting the same item. That layer is append-only and not deduplicated.

Entity consolidation happens in a separate graph-building step. An LLM processes batches of those raw files along with an index of existing entities (people, orgs, projects and their aliases). Instead of relying on string matching, the model decides whether a mention like “Sarah” maps to an existing “Sarah Chen” node or represents a new entity, and then either updates the existing note or creates a new one.
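A minimal sketch of the two layers described above, with the LLM consolidation step elided; the store layout and function names are illustrative, not Rowboat's actual code:

```python
# Layer 1: idempotent sync keyed by the source ID. Each thread, event,
# or transcript becomes one Markdown "file" (an in-memory dict here),
# and sync state prevents re-ingesting the same item.
def sync_item(source_id: str, body: str, store: dict, state: set) -> bool:
    if source_id in state:           # already ingested; skip re-ingestion
        return False
    store[f"{source_id}.md"] = body  # one Markdown file per item
    state.add(source_id)
    return True

# Layer 2 input: a lightweight index of existing entities (names and
# aliases) handed to the model so it can decide whether a mention like
# "Sarah" maps to an existing "Sarah Chen" node or is a new entity.
def entity_index(graph: dict) -> list:
    return [{"name": name, "aliases": meta.get("aliases", [])}
            for name, meta in graph.items()]
```

The key property of layer 1 is that re-running a sync is a no-op for already-seen IDs, which keeps the raw layer append-only.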

delichon |root |parent [-]

> the model decides whether a mention like “Sarah” maps to an existing “Sarah Chen” node or represents a new entity, and then either updates the existing note or creates a new one.

Thanks! How much context does the model get for the consolidation step? Just the immediate file? Related files? The existing knowledge graph? If the graph, does it need to be multi-pass?

segmenta |root |parent [-]

The graph building agent processes the raw files (like emails) in a batch. It gets two things: a lightweight index of the entire knowledge graph, and the raw source files for the current batch being processed.

Before each batch, we rebuild an index of all existing entities (people, orgs, projects, topics) including aliases and key metadata. That index plus the batch’s raw content goes into the prompt. The agent also has tool access to read full notes or search for entity mentions in existing knowledge if it needs more detail than what’s in the index.

It’s effectively multi-pass: we process in batches and rebuild the index between batches, so later batches see entities created earlier. That keeps context manageable while still letting the graph converge over time.
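The multi-pass loop described above can be sketched as follows; `extract_entities` stands in for the LLM step and is a caller-supplied placeholder, and everything here is illustrative rather than Rowboat's actual code:

```python
def build_graph(raw_files, extract_entities, batch_size=10):
    """Process raw files in batches, rebuilding the entity index between
    batches so later batches see entities created by earlier ones."""
    graph = {}  # entity name -> note metadata
    for i in range(0, len(raw_files), batch_size):
        batch = raw_files[i:i + batch_size]
        # Rebuilt before every batch: names + aliases of all known entities.
        index = [{"name": n, "aliases": m.get("aliases", [])}
                 for n, m in graph.items()]
        for name, meta in extract_entities(batch, index):
            if name in graph:
                graph[name].update(meta)  # update the existing note
            else:
                graph[name] = dict(meta)  # create a new entity
    return graph
```

Because the index is rebuilt per batch rather than per file, context stays bounded while the graph still converges as later batches resolve mentions against earlier entities.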

haolez |next |previous [-]

Cool idea. I use Logseq with some custom scripts and plugins for that. Works very well with today's models' capabilities.

segmenta |root |parent [-]

Thanks. Obsidian and Logseq were definitely an inspiration while building this. What we’re trying to explore is pushing that a bit further. Instead of manually curating the graph and then querying it, the system continuously updates the graph as work happens and lets the agent operate directly on that structure.

Would love to know what kind of scripts or plugins you’re using in Logseq, and what you’re primarily using it for.

haolez |root |parent [-]

My point was to say that your idea should work because today's models are capable enough.

If I get some time later today, I'll post my scripts.

btbuildem |next |previous [-]

How do you manage scope creep (ie, context size), and contradictory information in the context?

segmenta |root |parent [-]

Good question. We don’t pass the entire graph into the model. The graph acts as an index over structured notes. The assistant retrieves only the relevant notes by following the graph. That keeps context size bounded and avoids dumping raw history into the model.

For contradictory or stale information, since these are based on emails and conversations, we use the timestamp of the conversation to determine the latest information when updating the corresponding note. The agent operates on that current state.

That said, handling contradictions more explicitly is something we’re thinking about. For example, flagging conflicting updates for the user to manually review and resolve. Appreciate you raising it.
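The timestamp-based, latest-wins rule described above can be sketched minimally like this; the field layout is illustrative only:

```python
from datetime import datetime

def apply_update(note: dict, field: str, value, ts: datetime) -> dict:
    """Overwrite a note field only if the update comes from a
    conversation at least as recent as the one currently recorded."""
    current = note.get(field)
    if current is None or ts >= current["ts"]:
        note[field] = {"value": value, "ts": ts}
    return note
```

A stale email processed out of order cannot clobber a newer value, because its conversation timestamp loses the comparison.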

delichon |root |parent [-]

> That said, handling contradictions more explicitly is something we’re thinking about.

That's a great idea. The inconsistencies in a given graph are just where attention is needed. Like an internal semantic diff. If you aim it at values it becomes a hypocrisy or moral complexity detector.

segmenta |root |parent [-]

Interesting framing! We’ve mostly been thinking of inconsistencies as signals that something was missed by the system, but treating them as attention points makes sense and could actually help build trust.

einpoklum |next |previous [-]

> We’d love to hear your thoughts

Google Mail should not be used, nor its use encouraged. Nor should you encourage the use of LLMs of large corporations which suck in user data for mining, analysis, and surveillance purposes.

I would also be worried about energy use, and would not trust an "agent" to have shell access, that sounds rather unsafe.

rezmoss |next |previous [-]

This makes a lot of sense. "Work memory" feels like what agents have been missing.

segmenta |root |parent [-]

Thanks! Agent capabilities are getting commoditized fast. The differentiator is context. If you had a human assistant, you'd want them sitting in on all your meetings and reading your emails before they could actually be useful. That's what we're trying to build.

limonstublechew |next |previous [-]

[dead]

Curiositiy |previous [-]

Fucking hate software dorks turning simple web searches into a polluted, unrelated results list, thanks to their stupid, unimaginative & completely unrelated one-word "product" names.

delichon |root |parent [-]

Dear software dorks turning raw text searches into meaningful, relevant linked data: rock on and thank you for your service.