Hacker News
Show HN: I built a tool that watches webpages and exposes changes as RSS
It watches webpages for changes and shows the result as a diff. The part I think HN might find interesting is that it can monitor a specific element on a page, not just the whole page, and it can expose changes as RSS feeds.
So instead of tracking an entire noisy page, you can watch just a price, a stock status, a headline, or a specific content block. When it changes, you can inspect the diff, browse the snapshot history, or follow the updates in an RSS reader.
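To make the element-level part concrete, here's a rough sketch of what a single watch boils down to (not the extension's actual code; the URL, selector, and polling interval are made up):

    import time

    import requests
    from bs4 import BeautifulSoup

    URL = "https://example.com/product"  # made-up page
    SELECTOR = "span.price"              # made-up element

    def snapshot() -> str:
        # Fetch the page and return the text of just the watched element.
        html = requests.get(URL, timeout=10).text
        node = BeautifulSoup(html, "html.parser").select_one(SELECTOR)
        return node.get_text(strip=True) if node else ""

    last = None
    while True:
        current = snapshot()
        if last is not None and current != last:
            print(f"element changed: {current!r}")  # the real tool records a diff and notifies
        last = current
        time.sleep(600)  # poll every 10 minutes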
It’s a Chrome/Firefox extension plus a web dashboard.
Main features:
- Element picker for tracking a specific part of a page
- Diff view plus full snapshot timeline
- RSS feeds per watch, per tag, or across all watches (sketched after this list)
- MCP server for Claude, Cursor, and other AI agents
- Browser push, Email, and Telegram notifications
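On the RSS side, each watch conceptually maps to a feed whose items are change events. A minimal sketch of that mapping (the field names are illustrative, not the real schema):

    from dataclasses import dataclass
    from datetime import datetime
    from email.utils import format_datetime
    from xml.sax.saxutils import escape

    @dataclass
    class ChangeEvent:
        url: str
        summary: str          # e.g. a short diff excerpt
        changed_at: datetime

    def to_rss(watch_title: str, events: list[ChangeEvent]) -> str:
        # Render one RSS 2.0 feed for a single watch.
        items = "".join(
            "<item>"
            f"<title>{escape(watch_title)} changed</title>"
            f"<link>{escape(e.url)}</link>"
            f"<description>{escape(e.summary)}</description>"
            f"<pubDate>{format_datetime(e.changed_at)}</pubDate>"
            "</item>"
            for e in events
        )
        return (
            '<?xml version="1.0"?><rss version="2.0"><channel>'
            f"<title>{escape(watch_title)}</title>{items}</channel></rss>"
        )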
Chrome: https://chromewebstore.google.com/detail/site-spy/jeapcpanag...
Firefox: https://addons.mozilla.org/en-GB/firefox/addon/site-spy/
Docs: https://docs.sitespy.app
I’d especially love feedback on two things:
- Is RSS actually a useful interface for this, or do most people just want direct alerts?
- Does element-level tracking feel meaningfully better than full-page monitoring?
reconnecting
Did you solve this?
enoint
1. RSS is just fine for updates. Given the importance of your visa use-case, were you thinking of push notifications?
2. Your competition does element-level tracking. Maybe they use XPath?
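For example, something like this with lxml (URL and expression made up):

    import requests
    from lxml import html

    # XPath addresses the element structurally rather than via a CSS selector.
    doc = html.fromstring(requests.get("https://example.com/product", timeout=10).text)
    nodes = doc.xpath('//span[@class="price"]/text()')
    print(nodes[0].strip() if nodes else "element not found")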
vkuprin
And yeah, element-level tracking isn't a brand-new idea by itself. The thing I wanted to improve was making it easy to pick the exact part of a page you care about and then inspect the change via diffs, history, or RSS, instead of just getting a generic "page changed" notification.
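The diff part is nothing exotic, just text diffing over successive snapshots of the watched element. Roughly (a sketch; the sample values are made up):

    import difflib

    old = "Price: $49.99\nIn stock"   # previous snapshot of the element
    new = "Price: $39.99\nIn stock"   # current snapshot

    for line in difflib.unified_diff(
        old.splitlines(), new.splitlines(),
        fromfile="previous", tofile="current", lineterm="",
    ):
        print(line)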
hinkley
Essentially, instead of having a bunch of search engines and AI spamming your site, the idea was that they would get a feed: you would scan your own website.
As crawlers grew from an occasional visitor to an actual problem (an inordinate percentage of all consumer traffic at the SaaS I worked for was bots rather than organic traffic, and it would have been more without throttling), I keep wondering why we haven't done this.
Google has already solved the problem of people lying about their content, because with RSS feeds or user agent sniffing you could still bear false witness about your site's content and purpose. But you'd only have to be scanned when there was something to see. And really, you could play games with time delays on the feed to smear out bot traffic over the day if you wanted.
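Concretely, a site could hash its own pages on a schedule and publish only what changed, something like this sketch (the directory layout and state file are made up; a real site would hook into its CMS):

    import hashlib
    import json
    from pathlib import Path

    STATE = Path("page_hashes.json")   # made-up state file
    SITE = Path("public")              # made-up static-site output directory

    old = json.loads(STATE.read_text()) if STATE.exists() else {}
    new, changed = {}, []
    for page in SITE.rglob("*.html"):
        digest = hashlib.sha256(page.read_bytes()).hexdigest()
        new[str(page)] = digest
        if old.get(str(page)) != digest:
            changed.append(str(page))  # these pages go into the published feed
    STATE.write_text(json.dumps(new))
    print("pages to list in the feed:", changed)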
bananaflag
This is something that existed in the past and I used successfully, but services like this tend to disappear