Hacker News

Launch HN: Prism (YC X25) – Workspace and API to generate and edit videos

23 points by aliu327 | 13 comments
Hey HN — we’re Rajit, Land, and Alex. We’re building Prism (https://www.prismvideos.com), an AI video creation platform and API.

Here’s a quick demo of how you can remix any video with Prism: https://youtu.be/0eez_2DnayI

Here’s a quick demo of how you can automate UGC-style ads with Openclaw + Prism: https://www.youtube.com/watch?v=5dWaD23qnro

Accompanying skill.md file: https://docs.google.com/document/d/1lIskVljW1OqbkXFyXeLHRsfM...

Making an AI video today usually means stitching together a dozen tools (image generation, image-to-video, upscalers, lip-sync, voiceover, and an editor). Every step turns into export/import and file juggling, so assets end up scattered across tabs and local storage, and iterating on a multi-scene video is slow.
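The "glue work" described above can be sketched as a chain of hand-offs, where every stage is a separate tool and every arrow is an export/import. The stage names and paths below are hypothetical stand-ins, just to make the round-tripping concrete:

```python
# Hypothetical sketch of the manual multi-tool pipeline described above.
# Each stage stands in for a separate tool; every hand-off is a file
# that gets exported from one tool and re-imported into the next.

def run_stage(name, in_path):
    """Stand-in for one tool: consumes a file path, emits a new one."""
    return f"{in_path}.{name}"

def manual_pipeline(prompt):
    path = run_stage("image", prompt)      # image generation
    path = run_stage("video", path)        # image-to-video
    path = run_stage("upscaled", path)     # upscaler
    path = run_stage("lipsync", path)      # lip-sync
    path = run_stage("voiceover", path)    # voiceover
    path = run_stage("edit", path)         # timeline editor
    return path

# Six stages means six manual download/upload round-trips per clip,
# multiplied by every scene and every iteration.
print(manual_pipeline("scene1"))
```

Keeping generation and assembly in one workspace collapses those hand-offs: swapping one clip's model or settings does not force a re-export of the whole chain.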

Prism keeps the workflow in one place: you generate assets (images/video clips) and assemble them directly in a timeline editor without downloading files between tools. Practically, that means you can try different models (Kling, Veo, Sora, Hailuo, etc.) and settings for a single clip, swap it on the timeline, and keep iterating without re-exporting and rebuilding the edit elsewhere.

We also support templates and one-click asset recreation, so you can reuse workflows from us or the community instead of rebuilding each asset from scratch. Those templates are exposed through our API, letting your AI agents discover templates in our catalog, supply the required inputs, and generate videos in a repeatable way without manually stitching the workflow together.
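The agent flow described above — discover a template, supply its required inputs, request a video — can be sketched roughly as follows. Note that the template id, field names, and request shape here are all invented for illustration; the real interface is whatever Prism's API docs specify:

```python
# Hypothetical sketch of an agent driving a template-based video API.
# Template ids and field names are invented; consult the real API docs.
import json

def build_generation_request(template, inputs):
    """Check the agent supplied every input the template requires,
    then assemble the request payload."""
    missing = [k for k in template["required_inputs"] if k not in inputs]
    if missing:
        raise ValueError(f"missing inputs: {missing}")
    return {"template_id": template["id"], "inputs": inputs}

# A template record as an agent might see it in a catalog listing.
template = {
    "id": "ugc-product-ad",  # hypothetical template id
    "required_inputs": ["product_name", "product_image_url"],
}

payload = build_generation_request(template, {
    "product_name": "Acme Mug",
    "product_image_url": "https://example.com/mug.png",
})
print(json.dumps(payload))
```

An agent would then POST this payload to the generation endpoint and poll for the finished video; validating required inputs up front is what makes the workflow repeatable without a human stitching it together.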

We built Prism because we were making AI videos ourselves and were unsatisfied with the available tools. We kept losing time to repetitive “glue work”: constantly downloading files, keeping track of prompts/versions, and stitching clips together in separate video-editing software. We’re trying to make the boring parts of multi-step AI video creation less manual, so users can generate → review → edit → assemble → export, all inside one platform.

Pricing is based on usage credits, with a free tier (100 credits/month) and free models, so you can try it without providing a credit card: https://prismvideos.com.

We’d love to hear from people who’ve tried making AI videos: where does your workflow break, what parts are the most tedious, and what do you wish video creation tools on the market could do?

spacecrafter3d

Hey, I've been looking to experiment with Kling 3.0. How does this compare to Higgsfield?

rajit

We support Kling 3.0 on our platform, similar to Higgsfield. You can see some presets here: https://prismvideos.com/workspace/templates.

spacecrafter3d

What are some reasons I should consider Prism over Higgsfield?

tcbrah

I've tried like 5 of these all-in-one AI video platforms and always end up back at my own script. The problem isn't the "glue work" between tools honestly - that's like 20 lines of Python. The problem is when the platform abstracts over the model APIs so much that you can't access new params when Kling or whoever ships an update. How quickly do y'all expose new model features when providers update? That's the make-or-break thing IMO.

rajit

We access models through Fal (https://fal.ai). We offered day-0 support for Kling 3.0, and new models launch on our platform the day they go live.

Would be curious to see your script.

mrieck

I've had that problem with my free Chrome extension (bring your own fal.ai key):

https://chromewebstore.google.com/detail/ai-slop-canvas/dogg...

To be honest it doesn't take long to add a new model/params. It's evaluating the models to see if they're even worth including that takes the most time.

rajit

This is a great point. It is challenging to know which models are good at what.

We've found that Seedance is good at photorealistic faces, Kling is fantastic at generating audio (the highest-quality model in terms of syncing a character's face to the words they say, imo), and Sora is great at UGC.

informal007

You're using the same name as a research-writing tool from OpenAI:

https://openai.com/prism

aliu327

yeah, SEO has been an issue for us lol

deepdarkforest

[flagged]

aliu327

Prism can be used for more than just advertising! I was just showing one way someone might use our API. People have used us for creative projects, product demos, filmmaking, etc.

alexanderameye

Why was the original comment flagged? They had a valid and relevant point.

andyfilms1

If I see a company using an AI generated image or video for their product, my first thought will always be, "What are they trying to hide?"

rajit

This is a great point, and I agree with you. If a weight loss supplement brand were to use an AI influencer to market their product, it does raise questions about whether their supplement does in fact work on real people.

Nevertheless, things are trending in this direction, and AI influencers will soon become the norm. Brands should be required to disclose when their marketing is AI-generated.

It's worth mentioning that AI videos on Prism (and on any platform) do not have to be purely prompt-to-creative. For example, a brand designer can take an existing billboard creative and use AI to generate images of that creative at a train station, in the Louvre, at a bus stop, etc., without actually going to those places and shooting.
