Show HN: Turn a video of an app into a functional prototype with Claude Opus
17 points by abi 1 year ago | 3 comments
I'm the maintainer of the popular screenshot-to-code repo on GitHub (46k+ stars).
When Claude Opus was released, I wondered: if you could send in a video of yourself using a website or app, would the LLM be able to rebuild it as a functional prototype? To my surprise, it worked quite well.
Here are two examples:
* In this video, you can see the AI replicating Google with auto-complete suggestions and a search results page (though it failed to put the results on a separate page). https://streamable.com/s24pq6
* Here, we show it a multi-step form (https://tally.so/templates/online-quiz/V3qOnk) and ask Claude to re-create it. It does a really good job! https://streamable.com/gstsgn
The technical details: Claude Opus allows a maximum of 20 images per request, so 20 frames are extracted from the video and passed along with a prompt that uses several Claude-specific techniques, such as XML tags and pre-filling the assistant response. In total, 2 passes are performed, with the second pass instructing the AI to improve on the first attempt. More passes might help as well. While the model likely has Google.com memorized, it tends to work quite well for many other multi-page/multi-screen apps too.
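For those curious about the implementation, here's a rough sketch of the pipeline in Python (illustrative only; the prompt text, model name, and helper names here are mine, not the repo's exact code):

    # Rough sketch of the video -> prototype pipeline (not the repo's exact code).
    import base64
    import cv2          # pip install opencv-python
    import anthropic    # pip install anthropic

    NUM_FRAMES = 20  # Claude Opus caps a single request at 20 images

    def extract_frames(video_path):
        """Grab NUM_FRAMES evenly spaced frames, base64-encoded as JPEGs."""
        cap = cv2.VideoCapture(video_path)
        total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
        frames = []
        for i in range(NUM_FRAMES):
            cap.set(cv2.CAP_PROP_POS_FRAMES, int(i * total / NUM_FRAMES))
            ok, frame = cap.read()
            if ok:
                _, buf = cv2.imencode(".jpg", frame)
                frames.append(base64.b64encode(buf).decode())
        cap.release()
        return frames

    def generate(frames, prior_attempt=None):
        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env
        content = [{"type": "image",
                    "source": {"type": "base64", "media_type": "image/jpeg",
                               "data": f}} for f in frames]
        # XML-tagged prompt; the second pass feeds back the first attempt.
        prompt = "<task>Recreate this app as a single-file HTML prototype.</task>"
        if prior_attempt:
            prompt += ("\n<previous_attempt>" + prior_attempt + "</previous_attempt>"
                       "\n<task>Improve on the attempt above.</task>")
        content.append({"type": "text", "text": prompt})
        resp = client.messages.create(
            model="claude-3-opus-20240229",
            max_tokens=4096,
            messages=[
                {"role": "user", "content": content},
                # Pre-filling the assistant turn nudges Claude to emit raw HTML.
                {"role": "assistant", "content": "<html>"},
            ],
        )
        return "<html>" + resp.content[0].text

    frames = extract_frames("recording.mp4")
    first = generate(frames)         # pass 1
    final = generate(frames, first)  # pass 2: improve on the first attempt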
You can try it out by downloading the GitHub repo and setting up an Anthropic API key in backend/.env. Be warned that one creation/iteration (with 2 passes) can be quite expensive ($3-6).
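For reference, the .env only needs a single line (ANTHROPIC_API_KEY is the variable name the Anthropic SDK reads by default):

    # backend/.env
    ANTHROPIC_API_KEY=your-key-here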
- HughParry 1 year ago
Looks awesome!
How does it handle instructions about things like which stack to use?
E.g. if I'm building a new prototype, I personally would want to build with Tailwind, but lots of people think Tailwind is evil and prefer prototyping with <other_css_thing_they_like>
Also curious to hear what you think of the quality of the output code. My experience with LLMs writing frontend code has been a pretty mixed bag, personally.
- abi 1 year ago
Thanks!
There's a dropdown where you can choose a stack for screenshots (Tailwind, React, Vue, etc.). I haven't updated the prompts for the video feature just yet. You can tweak the prompt yourself here: https://github.com/abi/screenshot-to-code/blob/6069c2a118592...
The quality of the output code is solid, I think. You can see the code for the examples: https://codepen.io/abi/pen/ExJPdop and https://codepen.io/abi/pen/jORWeYB
I think the biggest thing LLM code is typically missing is better abstractions/componentization. You could probably prompt around some of that.
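For example, something along these lines appended to the prompt might help (an untested sketch on my part):

    <guidelines>
    Break the UI into small, reusable components.
    If the same markup appears more than once, extract it into
    a single component and reuse it.
    </guidelines>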
- dflock 1 year ago
Cloning and stealing other people's apps, as a service!