Show HN: I built a cache for LLM development
1 point by js4 1 year ago | 1 comment

Develop against large language models without the big bill.
This application is a reverse proxy with caching, tailored for language model API requests. Built in Go, it sits between your application and providers such as OpenAI, caching responses to eliminate redundant external API calls.
The goal is to let you develop against LLM APIs without running up a bill.
flarion 1 year ago

no repo link?