Gcam
twitter: https://twitter.com/grmcameron
86 karma
MicroEvals – Easily run vibe checks against models
3 points by Gcam 2 weeks ago | 0 comments
From GPT-4 to Mistral 7B, there is a 300x range in the cost of LLM inference
2 points by Gcam 1 year ago | 0 comments
Show HN: LLM Benchmarks Leaderboard with 60 model and API host combinations
3 points by Gcam 1 year ago | 1 comment
Mistral API reduces time to first token by 10x (only place for Mistral Medium)
4 points by Gcam 1 year ago | 0 comments
240 tokens/s achieved by Groq's custom chips on Llama 2 Chat (70B)
5 points by Gcam 1 year ago | 0 comments
New GPT-4 Turbo (0125 Preview) slightly faster per initial benchmarks
2 points by Gcam 1 year ago | 0 comments