Amazon Titan Text Premier LLM with SOTA Common Sense Reasoning
14 points by ruckfool 1 year ago | 4 comments
- cs702 1 year ago
UPDATE: As Onawa points out in his comment below, the OP is now showing benchmarks.
---
I couldn't find any mention of model performance on standard benchmarks, nor any mention of model scale (number of parameters, MoE setup, etc.).
How come? Does Amazon not want customers to know how much better/worse, or how much larger/smaller, this model is compared to other models, proprietary and open?
- Onawa 1 year ago
They must be updating the blog post, because I just checked and saw they listed benchmark results for MMLU, ARC-Challenge, BIG-Bench Hard, DROP (F1 score), and HellaSwag. However, their link https://aws.amazon.com/machine-learning/responsible-machine-... is showing a 404 for me.
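For context on that last metric: DROP is scored with a token-level F1 between the predicted and gold answers. A minimal sketch in Python, assuming plain whitespace tokenization and a single gold answer; the official DROP scorer also normalizes punctuation and handles multi-span answer sets:

    from collections import Counter

    def token_f1(prediction: str, gold: str) -> float:
        # Bag-of-tokens overlap between the predicted and gold answers.
        pred_tokens = prediction.lower().split()
        gold_tokens = gold.lower().split()
        overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
        if overlap == 0:
            return 0.0
        precision = overlap / len(pred_tokens)
        recall = overlap / len(gold_tokens)
        return 2 * precision * recall / (precision + recall)

    print(token_f1("four touchdowns", "four touchdowns were scored"))  # ~0.67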
- cs702 1 year ago
Thanks. I updated my comment.
- ruckfool 1 year ago
The context window is 32k (stated in the blog). I suppose the number of params and the MoE setup are intentionally not revealed. Numbers on well-known benchmarks and comparisons with Google and OpenAI are also published later in the blog.
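Since the blog covers access rather than internals, here is a minimal sketch of calling the model through Amazon Bedrock with boto3. The model ID and request shape follow the Titan text format AWS documents, but treat both as assumptions to verify against the current Bedrock docs:

    import json
    import boto3

    # Bedrock runtime client; the region must have Titan Text Premier enabled.
    client = boto3.client("bedrock-runtime", region_name="us-east-1")

    body = json.dumps({
        "inputText": "Summarize the tradeoffs of mixture-of-experts models.",
        "textGenerationConfig": {
            "maxTokenCount": 512,   # output cap; the 32k figure is the context window
            "temperature": 0.7,
            "topP": 0.9,
        },
    })

    resp = client.invoke_model(
        modelId="amazon.titan-text-premier-v1:0",  # assumed ID; check the Bedrock console
        body=body,
        contentType="application/json",
        accept="application/json",
    )
    print(json.loads(resp["body"].read())["results"][0]["outputText"])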