KernelLLM – Meta's new 8B SotA model

2 points by flockonus 1 month ago | 1 comment
  • flockonus 1 month ago
    > On KernelBench-Triton Level 1, our 8B parameter model exceeds models such as GPT-4o and DeepSeek V3 in single-shot performance. With multiple inferences, KernelLLM's performance outperforms DeepSeek R1. This is all from a model with two orders of magnitude fewer parameters than its competitors.
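
    For context, KernelBench-Triton Level 1 tasks ask a model to rewrite simple PyTorch operators as hand-written Triton kernels. Below is a minimal sketch of the kind of kernel involved (a standard elementwise add in the style of the Triton tutorials); it is an illustrative assumption, not output from KernelLLM or a KernelBench reference solution.

      import torch
      import triton
      import triton.language as tl

      @triton.jit
      def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
          # Each program instance handles one contiguous block of elements.
          pid = tl.program_id(axis=0)
          offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
          mask = offsets < n_elements
          x = tl.load(x_ptr + offsets, mask=mask)
          y = tl.load(y_ptr + offsets, mask=mask)
          tl.store(out_ptr + offsets, x + y, mask=mask)

      def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
          # Launch a 1D grid with one program per BLOCK_SIZE chunk.
          out = torch.empty_like(x)
          n_elements = out.numel()
          grid = lambda meta: (triton.cdiv(n_elements, meta["BLOCK_SIZE"]),)
          add_kernel[grid](x, y, out, n_elements, BLOCK_SIZE=1024)
          return out

    The benchmark then scores whether the generated kernel compiles, matches the PyTorch reference output, and how fast it runs; the quoted numbers compare single-shot and multi-sample success on tasks of roughly this shape.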