Introducing Grok-0

After roughly four months of development, the team introduced its first model, Grok-0, a prototype with 33 billion parameters. That is a substantial size for a language model, and yet it was only the beginning.

Grok's Milestones

In the two months that followed, the team trained Grok-1. This iteration achieved a 63.2% pass rate on the HumanEval coding benchmark and 73% on MMLU (Massive Multitask Language Understanding), a broad multiple-choice knowledge exam. HumanEval tests whether a model can write correct code from a natural-language specification, so the score points to genuine proficiency in understanding and generating code.
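
For context, each HumanEval problem hands the model a function signature and docstring and asks it to produce the body, which is then run against hidden unit tests; a problem counts as solved only if every test passes. Here is a small illustrative sketch in the style of the benchmark (a hypothetical task, not an actual HumanEval problem):

```python
# Illustrative HumanEval-style task (hypothetical, not from the real
# benchmark): the model sees the signature and docstring, and must
# generate the body, which is then checked by unit tests.

def running_max(numbers):
    """Given a list of numbers, return a list where each element is the
    maximum of all elements seen so far.
    >>> running_max([1, 3, 2, 5, 4])
    [1, 3, 3, 5, 5]
    """
    # --- everything below is what the model would have to generate ---
    result, current = [], float("-inf")
    for n in numbers:
        current = max(current, n)
        result.append(current)
    return result

# Scoring is pass/fail per problem: a completion passes only if all of
# the benchmark's hidden tests succeed.
assert running_max([1, 3, 2, 5, 4]) == [1, 3, 3, 5, 5]
assert running_max([]) == []
```

A 63.2% result means Grok-1 solved roughly 63% of HumanEval's 164 problems on the first attempt, the standard pass@1 metric.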

Notably, Grok-0 approaches the benchmark performance of Meta's 70-billion-parameter LLaMA 2 model while using only half the training resources. That efficiency speaks to the strength of Grok AI's underlying architecture and training optimizations.

Tokenomics

Grok-0 Token

  • 0% BUY & SELL TAX

  • FULLY RENOUNCED

  • LIQUIDITY LOCKED
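
The renouncement claim above is verifiable on-chain. Below is a minimal sketch using web3.py, assuming the token follows OpenZeppelin's Ownable pattern; the RPC endpoint and contract address are placeholders you would replace yourself, not the actual Grok-0 token contract, and this is not an audit or endorsement:

```python
# Check whether a token contract's ownership has been renounced,
# i.e. transferred to the zero address so its settings can no
# longer be changed.
from web3 import Web3

RPC_URL = "https://example-eth-rpc.invalid"  # placeholder RPC endpoint
TOKEN_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder

# Minimal ABI: just the owner() getter from OpenZeppelin's Ownable.
OWNABLE_ABI = [{
    "name": "owner",
    "inputs": [],
    "outputs": [{"name": "", "type": "address"}],
    "stateMutability": "view",
    "type": "function",
}]

w3 = Web3(Web3.HTTPProvider(RPC_URL))
token = w3.eth.contract(
    address=Web3.to_checksum_address(TOKEN_ADDRESS),  # v6 API; v5 uses toChecksumAddress
    abi=OWNABLE_ABI,
)

owner = token.functions.owner().call()
ZERO = "0x0000000000000000000000000000000000000000"
print("renounced" if owner == ZERO else f"still owned by {owner}")
```

The same pattern extends to the other bullets: tax percentages live in the token contract's public state variables, and liquidity locks can be confirmed by checking who holds the LP tokens.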