Introducing Grok-0
After roughly four months of dedicated development, the team introduced its first model, dubbed "Grok-0," a prototype with 33 billion parameters. That is a substantial size for an AI model, and yet it was only the beginning.
![](https://assets.zyrosite.com/cdn-cgi/image/format=auto,w=991,h=554,fit=crop/mxBl1lgjW3teK41M/grokk-YKbawNlGn8i16nMx.webp)
Grok's Milestones
![](https://assets.zyrosite.com/cdn-cgi/image/format=auto,w=915,h=451,fit=crop/mxBl1lgjW3teK41M/grok-model-card-YBg8MyeoNripZXM1.png)
Within two months of Grok-0, the team trained Grok-1. This new iteration scored a remarkable 63.2% on HumanEval and 73% on MMLU, a multiple-choice knowledge test. HumanEval is designed to test programming capability, so that score indicates Grok-1's proficiency in understanding and generating code.
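For context on what the HumanEval number measures: the benchmark reports pass@k, the probability that at least one of k sampled completions for a problem passes its unit tests. Below is a minimal sketch of the unbiased pass@k estimator from the original HumanEval paper (Chen et al., 2021); the sample counts in the example are illustrative, not Grok's actual evaluation settings.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n -- samples generated per problem
    c -- samples that pass the problem's unit tests
    k -- attempt budget being scored
    """
    if n - c < k:
        return 1.0  # too few failures to fill a k-subset, so pass@k is 1
    return 1.0 - comb(n - c, k) / comb(n, k)

# Illustrative only: at k=1 the estimator reduces to the plain pass rate c/n,
# so a 63.2% HumanEval score means ~63% of problems solved on the first try.
print(pass_at_k(n=200, c=126, k=1))  # -> 0.63
```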
Interestingly, Grok-0 approached the performance of Meta's 70-billion-parameter LLaMA 2 model on standard benchmarks while using only half the training resources. This impressive feat showcases the efficiency and potential of Grok AI's underlying architecture and optimization.
Tokenomics
![](https://assets.zyrosite.com/cdn-cgi/image/format=auto,w=1224,h=856,fit=crop/mxBl1lgjW3teK41M/xai-grok-AQE4vVP1beuZb8rA.webp)
Grok-0 Token
- 0% BUY & SELL TAX
- FULLY RENOUNCED
- LIQUIDITY LOCKED
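These are on-chain properties anyone can check rather than take on trust. As a minimal sketch of verifying the "fully renounced" claim, assuming a standard OpenZeppelin-style Ownable ERC-20 contract (the RPC endpoint and token address below are placeholders, not the real contract):

```python
from web3 import Web3

# Placeholder RPC endpoint and token address -- substitute the real values.
w3 = Web3(Web3.HTTPProvider("https://eth.llamarpc.com"))
TOKEN = "0x0000000000000000000000000000000000000000"

# Minimal ABI: just the standard Ownable owner() getter.
OWNER_ABI = [{
    "name": "owner", "inputs": [], "outputs": [{"type": "address"}],
    "stateMutability": "view", "type": "function",
}]

token = w3.eth.contract(address=Web3.to_checksum_address(TOKEN), abi=OWNER_ABI)
owner = token.functions.owner().call()

# A renounced contract's owner is the zero address (or a burn address).
print("renounced:", owner in (
    "0x0000000000000000000000000000000000000000",
    "0x000000000000000000000000000000000000dEaD",
))
```

The same pattern extends to the other two claims: a 0% tax shows up in the contract's transfer logic, and locked liquidity can be confirmed by checking who holds the LP tokens.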