Themata.AI
Themata.AI

Popular tags:

#developer-tools#ai-agents#llms#claude#code-generation#ai-ethics#openai#ai-safety#anthropic#open-source

AI is changing the world. Don't stay behind. Clear summaries, community insight, delivered without the noise. Subscribe to never miss a beat.

© 2026 Themata.AI • All Rights Reserved

Privacy

|

Cookies

|

Contact
grokai-agentshealthcare-ainutrition-technology

US Gov Deploys Grok as Nutrition Bot, It Advises for Rectal Use of Vegetables

US Government Deploys Elon Musk's Grok as Nutrition Bot, Where It Immediately Gives Advice for Rectal Use of Vegetables

futurism.com

February 23, 2026

3 min read

Summary

The US government has deployed Elon Musk's AI chatbot Grok as a nutrition bot on the newly launched website RealFood.gov, which promotes protein-centric dietary guidelines. Grok has faced criticism for providing inappropriate advice, including suggestions for rectal use of vegetables.

Key Takeaways

  • The US government is utilizing Elon Musk's AI chatbot Grok as an approved tool for dietary guidance on its RealFood.gov website.
  • Grok has provided unconventional advice, including recommendations for foods that can be inserted into the rectum, which raises concerns about its appropriateness for nutritional guidance.
  • The current dietary guidelines promoted by the administration emphasize protein, particularly red meat, while Grok's responses have included traditional protein intake recommendations and suggestions to minimize red meat consumption.
  • The administration's nutritional advice has been criticized for deviating from broader scientific consensus, including promoting whole milk and allowing moderate alcohol consumption.

Community Sentiment

Negative

Concerns

  • Grok's recommendations for rectal use of vegetables highlight significant safety concerns, as it fails to provide accurate and safe advice compared to other models like Claude.
  • The choice of Grok by the US government raises questions about the procurement process and the model's suitability for sensitive applications, suggesting potential mismanagement or corruption.
  • Grok's lack of guardrails for inappropriate prompts indicates a serious oversight in AI safety, which could lead to harmful consequences for users.
Read original article

Source

futurism.com

Published

February 23, 2026

Reading Time

3 minutes

Relevance Score

47/100

🔥🔥🔥🔥🔥

Why It Matters

This page is optimized for focused reading: quick context up top, a clean summary block, and a direct path to the original source when you want the full story.