Themata.AI

Popular tags:

#developer-tools #ai-agents #llms #claude #ai-ethics #code-generation #openai #ai-safety #anthropic #open-source

AI is changing the world. Don't fall behind. Clear summaries and community insight, delivered without the noise. Subscribe and never miss a beat.

© 2026 Themata.AI • All Rights Reserved

Privacy | Cookies | Contact

Filtering by tag:

foundation-models
#glm-5v-turbo #multimodal-agents #foundation-models #computer-vision

Research

GLM-5V-Turbo: Toward a Native Foundation Model for Multimodal Agents

GLM-5V-Turbo is a foundation model designed for multimodal agents, combining language reasoning with perception across diverse contexts. It aims to improve agent performance in real-world applications by natively integrating multiple modalities.

arxiv.org

🔥🔥🔥🔥🔥

2 min

5/5/2026

GitHub - google-research/timesfm: TimesFM (Time Series Foundation Model) is a pretrained time-series foundation model developed by Google Research for time-series forecasting.

Tool

Google's 200M-parameter time-series foundation model with 16k context

TimesFM is a pretrained time-series foundation model developed by Google Research for time-series forecasting. The latest version is TimesFM 2.5, with archived versions 1.0 and 2.0 still available. It can be accessed through the TimesFM Hugging Face Collection and is integrated into BigQuery as an official Google product.

github.com

🔥🔥🔥🔥🔥

2 min

3/31/2026

Step 3.5 Flash – Open-source foundation model, supports deep reasoning at speed

Step 3.5 Flash is an open-source foundation model designed for advanced reasoning and agentic capabilities. Built on a sparse Mixture of Experts (MoE) architecture, it activates only 11B of its 196B parameters per token (roughly 6%), combining large-model capability with the low per-token cost needed for real-time interaction.

static.stepfun.com

🔥🔥🔥🔥🔥

16 min

2/19/2026
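The sparse-MoE routing the Step 3.5 Flash summary describes — a router picking a few experts per token so only a fraction of the parameters are active — can be sketched in a few lines. Everything below (expert count, hidden size, the softmax-over-top-k gate) is a generic toy illustration, not Step 3.5 Flash's actual router or configuration.

```python
# Toy sketch of sparse Mixture-of-Experts routing: a router scores all
# experts for a token, but only the top-k expert networks are evaluated.
import numpy as np

rng = np.random.default_rng(0)

N_EXPERTS = 8   # total expert FFNs (real MoE layers often have far more)
TOP_K = 2       # experts activated per token -- the source of sparsity
D_MODEL = 16    # hidden size of the toy model

# Each "expert" here is a single weight matrix standing in for an FFN.
expert_weights = rng.normal(size=(N_EXPERTS, D_MODEL, D_MODEL))
# The router is a linear layer producing one score per expert.
router_weights = rng.normal(size=(D_MODEL, N_EXPERTS))

def moe_layer(token: np.ndarray) -> np.ndarray:
    """Route one token through its top-k experts and mix their outputs."""
    logits = token @ router_weights              # (N_EXPERTS,) router scores
    top_k = np.argsort(logits)[-TOP_K:]          # indices of chosen experts
    # Softmax over the selected logits only -> mixing weights summing to 1.
    gates = np.exp(logits[top_k] - logits[top_k].max())
    gates /= gates.sum()
    # Only TOP_K of the N_EXPERTS weight matrices are touched for this
    # token: that is why active parameters << total parameters.
    return sum(g * (token @ expert_weights[i]) for g, i in zip(gates, top_k))

token = rng.normal(size=D_MODEL)
out = moe_layer(token)
active = TOP_K * D_MODEL * D_MODEL     # expert params used for this token
total = N_EXPERTS * D_MODEL * D_MODEL  # expert params in the layer
print(f"active expert params per token: {active}/{total}")
```

The same idea scales to the ratio in the summary: a model can hold a very large total parameter count while the per-token compute is set by the much smaller active subset.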
