DeepSeek-V4

Open-source MoE language model with 1M token context

DeepSeek-V4 is a series of open-source Mixture-of-Experts (MoE) language models, available in two variants: V4-Pro (1.6T parameters) and V4-Flash (284B parameters). Both support a 1 million token context window by default, using a hybrid attention architecture that reduces compute and memory costs. The series targets developers and researchers who need large-context reasoning at lower inference cost.
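Since the models are API-accessible (see "At a glance" below), here is a minimal sketch of a long-context call. It assumes an OpenAI-compatible chat-completions endpoint at https://api.deepseek.com and a model identifier of deepseek-v4-flash; neither is confirmed by this page, so check the official documentation before use.

```python
# Minimal sketch of a long-context API call.
# Assumptions (not confirmed by this page): the API is OpenAI-compatible,
# the base URL is https://api.deepseek.com, and the model id is
# "deepseek-v4-flash".
from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepseek.com",  # assumed endpoint
    api_key="YOUR_API_KEY",
)

with open("large_codebase.txt") as f:
    context = f.read()  # up to ~1M tokens of context, per the page's claim

response = client.chat.completions.create(
    model="deepseek-v4-flash",  # hypothetical model id
    messages=[
        {"role": "system", "content": "You are a code-review assistant."},
        {"role": "user", "content": f"Summarize the main modules:\n\n{context}"},
    ],
)
print(response.choices[0].message.content)
```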

At a glance

Company: DeepSeek
Pricing: Freemium
API available: Yes
Self-hostable: Yes
Launched: 2026-04
Last verified: 2026-05-11

Capabilities

long-context, mixture-of-experts, open-weights, api-access, code-generation, reasoning

For AI agents: a machine-readable markdown version of this page is available at /tools/deepseek-v4-3.md, or request it by sending an Accept: text/markdown header.
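As an illustration of that content negotiation, the sketch below fetches the markdown variant with Python's requests library. The host example.com is a placeholder; only the /tools/deepseek-v4-3 path comes from this page.

```python
# Sketch: fetch the machine-readable page via content negotiation.
# "example.com" is a placeholder host; the path is taken from this page.
import requests

resp = requests.get(
    "https://example.com/tools/deepseek-v4-3",
    headers={"Accept": "text/markdown"},  # ask for the markdown variant
    timeout=10,
)
resp.raise_for_status()
print(resp.text)
```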