# DeepSeek-V4

> Open-source MoE language model with 1M token context

DeepSeek-V4 is a series of open-source Mixture-of-Experts language models, offered as V4-Pro (1.6T parameters) and V4-Flash (284B parameters). Both support a 1 million token context window by default, using a hybrid attention architecture that cuts the compute and memory costs of long sequences. The series targets developers and researchers who need large-context reasoning at lower inference cost.
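
The listing does not say which attention scheme the hybrid architecture uses. As a rough illustration of why mixing local and global attention matters at this scale, the Python sketch below compares the number of attention-score entries for full causal attention against a sliding-window layer at the advertised 1M-token context; the 4,096-token window is an invented figure, not from this page.

```python
def full_attention_entries(n: int) -> int:
    """Score entries for full causal attention: the lower triangle of n x n, O(n^2)."""
    return n * (n + 1) // 2

def sliding_window_entries(n: int, w: int) -> int:
    """Score entries when each token attends to at most its w most recent predecessors: O(n * w)."""
    return sum(min(i + 1, w) for i in range(n))

n = 1_000_000  # advertised context length
w = 4_096      # hypothetical local window size (assumption, not from the listing)

full = full_attention_entries(n)
window = sliding_window_entries(n, w)
print(f"full attention : {full:.3e} entries")
print(f"sliding window : {window:.3e} entries")
print(f"reduction      : {full / window:.0f}x")
```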

## At a glance

- **Company**: DeepSeek
- **Homepage**: https://www.producthunt.com/r/LNIDLCQSIWWNBY
- **Pricing**: freemium
- **API available**: Yes (see the call sketch after this list)
- **Self-hostable**: Yes
- **Launched**: 2026-04
- **Last verified**: 2026-05-11

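The listing flags an API but does not document it. Earlier DeepSeek releases exposed an OpenAI-compatible endpoint, so a first call would plausibly look like the sketch below; the base URL and the `deepseek-v4-flash` model identifier are assumptions, not confirmed by this page.

```python
from openai import OpenAI

# Assumed: an OpenAI-compatible endpoint, as with earlier DeepSeek models.
# Both the base_url and the model id are unverified placeholders.
client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.deepseek.com",
)

response = client.chat.completions.create(
    model="deepseek-v4-flash",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize the attached 500-page spec."},
    ],
)
print(response.choices[0].message.content)
```
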
## Categories

- developer-platforms
- research
- chat-assistants


## Capabilities

- long-context
- mixture-of-experts
- open-weights (see the loading sketch after this list)
- api-access
- code-generation
- reasoning

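Because the weights are open and the model is listed as self-hostable, local loading would presumably follow the standard Hugging Face pattern sketched below. The repository id is hypothetical, and at 284B parameters even V4-Flash requires multi-GPU serving, so treat this as the shape of the workflow rather than a recipe.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id: the listing gives no download location.
repo = "deepseek-ai/DeepSeek-V4-Flash"

tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    device_map="auto",       # shard the experts across available GPUs
    torch_dtype="auto",
    trust_remote_code=True,  # MoE checkpoints often ship custom modeling code
)

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```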

## Alternatives

- [ChatGPT](https://inforelay.ai/tools/chatgpt.md): Conversational AI assistant for text, code, and images
- [Claude](https://inforelay.ai/tools/claude.md): Anthropic's AI assistant for code, writing, and analysis



---
Source: https://inforelay.ai/tools/deepseek-v4-3/
Index: https://inforelay.ai/llms.txt
