
DeepSeek has introduced the DeepSeek-V4 series, a new generation of Mixture-of-Experts (MoE) language models focused on long-context processing, reasoning, and agent-based tasks. The release includes DeepSeek-V4-Pro and DeepSeek-V4-Flash, both supporting up to one million tokens of context, which is now the default across DeepSeek services.
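Since the larger context window is the default, existing integrations should be able to send much longer inputs without configuration changes. Below is a minimal sketch of a long-context request, assuming the new models are served through the OpenAI-compatible endpoint DeepSeek already exposes; the model identifier `deepseek-v4-flash` is a guess based on the naming above, not a confirmed name, so check the official docs before use.

```python
# Minimal sketch: sending a very long input to a DeepSeek-V4 model via the
# OpenAI-compatible API. The model name below is an ASSUMPTION based on the
# announcement's naming; the endpoint URL is DeepSeek's documented base URL.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # in practice, read from an env var
    base_url="https://api.deepseek.com",  # DeepSeek's OpenAI-compatible endpoint
)

# e.g. a repository dump or document set approaching the 1M-token limit
with open("large_codebase_dump.txt", encoding="utf-8") as f:
    long_context = f.read()

response = client.chat.completions.create(
    model="deepseek-v4-flash",  # hypothetical identifier for the Flash variant
    messages=[
        {"role": "system", "content": "You are a code-review assistant."},
        {"role": "user", "content": f"Summarize the key modules:\n\n{long_context}"},
    ],
)
print(response.choices[0].message.content)
```

Nothing here is V4-specific beyond the model name: because the API surface is OpenAI-compatible, switching an existing integration to the new series should amount to changing that one string.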