PANews, February 27 — While the industry eagerly anticipates the next flagship model, DeepSeek V4, the DeepSeek team has quietly released a new academic paper. The paper introduces a novel inference system called DualPath, optimized specifically for large language model (LLM) inference under agent workloads. By introducing a "dual-path KV-Cache read mechanism" (analogous to a memory cache), the system redistributes storage-network load, achieving up to 1.87× higher offline inference throughput and, in online serving, supporting on average 1.96× more agents per second.

The paper's introduction notes that large models are rapidly evolving from single-turn chatbots and standalone reasoning models into agent systems capable of autonomous planning, tool invocation, and multi-turn interaction to solve real-world tasks. This shift in application paradigm is driving a significant change in LLM inference workloads: from traditional human-model interaction to human-model-environment interaction, with interaction rounds reaching dozens or even hundreds per task.
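The report gives no implementation details of the dual-path read mechanism, but the core idea it describes — serving KV-cache reads over two alternative paths so that no single storage link becomes the bottleneck — can be sketched in a few lines. The class below is purely illustrative: the names (`DualPathKVCache`, `local`, `remote`) and the least-loaded routing policy are assumptions for this sketch, not DeepSeek's actual design.

```python
class DualPathKVCache:
    """Toy sketch of a dual-path KV-cache read: each read is routed to
    whichever of two paths (standing in for, e.g., local memory vs.
    networked storage) has served fewer reads so far, spreading load.
    Hypothetical illustration only, not the paper's implementation."""

    def __init__(self):
        self.local = {}                         # fast path replica
        self.remote = {}                        # slow path replica
        self.load = {"local": 0, "remote": 0}   # reads served per path

    def put(self, key, kv_block):
        # Replicate the KV block on both paths so either can serve reads.
        self.local[key] = kv_block
        self.remote[key] = kv_block

    def get(self, key):
        # Route the read to the less-loaded path holding the block.
        candidates = [p for p in ("local", "remote")
                      if key in getattr(self, p)]
        if not candidates:
            return None, None
        path = min(candidates, key=lambda p: self.load[p])
        self.load[path] += 1
        return getattr(self, path)[key], path
```

With both replicas populated, successive reads of the same block alternate between the two paths, which is the load-spreading effect the mechanism aims for.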