
DEEPEXI TECH 01384.HK Price

DEEPTECH
$0
+$0 (0.00%)
No data

*Data last updated: 2026-04-14 20:59 (UTC+8)

As of 2026-04-14 20:59, DEEPEXI TECH 01384.HK (DEEPTECH) is priced at $0, with a total market cap of --, a P/E ratio of 0.00, and a dividend yield of 0.00%. Today, the stock price fluctuated between $0 and $0. The current price is 0.00% above the day's low and 0.00% below the day's high, with a trading volume of --. Over the past 52 weeks, DEEPTECH has traded between $0 and $0, and the current price is 0.00% away from the 52-week high.
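The derived figures above (distance from the day's low and high, position in the 52-week range) are simple ratios. Here is a minimal sketch of how such stats are typically computed; the function and field names are illustrative, not this page's actual data feed:

```python
def derived_stats(price: float, day_low: float, day_high: float,
                  week52_low: float, week52_high: float) -> dict:
    """Compute the percentage stats shown on a quote page.

    All names are hypothetical; the zero-denominator guard matters when
    the feed returns placeholder values such as 0, as on this page.
    """
    def pct(numerator: float, denominator: float) -> float:
        return 100.0 * numerator / denominator if denominator else 0.0

    return {
        "above_day_low_pct": pct(price - day_low, day_low),
        "below_day_high_pct": pct(day_high - price, day_high),
        "off_52w_high_pct": pct(week52_high - price, week52_high),
        "pos_in_52w_range_pct": pct(price - week52_low, week52_high - week52_low),
    }

# Example with made-up numbers (the page above currently shows no data):
print(derived_stats(price=32.48, day_low=29.0, day_high=33.2,
                    week52_low=10.5, week52_high=35.0))
```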

DEEPTECH Key Stats

P/E Ratio: 0.00
Dividend Yield (TTM): 0.00%
Shares Outstanding: 0.00

DEEPEXI TECH 01384.HK (DEEPTECH) FAQ

What's the stock price of DEEPEXI TECH 01384.HK (DEEPTECH) today?

DEEPEXI TECH 01384.HK (DEEPTECH) is currently trading at $0, with a 24h change of 0.00%. The 52-week trading range is $0–$0.

What are the 52-week high and low prices for DEEPEXI TECH 01384.HK (DEEPTECH)?

Per the data above, DEEPEXI TECH 01384.HK (DEEPTECH) has traded between a 52-week low of $0 and a 52-week high of $0.

What is the price-to-earnings (P/E) ratio of DEEPEXI TECH 01384.HK (DEEPTECH)? What does it indicate?

Based on the key stats above, DEEPTECH's P/E ratio is currently shown as 0.00. The price-to-earnings ratio divides the share price by earnings per share (EPS); it indicates how much investors are paying for each dollar of earnings, and a zero or unavailable ratio usually means there are no positive trailing earnings to measure against.
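As a worked illustration of the ratio (the figures here are made up, not DEEPTECH data):

```python
def pe_ratio(share_price: float, eps_ttm: float) -> float | None:
    """Price-to-earnings ratio; undefined when trailing EPS is zero or negative."""
    return share_price / eps_ttm if eps_ttm > 0 else None

print(pe_ratio(32.48, 1.62))  # ~20.05: the market pays ~20x trailing earnings
```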

What is the market cap of DEEPEXI TECH 01384.HK (DEEPTECH)?


What is the most recent quarterly earnings per share (EPS) for DEEPEXI TECH 01384.HK (DEEPTECH)?


Should you buy or sell DEEPEXI TECH 01384.HK (DEEPTECH) now?


What factors can affect the stock price of DEEPEXI TECH 01384.HK (DEEPTECH)?


How to buy DEEPEXI TECH 01384.HK (DEEPTECH) stock?


Risk Warning

The stock market involves a high level of risk and price volatility. The value of your investment may increase or decrease, and you may not recover the full amount invested. Past performance is not a reliable indicator of future results. Before making any investment decisions, you should carefully assess your investment experience, financial situation, investment objectives, and risk tolerance, and conduct your own research. Where appropriate, consult an independent financial adviser.

Disclaimer

The content on this page is provided for informational purposes only and does not constitute investment advice, financial advice, or trading recommendations. Gate shall not be held liable for any loss or damage resulting from such financial decisions. Further, take note that Gate may not be able to provide full service in certain markets and jurisdictions, including but not limited to the United States of America, Canada, Iran, and Cuba. For more information on Restricted Locations, please refer to the User Agreement.


Hot Posts About DEEPEXI TECH 01384.HK (DEEPTECH)

SadMoneyMeow

04-08 04:36
Dipu Technology (01384) surged more than 17% during intraday trading. As of the time of writing, the stock was up 11.08% at HK$32.48, with turnover of HK$199.6 million.

Dipu Technology's 2025 full-year results show revenue up 70.8% year over year, while its adjusted net loss narrowed sharply, by 71.4%; operating quality has improved markedly. Of particular note, revenue from its FastAGI enterprise artificial intelligence solutions soared to RMB 254 million, up 181.5% year over year, becoming the company's largest revenue source. This indicates that the business engine has successfully switched to AI solutions.

In addition, Dipu Technology previously released a new product strategy that deeply integrates its three existing components: the FastData enterprise data fusion platform, the FastAGI enterprise agent platform, and the Deepexi enterprise large model, upgrading them into "DeepexiOS," an AI-grade enterprise operating system. The company's core product positioning has also shifted from providing "Data + AI solutions" to being "a foundational platform for digital employees for enterprises in the AI era." (Editor: Liu Chang)

【Disclaimer】This article represents only the author's personal views and is unrelated to Hexun. The Hexun website maintains a neutral stance toward the statements and viewpoints in this article and makes no express or implied guarantee as to the accuracy, reliability, or completeness of the information it contains. Readers should treat it as reference only and bear full responsibility themselves. Email: news_center@staff.hexun.com
WuSaidBlockchainW

04-06 23:51
Author | Stablecoin Insider / McKinsey×Artemis
Compiled by | Deep Tide TechFlow
Original article link:

Intro: The McKinsey and Artemis joint report did something very few people in the industry have done: break down stablecoin transaction-volume data. The conclusion is that out of roughly $35 trillion in annual on-chain transaction volume, only about $390 billion (about 1%) represents real payment activity, of which 58% is B2B financial operations, growing 733% year over year. Consumer-side stablecoin usage is almost negligible, and that is not a coincidence. The article summarizes five structural reasons why the gap between institutions and individuals is not just a temporary shortfall. Full text below.

The stablecoin industry has a problem at the "headline" level. On the one hand, raw on-chain data shows tens of trillions of dollars flowing on-chain every year, a figure that has spawned endless comparisons to Visa and Mastercard, as well as predictions that SWIFT is about to be replaced. On the other hand, a landmark report released by McKinsey and Artemis Analytics in February 2026 strips all of that away and asks a more direct question: how much of that is actually real payments?

The answer is about 1%. Out of approximately $35 trillion in annualized stablecoin transaction volume, only about $390 billion represents genuine end-user payments, such as supplier invoices, cross-border remittances, payroll disbursements, and card-based spending. The rest consists of trading activity, internal funds transfers, arbitrage behavior, and automated smart contract loops. The report concludes that the inflated headline numbers should be "the starting point for analysis, not a proxy indicator for measuring payment adoption." But within that real $390 billion baseline there is a story worth digging into, and it almost entirely revolves around corporate finance rather than consumer wallets.

B2B dominates: what the data actually says

Based on the McKinsey/Artemis analysis (using event data from December 2025 as the baseline), B2B transactions account for $226 billion of all real stablecoin payment volume, about 58%. This figure represents 733% year-over-year growth, driven mainly by supply-chain payments, cross-border supplier settlement, and treasury liquidity management. Asia leads in geographic activity, but adoption in Latin America and Europe is also accelerating. The remainder of real payments is distributed across payroll disbursements and remittances ($90 billion), capital market settlement ($8 billion), and associated card spending ($4.5 billion). According to McKinsey data, card spend associated with stablecoins is up an astonishing 673% year over year, but in absolute terms it still represents only a small fraction of B2B flow.

For reference: this total $390 billion is only 0.02% of McKinsey's estimated global annual payments volume of over $20 trillion. Specifically, B2B stablecoin flow makes up about 0.01% of the $160 trillion global B2B payments market. These numbers are large in the stablecoin context, but they remain minuscule in the context of the global financial system.

Monthly turnover data makes the momentum more intuitive. Citing BVNK data from the McKinsey/Artemis report, in January 2024 stablecoin monthly payment volume was only $5 billion; by early 2026, it had exceeded $30 billion. That is sixfold growth in less than two years, with the steepest acceleration occurring in the second half of 2025.
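Taking the report's figures at face value, the segment shares quoted above are straightforward to reproduce. A minimal sketch (the dollar amounts are those cited in the article; the segments listed need not sum to the $390B total):

```python
# Real-payment segments from the McKinsey/Artemis figures cited above, in $B.
REAL_PAYMENTS_TOTAL = 390.0
segments = {
    "B2B": 226.0,
    "payroll_and_remittances": 90.0,
    "capital_market_settlement": 8.0,
    "stablecoin_linked_cards": 4.5,
}

for name, volume in segments.items():
    share = 100.0 * volume / REAL_PAYMENTS_TOTAL
    print(f"{name}: ${volume}B ({share:.1f}% of real payments)")
# B2B comes out at ~57.9%, matching the ~58% share quoted in the report.
```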
On an annualized basis, this turnover now exceeds $390 billion. "Real stablecoin payments are far below conventional estimates. This does not undermine stablecoins' long-term potential as a payment rail—it simply establishes a clearer baseline for assessing where the market stands." (McKinsey/Artemis Analytics, February 2026)

Why the gap exists: five structural forces that exclude retail

The divergence between B2B's explosive adoption and consumers' near-irrelevance is not a coincidence. It is the product of structural asymmetries that favor enterprise use cases over retail use cases. Here are the five forces driving the institutional gap (point 2 is illustrated by the sketch after this list):

1) Financial efficiency beats consumer convenience. Corporate treasurers are driven by specific, measurable pain points: SWIFT correspondent chains that can take one to five business days to settle, currency-exchange windows that tie up liquidity, and intermediary fees stacked at every step of a transaction. Stablecoins address all three at once. For a company paying suppliers across fifteen countries, the economics are straightforward; for a consumer buying coffee, they are not. The switching incentives on the enterprise side are orders of magnitude larger than for individual users.

2) Programmability has no equivalent value on the retail side. Part of B2B's explosion is a story of programmable payments. Smart contracts enable conditional logic—invoice triggers, delivery confirmation, escrow release—automating entire accounts-payable workflows at scale. This naturally fits enterprise finance operations, because high-value, structured, repetitive payment processes benefit massively from automation. Retail payments lack similarly scalable triggers at any size: consumers buying groceries don't need programmable conditions, they need something that works like swiping a card. The cognitive complexity of blockchain-native payments remains a barrier on the retail side, and programmability does nothing to lower it.

3) The regulatory architecture favors institutions. After the GENIUS Act, institutional operators have adapted their compliance frameworks (anti-money-laundering/counter-terrorist-financing, travel rules, licensing requirements) and built the legal infrastructure to operate with confidence. Corporate finance teams have dedicated compliance functions that can absorb entry frictions; individual consumers cannot. As a result, in most jurisdictions the on-ramps for stablecoins remain operationally complex for retail users, while merchant-acceptance gaps persist globally. Every frictionless B2B payment today is a data point that institutions use to justify further investment, while the consumer ecosystem waits for a compliant, seamless user-experience entry point that has not yet emerged at scale.

4) The advantage of closed-loop systems. B2B stablecoin payments succeed precisely because they are closed-loop: enterprises send to enterprises; both sides have wallets; both have compliance infrastructure; and neither side needs a universal merchant network. Consumer payments face the classic chicken-and-egg problem: merchants won't invest in stablecoin acceptance infrastructure before consumers have demand, and consumers won't activate wallets before they can spend widely. The institutional world bypasses this entirely by operating in bilateral or consortium environments, without any open merchant network.

5) Institutional incentives point upstream. For institutional treasurers holding stablecoins, the benefits include yield, reduced FX exposure, and improved liquidity management, advantages that accumulate internally. Sharing them downstream introduces complexity or exposes competitive fragility. Rolling stablecoin usage out to suppliers' suppliers, employees, or end consumers requires building a network that makes those downstream parties benefit, and that may not align with the originating finance team's incentives. In the absence of a clear ROI that compels networks to expand outward, enterprises rationally choose to consolidate internal gains.
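As promised above, here is a toy sketch of the conditional-release pattern point 2 describes. It models an invoice escrow in plain Python, as illustration only, rather than on any particular smart-contract platform; all names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class InvoiceEscrow:
    """Toy model of a programmable B2B payment: funds release only
    when every agreed condition (e.g. delivery confirmed) is met."""
    amount_usd: float
    conditions: dict[str, bool] = field(default_factory=dict)
    released: bool = False

    def confirm(self, condition: str) -> None:
        if condition not in self.conditions:
            raise KeyError(f"unknown condition: {condition}")
        self.conditions[condition] = True

    def try_release(self) -> bool:
        """Release the payment iff all conditions are satisfied."""
        if not self.released and all(self.conditions.values()):
            self.released = True
        return self.released

escrow = InvoiceEscrow(
    amount_usd=125_000.0,
    conditions={"invoice_approved": False, "delivery_confirmed": False},
)
escrow.confirm("invoice_approved")
assert not escrow.try_release()   # still waiting on delivery
escrow.confirm("delivery_confirmed")
assert escrow.try_release()       # both triggers met: funds release
```

On a real chain this logic would live in a contract so neither counterparty can override it; the point of the sketch is only the shape of the conditional-release workflow.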
Market context

BVNK's own infrastructure data confirms B2B's dominance from an operator's perspective. The company processed $30 billion in annualized stablecoin payments in 2025, up 2.3x year over year, with one-third of the volume coming from the U.S. market. Its customer roster (Worldpay, Deel, Flywire, Rapyd, Thunes) consists of leaders in cross-border B2B and payroll infrastructure, not consumer applications. As BVNK noted in its 2025 year-end review: "The initial assumption was that remittances and consumer transfers would lead stablecoin growth, but they did not become the main driver; B2B instead played that role."

When will retail catch up, if it can

The McKinsey/Artemis baseline makes the current situation clearly visible. What it cannot answer is whether the institutional gap will narrow, widen, or become permanently entrenched. Here are three possible scenarios for the next 18 months.

Early 2026: the gap widens further. There are no signs that B2B momentum is slowing. Monthly turnover above $30 billion continues as more enterprises move cross-border accounts payable and treasury operations onto stablecoin rails. Consumer stablecoin card spending grows slightly, but in absolute terms it remains negligible relative to B2B flow. Even if retail adoption gradually advances in percentage terms, the gap still expands in absolute dollar value.

End of 2026 to 2027: a turning point begins to appear. Several catalysts may start to close the gap: multi-currency stablecoins issued by banks reduce retail on-ramp friction; programmable features delegated through AI agents extend to consumer applications; and gig-economy wages paid in stablecoins create downstream consumer balances. U.S. Treasury Secretary Scott Bessent predicts that the stablecoin supply could reach $3 trillion by 2030, a trajectory that implies consumer network effects will eventually emerge.

The counterpoint: retail may never "catch up," and that may be the point. The most honest interpretation of the McKinsey data is that stablecoins may be evolving into what the report subtly suggests: a programmable settlement layer for machines, finance departments, and institutions on the internet, where consumer adoption is an indirect, embedded benefit rather than a primary use case. If this framework holds, then the institutional gap is not a failure of adoption but a feature of the technology's natural architecture. Enterprise payroll paid in stablecoins may ultimately create downstream spending, but the path from B2B infrastructure to retail wallets is long and circuitous, and depends on user-experience breakthroughs that have not yet emerged at scale.

An honest baseline

The McKinsey/Artemis report does something more valuable than simply recording stablecoin growth: it establishes the honest baseline the industry has long been missing.
By stripping away transaction noise, internal transfers, and automated smart contract loops, it reveals a payments market that is truly growing—real payment volume doubled from 2024 to 2025—yet it is highly concentrated on the institutional side in a structural, non-coincidental way. B2B’s 733% growth is not a delayed consumer story; it is a finance story that is already maturing. Enterprises building on the stablecoin rails today are solving real operational problems—cross-border frictions, agent-bank inefficiencies, and working-capital delays—issues that have nothing to do with whether consumers hold stablecoin wallets. Either way, they will keep building.
MaticHoleFiller

04-05 22:45
(Source: DeepTech) Write a single function and AI is almost unbeatable; so why does maintaining a system make AI start to fall apart?

Today, artificial intelligence has entered its "second half." As AI programming capability keeps improving, products like OpenClaw are emerging and "CLI everything" is becoming a reality: AI doesn't need to operate a computer; instead, it turns every interface into a command-line interface (CLI), with one skill after another becoming a software function. An Agent is no longer just a conversational tool for executing a single task; it is evolving into a system that operates over the long term, interacts with the real world, and carries out complex tasks. However, a new problem has emerged: during continuous evolution, can AI keep adapting to new environments and maintain a stable ability to develop software?

Yao Shunyu, Chief AI Scientist in Tencent's executive office, noted in a blog post titled "The Second Half" that real programming tasks are continuously dependent rather than independent and parallel. Yet academia currently has no benchmark that evaluates the capabilities AI needs in that scenario, and has been reluctant to break the long-accepted assumption, widely used to simplify problems, that tasks are independent.

Recently, a joint team from the University of Southern California, the University of California, Riverside, Stanford University, Princeton University, OpenHands, and others released a new evaluation benchmark, EvoClaw, proposing a solution to these issues. The research team extracted high-quality code evolution histories from open-source projects, requiring the Agent to complete dozens of interdependent functional iterations in sequence within the same code repository. The results show that top AI models perform exceptionally well on independent evaluation tasks (80%+), but once they enter a realistic long-horizon scenario, even Claude Opus 4.6, which has the highest overall score, achieves only 38.03%. This means AI is prone to going off track when executing tasks with higher degrees of freedom, and there remains a significant gap before it can truly handle long-cycle, continuous software evolution work. (Source: arXiv)

This study reveals that over long-term evolution, AI easily falls into a snowballing technical-debt trap: even though it can keep adding new features, it cannot control the accumulation of regression errors, ultimately causing the system to spiral out of control. It also implies that AI programming is shifting from writing code to system governance. The paper, titled "EvoClaw: Evaluating AI Agents on Continuous Software Evolution," was recently posted as a preprint on arXiv [1].

Figure | The paper (Source: arXiv)

Why do existing AI programming evaluations diverge from real-world experience, and where exactly is the problem? Why do top models that score high in independent evaluations collectively fail on EvoClaw? The root cause is that the evaluation paradigm has changed.
In prior research, mainstream programming evaluation benchmarks have mostly focused on independent tasks: given an issue or a pull request (PR), the model completes the fix on a static code snapshot, and passing verification completes the evaluation. But there is a gap that cannot be ignored between those benchmark results and real development capability: a static environment is an idealized state, while the real environment is more complex and dynamic. Over time, even a small bug from months ago can snowball across version iterations and eventually crash the system. (Source: arXiv)

The paper's first author, Deng Gangda, a PhD student at the University of Southern California, told DeepTech that "the current commit and release granularity is either too fine or too coarse. Therefore, these development histories cannot reflect the process of software evolution."

Figure | Deng Gangda (Source: interviewee)

For the first time, the research team introduced the time dimension into the evaluation of AI programming capability. They adopted a new hierarchy, Milestones, to reconstruct the history of software evolution, creating functional units that preserve semantic completeness while retaining evolution dependencies. The benchmark requires the AI to complete multiple functional units in sequence on the same codebase, so that the output of each step is preserved and becomes the starting point for the next. (Source: arXiv)

To support extracting high-quality software evolution histories from large collections of open-source repositories, the researchers built an Agent-driven automated pipeline called DeepCommit on top of the capabilities of top-tier AI. For the first time, it reconstructs messy Git development records into a verifiable, functionally cohesive Milestone dependency graph (Milestone DAG) and builds an evaluation environment for each Milestone. The pipeline has three stages: Git history preprocessing, Agent-driven DAG construction, and Milestone environment configuration and verification.

In practice, reconstructing evolution history with Milestones is not easy, because it is not just about constructing a static, purely observable DAG, but about producing a sequence of executable evaluation environments while preserving correctness as evolution dependencies change. When you disrupt the overall order of commits and regroup and reconnect them, you may run into commits that cannot be applied, interfaces that do not align, and massive compilation errors. To address this, the researchers designed an iterative repair loop: the Agent proactively analyzes the error logs and dynamically modifies the Dockerfile to ensure executability. More importantly, it supplements implicit dependencies missed in the original DAG by adjusting the sequencing constraints between Milestones, so that interface conflicts are resolved properly. After repeated iterations, they ultimately collected 87.1% of the original test cases correctly.

"Compared with single programming-task scenarios, stable, reliable, and effective long-horizon autonomous programming is a more cutting-edge research hotspot. For example, Anthropic and OpenAI have clearly stated that they have shifted their focus to training long-horizon programming capabilities," Deng Gangda said.
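Structurally, the Milestone DAG described above is a dependency graph whose topological order fixes the sequence in which the Agent must implement features. A minimal sketch of that idea (illustrative only, not the authors' actual code or data):

```python
from graphlib import TopologicalSorter

# Milestones and their evolution dependencies (hypothetical example):
# a milestone can only be attempted once its prerequisites are implemented.
milestone_deps = {
    "m1_parser": set(),
    "m2_config_loader": {"m1_parser"},
    "m3_plugin_api": {"m1_parser"},
    "m4_cli": {"m2_config_loader", "m3_plugin_api"},
}

# A valid evaluation order places every milestone after its dependencies,
# so each step starts from the codebase the previous steps produced.
order = list(TopologicalSorter(milestone_deps).static_order())
print(order)  # e.g. ['m1_parser', 'm2_config_loader', 'm3_plugin_api', 'm4_cli']
```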
Figure | DeepCommit pipeline architecture (Source: arXiv)

The researchers compared the evolution graphs automatically generated by DeepCommit with manual annotations by human experts. To their surprise, the two use different organizational logics and complement each other. Human experts' Milestones usually sit within a local time window: the experts first define the topic and then reorganize the commits, a top-down semantic decomposition. DeepCommit, to guarantee accuracy, reconstructs the software-evolution storyline bottom-up from the dependency relationships between commits, placing greater emphasis on topological structure and execution constraints. For evaluation purposes this is exactly the point: DeepCommit's value lies in extracting an executable, verifiable Milestone structure from a project's development history. The results show that DeepCommit can filter out high-quality Milestone tasks suitable for evaluation, executable and verifiable in real environments, which underwrites the reliability of the benchmark.

Once it enters real development, why do model scores collectively halve?

EvoClaw covers five mainstream languages: Python, Java, Go, Rust, and TypeScript, and the selected projects span real development cycles of up to 750 days. For evaluation metrics, the research team did not use a simple pass rate. Instead, they scored each Milestone by combining two core dimensions, Recall and Precision, via an F1 score: Recall measures functional completeness, while Precision captures how much the model breaks existing code when adding new functionality (a minimal sketch of this scoring follows).
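Under one plausible reading of that metric (Recall as the pass rate on the new feature's tests, Precision as the share of previously passing tests that still pass), the per-Milestone score would look like this sketch; the exact test-set definitions are the paper's, not reproduced here:

```python
def milestone_score(recall: float, precision: float) -> float:
    """Harmonic mean (F1) of Recall and Precision for one Milestone.

    Illustrative reading of the metric described above: Recall rewards
    implementing the new feature; Precision rewards not breaking the
    tests that already passed before this Milestone.
    """
    if recall + precision == 0:
        return 0.0
    return 2 * recall * precision / (recall + precision)

# A model that fully implements the new feature but breaks half the old
# tests scores well below its Recall alone:
print(milestone_score(recall=1.0, precision=0.5))  # ~0.667
```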
The research team tested various framework and model combinations such as Claude Code and OpenHands. In independent evaluations, top models generally score 80%-90%, but on the EvoClaw benchmark they all drop sharply. Claude Opus 4.6, the highest scorer, achieves only 38.03%.

Figure | EvoClaw main experimental results (Source: arXiv)

GPT 5.3 Codex achieves a combined score of 28.88%, second only to Opus 4.6. By repository, GPT 5.3 Codex performs weaker on two Rust projects (Nushell, ripgrep), while in the other repositories it approaches or even exceeds Opus 4.6. In terms of complete resolution rate, even the highest scorer, Gemini 3 Pro, reaches only 13.37%, and most of the correctly implemented work is on tasks with no prior dependencies.

The researchers report keeping the overall cost within a reasonable range: with Claude Opus 4.5, a full evaluation costs about 500 USD; Kimi K2.5 and Gemini 3 Flash come in under 50 USD, and smaller models cost even less. (Source: arXiv)

So, given a longer development window, will a model eventually finish 100% of the project? The study's answer is no: however long the development window, every model's performance ultimately hits a ceiling. The later a task falls in the sequence, and the deeper it sits in the DAG hierarchy, the lower the scores and resolution rates. Saturating-curve extrapolation shows that even for the best model, Opus 4.6, the cumulative score is capped at a sub-linear asymptote around 45%. "Although Anthropic's website says Opus 4.6 performs better than 4.5 on long-horizon tasks, no detailed evaluation metrics are provided. EvoClaw verifies their claim from another angle," said Deng Gangda.

In addition, the experiments reveal significant differences among model families. The performance of Claude and GPT in continuous-evolution scenarios improves steadily with version updates: Opus 4.6 demonstrates the best system-maintenance performance on long-horizon programming, while GPT 5.3 ranks second because its poor performance on the Rust dataset drags down its score. (Source: arXiv)

Somewhat surprisingly, the Gemini family shows a completely different trend: from 3 Flash to 3 Pro to 3.1 Pro, each generation starts stronger and performs better early on, but its long-range performance shows almost no significant improvement. Deng Gangda explained: "The obvious decline in Gemini's long-horizon performance means that it not only gets worse at instruction following, increasingly disregarding the Software Requirements Specification (SRS), but also fails to maintain the software system it has built."

When the researchers broke the overall scores down into Recall and Precision, a more interesting pattern appeared. Recall grows almost continuously, approaching linear growth: even as the codebase becomes more chaotic and fragile, the Agent remains good at implementing newly assigned target functions. The real bottleneck is Precision: the Agent struggles to maintain existing systems, and regression errors accumulate faster than it can fix them. This is the fundamental reason long-term development ultimately stalls.

Figure | Left: error-chain schematic; right: error-chain distribution (Source: arXiv)

To better understand why models lose control during iteration, the research team proposed an error-chain analysis framework. They tracked each test from the first time it failed and observed whether the error was inherited, spread, skipped, or fixed in subsequent Milestones. The results show that the rate at which new problems arise does not accelerate, and the model can even passively repair a fair number of historical errors, but earlier errors accumulate far faster than they are repaired, ultimately leading to "technical-debt bankruptcy."
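The error-chain bookkeeping described above can be pictured as tracking one test's pass/fail status across consecutive Milestones and labelling each transition. A minimal sketch under that reading, covering a simplified subset of the article's categories (it omits "spread" and "skipped," which would require cross-test information):

```python
def classify_error_chain(passed: list[bool]) -> list[str]:
    """Label each Milestone transition for one test, given whether the
    test passed at each consecutive Milestone. Illustrative reading:
    a failure is 'introduced' when a passing test first breaks,
    'inherited' while it stays broken, 'fixed' when it passes again."""
    labels = []
    for prev, curr in zip(passed, passed[1:]):
        if prev and not curr:
            labels.append("introduced")
        elif not prev and not curr:
            labels.append("inherited")
        elif not prev and curr:
            labels.append("fixed")
        else:
            labels.append("ok")
    return labels

# A regression that lingers for one Milestone before being repaired:
print(classify_error_chain([True, False, False, True]))
# ['introduced', 'inherited', 'fixed']
```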
A general testbed for debugging the AI Harness

A concept called "Harness Engineering" has recently become very popular: configuring the entire software development process into an environment suitable for Agent involvement. The EvoClaw benchmark provides a general playground for evaluating long-horizon code evolution, making it suitable for debugging AI Harness frameworks. For example, in the failure cases discussed in this study, if an Agent suddenly iterates very aggressively or keeps editing and re-verifying, it likely means the Agent has run into difficulties. By constructing guardrails at the corresponding points, you can discover problems early and enable timely human intervention, improving efficiency.

Since model architecture gives Agents the general property of being much better at implementing new features than at maintaining long-standing old ones, will this lead to new software forms and development modes in the future? For example, software may place greater emphasis on flexibility, compatibility, and more reliable large-scale refactoring; or it may become even more "disposable," with specific business logic generated in real time and never maintained, while effort concentrates on strengthening reusable components and infrastructure. The research team believes that by appropriately loosening constraints on software quality in the development process, you can trade fewer human interventions for greater throughput, ultimately accelerating software iteration.

Deng Gangda pointed out that "this study proves we are on the right path: AI's long-term programming ability has not yet hit a bottleneck and can improve steadily over time. One day, the quantitative change in leaderboard scores may turn into a qualitative change that changes the world." As the technology develops, AI may move from gradually reducing human involvement in software development, to independently proposing new requirements to evolve the codebase, and ultimately to surpassing humans, leaving them behind, and achieving continuous self-evolution.

References:
1. The paper:
2. Project homepage:

Layout: Liu Yakun