LG CNS, the IT services arm of South Korea's LG Group, has launched its PhysicalWorks platform to train and manage mixed robot fleets through one unified software layer, according to The Korea Herald. At a demonstration, four robots from Unitree, Deep Robotics, Dexmate, and Bear Robotics moved boxes without remote control, completing a handoff across two to three meters in 90 seconds. The company said the system combines simulation and video training with software that reassigns work in real time, including switching equipment during emergencies: in one case, it diverted a quadruped to patrol duty and reassigned its task to a Bear Robotics cart.
Platform Capabilities and Deployment
The PhysicalWorks system manages robots from different manufacturers through a single control layer, addressing a fragmented market in which machines from separate vendors typically require custom engineering to work together. According to the report, this unified approach could make automation easier to adopt by letting businesses select the best robot for each job without being locked into a single vendor's ecosystem.
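To make the idea of a unified control layer concrete, here is a minimal sketch of the pattern: vendor-specific drivers hidden behind one common interface, so fleet logic is written once. Every class, method, and identifier below is a hypothetical illustration, not LG CNS's actual API.

```python
# Hypothetical sketch of a unified control layer over heterogeneous robots.
# None of these names come from LG CNS; they only illustrate the pattern of
# hiding vendor-specific APIs behind one shared interface.
from abc import ABC, abstractmethod


class RobotAdapter(ABC):
    """Common interface every vendor-specific driver must implement."""

    @abstractmethod
    def move_to(self, x: float, y: float) -> None: ...

    @abstractmethod
    def pick(self, item_id: str) -> None: ...

    @abstractmethod
    def place(self, item_id: str) -> None: ...


class UnitreeAdapter(RobotAdapter):
    """Wraps a (hypothetical) Unitree SDK behind the common interface."""

    def move_to(self, x: float, y: float) -> None:
        print(f"[unitree] walking to ({x}, {y})")

    def pick(self, item_id: str) -> None:
        print(f"[unitree] grasping {item_id}")

    def place(self, item_id: str) -> None:
        print(f"[unitree] releasing {item_id}")


class BearCartAdapter(RobotAdapter):
    """Wraps a (hypothetical) Bear Robotics cart API behind the same interface."""

    def move_to(self, x: float, y: float) -> None:
        print(f"[bear-cart] driving to ({x}, {y})")

    def pick(self, item_id: str) -> None:
        print(f"[bear-cart] loading {item_id}")

    def place(self, item_id: str) -> None:
        print(f"[bear-cart] unloading {item_id}")


def transfer_box(src: RobotAdapter, dst: RobotAdapter, item_id: str) -> None:
    """A handoff written once, runnable on any vendor pairing."""
    src.pick(item_id)
    src.move_to(5.0, 0.0)  # carry the box to the handoff point
    src.place(item_id)
    dst.move_to(5.0, 0.0)
    dst.pick(item_id)


transfer_box(UnitreeAdapter(), BearCartAdapter(), "box-42")
```

Because `transfer_box` depends only on the shared interface, swapping one vendor's robot for another means swapping the adapter, which is the vendor-neutrality the article describes.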
LG CNS reported that the platform can reduce robot deployment time from several months to approximately one to two months. The company is currently running over 20 proof-of-concept projects, with an executive noting that revenue generation may take one to two years.
LG CNS’s Robot Software Foundation
The PhysicalWorks launch builds on LG CNS's four decades as a systems integrator in the manufacturing sector. Over that period, the company has constructed IT infrastructure for manufacturers and developed expertise in linking legacy production software, an advantage it views as directly relevant to modern robotics integration.
PhysicalWorks builds on existing LG CNS tools such as Real Time Dispatcher (RTD), which sets task priorities and logistics movement conditions in real time. RTD can also control logistics equipment including Automated Guided Vehicles (AGVs), which are driverless vehicles used to move materials in factories.
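The dispatching behavior described above, prioritized tasks plus live reassignment when a robot is pulled away, can be sketched with a simple priority queue. This is an assumption about how such a dispatcher could work, not RTD's actual implementation; all names and rules are invented for illustration.

```python
# Illustrative sketch of priority-based dispatching with live reassignment.
# This is a hypothetical model, not LG CNS's Real Time Dispatcher.
import heapq
from dataclasses import dataclass, field


@dataclass(order=True)
class Task:
    priority: int                    # lower number = more urgent
    name: str = field(compare=False)


class Dispatcher:
    def __init__(self) -> None:
        self.queue: list[Task] = []
        self.assignments: dict[str, Task] = {}  # robot id -> current task

    def submit(self, task: Task) -> None:
        heapq.heappush(self.queue, task)

    def assign_next(self, robot_id: str) -> Task | None:
        if self.queue:
            task = heapq.heappop(self.queue)
            self.assignments[robot_id] = task
            return task
        return None

    def divert(self, robot_id: str, emergency: Task) -> None:
        """Pull a robot off its task and requeue that task for another robot."""
        interrupted = self.assignments.pop(robot_id, None)
        if interrupted is not None:
            heapq.heappush(self.queue, interrupted)  # someone else finishes it
        self.assignments[robot_id] = emergency


d = Dispatcher()
d.submit(Task(priority=2, name="move box A"))
d.assign_next("quadruped-1")                        # quadruped starts on box A
d.divert("quadruped-1", Task(priority=0, name="patrol"))
d.assign_next("bear-cart-1")                        # cart picks up the dropped task
```

The `divert` step mirrors the demonstration scenario from the article: a quadruped is redirected to patrol duty while its interrupted task is requeued and picked up by a cart.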
Artificial Intelligence and Adaptability
The platform incorporates a Robot Foundation Model (RFM) developed through LG CNS’s partnership with Skild AI, a US startup building AI systems for robots. The RFM aims to make robots more adaptable by enabling them to learn from workplace photos and video data, then act autonomously instead of requiring task-specific development for each action or direct control at every step.
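To make the contrast with task-specific programming concrete, below is a minimal sketch of the observe-then-act loop a foundation-model-driven robot might run. The model call and action schema are pure assumptions, since neither Skild AI's nor LG CNS's interfaces are public.

```python
# Hypothetical perception-to-action loop for a foundation-model-driven robot.
# The model interface and action schema are invented for illustration only.
from dataclasses import dataclass


@dataclass
class Action:
    kind: str    # e.g. "move", "grasp", "release"
    target: str


def query_foundation_model(image: bytes, instruction: str) -> Action:
    """Stand-in for an RFM inference call: image + instruction -> next action.

    A real model would be trained on workplace photos and video, as the
    article describes; this stub just returns a fixed action.
    """
    return Action(kind="grasp", target="box-42")


def control_loop(camera_frames: list[bytes], instruction: str) -> None:
    """Instead of hand-coding steps per task, ask the model what to do next."""
    for frame in camera_frames:
        action = query_foundation_model(frame, instruction)
        print(f"executing {action.kind} on {action.target}")


control_loop([b"<jpeg bytes>"], "stack the boxes near the loading dock")
```

The point of the pattern is that the per-task engineering moves out of the control code and into the model's training data, which is what "act autonomously instead of requiring task-specific development" refers to.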
LG CNS’s preparation for the launch included an 11-month development period, during which the company invested in Skild AI and acquired a stake in robotics firm Dexmate.