The PLA Daily Warning: A Shift in Military Rhetoric
The Controversial Article
In July 2025, the PLA Daily—considered the authoritative mouthpiece of China’s armed forces—sent shockwaves across China’s strategic community by directly challenging the push for autonomous military robotics. Authored by Yuan Yi, Ma Ye, and Yue Shiguang, the editorial explicitly cautioned that deployment of humanoid robots in combat could lead to “indiscriminate killings and accidental deaths,” raising fears over both military escalation and international condemnation. The article notably referenced Isaac Asimov’s celebrated Three Laws of Robotics, particularly the prohibition against causing harm to humans, insisting that the current trajectory “clearly violates” these ethical boundaries.
This about-face stands in stark contrast to just two months earlier, when the PLA Daily lauded the transformative impact of automation on the future of warfare, directly mirroring Xi Jinping’s vision. The July article argued that military humanoids remain at an “embryonic stage,” with understanding of their risks lagging far behind the pace of deployment. By invoking the specter of legal and ethical catastrophe, it signals unusual unease within a military establishment normally aligned with party directives and propaganda.
Timing and Context
The timing of this critical intervention is no coincidence. China recently showcased what it billed as the world’s first fully autonomous AI football match—a milestone for its robotics sector that reinforced the narrative of technological dominance. The same period also saw PLA drills simulating urban combat, drawing explicit attention to the kind of high-risk scenarios humanoid robots are being designed to enter.
PLA Daily occupies a unique role in China’s information architecture, operating under the Central Military Commission and directly reflecting currents within the officer corps and defense policymaking spheres. Its sudden pivot from advocating battlefield robots towards issuing public warnings signals not only a substantive policy debate but also rising anxiety about global scrutiny should these machines malfunction during a crisis, such as a conflict over Taiwan.
China’s Humanoid Robot Advancements: Key Players and Technologies
UBTECH and the Tiangong Robot
Leading China’s humanoid robotics surge is UBTECH Robotics, which heads the prestigious Beijing Humanoid Robot Innovation Center. Its flagship product, the Tiangong robot, stands 1.63 meters (5 feet 4 inches) tall, weighs just 43 kilograms (95 pounds), and can run at 12 kilometers per hour while traversing rugged terrain. Tiangong is equipped with 3D binocular vision, high-precision inertial measurement, and a CPU capable of 550 trillion operations per second, allowing real-time adaptation to dynamic surroundings.
The Tiangong platform is designed under the principle of “military-civil fusion,” blending commercial innovation with explicit military applications. Wang Yonghua, a scholar at the PLA’s Academy of Military Science, has championed these efforts, noting that such machines can “simulate human activities with greater reliability” and are well suited to replace human soldiers in hazardous, unpredictable environments. Field tests have demonstrated Tiangong’s ability to scale slopes, leap obstacles, and navigate simulated urban battlefields, fueling speculation about near-term deployment in PLA units.
Xiaomi’s CyberOne
Consumer electronics titan Xiaomi has brought its ambition to the military robotics sphere with the development of CyberOne. This 1.77-meter-tall, 52-kilogram humanoid features 21 degrees of freedom—a measure of limb articulation—and has an arm span of 1.68 meters with a payload capacity of 1.5 kilograms in each hand. In addition to a 3D vision system, CyberOne integrates AI software able to recognize 85 environmental sounds and interpret 45 human emotional cues, hinting at advanced situational awareness potential for tactical scenarios.
While CyberOne’s initial launch targeted consumer and research applications, its modular design and rapid response time—joints react in just 0.5 milliseconds—make it a strong candidate for adaptation to defense tasks. The robot’s pattern recognition capabilities and robust frame are engineered to operate in unpredictable environments, aligning with PLA priorities for multi-role platforms that can adjust to rapidly changing operational theaters.
CloudMinds XR-1 and Sanctions
Another prominent player, CloudMinds, has propelled humanoid robotics with its advanced XR-1 robot. XR-1 pioneered cloud-AI integration with over 30 smart joints and precision grasping actuators powered by high-speed 5G connectivity. This configuration, highly adaptable for teleoperation and large-scale coordination, demonstrates growing interest in networked robot battalions for reconnaissance, logistics, and combat support.
However, CloudMinds has faced obstacles due to U.S. national security sanctions enacted in 2020. The restrictions underscore growing fears among Western governments about the dual-use nature of such platforms, capable of both civilian service and direct military deployment, particularly in light of China’s demonstrated capacity to scale production orders into the tens of thousands annually. While these sanctions have hampered some overseas partnerships, CloudMinds continues to play a major role in domestic PLA research and procurement.
Military Advantages of Humanoid Robots: The PLA’s Vision
Versatility in Combat Environments
Military strategists like Wang Yonghua present a compelling portrait of the potential utility of humanoid robots in China’s future wars. Unlike industrial robots or ground drones limited to smooth factory floors or open terrain, bipedal humanoids are uniquely suited to navigate complex urban environments: climbing stairs, opening doors, manipulating human tools, and driving vehicles. This versatility is crucial in an envisioned Taiwan invasion, where the outcome may hinge on the ability to operate under intricate, human-designed infrastructure.
Zhang Yicheng, writing in PLA Daily, describes China’s emerging battlefield humanoids as equipped with a perception “brain” to process sensor data in real time, and a “cerebellum” to coordinate balance and locomotion, yielding a platform capable of both nuanced decision-making and robust mobility. These machines offer not only the speed and endurance to keep pace with mechanized units but also the dexterity to perform tasks previously reserved for highly trained infantry.
Scalability and Swarm Tactics
Reflecting on recent lessons from the Ukraine conflict, PLA innovators see swarm tactics as a key advantage of robot deployments. A coordinated cluster of humanoid robots, much like a drone swarm, can overwhelm defenses, disperse to avoid targeting, and exploit even small breaches in enemy lines. The scalability of these systems promises a leap in force projection, with each robot acting as a force multiplier within PLA network-centric operations.
By integrating AI-driven coordination and responsive data links, commanders can direct multiple units simultaneously, adjusting strategies on the fly in response to changing threats. This fluid, collective operation is considered essential to seizing and holding ground in rapidly evolving emergencies, whether counter-terrorism raids or full-scale amphibious assaults.
Survivability and Sacrificial Roles
Perhaps the most controversial benefit cited by military proponents is the potential to use humanoid robots as expendable “sacrificial assets.” In high-risk combat zones—such as dense urban fighting or initial assault waves—robots could draw fire, breach obstacles, or conduct reconnaissance ahead of manned teams, reducing the risk to human soldiers while gathering vital tactical data. In extreme scenarios like contested street battles in Taiwan, robot decoys could confuse defenders and shield core forces during breakthroughs.
This calculated expendability, enabled by rapid manufacturing and scalable deployment, introduces a new dimension to operational planning. However, it also amplifies ethical concerns: should a robot acting entirely on its own authority cause unintended casualties, human military commanders may face legal and political blowback, further intensifying the PLA’s internal debate.
Ethical and Technical Challenges: The Roadblocks to Deployment
Ethical Concerns: From Asimov to Accidental Deaths
The most profound obstacles to full-scale deployment of humanoid war robots lie in the realms of ethics and accountability. The recent PLA Daily article warned explicitly of the risk of “indiscriminate killings and accidental deaths,” underscoring anxieties over uncontrollable escalation and violations of humanitarian law. By invoking Asimov’s framework, the authors revive age-old debates about whether machines—lacking empathy and moral intuition—should ever be entrusted with life-and-death decisions.
International legal standards, such as the Geneva Conventions, explicitly prohibit indiscriminate targeting and require assessment of proportionality and distinction. These concepts are notoriously difficult to encode in machine logic, particularly in dynamic, fog-of-war settings. China’s emergence as a forerunner in this sphere brings uncomfortable attention to the fact that, should robots commit atrocities, both state and programmer could be held accountable in global courts—a risk PLA Daily’s authors are urging policymakers to confront before crisis strikes.
Technical Hurdles: Moravec’s Paradox and Trust Issues
Setting aside philosophy, major technical barriers remain. Moravec’s Paradox captures a persistent truth of AI: machines can outperform humans at abstract reasoning and calculation, yet are frequently confounded by perceptual and situational judgments humans find effortless, such as distinguishing friend from foe in a crowd, interpreting unexpected gestures, or responding to rapidly shifting threats. In the chaos of battle, even minor errors could cascade into catastrophic miscalculations.
Trust in battlefield AI is further undermined by vulnerabilities to hacking and electronic warfare. A compromised robot could be reprogrammed to betray its operators or cause friendly fire incidents, a scenario previously highlighted by PLA technologists and security analysts alike. Meanwhile, such systems make heavy demands on infrastructure, requiring resilient, ultra-low latency 5G and 6G networks to ensure real-time responsiveness—capabilities still unevenly distributed across China’s national grid.
Finally, machine learning algorithms require vast quantities of live combat data for training—data that is scarce, classified, and often impossible to replicate outside actual battle. This data deficit raises the risk of errors in unanticipated engagements, where robots may confront novel scenarios for which they are entirely unprepared. These realities have led even the most bullish Chinese strategists to call for a measured, “step-by-step” research approach rather than reckless scaling.
Global Military Robotics Landscape: China’s Lead and International Context
China’s Dominance in Automation
China’s robotics industry is unmatched in the global landscape, having installed more than half of all new robots worldwide in 2022–2023—over 276,000 units. While most are industrial, the same ecosystem supports rapid military prototyping, making mass deployment of humanoid robots a viable prospect for the PLA years ahead of most competitors. This infrastructure gives China a genuine “automation asymmetry”: a capacity not only to field emerging battlefield robots but also to refine and scale them with unmatched speed and efficiency.
Within the PLA’s “intelligentization” paradigm, AI, quantum technology, and robotics are merged to outmaneuver adversaries both on the battlefield and in the broader theater of hybrid warfare. However, mass adoption is now shadowed by institutional reluctance, as underscored by the ethical objections and technical warnings circulating in top PLA circles.
U.S. Priorities: Fighter Jets and Drones Over Humanoids
Despite massive investments in AI, the United States has not prioritized humanoid robots as a frontline military solution. U.S. defense doctrine currently focuses on next-generation fighter aircraft, precision-guided missiles, and deployable drone swarms rather than full-featured bipedal machines. This reflects an assessment that, while humanoids have long-term potential, their current reliability and controllability lag behind more mature remote-operated and semi-autonomous equipment.
Western defense analysts have noted the escalating deployment of unmanned ground vehicles and robotic logistics platforms within the U.S. Army, Marine Corps, and NATO partners, but none approach the anthropomorphic sophistication now emerging from Chinese labs. The result is a divergence in technological paths: China invests in platforms that mimic the human form, while the U.S. perfects systems optimized for speed, force, and remote operation.
International Reactions and Arms Race Implications
The global community is watching closely. Japan, Russia, and South Korea are all developing advanced robotics for disaster response or limited security operations, but none are currently fielding humanoid systems at the scale China proposes. Recent tensions—such as Japanese protests over Chinese military flights—underscore the regional frictions that could be exacerbated by deployment of humanoid assets in maritime or urban gray zones.
International arms control organizations have so far failed to reach consensus on limits for lethal autonomous robots, with repeated United Nations summits yielding only vague principles. China’s internal debate about ethical boundaries, now made public, will likely shape how other powers approach their own research pipelines and build the case for new regulatory frameworks.
Conclusion
The PLA Daily’s warning marks a watershed moment in the high-stakes contest over China’s military modernization. The call for “ethical and legal research” before deploying humanoid robots on battlefields signals a rare act of institutional dissent, breaking ranks with Xi Jinping’s vision of seamless intelligentized warfare. This internal struggle reflects not only anxiety about legal and reputational consequences but also a deep-seated belief amongst some commanders that technological prowess must be balanced with ethical restraint.
As China stands at the precipice of mass deployment, its choice will reverberate far beyond its own borders—setting standards, provoking arms races, or catalyzing new international norms for AI in combat. The debate underway among PLA strategists and state media is not about abandoning innovation, but about managing the destructive power technological breakthroughs can unleash on the battlefield. Ultimately, as the authors themselves suggest, the challenge is to anticipate the unforeseen consequences that even the most visionary leadership cannot predict—a burden that may define the next era of warfare as much as any hardware or code.
Frequently Asked Questions
What are the main humanoid robots developed in China for military use?
China’s leading humanoid military robots include Tiangong by UBTECH Robotics, CyberOne by Xiaomi, and XR-1 by CloudMinds. Each is designed with advanced AI sensors, robust locomotion, and potential for deployment in complex environments.
Why has the PLA Daily criticized battlefield robots?
The PLA Daily has warned that humanoid battlefield robots risk causing indiscriminate killings and accidental civilian deaths due to limitations in AI decision-making. Concerns over accountability, hacking vulnerabilities, and compliance with international law were major reasons cited for the call for stricter ethical and legal research before deployment.
How does China’s humanoid robot strategy differ from the U.S.?
China is pressing ahead with bipedal humanoid robots that imitate human capabilities for direct battlefield applications, drawing from its strengths in industrial automation and AI. By contrast, the U.S. is prioritizing next-generation stealth fighters, drones, and remote-operated vehicles, seeing humanoid robots as less reliable and harder to control in current technological conditions.
What ethical frameworks are being referenced in the debate?
The PLA Daily cited Isaac Asimov’s Three Laws of Robotics alongside international legal standards such as those embedded in the Geneva Conventions. Central to the debate is whether robots can reliably distinguish between combatants and civilians and make proportional use-of-force decisions on their own.