AI Glasses: The Dawn and Dilemma of the Next Mobile Terminal

When Xiaomi AI Glasses entered the consumer market in 2025 at a price of 1,999 yuan, they sold over 70,000 units in the first week yet faced a return rate of roughly 40%; when Meta Ray-Ban Display sold out within 48 hours of its U.S. launch, daigou (cross-border purchasing) prices on Xianyu surged by nearly 100%. Industry frenzy and reality check are unfolding around AI glasses at the same time. From feature phones to smartphones, every iteration of mobile terminals has stemmed from an interaction revolution and the reconstruction of usage scenarios. Today, AI glasses stand at this historical juncture: they carry the expectation of becoming the "gateway to the spatial computing era" while confronting dual challenges from technology and the market. (Meta, "Meta Reports Fourth Quarter and Full Year 2024 Results")

Technological Breakthroughs: From Concept Devices to Consumer-Grade Products

The rise of AI glasses rests first on continuous breakthroughs in underlying technologies, which form the core foundation for their potential as the next-generation terminal. In the chip sector, Qualcomm's Snapdragon AR1+ Gen 1 platform achieves a 20% reduction in package size and supports on-device operation of small language models with around 1 billion parameters, enabling products such as Rokid Glasses to shed weight while extending battery life by 30%.
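The phrase "on-device operation of small language models with 1 billion parameters" is easiest to picture as a quantized model file executed by a local inference runtime, with no cloud round trip. The sketch below is a minimal illustration using the open-source llama-cpp-python bindings; the model file name, thread count, and prompt are placeholders assumed for illustration, and none of this reflects Qualcomm's actual AR1+ software stack.

```python
# Minimal sketch: running a quantized ~1B-parameter language model fully on-device.
# Assumes the open-source llama-cpp-python package and a local GGUF model file;
# the file name below is a placeholder, not a real Qualcomm/Rokid asset.
from llama_cpp import Llama

llm = Llama(
    model_path="tiny-1b-q4.gguf",  # hypothetical 4-bit quantized 1B model
    n_ctx=2048,                    # small context window to fit wearable memory
    n_threads=4,                   # modest CPU budget typical of glasses-class SoCs
)

response = llm(
    "Summarize this notification in one short sentence: "
    "'Your flight DL-123 boards at gate B7 in 25 minutes.'",
    max_tokens=32,
)
print(response["choices"][0]["text"].strip())
```

Because the model and the prompt never leave the device, this pattern trades raw model quality for lower latency and better privacy, which is the usual argument for on-device small language models on wearables.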

In display and interaction technologies, innovations have made "seamless wearing" a reality. Meta Ray-Ban Display adopts a non-intrusive HUD (head-up display): a discreet color display area at the lower edge of the right lens presents key information such as navigation and notifications in real time without blocking the user's field of vision. Combined with a lightweight design of 69 grams, it breaks free from the bulkiness of early AR devices.

Optical technology advancements are equally crucial. Optical waveguide modules have been thinned to 1.2 mm, with light transmittance raised to 85%, eliminating the dizziness associated with earlier AR devices. Falling Micro LED costs have allowed products such as the Thunderbird X3 Pro to reach a brightness of 5,000 nits, making them usable outdoors. These technological leaps collectively transform AI glasses from geek toys into daily accessories.

Market Signals: Growth on the Eve of Explosion

Market data is sending positive signals. Global AI glasses shipments reached approximately 2 million units in 2024, and the figure is expected to soar to 80 million units by 2030, with penetration rising from 0.1% to 4.3%. In China, the transaction volume of smart glasses surged more than tenfold year-on-year in the first half of 2025, while the number of participating brands tripled; on average, a new product was launched every 9 days.

Ecosystem Competition: From Device Stacking to System Collaboration

The ultimate competition among mobile terminals has always been about ecosystems. To replace smartphones, AI glasses must break free from the positioning of "phone accessory" and build an independent, open ecosystem. The industry currently shows two typical paths: Xiaomi integrates AI glasses into its "human-car-home" full ecosystem, using Pengpai OS (Xiaomi HyperOS) to achieve capability synergy with phones and cars, making the glasses an "extended organ of the phone"; Meta, on the other hand, deeply integrates social ecosystems such as WhatsApp and Instagram, turning first-person shooting and real-time sharing into core scenarios.
Nevertheless, ecosystem construction is still in its early stages. On the hardware side, no manufacturer has mastered the entire industrial chain from materials and optics to algorithms, leaving most products as mere "parameter stacking." On the software side, an application ecosystem optimized for glasses is still missing: over 90% of functions rely on mirroring phone apps, with response delays generally exceeding 200 ms. The absence of industry standards further exacerbates fragmentation, as the optical protocols and interaction logic of different brands are incompatible.
Positive signals, however, are emerging. Qualcomm has joined hands with Xiaomi, Rokid, and other manufacturers to build an on-device AI ecosystem and to push the standardization of multimodal interaction technologies. Huawei uses HarmonyOS to realize cross-device computing power scheduling, allowing AI glasses to tap the spare computing power of a paired phone for complex tasks. Industry insiders predict that once the "trinity" of optical display, AI chips, and vertical scenarios takes shape, AI glasses will shed their dependence on phones and become truly independent terminals.
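One way to picture cross-device computing-power scheduling is as a simple placement decision: run a task on the glasses when it fits the local latency budget, otherwise hand it to the paired phone over a short-range link. The sketch below is a rough illustration under assumed numbers; the Task fields, throughput figures, and place() helper are hypothetical and do not represent HarmonyOS or any vendor's actual scheduler.

```python
# Illustrative sketch of a glasses-to-phone offload decision.
# All numbers, fields, and helpers are hypothetical; this is not the HarmonyOS
# scheduler or any vendor SDK, just the general shape of the idea.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    est_ops: float            # estimated compute in TOPS-seconds (assumed metric)
    latency_budget_ms: float  # how long the user is willing to wait

GLASSES_TOPS = 1.0            # assumed on-device NPU throughput
PHONE_TOPS = 12.0             # assumed paired-phone throughput
LINK_RTT_MS = 20.0            # assumed round-trip time of the local wireless link

def place(task: Task) -> str:
    """Return 'glasses' or 'phone' depending on where the task meets its latency budget."""
    local_ms = task.est_ops / GLASSES_TOPS * 1000.0
    remote_ms = task.est_ops / PHONE_TOPS * 1000.0 + LINK_RTT_MS
    if local_ms <= task.latency_budget_ms:
        return "glasses"      # keep it on-device: no radio cost, no link dependency
    if remote_ms <= task.latency_budget_ms:
        return "phone"        # offload: phone compute outweighs the link overhead
    return "phone"            # neither fits; fall back to the faster option

if __name__ == "__main__":
    print(place(Task("wake-word detection", est_ops=0.002, latency_budget_ms=50)))
    print(place(Task("scene captioning", est_ops=0.5, latency_budget_ms=300)))
```

A real scheduler would also weigh radio energy, link reliability, and privacy, but this latency trade-off is the core of why spare phone compute matters to lightweight glasses.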

Future Choices: The Critical Battle of Scenarios Defining Terminals

The success of smartphones lies in their integration of full-scenario needs such as communication, entertainment, and payment. To surpass them, AI glasses must find their own "killer applications." Current market feedback suggests that users' demand for display far exceeds their demand for AI: the success of Meta Ray-Ban Display stems mainly from its HUD solving the "taking out the phone" pain point in high-frequency scenarios such as navigation and notifications; by contrast, Xiaomi AI Glasses, despite their affordable price, suffered a high return rate because they lack a display.
Breakthroughs in vertical scenarios may be the key to resolving this dilemma. In the industrial sector, Pinming construction-inspection glasses enable real-time identification of equipment failures, cutting downtime by 20%. In the medical field, AR surgical-navigation glasses project operation steps directly into the doctor's field of vision, improving surgical accuracy. On the consumer side, functions such as real-time translation and sports posture analysis have already demonstrated practical value.
The pace of technological iteration will determine the speed of adoption. According to Actions Semiconductor's roadmap, dedicated chips with 1 TOPS of computing power will reach an energy efficiency of 15.6 TOPS/W by 2026, enough to support full-featured AR applications. In optics, the cost of Micro LED is expected to drop by 50% within three years, bringing consumer-grade products below 2,000 yuan. Combined with a mature ecosystem, AI glasses could then replicate the growth curve of smartphones.
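To put the efficiency figure in context, a back-of-the-envelope calculation (assuming the chip actually sustains its rated 1 TOPS at the stated efficiency) gives the power draw of the compute core alone:

\[
P \approx \frac{1\ \text{TOPS}}{15.6\ \text{TOPS/W}} \approx 0.064\ \text{W} \approx 64\ \text{mW}
\]

Display, radios, cameras, and sensors consume power on top of this, but a compute budget in the tens of milliwatts is the kind of figure that makes always-on AI features plausible on a glasses-sized battery.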

Conclusion: Dawn Is Ahead—Progress Lies in Action

Professor Qiao Xiuquan of Beijing University of Posts and Telecommunications has pointed out that we are evolving from the mobile computing era to the spatial computing era, and that smart glasses integrating AR and AI will eventually become the core of next-generation human-computer interaction. This process, however, will not happen overnight: it requires closing the technological gap across performance, weight, and battery life, and solving the market problems of ecosystem, cost, and privacy. The popularity of Meta Ray-Ban Display proves that when products address real pain points, market demand is real; the industry's high return rates warn that technical stacking divorced from user needs will eventually be abandoned. When dedicated chips break through power-consumption bottlenecks, when optical costs fall to a level acceptable to the general public, and when the ecosystem upgrades from "device connection" to "service collaboration," AI glasses may complete the transformation from "optional gadget" to "must-have terminal," ushering in a hands-free era of spatial computing.