Apple is reportedly accelerating development of proprietary chips for next-generation smart glasses and artificial intelligence (AI) servers, signaling a significant expansion of its custom silicon strategy beyond consumer devices such as the iPhone, iPad, and Mac.
According to sources familiar with the matter, Apple is working on two parallel chip initiatives — one aimed at powering lightweight augmented reality (AR) wearables, and the other designed to handle intensive AI workloads in data centers. These developments reflect Apple’s growing ambition to control more of its hardware and software stack as it deepens its push into emerging technologies like spatial computing and generative AI.
Custom Chips for Smart Glasses
The chip designed for smart glasses is reportedly engineered to be ultra-efficient, prioritizing low power consumption, minimal heat output, and compact size — all critical requirements for wearable devices. Sources say the new chip builds upon Apple’s experience with the H-series processors used in AirPods and the S-series chips in Apple Watch.
Unlike the current Apple Vision Pro, which is a full-fledged spatial computer, the upcoming smart glasses are expected to be lighter and geared toward everyday use, such as notifications, navigation, and contextual information overlays. Industry observers believe the device could integrate tightly into Apple's ecosystem, possibly leveraging on-device Siri and advanced sensors to provide a hands-free, glanceable interface.
This move could place Apple in direct competition with other tech giants such as Meta and Google, both of which are also investing heavily in AR wearables.
AI Server Chips for Data Centers
In parallel, Apple is reportedly designing specialized AI server chips intended for internal use in its data centers. These chips would be optimized for training and deploying large language models (LLMs) and other advanced AI systems that support features across Apple’s services — including Siri, search, and on-device intelligence in iOS and macOS.
The AI server chip initiative marks a strategic shift for Apple, which has historically relied on third-party infrastructure, such as NVIDIA GPUs and cloud-based AI services, to run demanding machine learning workloads. By developing its own silicon, Apple aims to gain more control over performance, energy efficiency, and data privacy, especially as it leans into on-device and hybrid AI architectures.
Analysts suggest this could also be a prelude to a wider AI strategy, possibly including AI-enhanced developer tools, smarter personal assistants, and broader integration of generative AI across Apple’s ecosystem.
Broader Implications
Apple’s deeper investment in custom chips aligns with its long-term vision of reducing dependence on external suppliers while tailoring hardware to its unique ecosystem. The company’s in-house silicon team, which already develops the A-series, M-series, and other specialized chips, is believed to be expanding aggressively, hiring experts in AI, AR/VR, and chip design.
While Apple has yet to officially confirm these projects, industry insiders speculate that more details could emerge at the company’s annual Worldwide Developers Conference (WWDC) in June, where Apple is expected to showcase its advancements in AI and possibly tease new hardware platforms.
As competition intensifies in both the wearable AR space and enterprise AI infrastructure, Apple’s dual chip initiatives may serve as a foundation for its next wave of innovation — extending its control from pocket devices to glasses and server racks alike.