We are a provider of artificial intelligence (“AI”) inference systems-on-a-chip (“SoCs”), delivering high-performance perception and computing platforms for edge and endpoint AI applications. We are building an advanced AI computing infrastructure to make artificial intelligence accessible to everyone, creating an empowered future where AI improves lives for all. At the heart of our capabilities and SoC offerings is the Axera Neutron (愛芯通元) mixed-precision neural processing unit (“NPU”), a specialized processing architecture that delivers improved AI inference efficiency through advanced mixed-precision computing. This technology is crucial for the deployment of quantized models, enabling AI inference on edge and endpoint devices. In particular, the growing smart vehicle industry is contributing to the increasing adoption of this technology. Complementing the mixed-precision NPU is our Axera Proton (愛芯智眸) AI-ISP, the world’s first commercially scaled AI-enabled image signal processor. The Axera Proton AI-ISP is an advanced image signal processing engine that optimizes visual data in real time and at the pixel level, ensuring high-quality imaging even under challenging conditions. Together with our other technologies, such as the Pulsar2 toolchain and our software development kit (“SDK”), these innovations address both the fundamental “computation” requirements of AI inference and the critical “perception” applications that drive real-world value.

We adopt the fabless model and focus solely on chip design and sales. Our proprietary technology platform embodies an integrated and universal architecture, enabling efficient reuse of IP cores across multiple applications. This scalable approach allows us to rapidly develop, commercialize, and iterate our SoCs with flexibility and speed. It has enabled us to scale the production of visual on-device AI inference SoCs and expand into growing markets such as smart vehicles and emerging markets such as edge computing, through continuous refinement of core technologies, improved computing power, and stringent control of power consumption. This platform strategy delivers dual competitive advantages: it significantly reduces R&D costs while accelerating product development cycles, cementing our leadership in AI inference SoCs.

Since our establishment, our selected core achievements include:

‧ We are the largest provider of mid-to-high-end visual on-device AI inference chips in the world, in terms of shipments in 2024, according to CIC. According to the same source, we are a leading player in the global visual on-device AI inference chip market, ranking among the top five players in 2024.

‧ Our Axera Proton AI-ISP is the world’s first AI-ISP technology commercialized at scale, according to CIC, marking a significant milestone in the computer vision industry.

‧ Our AI inference SoC shipment volume reached 9.3 million units in 2024.

‧ As of September 30, 2025, we had independently developed and commercialized five generations of SoCs, spanning dozens of types, that have achieved large-scale production in visual on-device computing, smart vehicle and edge AI inference applications.

Beyond our robust technical capabilities, we believe that achieving large-scale commercialization is critical for a fabless company like us to maintain financial health while executing a disciplined product strategy.
Since our inception, we have focused on translating our advanced technologies into market-proven products and achieving broad commercial adoption across multiple application scenarios. As of September 30, 2025, we had cumulatively delivered over 165 million SoC units since our inception. We primarily focus on AI inference chips for on-device computing, smart vehicle applications and edge AI inference, which are subsets of the overall AI inference chip market, itself a subset of the overall semiconductor market. In particular, our sales of visual on-device computing SoCs and edge computing SoCs experienced significant growth in 2024, increasing by approximately 69% and 400%, respectively, compared to 2023. As of September 30, 2025, cumulative shipments of our smart vehicle SoCs had reached over 518,800 units since launch. According to CIC, we have become the fifth largest provider of visual on-device AI inference chips globally in terms of shipment volume in 2024. In the realm of edge AI inference, we were the third largest provider in China in terms of shipment volume in 2024, according to the same source. This commercial scale validates our technology-to-market capabilities while creating a virtuous cycle for continued R&D investment and product innovation.

Our highly competitive and industry-acclaimed SoC products have driven rapid revenue growth during the Track Record Period. Specifically, from 2022 to 2024, our revenue increased from RMB50.2 million to RMB472.9 million, representing a CAGR of 206.8%. Furthermore, our revenue increased from RMB254.2 million in the nine months ended September 30, 2024 to RMB269.0 million in the nine months ended September 30, 2025. In 2022, 2023, 2024 and the nine months ended September 30, 2024 and 2025, we recorded loss for the year/period of approximately RMB611.6 million, RMB743.1 million, RMB904.2 million, RMB691.0 million and RMB855.7 million, respectively.

Our Market Opportunities

AI has emerged as one of the most vital technologies today. AI model architectures, which refer to the structural design and organization of the components and processes within an AI model, determine how data is processed, how models are trained and evaluated, and how predictions are generated. The Transformer architecture, one of the most popular AI model architectures of this decade, has become the foundation of nearly every major large AI model in use today. With the rapid advancement of large AI models based on the Transformer architecture, such as large language models (“LLMs”), visual language models (“VLMs”) and visual language action models (“VLAs”), both global tech giants and innovative startups have made significant investments in large AI model training and inference. Historically, large AI models have been deployed in the cloud due to their substantial computational and memory requirements. However, cloud-based deployment raises concerns around latency and privacy, driving the industry to explore edge inference frameworks that enable efficient large AI model processing on resource-constrained devices. This expansion toward edge computing, where AI tasks are performed locally rather than relying solely on centralized cloud infrastructure, represents a transformative opportunity in the AI landscape, unlocking new use cases and applications.
The industry, previously focused on a training-centric paradigm, is now extending to cover inference, the process of applying trained and fine-tuned large AI models to generate predictions or decisions in real time, where tangible real-world value is delivered. High-performance AI processors are essential for AI deployment. While central processing units (“CPUs”) are versatile and graphics processing units (“GPUs”) excel at parallel computation, neither is optimal for the unique requirements of AI inference. To enable the seamless integration of large AI models into everyday activities, efficient and cost-effective AI inference chips, such as NPUs, are becoming indispensable, particularly for on-device and edge computing scenarios. Unlike general-purpose CPUs and GPUs, NPUs are architecturally optimized for large-scale parallel computing, delivering exceptional efficiency in processing large volumes of data and neural network computations. This specialized design makes NPUs a critical enabler for scalable, low-latency, and cost-effective AI inference at the edge. Furthermore, the emergence of large AI models like DeepSeek has driven a reduction in the cost of accessing high-quality models, and advances in quantization technology have made deploying sophisticated models on devices or at the edge more cost-effective than ever. Together, these developments are set to accelerate the widespread adoption of edge-based large AI models, positioning us on the cusp of a major industry transformation.

Our Technology Platform

Our technology platform follows a dual-track approach to development, integrating both the iteration of IP cores for technological advancement and the repurposing of IP cores for horizontal domain expansion. On the one hand, through iterative refinement, we incorporate the latest technological innovations, industry advances, and market feedback into our IP cores, enabling us to maintain technological leadership while enhancing product reliability and appeal. On the other hand, we maximize the reuse of our IP cores, such as the Axera Neutron mixed-precision NPU and Axera Proton AI-ISP, across diverse applications to reduce R&D costs and accelerate time-to-market, enhancing our business scalability and enabling effective horizontal expansion. The platform’s extensibility is further strengthened by the Pulsar2 development toolchain and our mature SDK, which together provide our customers and partners with comprehensive resources to build advanced AI applications.

‧ Axera Neutron NPU. Dynamically selecting numerical precision, such as INT4, INT8 and INT16, based on varying computational requirements (a minimal illustrative quantization sketch appears after this list), our Axera Neutron NPU employs a multi-threading, heterogeneous, and multi-core design that tightly integrates memory with processing units. This architecture delivers fundamental efficiency gains through two key mechanisms: (1) reducing computational overhead by optimizing for neural network operations, and (2) minimizing unnecessary data transfers through memory hierarchy design. Additionally, our Axera Neutron NPU has native compatibility with mainstream AI models, including those based on the Transformer architecture (one of the most popular AI model architectures and the foundation of nearly every major large AI model in use today) and convolutional neural networks (“CNN”) (another mainstream AI model architecture, particularly useful for visual datasets such as images or videos), ensuring seamless, large-scale deployment of AI models across edge and endpoint devices.
This dual capability of high efficiency and broad compatibility establishes our technology as a versatile hardware foundation for next-generation AI applications.

‧ Axera Proton AI-ISP. Leveraging the power of the Neutron NPU and AI algorithms, the Proton AI-ISP improves traditional image signal processing by optimizing key stages of the processing pipeline, resulting in superior pixel-level image quality. This technology enables “night-as-bright-as-noon (黑夜如白晝)” imaging in low-light environments, ensuring that high-quality image and video data are provided as input to downstream AI inference tasks such as object detection, image segmentation and image classification.

‧ Pulsar2. The Pulsar2 toolchain serves as a solution for converting, optimizing, and deploying neural networks on our SoCs, emphasizing efficiency, safety, and ease of use. It integrates model conversion, quantization, and compilation (a conceptual workflow sketch appears after this list), supports leading frameworks, and is ISO 26262:2018 TCL 3 certified for automotive safety. With developer-friendly kits and deep hardware-software integration, the Pulsar2 toolchain optimizes performance and is suited for large-scale AI deployment.

‧ SDK. Our stable and mature SDK is a package that includes an easy-to-use application programming interface (“API”) and turnkey tooling. It enables developers to leverage the same API across multiple SoCs and to efficiently produce, test, and develop various full-featured applications and products, supporting rapid and scalable product rollouts.
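To make the mixed-precision principle referenced in the Axera Neutron NPU description above more concrete, the short NumPy sketch below shows per-tensor symmetric quantization at INT8 and INT16 and the storage-versus-accuracy trade-off each precision implies (INT4 works the same way with a smaller integer range). This is a generic, simplified illustration of quantization, not Axera’s NPU implementation or toolchain code, and every function name below is illustrative only.

```python
# Generic illustration of per-tensor symmetric quantization; not Axera's NPU or
# toolchain code. Lower precision shrinks storage and bandwidth but increases
# quantization error, which is the trade-off a mixed-precision NPU balances per layer.
import numpy as np

def quantize_symmetric(x: np.ndarray, n_bits: int):
    """Quantize a float tensor to signed n_bits integers with a single per-tensor scale."""
    qmax = 2 ** (n_bits - 1) - 1              # 127 for INT8, 32767 for INT16
    scale = float(np.max(np.abs(x))) / qmax   # map the largest magnitude onto qmax
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int32)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Map quantized integers back to approximate float values."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(256, 256)).astype(np.float32)

    for bits in (8, 16):
        q, scale = quantize_symmetric(weights, bits)
        err = float(np.abs(dequantize(q, scale) - weights).mean())
        print(f"INT{bits}: mean abs error = {err:.6f}, bytes per value = {bits // 8}")
```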
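Similarly, the self-contained Python sketch below illustrates, at a purely conceptual level, how the three stages the Pulsar2 toolchain is described as integrating (model conversion, quantization and compilation) feed into a single SDK-style inference call that stays the same across target SoCs. It does not reproduce the actual Pulsar2 or SDK interface; every class, function, and target name is hypothetical and serves only to show how the stages fit together.

```python
# Hypothetical toy pipeline mirroring the convert -> quantize -> compile flow described
# for the Pulsar2 toolchain, plus a unified SDK-style run call. None of these names are
# Axera APIs; they are placeholders for illustration only.
from dataclasses import dataclass, field

@dataclass
class Model:
    name: str
    framework: str                       # e.g. exported from a training framework
    precision: str = "fp32"
    compiled_for: str | None = None      # target SoC identifier once compiled
    log: list[str] = field(default_factory=list)

def convert(model: Model) -> Model:
    """Stage 1: convert a framework-specific model into a common exchange format."""
    model.framework = "onnx"
    model.log.append("converted to ONNX")
    return model

def quantize(model: Model, precision: str = "int8") -> Model:
    """Stage 2: post-training quantization to an integer precision an NPU can execute natively."""
    model.precision = precision
    model.log.append(f"quantized to {precision}")
    return model

def compile_for(model: Model, target_soc: str) -> Model:
    """Stage 3: compile the quantized graph into a deployable binary for one SoC target."""
    model.compiled_for = target_soc
    model.log.append(f"compiled for {target_soc}")
    return model

def run_inference(model: Model, frame: bytes) -> str:
    """SDK-style call: the same API shape regardless of which SoC the model targets."""
    assert model.compiled_for is not None, "model must be compiled before deployment"
    return f"{model.name} ({model.precision}) ran on {model.compiled_for} over {len(frame)} input bytes"

if __name__ == "__main__":
    m = compile_for(quantize(convert(Model("detector", "pytorch"))), target_soc="example-soc")
    print(m.log)
    print(run_inference(m, frame=b"\x00" * 1024))
```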
Source: Axera (00600) Prospectus (IPO Date: 2026/01/30)