News

2024 WAIC Intelligent Chip and Multimodal Large Model Forum | Axera AI Processors Empower Inclusive Intelligence

Release time: 2024-07-05

Recently, the 2024 World Artificial Intelligence Conference (WAIC) was held in Shanghai. On July 5th, Axera successfully hosted the "Leading the Future with Chips | Intelligent Chips and Multimodal Large Models Forum" under the theme "Leading AI Innovation, Creating Inclusive Intelligent Life". The forum brought together experts and opinion leaders from the chip, large model, and intelligent manufacturing fields to share innovation opportunities and deployment results in the large model era. Axera put forward its product proposition of building AI processors around edge-side intelligence, highlighting their "more economical, more efficient, more environmentally friendly" advantages. At the sub-forum, Axera officially released the "AxeraNeurons AI Processor," demonstrating the technology applications and business ecosystem created by the deep integration of intelligent chips and large models.

Accelerated Integration of Cloud, Edge, and End Sides

"More Economical, More Efficient, More Environmentally Friendly" Become Keywords for AI Chips

China's large models are currently developing rapidly. Dr. Qiu Xiaoxin, Founder and Chairperson of Axera, stated in her keynote speech that truly deploying large models at scale requires close integration of the cloud, edge, and end sides, and that the key to combining the edge and end sides lies in AI computing and perception. Building on its two self-developed core technologies, AxeraVision AI-ISP and the AxeraNeurons hybrid-precision NPU, Axera has established an "AIoT+ADAS" one-body-two-wings strategic route and is pushing deeper into edge computing and AI inference, accelerating deployment in application scenarios such as smart cities and intelligent driving.

Dr. Qiu Xiaoxin, Founder and Chairperson of Axera

Dr. Qiu believes that intelligent chips and multimodal large models have become the "golden combination" of the AI era. As large model applications become more widespread, "more economical, more efficient, and more environmentally friendly" will become the keywords for intelligent chips. High-efficiency inference chips built around AI processors will be the more sensible choice for deploying large models, and also the key to promoting Inclusive AI.

Leading AI Processor

AxeraNeurons Covers the Full Spectrum of Computing Power

As a leading basic computing power platform company in China, Axera anticipated the Transformer boom in 2022 and took the lead in launching chips equipped with the AxeraNeurons AI Processor. Liu Jianwei, Co-founder and Vice President of Axera, explained that the core of the AxeraNeurons AI Processor is its operator instruction set and data-flow microarchitecture. The underlying programmable data-flow microarchitecture improves energy efficiency and computing power density, while its flexibility keeps the operator instruction set complete, supporting a wide range of AI applications. A mature software toolchain lets developers get started quickly, and the joint design of hardware and software sustains the rapid iteration of the AxeraNeurons AI Processor, maintaining its competitiveness. The processor significantly reduces the development and operating costs of AI applications, making AI more economical, efficient, and environmentally friendly.

Liu Jianwei, Co-founder and Vice President of Axera
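
The forum presentation describes the AxeraNeurons design only at this conceptual level. As a rough Python illustration of the data-flow idea only (the operator names and scheduling are made up for the example and do not reflect Axera's actual instruction set or microarchitecture), operators can be expressed as a dependency graph and fired as soon as their inputs are ready:

    # Conceptual toy only: a tiny "operator graph" executed in data-flow order.
    # Operator names and scheduling are illustrative, not the AxeraNeurons ISA.
    from graphlib import TopologicalSorter

    # Each key is an operator; its value is the set of operators it depends on.
    graph = {
        "conv":    set(),
        "relu":    {"conv"},
        "matmul":  {"relu"},
        "softmax": {"matmul"},
    }

    ops = {
        "conv":    lambda x: x * 2,
        "relu":    lambda x: max(x, 0.0),
        "matmul":  lambda x: x + 10,
        "softmax": lambda x: x / 100,
    }

    def run(value):
        # Because this toy graph is a simple chain, a single value can be
        # threaded through; each operator fires once its producer has fired.
        for node in TopologicalSorter(graph).static_order():
            value = ops[node](value)
            print(f"{node:8s} -> {value}")
        return value

    run(3.0)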

Liu Jianwei added that the AxeraNeurons AI Processor now covers three computing power tiers (high, medium, and low) and has reached large-scale mass production in two fields: smart cities and assisted driving. Its energy efficiency is an order of magnitude better than that of GPGPU chips. In general large model applications such as text-to-image search, general detection, image-to-text generation, and AI agents, the AxeraNeurons AI Processor lets AI developers work efficiently at lower cost.

Visual Large Model Implementation Speeds Up

Smart IoT and Intelligent Driving Bloom with New Opportunities

At the forum, Axera's partners in smart IoT and intelligent driving also shared their views on the application prospects of AI processors. Yin Jun, an expert in smart IoT and AI innovation integration, noted that vision-based intelligence is widely used in urban governance and daily life. In recent years, large models have advanced rapidly in text and voice, but deploying them for vision still faces challenges in reliability, stability, and completeness of understanding; accurately describing the objective world is the key to deploying visual large models. For continuously updated visual large models, Yin Jun argued that users should not be forced to abandon their existing technology investments. Instead, through the collaboration of large and small models and through model miniaturization, computing power can be allocated optimally to accelerate the industrial adoption of large models.

Yin Jun, Expert in Smart IoT and AI Innovation Integration
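
As a rough Python sketch of the large/small-model collaboration Yin Jun describes (the model stubs, confidence score, and 0.8 threshold are illustrative assumptions, not a published configuration), a miniaturized model can handle every frame and escalate only uncertain cases to a multimodal large model:

    # Illustrative sketch of large/small-model collaboration: a lightweight
    # model runs on every frame; low-confidence cases go to a large model.
    # All names and the 0.8 threshold are assumptions for illustration only.
    import random
    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str
        confidence: float

    def small_model_detect(frame: bytes) -> Detection:
        # Stand-in for a miniaturized edge model that runs on every frame.
        return Detection(label="vehicle", confidence=random.uniform(0.5, 1.0))

    def large_model_describe(frame: bytes) -> str:
        # Stand-in for a multimodal large model, invoked only for hard cases.
        return "a dark sedan partially occluded by a bus at a crosswalk"

    def analyze_frame(frame: bytes, threshold: float = 0.8) -> str:
        det = small_model_detect(frame)
        if det.confidence >= threshold:
            return det.label                 # cheap path covers most frames
        return large_model_describe(frame)   # escalate uncertain frames only

    print(analyze_frame(b"frame-bytes"))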

Similar to IoT, intelligent driving has gone through the same evolution from small models to large ones. The "BEV+Transformer" large model architecture is becoming the mainstream of the intelligent driving industry, and it calls for breakthroughs in "end-to-end" model technology. On this point, Zhang Chi, CTO of Maxieye Technology Co., Ltd., stated that large models have accelerated the move of autonomous driving from highways to more complex urban scenarios and have promoted the formation of end-to-end perception-control integration. In this process, the role of lidar and high-precision maps is weakening, while rich end-to-end large models have made point-to-point autonomous driving without geographical restrictions possible.

Zhang Chi, CTO of Maxieye Technology Co., Ltd.

From Cloud to End, Embracing RISC-V

Intelligent Chips + Large Models Help Advance Inclusive AI

The guests on site also offered predictions about the direction of artificial intelligence. Jia Chao, Vice President of FaceBook AI, believes that with advantages in cost, privacy, latency, and reliability, end-side AI will become a global trend, meaning that large models have officially entered the lightweight era. In this context, "model knowledge density doubles every 8 months" will become the new Moore's Law of the large model era. Jia Chao emphasized that enterprises developing end-side large models need to work on both the algorithm and chip sides, pairing end-side models with end-side chips so they can be deployed efficiently in user scenarios and deliver the best experience to users.

Jia Chao, Vice President of FaceBook AI
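
Taken at face value, the quoted doubling law implies that a model of fixed capability needs roughly half the parameters every eight months. The short Python calculation below works this out; the formula d(t) = d0 * 2^(t/8) is an interpretation of the quoted claim, not a figure from the talk:

    # Back-of-the-envelope arithmetic for the quoted claim, assuming knowledge
    # density follows d(t) = d0 * 2 ** (t / 8) with t in months.
    def density_multiplier(months: float, doubling_period: float = 8.0) -> float:
        return 2.0 ** (months / doubling_period)

    for months in (8, 16, 24):
        m = density_multiplier(months)
        print(f"after {months} months: {m:.0f}x density, "
              f"~{100 / m:.0f}% of the parameters for the same capability")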

Shang Yunhai, Senior Technical Expert in RISC-V and Ecosystem at DAMO Academy, argued that large models will show three major trends: larger scale, unified structure, and stronger capabilities. At present, computing demand outstrips available hardware computing power, so quantization, structured sparsity, and low-precision training will become effective paths to improving large model performance. As an open-source, open instruction set architecture, RISC-V can adapt quickly to rapidly changing AI algorithms and operators, meeting the current need for large models to drive the development of AI computing power and chip architecture.
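
As a minimal Python sketch of one of these paths, post-training int8 weight quantization is shown below; symmetric per-tensor scaling is assumed here for brevity, whereas production toolchains generally use per-channel scales and calibration data:

    # Minimal sketch of post-training int8 weight quantization (symmetric,
    # per-tensor). Illustrative only; not tied to any specific toolchain.
    import numpy as np

    def quantize_int8(w: np.ndarray) -> tuple[np.ndarray, float]:
        scale = float(np.abs(w).max()) / 127.0   # map the largest weight to 127
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
        return q.astype(np.float32) * scale

    w = np.random.randn(4, 4).astype(np.float32)
    q, scale = quantize_int8(w)
    print("max reconstruction error:", float(np.abs(w - dequantize(q, scale)).max()))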
