A few days ago, graphics chip maker Nvidia released a quarterly earnings report showing its largest revenue increase in six years, driving its share price up 14% in after-hours trading and drawing industry-wide attention. Its CEO Jensen Huang, moreover, declared that Nvidia is already an AI (artificial intelligence) chip company, a claim that smacks of eagerness to ride the AI wave. The industry is accordingly optimistic about its prospects in chips, especially AI chips. But is this really the case?
Let's first look at Nvidia's earnings for the quarter. Total revenue was US$2 billion, up 53.6% year-on-year. The graphics chip division accounted for 85% of total revenue, rising 52.9% year-on-year to US$1.7 billion; the data center business tripled to US$240 million; and the automotive business grew 60.8% to US$127 million.
From the composition of its revenue, it is not hard to see that Nvidia's core business is still graphics chips (discrete graphics cards) for the traditional PC market, while revenue from AI-related fields or businesses (such as the data center) accounts for only about one tenth of the total. Judging by revenue alone, then, Nvidia is far from being an AI chip company. What is true is that AI is currently the industry's hottest battleground, and that heat has swept up Nvidia, whose mainstay remains discrete graphics cards (display chips) for the traditional PC industry, and recast it as an AI company. In this sense, we cannot rule out an element of Nvidia riding the AI hype to exaggerate the role of its own chips.
Of course, we are not denying that AI chips (chips supporting AI applications and functions) are the future direction of the chip industry. In particular, Google's artificial intelligence program AlphaGo defeating the world's top Go player Lee Sedol with deep learning technology signaled that artificial intelligence will be the next hot spot for the technology industry and its big players to contest. The rise of big data and the Internet of Things has prompted technology giants such as IBM, Google, Facebook and Microsoft, along with many large cloud computing companies providing cloud services, to race to develop artificial intelligence technology, so as to analyze the massive data that future IoT devices will collect and provide better services to the market and users.
It should be noted that although different manufacturers use different names for it, with IBM calling it cognitive computing and Facebook and Google calling it machine learning or artificial intelligence, the chip, as part of the data center infrastructure hardware supporting these technologies and applications, still plays an important role.
Based on this trend, relevant statistics suggest that at least 10% of the workloads currently running in data centers (servers), including those of IBM, Google, Facebook, Amazon, Microsoft and the cloud computing companies, are related to AI applications (either developing the companies' own AI applications or supporting and running customers' AI development and applications), and this share will expand as market and user demand for AI grows.
This trend poses new challenges to the computing power and power consumption of the data center's underlying chips, and here Nvidia's GPU (graphics chip) has always enjoyed natural advantages: the large-scale parallel computing capability that AI requires; more computing units (integer and floating-point multiply-add units, special arithmetic units, etc.) in the same die area; higher memory bandwidth, which also translates into good performance in high-throughput applications; and, for this class of workload, a far lower energy cost per unit of computation than the CPU.
However, this does not mean that these data centers (servers) have no need for CPUs. On the contrary, the CPU remains an indispensable part of the computing task. In deep learning workloads, a high-performance CPU is still required to execute instructions and exchange data with the GPU; only by combining the CPU's versatility in handling complex tasks with the GPU's parallel computing power can the best results be achieved. This is why most companies still use the "CPU + GPU" combination, that is, heterogeneous computing.
In this heterogeneous mode, the serial portion of the application runs on the CPU, while the GPU acts as a coprocessor, chiefly responsible for the computation-heavy parts. Seen from this angle, the lack of a CPU is Nvidia's shortcoming, today and in the AI era ahead.
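To make this division of labor concrete, here is a minimal CUDA sketch of the heterogeneous pattern (an illustration of the general "CPU + GPU" idea only; the kernel, names and sizes are our own and not drawn from Nvidia's products or report). The CPU handles the serial setup, control flow and data transfers, while the GPU runs the massively parallel multiply-add work of the kind that dominates deep learning:

```cuda
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// GPU side: one thread per element -- the massively parallel portion.
// Each thread performs a multiply-add, the core operation of the dense
// linear algebra underlying deep learning.
__global__ void muladd(const float *a, const float *b, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = a[i] * b[i] + out[i];
}

int main(void) {
    const int n = 1 << 20;                  // 1M elements (arbitrary example size)
    const size_t bytes = n * sizeof(float);

    // CPU (host) side: serial setup and control flow.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *ho = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { ha[i] = 1.0f; hb[i] = 2.0f; ho[i] = 0.5f; }

    // The CPU orchestrates data transfer to the GPU (the coprocessor).
    float *da, *db, *dout;
    cudaMalloc(&da, bytes);
    cudaMalloc(&db, bytes);
    cudaMalloc(&dout, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dout, ho, bytes, cudaMemcpyHostToDevice);

    // Launch thousands of threads at once -- where the GPU's abundance
    // of arithmetic units pays off.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    muladd<<<blocks, threads>>>(da, db, dout, n);

    // The CPU pulls results back and continues with serial work.
    cudaMemcpy(ho, dout, bytes, cudaMemcpyDeviceToHost);
    printf("out[0] = %f\n", ho[0]);         // expect 2.5 (1.0 * 2.0 + 0.5)

    cudaFree(da); cudaFree(db); cudaFree(dout);
    free(ha); free(hb); free(ho);
    return 0;
}
```

Note that roughly half of even this toy program is CPU-side orchestration, which is exactly why the CPU remains indispensable in the heterogeneous model.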
And once the CPU is mentioned, one naturally thinks of Intel, the veteran of that field. Intel is the barrier Nvidia cannot cross on the CPU side on its way to becoming an AI chip company, and even on the GPU turf where Nvidia excels at serving the big AI players' needs, Intel is mounting a challenge.
That challenge shows first in Intel's potential to keep innovating on CPU computing power, for example the recently released Xeon Phi chip for data center servers. According to Intel's report, the Xeon Phi processor is 2.3 times faster than Nvidia's GPU, scales 38% better across multiple nodes, and can scale up to 128 nodes, which GPUs currently on the market cannot do. Meanwhile, a system of 128 Xeon Phi processors is 50 times faster than a single Xeon Phi processor, meaning the Xeon Phi has a clear scalability advantage, which is crucial for AI applications.
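As a back-of-the-envelope check of what these numbers imply (our own arithmetic based on the figures as claimed, not a number from either vendor), a 50x speedup on 128 processors corresponds to a parallel efficiency of

$$\text{efficiency} = \frac{\text{speedup}}{\text{processors}} = \frac{50}{128} \approx 39\%,$$

so the claim is about usable scaling to large node counts rather than perfectly linear scaling.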
Nvidia, however, issued a strong rebuttal to Intel's claims, pointing out that Intel had used benchmark data from 18 months earlier, comparing four Maxwell GPUs against four Xeon Phi processors, and that with the updated Caffe AlexNet data, four Maxwell GPUs are 30% faster than four Xeon Phi processors. We will not judge here whose numbers are more objective, but this war of words over benchmark reports shows at least that Intel, though not yet dominant in AI, can use the CPU itself to narrow the gap with Nvidia, or at least to put pressure on it.
Moreover, purely in terms of satisfying the computing power that AI applications demand and how that power is delivered, whether the GPU is the best, or the only, option remains controversial in the industry. Some researchers' tests have found that the FPGA architecture is more flexible than the GPU and delivers stronger performance per unit of energy: deep learning algorithms can run faster and more efficiently on FPGAs, and at lower power.
This seems to explain why Intel had earlier acquired FPGA maker Altera for US$16.7 billion. And speaking of acquisitions, there is another that the industry believes can enhance Intel's competitiveness in AI chips, perhaps even letting it surpass Nvidia: the acquisition of Nervana Systems, a company specializing in AI chips. The deep learning chip Nervana Systems is developing is said to be more cost-effective than the GPU, with processing speed 10 times that of the GPU.
To illustrate Nervana Systems' strength, or the threat it poses to Nvidia, it is worth recounting an episode from the acquisition. It is said that when Intel approached Nervana to discuss a sale, Nervana regarded Nvidia as one of the reasonable alternatives, since Nervana's deep learning software Neon can also run on Nvidia chips and could thus have helped Nvidia shore up this weakness. Nvidia, however, was unmoved, believing its own GPU-based deep learning technology to be better than Nervana's. Yet after Nervana and Intel struck their deal, Nvidia appears to have changed its mind and tried to reopen acquisition talks, only to find the opportunity gone.
In this regard, some analysts believe that letting Intel take Nervana was Nvidia's biggest mistake, because through this acquisition Intel gains deep-learning-specific products and IP that it can use on their own or integrate with its future technologies to produce more competitive and creative chip products.
And when it comes to integration, Intel is the master. With Nervana Systems acquired, it can integrate the related products into chips or multi-chip packages; for example, adding the Nervana Engine IP to a Xeon CPU could offer a low-cost way to accelerate AI performance, productize the Nervana IP, and raise its CPUs' computing power to meet the higher demands that AI development and applications place on data center chips.