The Hype Around AI PCs Shatters: What Truly Constitutes a Real AI PC?
Semiconductor Industry Review 2025-03-05 15:42:55
It has been over a year since the AI PC concept was launched amid great hype, but so far it has been much ado about nothing: the market and consumers are not buying in. Is the AI PC really "AI"? What is a true AI PC? Let's look to the real AI giants for the answer.
 
01. The Rise of the AI PC Concept

AI PC, short for Artificial Intelligence Personal Computer, was first proposed by Intel in September 2023 and quickly gained widespread favor within the industry. Although the concept is still young, it is widely believed that the AI PC will mark a turning point for the PC industry. Canalys defines AI PCs as desktops and laptops equipped with dedicated AI chipsets or modules (such as NPUs) to handle AI workloads.


2024 is widely recognized as the inaugural year for AI PC applications, with major companies launching their own AI computers.

In early March, Apple released the new MacBook Air, billing it as an AI PC. On March 18, Honor launched its first AI PC, the MagicBook Pro 16, and shortly afterwards AMD Chair and CEO Lisa Su announced that the Ryzen 8040 series AI PC processors had already shipped. On March 22, Microsoft announced the Surface AI PC. On April 11, Huawei released a new MateBook X Pro laptop, the first to ship with Huawei's PanGu large model.
To some extent, the PC industry's strong tie-in with the AI concept has indeed paid off. In the fourth quarter of 2024, AI PC shipments reached 15.4 million units, 23% of total quarterly PC shipments; for the full year, AI PCs accounted for 17% of all PC shipments. Apple led with a 54% market share, followed by Lenovo and HP at 12% each. AI PC penetration is expected to keep rising in 2025, driven by the upgrade wave triggered by the end of Windows 10 support. But how much "AI" do these machines really contain?

02. AI PC: Much Ado About Nothing

On February 23, 2024, after the release of Lenovo's latest financial report, CEO Yang Yuanqing said he expected global PC shipments to grow by about 5% year over year in 2024. Despite some challenges, he firmly believes that artificial intelligence will be the key factor driving Lenovo's business growth and transformation.


However, Yang Yuanqing also pointed out that the AI PC market is still at an early stage: there has been loud thunder but little rain, with actual sales and user acceptance remaining low. He attributes this mainly to factors such as technological maturity, user education, and market acceptance.

Many people do not recognize the AI PC products already on the market, primarily because the "AI" and the "PC" (the hardware) in these machines are essentially separate. Take Microsoft Copilot, currently the largest AI use case on PCs: the joint Intel-Microsoft definition of the AI PC stipulates a hybrid-architecture chip, Copilot, and a dedicated Copilot key. In reality, however, any PC upgraded to the latest version of Windows 11 can use Copilot, because Copilot relies entirely on Microsoft Azure cloud computing power and has nothing to do with the PC's own hardware.
As the leader in AI chips, NVIDIA simply ignores Microsoft's definition, and who has more say in AI than NVIDIA? The company has long been laying out its AI ecosystem: a pioneer in accelerated computing since its founding in 1993, it commands the most extensive CUDA ecosystem for AI productivity. High-performance PCs with NVIDIA discrete graphics cards are far less dependent on OEM adaptation. They can run lightweight AI tools, such as local large language models and simple Stable Diffusion image generation, and can also handle medium-sized AI models, with generation speeds much faster than ordinary integrated graphics can manage.
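As a rough illustration of why a discrete NVIDIA GPU needs so little vendor-specific adaptation, here is a minimal sketch that loads a small open-weight language model with Hugging Face Transformers and runs it on CUDA when available, falling back to the CPU otherwise. The model name is only an example; any similarly sized local checkpoint would work the same way.

```python
# Minimal sketch: run a small local LLM on an NVIDIA GPU if present, otherwise on the CPU.
# Assumes `torch` and `transformers` are installed; the model name is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # example checkpoint; swap in any local model
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

prompt = "Explain in one sentence what an NPU is."
inputs = tokenizer(prompt, return_tensors="pt").to(device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The same script runs unchanged on an "AI PC" without a discrete GPU; it simply falls back to the CPU, which is exactly the gap described above.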
There are several main reasons why AI PCs are currently receiving a cold shoulder from the market:
1. The NPU computing power of current AI PCs is insufficient.
The maximum AI performance of Intel's NPU is 48 TOPS, while Intel Xe integrated graphics delivers around 28 TOPS; AI PCs that rely on integrated graphics currently offer roughly 10 to 45 TOPS. In contrast, devices equipped with GeForce RTX 40 series GPUs, from laptops to desktops, span roughly 200 to 1,400 TOPS.
The recently released RTX 5090 graphics card adopts NVIDIA's Blackwell architecture, which brings a qualitative leap in performance. According to NVIDIA, the AI computing power of the RTX 5090 reaches 4000 TOPS, three times that of the previous Ada Lovelace architecture.
In comparison to GPUs, the AI computing power of NPUs is significantly inferior.
In fact, even a single RTX 4080 or 4090 is hardly overflowing with computing power for today's mainstream AI applications, let alone the limited compute of an NPU.
2. NPUs have no DRAM of their own and cannot run large models independently
In terms of hardware requirements, today's large AI models are above all "models of DRAM." NPUs have no DRAM of their own and rely on system RAM, so running a large model means pairing the NPU with an extra 64 GB or more of DRAM. At that point, why not just use an APU or GPU instead? If you are spending the extra money anyway, it hardly matters which unit runs the model.
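To see why DRAM dominates the bill of materials, a back-of-the-envelope footprint estimate is simply parameter count times bytes per parameter, plus some allowance for the KV cache and runtime buffers. The sketch below uses illustrative figures, not measured numbers.

```python
# Rough memory-footprint estimate for hosting an LLM locally (illustrative, not measured).
def model_memory_gb(params_billions: float, bytes_per_param: float, overhead: float = 1.2) -> float:
    """Weights only, times a ~20% allowance for KV cache and runtime buffers."""
    return params_billions * bytes_per_param * overhead

for params in (7, 13, 70):
    for label, bytes_per_param in (("FP16", 2.0), ("INT8", 1.0), ("INT4", 0.5)):
        print(f"{params}B @ {label}: ~{model_memory_gb(params, bytes_per_param):.0f} GB")
# A 70B model needs ~168 GB at FP16 and still ~42 GB at INT4, which is why an NPU
# paired with a typical 16-32 GB laptop cannot host it without far more system DRAM.
```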
Moreover, running large AI models on APUs and GPUs is well supported by open-source adaptations, making them ready to use out of the box.
3. Few applications are compatible with the NPU, and its scope of use is narrow
In theory, NPUs can already run large language models, Stable Diffusion image generation, common CV network inference (including ResNet and YOLO), and Whisper speech-to-text. Essentially, any AI inference workload, which at bottom is matrix arithmetic, can be executed at low power on an NPU.
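In practice, whether the NPU gets used at all usually comes down to whether the runtime on the machine exposes it as an execution backend. The sketch below uses ONNX Runtime to select an NPU-oriented execution provider if the installed build offers one (for example the QNN or DirectML providers) and otherwise falls back to the CPU; the provider preference list and the model path are assumptions about one particular setup, not a universal recipe.

```python
# Minimal sketch: run an ONNX model on an NPU-capable execution provider when available.
# Which providers exist depends on the onnxruntime build and the machine's hardware.
import numpy as np
import onnxruntime as ort

preferred = ["QNNExecutionProvider", "DmlExecutionProvider", "CPUExecutionProvider"]
available = ort.get_available_providers()
providers = [p for p in preferred if p in available] or ["CPUExecutionProvider"]

session = ort.InferenceSession("model.onnx", providers=providers)  # hypothetical model file
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # e.g. a ResNet-style image input
outputs = session.run(None, {input_name: dummy})
print("Ran on:", session.get_providers()[0], "output shape:", outputs[0].shape)
```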
In reality, however, on the Windows laptops users are buying today, the scenarios where the NPU actually gets used come down to things like background blur in Windows Studio Effects and video editing in Jianying (CapCut). The application scope is extremely narrow, and so far very few local programs support the NPU at all.
In short, what NPUs can actually do today is mostly superficial. This round of AI enthusiasm was sparked by people watching chatbots like ChatGPT solve real problems, so for the NPU to play a meaningful role it needs to be able to run large language models, and the NPUs in current AI PCs clearly cannot meet that demand.
Whether it's an NPU or a GPU doesn't matter, but localized AI is very much needed. At present, whether it's an AI PC or not isn't important; what matters more is whether it comes equipped with an NVIDIA GPU.
 
03. The "True AI PC" from the Three Major Manufacturers

Previously, although some manufacturers advertised the launch of AI PC products, in reality, these were mostly gimmicks, featuring only NPU chips without the capability to run true local large models. They could neither train nor perform inference.


The AI PC concept has been promoted mainly around laptops, yet there is currently no thin-and-light notebook that qualifies as a high-compute, AI-dedicated machine. Instead, it is traditional high-performance gaming laptops and desktops equipped with powerful GPUs that deliver genuine AI productivity.
The real AI PC still depends on manufacturers capable of developing high-performance GPUs, such as NVIDIA and AMD.
At CES at the start of this year, AMD launched the Ryzen AI Max 300 series ("Strix Halo") and Jensen Huang unveiled Project DIGITS; add to these Apple's earlier Mac Pro, and all three are powerful tools for local deployment of large models that can fairly be called "desktop AI supercomputers."
AMD launched two versions of the Strix Halo: the consumer-grade Strix Halo, mainly used for consumer performance laptops (gaming laptops), and the commercial-grade Strix Halo Pro, primarily for mobile workstations. Leaked 3DMark test data shows that its flagship model, the Ryzen AI MAX+ 395, features 16 CPU cores based on the Zen 5 architecture, 32 threads; 40 GPU cores based on the RDNA 3.5 architecture, or the Radeon 8060S integrated graphics; with a maximum power of 120W, which is three times that of standard mobile APUs; it supports four-channel LPDDR5X memory, providing up to 256 GB/s bandwidth. Notably, the integrated Radeon 8060S graphics performance is more than three times that of the previous generation Radeon 890M, even approaching the level of the RTX 4060 discrete graphics card.
NVIDIA calls Project DIGITS "the smallest AI supercomputer today." It is built around a custom "GB10" superchip that integrates a Blackwell-architecture GPU with a Grace CPU developed in collaboration with MediaTek and ARM. According to the published information, the Blackwell GPU delivers 1 PFLOP of FP4 compute, while the Grace CPU comprises 10 Cortex-X925 cores and 10 Cortex-A725 cores. The GPU and CPU are connected via NVLink-C2C, the same chip-to-chip interconnect used in NVIDIA's large-scale supercomputing systems.
Project DIGITS also includes a dedicated NVIDIA ConnectX networking chip, which lets the GPU inside the GB10 superchip work with interconnect technologies such as NCCL, RDMA, and GPUDirect, so that this "large integrated graphics" can be accessed directly by a wide range of development software and AI applications.
Apple, for its part, released the M3 series chips in 2023 with a next-generation GPU, the biggest leap in the history of Apple's graphics architecture. The chips are not only faster and more power-efficient, they also introduce a technology called Dynamic Caching and bring hardware-accelerated ray tracing and mesh shading to the Mac for the first time, with rendering up to 2.5 times faster than on the M1 family. Notably, the M3 series supports up to 128 GB of unified memory, which Apple says unlocks workflows previously unattainable on laptops, such as AI developers working with Transformer models with billions of parameters. Last year Apple also released the M4 Pro chip, which it claims outperforms AI PC chips.
All three adopt a unified memory architecture. Its benefit is that the previously separate system memory and video memory (the graphics card's own memory) become one pool, reducing the need to copy data back and forth when the CPU and GPU communicate. It also gives the machine far more usable "video memory," breaking the VRAM bottleneck that limits large models on consumer graphics cards. Notably, unified memory was not pioneered by NVIDIA; Apple's M1 was the first instance.
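A quick way to see why the unified pool matters for local models is to compare how many quantized parameters each setup could plausibly hold. The sketch below assumes, purely for illustration, a 24 GB discrete card, a 128 GB unified memory machine of which roughly 75% can be handed to the GPU, and 4-bit weights; none of these are vendor specifications.

```python
# Illustrative comparison: discrete-GPU VRAM vs a unified memory pool for holding model weights.
# The pool sizes, the 75% usable share, and the INT4 assumption are examples, not specifications.

def max_params_billions(pool_gb: float, bytes_per_param: float = 0.5, usable: float = 1.0) -> float:
    """How many billions of parameters fit in a memory pool at the given precision."""
    return pool_gb * usable / bytes_per_param

discrete_vram_gb = 24    # e.g. a high-end consumer graphics card
unified_pool_gb = 128    # e.g. a machine with 128 GB of unified memory

print(f"24 GB discrete card: ~{max_params_billions(discrete_vram_gb):.0f}B params at INT4")
print(f"128 GB unified pool: ~{max_params_billions(unified_pool_gb, usable=0.75):.0f}B params at INT4")
# Roughly 48B vs 192B parameters: the size of the addressable pool, not raw TOPS,
# is what lets a desktop-class machine keep a much larger quantized model resident locally.
```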
 
04. DeepSeek Kicks Off the Battle for Desktop AI Supercomputers

Recently, the severe shortage of DeepSeek's online computing capacity has fueled demand for local deployment of large models, and vendors have begun shipping the "true AI PCs" from the three major manufacturers with DeepSeek deployed on them.

As a MoE (mixture-of-experts) model, DeepSeek demands a great deal of GPU memory but comparatively little computing power and memory bandwidth. That gives desktop AI supercomputers, which obtain large GPU-addressable memory through unified memory, an opening.
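The underlying reason is that in a mixture-of-experts model all experts must sit in memory while each token activates only a few of them, so the memory footprint scales with total parameters while per-token compute scales with the activated ones. The sketch below plugs in DeepSeek-V3's commonly cited figures (roughly 671B total parameters, about 37B activated per token) purely as illustrative inputs.

```python
# Illustrative MoE sizing: memory follows total parameters, per-token compute follows active ones.
total_params_b = 671     # commonly cited total parameter count for DeepSeek-V3 (approximate)
active_params_b = 37     # parameters activated per token (approximate)
bytes_per_param = 1.0    # FP8/INT8-style storage: one byte per parameter

weight_memory_gb = total_params_b * bytes_per_param
gflops_per_token = 2 * active_params_b               # ~2 FLOPs per active parameter per forward pass

print(f"Weights alone: ~{weight_memory_gb:.0f} GB")          # ~671 GB, set by the total expert count
print(f"Per-token compute: ~{gflops_per_token:.0f} GFLOPs")  # ~74 GFLOPs, set by the active subset
# A dense model of the same total size would need roughly 18x the per-token compute,
# which is why large memory pools matter more than raw TOPS for this kind of inference.
```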
Earlier, an overseas developer ran DeepSeek V3 on eight M4 Pro Mac minis. By the same logic, DeepSeek V3 should be deployable on four Project DIGITS units, presumably with much faster generation. And according to AMD's own announcement, a Strix Halo APU can host a 70B model, running it 2.2 times faster than an RTX 4090 while consuming 87% less power.
A netizen said, "I plan to replace my current notebook after the halo notebook is released. Local deployment of large models is indeed interesting, and in a few years, it might be possible to locally deploy a 671B INT8 or FP8 large model. Besides large models, with improved RAM and CPU configurations, other tasks will also be faster."
The AI race may give domestic manufacturers an opening into the PC chip field. Many vendors are already marketing AI all-in-one machines of various kinds, and if a domestic maker could ship a "Project DIGITS"-style box with even larger unified memory, say a 256 GB version, it might prove even more popular.
The AI PC concept is like the proverbial little girl whom anyone can dress up as they please: every company tells the story its own way. OEMs are blooming in all directions, pouring money and engineers into localized AI applications. Some software can run both locally and in the cloud, and cloud services can hook into domestic models for business, which could turn into a very attractive piece of the pie.
Low latency + privacy protection may be the driving force behind the localization of AI applications such as large language models similar to GPT, SD image generation, voice cloning, AI frame interpolation, image matting, and redrawing.
Sufficient edge computing power, large memory (including video memory), and highly optimized software, taken together, have the potential to address industry pain points and enable the mass deployment of AI terminals. So AI PCs are not pure hype: whether it is more accessible AI, more energy-efficient AI, AI with more powerful compute, or simply easier-to-use AI delivered via the cloud and the network, all of these directions are seeing further technological development and market exploration.
