South Korea's chip export revenue soars by 160.8%, setting a new monthly record

Source: Securities Times Network | Author: Mu Yang

Two major news stories have emerged in the global chip industry!

First, South Korea’s chip exports are booming again. Data released on March 1 showed that South Korea’s semiconductor exports surged 160.8% year-over-year in February to $25.16 billion, a new monthly record. Market analysts said the figures indicate the global memory chip market is in a demand-driven “super cycle” fueled by AI investment.

Second, NVIDIA plans to release a brand-new chip tailored specifically for OpenAI and other clients. Reports indicate that OpenAI has agreed to become one of the largest customers for this new processor, marking a “major victory” for NVIDIA.

South Korea’s chip exports surge 160.8%

On March 1, local time, South Korea’s Ministry of Trade, Industry and Energy released the “February Import and Export Trends” data, showing that exports in February increased by 29% year-over-year to $67.45 billion, hitting the highest level for the same month in history. The daily average export value jumped 49.3% to $3.55 billion, surpassing $3 billion for the first time ever.

Semiconductor exports performed especially well, strongly driving overall export growth. Fueled by robust demand from AI investment and sharp increases in memory chip prices, South Korea’s semiconductor exports skyrocketed 160.8% YoY in February to $25.16 billion, a monthly record high and the third consecutive month above $20 billion.

Among the top 15 export categories, five, including semiconductors, posted year-over-year increases, while automobile exports ($4.81 billion) and auto parts ($1.45 billion) fell 20.8% and 22.4%, respectively.

In February, South Korea’s imports increased by 7.5% YoY to $51.94 billion. As a result, the trade surplus for the month was $15.51 billion, setting a new record for the same month and marking the 13th consecutive month of surplus.

Additionally, major Korean chip companies Samsung Electronics and SK Hynix have new developments.

On March 1, Samsung Electronics announced a global manufacturing AI transformation plan, aiming to upgrade all of its factories to “AI-driven factories” by 2030. The strategy covers the entire supply chain from material intake to finished-product shipping, including the deployment of digital twin systems, dedicated AI agents, and humanoid and task robots, applied to quality control, predictive maintenance, logistics scheduling, and EHS (environment, health, and safety) management. The transformation aims to improve operational efficiency, quality standards, and on-site safety. Samsung Electronics plans to showcase related industrial AI achievements at the MWC conference in 2026.

Meanwhile, SK Hynix recently co-hosted the launch event of the HBF Standardization Alliance with SanDisk, officially releasing a global standardization strategy for HBF (High Bandwidth Flash), a next-generation memory solution designed for the AI inference era. SK Hynix said it will work with SanDisk to promote HBF as an industry-wide standard. The two companies have established a dedicated working group within the Open Compute Project (OCP) framework to begin formal standardization work, strengthening their position in the AI chip market.

AI giant NVIDIA plans to launch a new chip

Notably, AI giant NVIDIA, a major customer of both Samsung Electronics and SK Hynix, is also making big moves.

According to The Wall Street Journal, NVIDIA plans to release a brand-new processor designed specifically to help OpenAI and other clients build faster, more efficient tools. This major strategic shift is expected to reshape the AI competition landscape.

Reports indicate that NVIDIA is designing a new system for AI inference computing, responsible for enabling AI models to respond to user requests. This new platform will be officially announced next month at the NVIDIA GTC developer conference in San Jose, and will incorporate chips designed by startup Groq.

Currently, inference computing has become a fiercely competitive industry focus. Competitors like Google and Amazon have launched their own chips to rival NVIDIA’s flagship products. Meanwhile, explosive growth in autonomous coding technology in the tech industry has created demand for new, more efficient chips capable of handling complex AI tasks.

Insiders say OpenAI has agreed to become one of the largest customers for the new processor, marking a “major victory” for NVIDIA. As a core NVIDIA customer, OpenAI has spent recent months seeking more efficient alternatives to NVIDIA chips and recently signed a deal with a chip startup to diversify its supply options.

Earlier on Friday, OpenAI announced a large-scale purchase of “dedicated inference computing power” from NVIDIA and accepted a $30 billion investment from the chip giant, hinting at the new processor’s existence. OpenAI also signed a significant new agreement to use Amazon’s Trainium chips.

NVIDIA has long dominated the design and sale of graphics processing units (GPUs), which can execute billions of simple tasks simultaneously. However, since the AI boom began, the company has for the first time run up against the limits of its flagship products. As the market shifts toward inference computing, NVIDIA faces pressure from clients demanding chips that power AI applications more efficiently.

NVIDIA’s high-performance Hopper, Blackwell, and Rubin series GPUs are regarded as top-tier products for training large-scale AI models and command premium prices. Most analysts estimate that NVIDIA controls over 90% of the GPU market.

NVIDIA CEO Jensen Huang has long maintained that NVIDIA GPUs lead the market in both training and inference, and that this versatility is a core selling point. But over the past year, as companies deploy AI agents (systems that autonomously complete tasks for users) and tools that disrupt hundreds of industries and generate huge subscription revenue, high-end computing demand has shifted from training to inference.

Many companies developing and operating AI agents have found that GPUs are too costly, consume too much power, and are not fully suited to the actual operational needs of models. As AI agents rapidly rise, NVIDIA faces immense pressure to develop lower-cost, more energy-efficient inference chips.

In January, OpenAI reached a multi-billion dollar computing partnership with Cerebras. Cerebras specializes in inference chips, and its CEO Andrew Feldman claims their chips are faster than NVIDIA GPUs. Reports from last fall indicated that OpenAI engineers had requested faster inference chips for AI agent coding applications, leading the company to negotiate with Cerebras.

In February, NVIDIA announced an expansion of its cooperation with Meta, including the company’s first large-scale deployment of pure CPUs to support Meta’s advertising targeting AI agents. This deal revealed NVIDIA’s strategic move: stepping outside the GPU domain to capture niche AI markets.

At the end of last year, NVIDIA agreed to a $20 billion deal to license key technology from Groq and brought in its core management team, including founder Jonathan Ross, in one of Silicon Valley’s largest “talent acquisitions” to date.

Groq’s chips use a completely different architecture from NVIDIA’s GPUs, known as language processing units (LPUs), which are highly efficient for inference tasks. However, NVIDIA has not yet disclosed how it plans to use Groq’s technology.
