A CXL memory module is displayed at the SK hynix booth during last year’s SEDEX held at COEX in Gangnam-gu, Seoul. /Yonhap News

As competition heats up in the high-bandwidth memory (HBM) sector, leading memory chipmakers are expanding their artificial intelligence (AI) semiconductor strategies to include Compute Express Link (CXL), an advanced memory interface technology. The shift reflects surging demand from Big Tech firms for next-generation AI data centers, where the ability to process massive volumes of data efficiently has become increasingly critical.

Servers powering these data centers typically incorporate a range of semiconductor components, including central processing units (CPUs), graphics processing units (GPUs), and DRAM. CXL is a state-of-the-art interface designed to optimize data transfers among these components. Because it lets servers extract more performance from fewer chips, it offers a path to lower infrastructure costs. “It not only expands memory capacity but also significantly boosts data transfer speeds between semiconductors,” said an industry official. “That’s why it has become, along with HBM, one of the most closely watched technologies in the AI era.”


South Korea’s Samsung Electronics and SK hynix, together with U.S.-based Micron Technology, the third-ranked of the three major memory producers, are accelerating efforts to capture early leadership in the CXL memory market. According to market research firm Yole Intelligence, the global CXL market is projected to grow from $14 million in 2023 to $16 billion by 2028.

Samsung Electronics and SK hynix unveiled their latest CXL technologies and research progress at CXL DevCon 2025, held on Apr. 29 in California. Now in its second year, the event is hosted by the CXL Consortium, a global coalition of semiconductor companies.

At the conference, Samsung showcased its memory pooling technology, which leverages CXL to link multiple memory modules into a single shared pool. This enables users to flexibly access and allocate memory resources as needed. A first mover in the field, Samsung developed the industry’s first CXL-based DRAM in May 2021. In 2023, the company introduced a 128GB DRAM module supporting the CXL 2.0 standard, which completed customer validation by the end of that year. Samsung is now preparing to finalize validation for a 256GB version. “Samsung is determined to take the lead in the CXL market to avoid repeating the mistake of ceding ground in the HBM segment,” an industry insider said.


SK hynix, which holds a competitive edge in HBM, is seeking to apply that momentum to the CXL space, focusing particularly on its high-performance DRAM capabilities. On Apr. 23, the company completed customer validation for a 96GB DDR5 DRAM module based on the CXL 2.0 standard. “When applied to servers, this module delivers 50% more capacity and 30% higher bandwidth compared to standard DDR5 modules,” a company representative said. “It’s a technology that can dramatically reduce infrastructure costs for data center operators.” SK hynix is also pursuing validation for a 128GB variant.

Micron Technology, the world’s third-largest memory chipmaker, began rolling out CXL 2.0-based memory expansion modules last year, intensifying its push to close the technological gap with Samsung and SK hynix.

The rise of CXL comes amid a broader transformation in AI development—from training-heavy models to inference-driven architectures. Until recently, AI performance depended largely on how much data a model could ingest during the training phase. This stage favored hardware such as GPUs paired with HBM, like those found in NVIDIA’s AI accelerators.

Today, however, the focus is shifting to inference-based AI models, which not only provide responses drawn from their training data but also generate new outputs through logical reasoning, even when the answer is not explicitly contained in the training set. These inference workloads require not just access to large datasets but also rapid, efficient processing, which is precisely the advantage CXL is designed to deliver. This growing need for high-efficiency data handling is driving the AI sector’s rising interest in the next-generation memory interface technology.