2023 JEDEC
Mobile/Client/AI-Computing Forum
Server/Cloud-Computing/Edge Forum

Date: May 16, 2023, 09:00 ~ 16:20

Location: ELTOWER, 213 Gangnamdaero, Seocho-gu, Seoul, Republic of Korea

Program

Schedule | Presentation / Presenter (Affiliation)
Program Moderator: Youngbin Lee, Samsung
09:35 ~ 09:40

JEDEC Welcome

Mian Quddus, JEDEC Board of Directors

09:40 ~ 10:05

How ChromeOS is Making Smarter Memory Decisions

Brian Geffon, Google

Over the past several years there has been a shift in focus toward security and isolation, and with it have come increasing levels of over-provisioned devices. This talk will go into how ChromeOS is taking advantage of, or adding features to, the Linux kernel to better handle resource constraints from user space. We will discuss why user space can do a better job of managing memory in many situations. Finally, we will touch on how hardware can make this job easier going forward.
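As a hedged illustration of the user-space approach the abstract describes, the sketch below parses a line from the Linux PSI interface (/proc/pressure/memory) and maps the short-window stall percentage to a reclaim action. The thresholds and action names are hypothetical, not ChromeOS's actual policy.

```python
# Sketch of a user-space memory-pressure policy in the spirit of the talk:
# the Linux PSI interface (/proc/pressure/memory) reports stall percentages,
# and a user-space daemon can act before the kernel OOM killer has to.
# Thresholds and actions here are invented for illustration.

def parse_psi(line: str) -> dict:
    """Parse one PSI line, e.g. 'some avg10=1.23 avg60=0.50 avg300=0.10 total=12345'."""
    kind, *fields = line.split()
    values = dict(f.split("=") for f in fields)
    return {"kind": kind, **{k: float(v) for k, v in values.items()}}

def pressure_action(avg10: float) -> str:
    """Map short-window memory stall (%) to a hypothetical reclaim action."""
    if avg10 < 1.0:
        return "none"
    if avg10 < 10.0:
        return "drop_caches"      # e.g. discard clean, easily rebuilt state
    return "discard_background"   # e.g. freeze or kill a background task

sample = "some avg10=5.00 avg60=2.10 avg300=0.40 total=987654"
print(pressure_action(parse_psi(sample)["avg10"]))  # -> drop_caches
```

A real daemon would poll (or epoll on) the PSI file and escalate actions as pressure rises.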

10:05 ~ 10:30

The Origins of CAMM (Compression Attached Memory Module)

Dr. Tom Schnell, Dell

CAMM is a Dell-patented memory module design, brought to JEDEC for industry standardization and adoption. CAMM solves great customer problems of performance limits, connector reliability, and service. CAMM also solves great OEM problems of thermals, memory bus routing in motherboards, system form factor constraints, EMC noise, and memory servicing. These problems have existed for years and have now been addressed. This presentation covers how to discover great problems, how to uncover creative solutions, the innovator's dilemma in putting early ideas into product, the JEDEC journey, and the future direction of CAMM.

10:30 ~ 10:55

Supporting Energy Efficient Execution for AI (at the Edge)

Amedeo Zuccaro, ST

This presentation provides an overview of the key aspects of supporting energy-efficient execution of AI inference in embedded devices, as required for connected intelligent nodes.

10:55 ~ 11:15

Enlarged Coverage of Low Power Memories (LPDDR) with Better Performance / Low Power

Jeff Choi, SK hynix

Since LPDDR was introduced to reduce power consumption, the industry has been interested not only in its power consumption but also in its performance. LP memories have rapidly enhanced their speed and performance, now providing better performance than DDR memories. As a result, new industries and applications prefer LPDDR, enlarging the area that LP memories cover: LPDDRx is being adopted not only in the mobile/client space but also in AI, graphics, servers, and more. This talk will touch on how LP memories are preparing for these new requirements from the industry.

11:15 ~ 11:35

Memory, Test and Measurement and the Impacts of Changes in the Data Center

Brig Asay, Keysight

Perhaps no other technology will see a bigger change in the data center over the next few years than memory. As servers move toward further disaggregation, memory must become faster with lower latency. Faster memory means even bigger test and measurement challenges. Previously difficult tasks, such as probing and decoding, only get harder for everyone over the next several years. This discussion will focus on those challenges and some of the best ways to overcome them.

11:35 ~ 11:55

LPDDR5: Everything Everywhere all at Once

Brett Murdock, Synopsys

LPDDR5 has for some time been the jack of all trades among memories while actually being the master of some. This presentation will discuss various applications for LPDDR5 and why LPDDR5 is the memory of choice, considering the memory from the SoC's internal point of view as well.

12:00 ~ 13:00 Lunch Break
13:00 ~ 13:25

LPDDR Memory Subsystem Evolution for Various Applications

Eric Oh, Samsung

A memory-centric architecture is key to system enhancement, and LPDDR memory subsystem evolution currently delivers value across various applications. LPDDR memory has evolved to attain continuous improvement in speed and power efficiency, and increasing the data rate within a limited power budget provides a huge benefit for many applications. In this presentation, we will summarize LPDDR memory solution evolution trends for various markets and address next-generation LPDDR memory subsystem considerations for leaping forward into the future.

13:25 ~ 13:45

In-Memory Computing for Neural Networks Using Multi-Level SONOS

Sergey Ostrikov, Infineon

In-memory computing (IMC) is a technology that aims to keep data and computation as close as possible to each other. One way to implement IMC is by using non-volatile memory (NVM), such as flash, with the goal to reduce data movement and reduce power consumption associated with data movement. AI applications rely on large amounts of kernel weights needed for computation. NVM-based IMC can perform computation in place of storage and thus eliminate the need to fetch the weights into a compute engine. This presentation explores the challenges associated with this approach, such as efficient propagation of intermediate computation results through a static memory array, and proposes a functional solution using TVM as an ML compiler.
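The in-place multiply-accumulate idea can be sketched numerically. This toy model (not Infineon's implementation) stores quantized "multi-level cell" weights and computes a matrix-vector product without moving them to a separate compute engine; the quantization level count is hypothetical, and real SONOS arrays operate in the analog domain.

```python
# Toy model of NVM-based in-memory computing: kernel weights stay in the
# (multi-level) cell array as conductances, and a matrix-vector multiply
# happens "in place" as summed currents per bitline, so weights never
# travel to a separate compute engine. Levels are invented for illustration.
import numpy as np

def quantize(w: np.ndarray, levels: int = 8) -> np.ndarray:
    """Snap weights to a small number of cell levels (multi-level cell)."""
    lo, hi = w.min(), w.max()
    step = (hi - lo) / (levels - 1)
    return np.round((w - lo) / step) * step + lo

def imc_matvec(weights: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Dot product 'inside' the array: per-bitline current summation."""
    g = quantize(weights)   # conductances as stored in the NVM array
    return g @ x            # analog MAC: I_bitline = sum(G * V_in)
```

The quantization error is the price of storing weights in a finite number of cell levels; the abstract's mention of an ML compiler (TVM) concerns mapping whole networks onto such arrays.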

13:45 ~ 14:05

Divergence of Memory Technology Needs for Client/Mobile and Cloud Server SOCs

Nagi Aboulenein, Ampere

We will discuss areas of divergence (and synergy) of client/mobile and server memory technology needs for future SOCs and platforms.

14:05 ~ 14:30

Memory History and Beyond

Osamu Nagashima, Micron

An overview of DRAM technology trends and industry focus features, along with today's DRAM application requirements and technologies.

14:30 ~ 14:50

Adaptable and Programmable System Architecture and Applications Driving DDR5 to Meet the Demands of the Next 5 Years

Thomas To, AMD

The explosion of data traffic is making data center/cloud computing workload demands grow exponentially. Data center processors are seeing a mixture of file sizes, diversified data types, and new algorithms with varying processing requirements. Adding to the challenge is workload evolution, with cloud-based ML/AI (machine learning and artificial intelligence) first and foremost. The processing speed and bandwidth demands increase the data center burden. Example workloads targeted for acceleration are data analytics, networking applications, and cybersecurity. Adaptable system accelerators, such as those implemented with FPGAs, have bridged the computational gap by providing heterogeneous acceleration to offload the burden. However, new data paths, such as those in ML, are fundamentally different from the traditional CPU data path flow. This presentation will highlight the diverse applications of programmable systems and contrast their system memory (e.g., DDR5) requirements with traditional CPU system requirements. The discussion will stress the balance among system cost, bandwidth, and memory density requirements going forward.

14:50 ~ 15:10

Utilize New Memory Features to Enhance Intel Platform User Experience

Sanghyun Yoon, Intel

In this session, you will learn how the system applies new DDR5/LPDDR5X features in the memory initialization sequence to achieve higher bandwidth. Also, see some of the innovative approaches to improving client memory qualification and user experience.

15:10 ~ 15:35

Next-Generation Memory Access for Edge AI Computing: 8.533Gbps, 16Gbps and Beyond

Marc Greenberg, Cadence

Edge AI applications require very high memory bandwidth to perform AI functions. What's the right memory for them? In this presentation we will discuss what's necessary to implement a memory interface at the highest speeds available under the JEDEC standards – LPDDR5X-8533 and GDDR6-16G – and look at the future, with even higher speed grades coming for these memory types.

15:35 ~ 15:55

LPDDR5 In System Validation

Barbara Aichinger, FuturePlus

LPDDR5 is a strong contender for use in the embedded market and is even finding its way onto specialized memory modules. It has proliferated in several different package types and has very high-speed data capture requirements and low-power features. In this presentation we will take a look at how engineers are validating LPDDR5 designs.

15:55 ~ 16:15

LPDDR5 Interface Test and Validation Methodology

Randy White, Keysight

Over time, as LPDDR speeds have increased, the fundamental approach used to move data has had to change. Traditional high-speed digital timing and noise with min/typ/max specifications have given way in LPDDR5 to high-speed serial approaches based on eye masks with jitter specifications. LPDDR5 must go a step further to deal with distorted eyes using tunable equalization. At each point, the need to characterize and measure what's defined in the spec has made measurement science and DFT increasingly important in defining the LPDDR spec. This session will focus on the measurement science behind the LPDDR5 specification.
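The eye-mask pass/fail idea mentioned above can be illustrated with a toy check: a received sample passes if it stays outside a keep-out region around the decision threshold at the sampling instant. The mask geometry below is invented for illustration and is not from any JEDEC specification.

```python
# Toy eye-mask check: the mask is a keep-out band of +/- v_margin around
# the decision threshold (0 V), spanning t_width UI around the sampling
# point t_center. A sample landing inside the mask is a failure.
# All dimensions are hypothetical, not JEDEC values.

def passes_mask(voltage: float, time_ui: float,
                v_margin: float = 0.1, t_center: float = 0.5,
                t_width: float = 0.3) -> bool:
    """True if a sample at (time_ui, voltage) lands outside the eye mask."""
    in_window = abs(time_ui - t_center) < t_width / 2
    in_band = abs(voltage) < v_margin
    return not (in_window and in_band)
```

A real measurement accumulates millions of such samples and also tracks jitter statistics at the mask edges.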

16:15 ~ 16:20

Closing Remarks

Mian Quddus, JEDEC Board of Directors

Schedule | Presentation / Presenter (Affiliation)
Program Moderator: Youngsu Kwon, ETRI
09:35 ~ 10:00

Composable Memory Systems at Meta

Manoj Wadekar, Meta

AI and other applications have been driving dramatic new use cases in the data center, demanding major changes in the underlying hardware infrastructure. The last decade has seen significant changes to GPUs (accelerators), CPUs, and networks. As a result, we are seeing dramatic growth in memory-bound workloads in the data center, and there is a need to rethink memory solutions. AI/ML, cache, database, and data warehouse servers are driving the need for higher memory capacity and bandwidth.
The current memory hierarchy and solutions are limited to CPU-attached memory. However, CXL now opens up at least two new potential "Composable Memory Systems" in next-generation data center solutions. First, we have the potential to dramatically increase memory capacities in some platforms using memory expansion. Second, we can now build TCO-optimized memory tiers. This requires the industry to come together to develop HW/SW co-designed solutions. Meta will share its plans to enable Composable Memory Systems, which are driving its future AI/ML and TCO-optimized memory servers.
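As a rough sketch of the two-tier idea (memory expansion plus a TCO-optimized tier), the toy allocator below places hot pages in a fast CPU-attached tier and spills cold or overflow pages to a CXL-attached tier. Capacities and the hotness threshold are hypothetical, and real tiering is done by the OS/hypervisor with page migration.

```python
# Toy two-tier placement policy: hot pages go to local DRAM, cold pages
# (or overflow when local DRAM is full) go to the larger CXL-attached
# tier. Thresholds and capacities are invented for illustration.

class TieredMemory:
    def __init__(self, local_capacity: int, cxl_capacity: int,
                 hot_threshold: int = 10):
        self.local: dict = {}          # fast, CPU-attached tier
        self.cxl: dict = {}            # larger, higher-latency CXL tier
        self.local_capacity = local_capacity
        self.cxl_capacity = cxl_capacity
        self.hot_threshold = hot_threshold

    def place(self, page: str, access_count: int) -> str:
        """Pick a tier by hotness; spill to CXL when local DRAM is full."""
        if (access_count >= self.hot_threshold
                and len(self.local) < self.local_capacity):
            self.local[page] = access_count
            return "local"
        self.cxl[page] = access_count
        return "cxl"
```

The HW/SW co-design question the abstract raises is exactly how hotness is tracked and how pages migrate between these tiers at low overhead.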

10:00 ~ 10:20

Future Memory Technology Needs for Hyperscale Cloud Servers

Nagi Aboulenein, Ampere

What are the future directions for memory technology requirements, as seen through the lens of hyperscale cloud server SOCs?

10:20 ~ 10:40

Adaptable and Programmable System Architecture and Applications driving DDR5 to Meet the Demands of the Next 5 Years

Thomas To, AMD

The explosion of data traffic is making data center/cloud computing workload demands grow exponentially. Data center processors are seeing a mixture of file sizes, diversified data types, and new algorithms with varying processing requirements. Adding to the challenge is workload evolution, with cloud-based ML/AI (machine learning and artificial intelligence) first and foremost. The processing speed and bandwidth demands increase the data center burden. Example workloads targeted for acceleration are data analytics, networking applications, and cybersecurity. Adaptable system accelerators, such as those implemented with FPGAs, have bridged the computational gap by providing heterogeneous acceleration to offload the burden. However, new data paths, such as those in ML, are fundamentally different from the traditional CPU data path flow. This presentation will highlight the diverse applications of programmable systems and contrast their system memory (e.g., DDR5) requirements with traditional CPU system requirements. The discussion will stress the balance among system cost, bandwidth, and memory density requirements going forward.

10:40 ~ 11:05

Data-Centric Computing

Dr. Sung Ryu, Samsung

The "memory wall" refers to the challenge in computer architecture of providing a sufficient amount of memory bandwidth to keep up with the processing power of a CPU. This problem arises because the speed of the CPU is increasing much faster than the speed of memory, and resulting in performance degradation. The solutions to memory wall issues have been designed for traditional von Neumann architectures and memory hierarchies. However, these existing architectures are not well suited for handling big data and large machine learning models, because the working set is too big to fit in the existing memory hierarchy.
For the solution to bandwidth wall issue, a new approach called Data-Centric computing has emerged as an alternative to the traditional compute-centric paradigm. This approach prioritizes determining the optimal location for computation based on the data's location and the computation's complexity, instead of merely transferring all data to the CPU. Data-Centric computing technologies includes Computational Storage, Processing In Memory (PIM), Processing Near-Memory (PNM), which show promising results for large data workload.
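The placement principle described above can be sketched as a simple cost comparison: move the data to the CPU, or run the (typically slower) near-data engine in place. All bandwidth and throughput figures below are hypothetical and chosen only to make the tradeoff visible.

```python
# Toy data-centric placement decision: compare the estimated time to move
# the data over the link and compute on the fast CPU against the time to
# compute in place on a slower PIM/PNM engine. All figures hypothetical.

def choose_location(data_bytes: float, ops: float,
                    link_gbps: float = 10,    # link bandwidth, Gbit/s
                    cpu_gops: float = 100,    # CPU throughput, Gop/s
                    pim_gops: float = 20) -> str:  # near-data throughput
    """Return where the computation is estimated to finish sooner."""
    move_time = data_bytes * 8 / (link_gbps * 1e9) + ops / (cpu_gops * 1e9)
    in_place_time = ops / (pim_gops * 1e9)
    return "near-data" if in_place_time < move_time else "cpu"
```

Big data with light compute favors the near-data engine; small data with heavy compute still favors moving it to the CPU, which is the intuition behind weighing the data's location against the computation's complexity.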

11:05 ~ 11:25

DDR5 In System Validation

Barbara Aichinger, FuturePlus

DDR5 is now being introduced in servers, desktops, and laptops, and has two high-speed channels with double data rate and single data rate signals. UDIMM, RDIMM, and SODIMM modules are all pinned out differently, and these modules carry PMICs, SPD, hub, TSs, and RCD. Certainly, DDR5 is more complicated than DDR4! This presentation will review the lab validation problems facing engineers currently working on DDR5. See how engineers are solving these problems and what challenges they face.

11:25 ~ 11:45

DDR5 Interface Test and Validation Methodology

Randy White, Keysight

There’s the standard, and then there’s how to measure it. Usually the specification drives measurement procedures but at DDR5 speeds development must go hand-in-hand to ensure that what works in theory will not only work in practice, but can be confirmed on the lab bench and in production. This session focuses on the DDR5 measurement methodologies that have been driven by the specification and the practical considerations that have influenced the DDR5 specification. Probing and test fixturing, use of new DFT features in the DDR5 specification itself, measurement algorithms and automation, and specific examples are presented that enable characterization and troubleshooting of DDR5 memory and support devices, DIMMs, as well as entire systems, both server and embedded.

12:00 ~ 13:00 Lunch Break
13:00 ~ 13:05

JEDEC Welcome

Mian Quddus, JEDEC Board of Directors

13:05 ~ 13:30

Memory Market and Industry Technology Trend

TK Kim, Samsung

A comprehensive presentation covering market analysis for compute memory and an introduction to relevant, up-to-date memory technologies.

13:30 ~ 13:50

Choosing the right DRAM Memory for Custom Computing Chips: Bandwidth, Capacity and Power for DDR5, LPDDR5/5X, GDDR6 and HBM3

Marc Greenberg, Cadence

DDR5 is a popular DRAM memory for new server/cloud and edge designs. DDR5 is capable of providing very high memory capacity while mounted on DIMMs, CXL™ or directly attached to the PCB, making DDR5 the obvious choice for compute-heavy and big data server designs. Meanwhile there is rapid growth in specialized server machines for artificial intelligence / machine learning, cryptography and media, as well as edge applications of all types that may benefit from different memories optimized for different tradeoffs of bandwidth, power, capacity and form-factor. In this presentation we’ll discuss where DDR5 is a strong choice, and where LPDDR5/5X, GDDR6 or HBM3 may provide a better tradeoff for particular types of Server/Cloud and Edge designs.

13:50 ~ 14:10

DDR5, What to Innovate: DDR5 SDRAM, Module and Supporting Chips as a Whole

DY Lee, ONE Semiconductor

This presentation explains what has been improved in the DDR5 SDRAM generation, from the DRAM and the module to the supporting chips as a whole. The memory bottleneck becomes more critical over time, and the memory industry is trying to respond. This presentation covers the key innovations made in the DDR5 generation.

14:10 ~ 14:30

Intel Server New Memory Feature to Improve DDR5 Reliability and Performance

Taeyun Kim, Intel

In this session, you will learn how Intel takes serious efforts to improve DDR5 quality, through validation as well as by utilizing DDR5 memory features, to address customers' memory quality and reliability concerns. Also, see some of the innovative approaches to improving server system performance.

14:30 ~ 14:55

Memory Offerings for Data Centers: Now and Beyond

Eugene Hongbae Kim, SK hynix

Data centers are now the core of the new industrial revolution, and the role that memory solutions play has become ever more important as the amount of data fueling many different services, including AI like ChatGPT, increases at an explosive rate. This presentation explains what is taking place and what can be anticipated in memory offerings for data centers.

14:55 ~ 15:15

Memory, Test and Measurement and the Impacts of Changes in the Data Center

Brig Asay, Keysight

Perhaps no other technology will see a bigger change in the data center over the next few years than memory. As servers move toward further disaggregation, memory must become faster with lower latency. Faster memory means even bigger test and measurement challenges. Previously difficult tasks, such as probing and decoding, only get harder for everyone over the next several years. This discussion will focus on those challenges and some of the best ways to overcome them.

15:15 ~ 15:35

DDR5 RDIMM 6400Mbps Signal Integrity Analysis

Brett Murdock, Synopsys

It is well known throughout the memory industry that the performance of a DDR5-based system is heavily impacted by the system configuration. Many designers are chasing the DDR5 speed target of 6400 Mbps but want as high a DRAM capacity as possible, which translates into more loading/ranks. This presentation will show a signal integrity analysis of a dual-rank, single-socket, DDR5 RDIMM-based system targeting 6400 Mbps. The presentation will discuss timing budgets and highlight the data eye improvement seen when enabling receiver decision feedback equalization (DFE).
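The DFE mentioned in the abstract can be illustrated with a minimal one-tap model: each decided bit's post-cursor inter-symbol interference (ISI) is subtracted from the next sample before slicing, which is why enabling DFE opens the data eye. The tap weight and channel model below are invented for illustration.

```python
# Minimal one-tap decision feedback equalizer (DFE) for NRZ signaling:
# the receiver cancels the post-cursor ISI contributed by the previous
# decided bit before slicing the current sample. Tap value hypothetical.

def dfe_slice(samples, tap: float = 0.25):
    """Slice NRZ samples (+1/-1), cancelling one post-cursor ISI tap."""
    decisions = []
    prev = 0.0
    for s in samples:
        corrected = s - tap * prev          # subtract ISI of last decision
        bit = 1.0 if corrected > 0 else -1.0
        decisions.append(bit)
        prev = bit
    return decisions
```

With a channel that adds 0.25 of the previous symbol to each sample, this slicer recovers the transmitted bits exactly, whereas a plain threshold slicer would see a correspondingly smaller vertical eye opening.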

15:35 ~ 15:40

Closing Remarks

Mian Quddus, JEDEC Board of Directors