Chiplet strategy is key to tackling compute density challenges



Data center workloads change rapidly, demanding high compute density with different combinations of compute, memory, and I/O capacity. This drives architectures away from one-size-fits-all, monolithic solutions toward disaggregated functions that can be scaled independently for specific applications.

Adopting the latest process nodes is imperative to provide the necessary compute density. Doing so with traditional monolithic SoCs, however, has an inherent drawback: escalating costs and lengthening time to market result in unfavorable economics. To address this dilemma, chiplet-based integration strategies have emerged in which compute benefits from the most advanced process nodes, while application-specific memory and I/O integrations reside on mature process nodes.


Additionally, breaking down a solution into its composable parts opens the door to an ecosystem of partners who can independently develop optimized chiplets that can then be heterogeneously mixed and matched into a variety of highly differentiated and cost-effective solutions.

The chiplet approach strikes a balance, providing a wealth of domain-specific solutions from a set of composable chiplet functions rather than a single monolithic design. Compute chiplets tend to adopt advanced process nodes quickly for the best performance, power, and area. Conversely, memory and I/O functions use mixed-signal capabilities that benefit less from the latest node and require longer development cycles, so integrating those chiplets on a mature follow-on process node is more advantageous.

Since memory and I/O configuration is typically workload specific, integrating these chiplets on a more cost-effective node tends to be high-value, differentiated SoC development. The compute chiplet, on the other hand, is more general purpose and can amortize the higher cost of advanced nodes over a wider range of applications and a higher-volume opportunity. Finally, a system integrator can mix and match chiplets to address a wide variety of applications and product SKUs without incurring the high cost of new design starts.

For a typical high-performance processor design, these benefits translate to savings of at least $20 million per product and a time to market roughly two years faster. The cost savings come from reduced IP licenses, mask sets, EDA tools, and development effort. The time-to-market advantage stems from a significant reduction in the complexity of integrating, verifying, and producing a solution compared with a monolithic approach. Finally, the packaging technology required to integrate multiple chiplets has already entered the mainstream and does not add significant risk to bringing a more cost-effective product to market.

For a multi-vendor chiplet approach to become mainstream, two things must be in place: an open, standardized die-to-die (D2D) interface between chiplets, and an ecosystem of function-specific chiplets that can be easily integrated to serve different applications. Industry leaders are investing resources and effort to ensure both are in place in the near future.

The Open Domain-Specific Architecture (ODSA) working group within the Open Compute Project was a natural home for the D2D standardization effort, ensuring it can operate effectively within the data center and in applications out to the 5G network edge. Multiple vendors offer their highly portable Bunch-of-Wires (BoW) D2D PHY technology to provide the electrical physical layer between chiplets. On top of the PHY layer, Ventana has created a lightweight link layer to efficiently transport standard interconnect protocols across chiplet interfaces.
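
To make the layering concrete, the sketch below shows one way a standard interconnect packet might be wrapped in a minimal link-layer frame before crossing the D2D PHY. The field layout, protocol IDs, and helper names are invented for illustration only and do not describe Ventana's actual link layer.

    import struct
    import zlib

    # Purely illustrative framing: a tiny header identifying the tunneled
    # protocol plus a CRC, to show what "transporting a standard interconnect
    # protocol over a D2D PHY" can look like. Protocol IDs and field layout
    # are hypothetical.
    PROTO_IDS = {"CXL.io": 0x01, "CHI": 0x02, "AXI": 0x03}

    def frame_payload(protocol: str, seq: int, payload: bytes) -> bytes:
        """Wrap a protocol packet in a minimal frame: header + payload + CRC32."""
        header = struct.pack(">BHH", PROTO_IDS[protocol], seq & 0xFFFF, len(payload))
        crc = zlib.crc32(header + payload) & 0xFFFFFFFF
        return header + payload + struct.pack(">I", crc)

    def unframe(frame: bytes) -> tuple[int, int, bytes]:
        """Check the CRC and return (protocol_id, seq, payload)."""
        proto, seq, length = struct.unpack(">BHH", frame[:5])
        payload = frame[5:5 + length]
        (crc,) = struct.unpack(">I", frame[5 + length:])
        assert zlib.crc32(frame[:5 + length]) & 0xFFFFFFFF == crc, "link-layer CRC error"
        return proto, seq, payload

    frame = frame_payload("CXL.io", seq=7, payload=b"\x00" * 16)
    print(unframe(frame))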

The advantages of breaking down solutions into versatile, composable chiplet functions depend heavily on the attributes of the D2D interface to achieve a good performance-power-cost trade-off. BoW is a compelling solution because it can provide very high bandwidth, low latency, and low power at low cost. In addition, its circuit complexity is very low, allowing wider adoption across multiple customers and product lines. The initial configuration of the interface aims to provide raw throughput of up to 128 Gb/s with less than 8 ns of latency and active power consumption of less than 0.5 pJ/bit.
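
A quick back-of-the-envelope calculation shows what those headline figures imply; the numbers below simply restate the targets quoted above and are not measured results.

    # Back-of-the-envelope check of the BoW figures quoted above
    # (128 Gb/s raw throughput, < 8 ns latency, < 0.5 pJ/bit).
    bandwidth_bps = 128e9        # 128 Gb/s raw throughput
    energy_per_bit_j = 0.5e-12   # 0.5 pJ transferred per bit
    latency_s = 8e-9             # 8 ns link latency

    active_power_w = bandwidth_bps * energy_per_bit_j   # J/s = W
    bits_in_flight = bandwidth_bps * latency_s          # bandwidth-delay product

    print(f"Active power at full throughput: {active_power_w * 1e3:.0f} mW")  # ~64 mW
    print(f"Bits in flight at 8 ns latency:  {bits_in_flight:.0f} bits")      # ~1024 bits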

A rich ecosystem of partners is also forming around the standardized D2D chiplet interface. Several established vendors are working on a range of high-speed serial and processing infrastructure that will support a broad market of solutions. Beyond data centers, the developing partner ecosystem is focused on other high-growth market segments such as 5G infrastructure, advanced computing, automotive, and end-customer devices.

The RISC-V extensible ISA provides a solid foundation for delivering domain-specific acceleration in conjunction with a unified software framework. This is a key rationale for the creation of Ventana Micro Systems. We wanted to bring RISC-V into the high-performance processor category with data-center-class processors that meet the specific needs of hyperscalers and enterprise customers. We chose a chiplet-based approach within an ecosystem of partners to enable rapid adoption of the technology.

We have demonstrated that our compute chiplets can decode and execute custom instructions within an integrated chiplet design. This approach provides the flexibility to support a range of solutions in which customers can choose to keep their differentiating technology private on a separate chiplet or work directly with Ventana to achieve a more optimal integration.
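
As an illustration of the kind of ISA extensibility being described, the sketch below encodes a hypothetical R-type instruction into the custom-0 major opcode that the RISC-V specification reserves for vendor-defined extensions. The mnemonic, field values, and semantics are invented for this example and are not Ventana's actual instructions.

    # Illustrative only: encode a hypothetical R-type RISC-V instruction
    # using the "custom-0" major opcode (0b0001011), reserved by the ISA
    # for vendor-defined extensions.
    CUSTOM_0 = 0b0001011

    def encode_r_type(funct7: int, rs2: int, rs1: int, funct3: int, rd: int,
                      opcode: int = CUSTOM_0) -> int:
        """Pack the standard R-type fields into a 32-bit instruction word."""
        return ((funct7 & 0x7F) << 25) | ((rs2 & 0x1F) << 20) | \
               ((rs1 & 0x1F) << 15) | ((funct3 & 0x7) << 12) | \
               ((rd & 0x1F) << 7) | (opcode & 0x7F)

    # Hypothetical accelerator op: "acc.op rd, rs1, rs2" with funct3=0, funct7=1
    word = encode_r_type(funct7=1, rs2=11, rs1=10, funct3=0, rd=12)
    print(f"0x{word:08x}")  # a decoder could dispatch this encoding to an accelerator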

Chiplet-based integration is a necessary and well-suited approach to enabling disruptive new trends such as disaggregated servers, heterogeneous computing, and domain-specific acceleration within the data center and other high-growth markets. Beyond enabling rapid adoption of these emerging trends, it offers significant cost and time-to-market advantages over traditional monolithic SoCs.

The standardization of D2D interfaces within ODSA will allow a rich ecosystem to support these unique and differentiated integrations from a set of available chiplets. RISC-V ISA extensibility provides the recipe for delivering domain-specific acceleration in record time, leveraging a production-ready compute chiplet and its supporting ecosystem.

–Balaji Baktha is founder and CEO of Ventana Micro Systems.
