IEEE CSS/CASS/SMCS - Interconnect Meets Architecture: On-Chip Communication in the Age of Heterogeneity

Date and Time

Wednesday, October 4, 2023, 7:00 PM until 8:00 PM

Location

Online Webinar
PA, USA

Category

Affiliate Group Event

Registration Info

Registration is required

About this event

IEEE Philadelphia Chapter of CSS/CASS/SMCS

Distinguished Lecture

 

Date: Wednesday, Oct 04, 2023

Time: 7:00 pm to 8:00 pm

Location: Online Webinar (link provided to registrants)

 

Interconnect Meets Architecture: On-Chip Communication in the Age of Heterogeneity

 

Speaker: Dr. Partha P. Pande, Professor, School of Electrical Engineering & Computer Science, Washington State University

 

Abstract: Neural networks, graph analytics, and other big-data applications have become vastly important across many domains. This has led to a search for computing systems that can efficiently exploit the tremendous data parallelism associated with these applications. Generally, we depend on data centers and high-performance computing (HPC) clusters to run big-data applications; however, data center design is dominated by power, thermal, and physical constraints. In contrast, emerging heterogeneous manycore processing platforms, which integrate CPU and GPU cores along with memory controllers (MCs) and accelerators, have small footprints and offer power- and area-efficient tradeoffs for running big-data applications. Consequently, heterogeneous manycore computing platforms represent a powerful alternative to data center-oriented computing. However, the typical Network-on-Chip (NoC) infrastructures employed on conventional manycore platforms are highly sub-optimal for the specific needs of CPUs, GPUs, and accelerators. To address this challenge, we need a holistic approach to designing an optimal NoC as the interconnection backbone for heterogeneous manycore platforms, one that can handle CPU, GPU, and application-specific accelerator communication requirements efficiently.

We will discuss the design of a hybrid NoC architecture suitable for heterogeneous manycore platforms. We will also highlight the effectiveness of machine learning-inspired multi-objective optimization (MOO) algorithms in quickly finding a NoC that satisfies both CPU and GPU communication requirements. Widely used MOO techniques (e.g., NSGA-II or the simulated annealing-based AMOSA) can require significant amounts of time due to their exploratory nature, so more efficient and scalable ML-based optimization techniques are required. Finally, we will discuss various features of a generalized, application-agnostic heterogeneous NoC design that achieves performance (latency, throughput, energy, and temperature) similar to application-specific designs.
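
To make the multi-objective tradeoff concrete, the Python sketch below is an illustrative toy, not the speaker's method: it randomly samples hypothetical NoC configurations (link count and router buffer depth), scores each with made-up latency and energy models, and keeps the Pareto-optimal set. MOO algorithms such as NSGA-II or AMOSA search this same kind of latency/energy front, only far more intelligently.

# A minimal, illustrative sketch of multi-objective NoC design-space
# exploration. The configuration parameters (link_count, buffer_depth)
# and the latency/energy models are hypothetical placeholders chosen
# only to show how a Pareto front of latency-vs-energy tradeoffs is
# extracted from a set of candidate designs.
import random

def evaluate(link_count: int, buffer_depth: int) -> tuple[float, float]:
    """Toy objective models: more links/buffers lower latency but raise energy."""
    latency = 100.0 / (link_count * 0.8 + buffer_depth * 0.3)   # hypothetical
    energy = 0.5 * link_count + 0.2 * buffer_depth              # hypothetical
    return latency, energy

def pareto_front(points):
    """Keep only the non-dominated (latency, energy, config) tuples."""
    front = []
    for p in points:
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

def explore(samples: int = 200, seed: int = 0):
    """Randomly sample the design space and return its Pareto-optimal configs."""
    rng = random.Random(seed)
    evaluated = []
    for _ in range(samples):
        links = rng.randint(8, 64)     # hypothetical NoC link budget
        buffers = rng.randint(2, 16)   # hypothetical router buffer depth
        lat, en = evaluate(links, buffers)
        evaluated.append((lat, en, {"links": links, "buffers": buffers}))
    return pareto_front(evaluated)

if __name__ == "__main__":
    for lat, en, cfg in sorted(explore()):
        print(f"latency={lat:6.2f}  energy={en:6.2f}  config={cfg}")

A practical NoC MOO framework would replace the random sampling with evolutionary or ML-guided search and the toy models with cycle-accurate or learned performance and energy estimates.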

 

Click HERE to register