Lab

Research

AI Chip

“Build Next Generation AI Platform”

Machine learning (ML) has recently become one of the hottest computing paradigms, revolutionizing how computers handle cognitive tasks by learning from massive amounts of observed data. As more industries adopt the technology, demand is growing fast for hardware that enables faster and more energy-efficient processing. However, the latest hardware solutions are often limited to a few popular algorithms, such as multi-layer perceptrons (MLPs), convolutional neural networks (CNNs), and recurrent neural networks (RNNs), and consider only inference without training.

In this research, we will focus on hardware support for next-generation AI/ML scenarios such as unsupervised learning, reinforcement learning, and genetic algorithms. Starting from workload analysis, we will identify essential computational kernels to accelerate alongside microprocessors, seeking the right balance between energy efficiency and programmability.

Datacenter SoC

“Make Datacenters More Efficient”

Cloud computing is rapidly changing how enterprises run their services by offering virtualized computing infrastructure over the internet. A datacenter is the powerhouse behind cloud computing, physically hosting millions of computer servers, communication cables, and storage devices. Hardware specialization for datacenter servers makes economic sense because its energy savings are magnified by the number of servers. Although it is difficult to identify dominant applications in datacenters, the network and storage layers tend to share data-processing pipelines across workloads.

In this research, we aim to develop a specialized system-on-chip that not only accelerates common network and storage processing but also provides direct paths between virtual machines and network and storage devices in datacenters. If interested, please see the publications listed below.

Related Publications: MICRO 2016, ISCA 2014, Amazon Nitro

Memory-Centric Computing

“Create Computer Systems Around Memories”

Traditionally, the CPU, which executes arithmetic and logic operations, has been the center of a computing system, with a few layers of memory built around it to feed it data. However, as compute units have become much faster than memory units with technology scaling, computation is no longer the most time- and energy-consuming part of the system. Instead, the cost of moving data to the locations where computation happens has become the bottleneck.

The memory-centric model takes the opposite approach to the traditional compute-centric model to address this expensive data-movement problem. Data stays at different levels of the storage hierarchy, and processing engines near each level perform computations to avoid moving data across the hierarchy. This trend can be seen at multiple levels of a hardware system: if the computation unit is embedded in a storage device such as an SSD, it is often called near-data computing; if the computation unit is embedded near the memory cells, it is called processing-in-memory. In this research, we will focus on application-specific scenarios that maximize the data-reduction effect.
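To illustrate the data-reduction effect, the following sketch contrasts a compute-centric design, which ships every record to the host before filtering, with a near-data design that filters on the device and moves only the matches. All names, record sizes, and selectivity numbers here are hypothetical, chosen only to make the comparison concrete; they are not from our work.

```python
# Hypothetical illustration of near-data computing: a selective filter
# pushed to the storage side moves far fewer bytes across the interface.
RECORD_SIZE = 64  # assumed bytes per record

def host_side_filter(records, predicate):
    """Compute-centric: move every record to the host, then filter."""
    bytes_moved = len(records) * RECORD_SIZE
    return [r for r in records if predicate(r)], bytes_moved

def near_data_filter(records, predicate):
    """Memory-centric: filter near the data; move only the matches."""
    result = [r for r in records if predicate(r)]  # runs on the device
    bytes_moved = len(result) * RECORD_SIZE
    return result, bytes_moved

records = list(range(1_000_000))
selective = lambda r: r % 1000 == 0  # 0.1% of records match

_, host_bytes = host_side_filter(records, selective)
_, near_bytes = near_data_filter(records, selective)
print(f"host-side filter moved {host_bytes:,} bytes")   # 64,000,000 bytes
print(f"near-data filter moved {near_bytes:,} bytes")   # 64,000 bytes
```

The more selective the computation, the larger the reduction, which is why application-specific scenarios with high data reduction are the most promising targets.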

Related Publications: ISCA 2016, Arxiv 2019

Secure Hardware Platform for Internet-of-Things

“Build Secure IoT Platform”

The Internet-of-Things (IoT) connects billions of physical objects by equipping them with wireless communication and embedded electronics. While modern CPUs are somewhat resilient to sophisticated attack scenarios thanks to software patches, embedded microprocessors in IoT devices are vulnerable even to known attacks due to limited computing and power budgets. To overcome this problem, we plan to build a secure IoT platform that supports important security features, such as secure boot, run-time security monitoring, storage protection, and secure communication, from the circuit level up. The goal of this platform is to provide a higher level of security than software-based approaches within a power-limited IoT device.
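As a general sketch of how secure boot establishes a chain of trust, each boot stage checks a cryptographic digest of the next stage's image before handing over control; on a real device the trusted digests would live in ROM or fuses rather than in software. The stage names and images below are hypothetical, for illustration only, and do not describe our platform's design.

```python
# Minimal sketch of a secure-boot chain of trust: each stage refuses to
# run the next stage's image unless its digest matches a trusted value.
import hashlib

def sha256(image: bytes) -> str:
    return hashlib.sha256(image).hexdigest()

# Hypothetical stage images; in hardware the expected digests would be
# anchored in immutable storage (boot ROM, eFuses).
bootloader_img = b"second-stage bootloader code"
os_img = b"operating system image"

trusted_digests = {
    "bootloader": sha256(bootloader_img),
    "os": sha256(os_img),
}

def verify_and_boot(stage: str, image: bytes) -> bool:
    """Run the image only if its digest matches the trusted value."""
    if sha256(image) != trusted_digests[stage]:
        return False  # tampered image: refuse to boot
    # ... jump to the verified image ...
    return True

assert verify_and_boot("bootloader", bootloader_img)        # genuine image boots
assert not verify_and_boot("os", os_img + b" tampering")    # modified image is rejected
```

A hardware-rooted version of this check is what lets a power-limited device reject tampered firmware without relying on software patches alone.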