I am a PhD student in Computer Engineering at Arizona State University. My interests lie in challenges of computing systems, in particular developing compiler solutions and architecture designs for programmable dataflow accelerators. My research at the Compiler Microarchitecture Lab is advised by Prof. Aviral Shrivastava and focuses on enabling automated acceleration of workloads on Coarse-Grained Reconfigurable Arrays (CGRAs). CGRAs are energy-efficient dataflow accelerators that can speed up even non-vectorizable performance-critical loops. My prior industry experience includes compiler optimizations and code generation for embedded systems, as well as RTL design and verification for ASIC and FPGA platforms.
Dataflow accelerators, including Coarse-Grained Reconfigurable Arrays (CGRAs), CGRA-like spatial architectures (e.g., the Eyeriss system from MIT), and systolic-array-based accelerators (e.g., the Tensor Processing Unit), are being actively developed and analyzed in both industry and academia. Owing to their high energy efficiency, dataflow accelerators have emerged as a promising solution for accelerating compute- and memory-intensive applications such as deep neural networks (DNNs). I am currently investigating the throughput and energy-efficiency benefits of executing such applications on dataflow accelerators, as well as optimizing their dataflow execution. The goal of this research is to determine optimized architecture and compiler solutions and to establish a complete system stack with which DNN models from libraries such as TensorFlow can be executed on CGRA-like dataflow accelerators.
Moreover, my research interests include: