Sahil Tyagi
Ph.D. Candidate (ABD)
styagi_AT_iu_DOT_edu
Intelligent Systems Engineering
Indiana University Bloomington

About Me

I will graduate soon and am currently on the job market for full-time positions.

Hello! I’m Sahil (pronounced as ‘saa-hill’), a grad student advised by Dr. Martin Swany in the Department of Intelligent Systems Engineering (ISE) at the Luddy School of Informatics, Computing and Engineering of Indiana University Bloomington (IUB).

I’m a researcher working at the intersection of deep learning, distributed systems, and systems for ML / ML for systems. My Ph.D. work focuses on building efficient computation and communication models to scale artificial neural networks across edge, cloud, and high-performance computing (HPC) environments. I’m currently working on topics such as distributed training, federated learning, model compression, communication protocols optimized for deep learning, stream processing, and differential privacy.

Outside of research, I’m an amateur photographer who mostly shoots wildlife and the night sky. Here are some pictures.


Education


Research Interests


Work Experience


Publications


Skills


Teaching Experience

Course | Role | Term(s)
High-Performance Computing (ENGR-E317/517) | Associate Instructor | Spring 2024
Computer Networks (ENGR-E318/518, CSCI-P438/538) | Associate Instructor | Fall 2023, Fall 2022
Operating Systems (ENGR-E319/519, CSCI-P436/536) | Associate Instructor | Spring 2023
Distributed Systems (ENGR-E510, CSCI-B534) | Associate Instructor | Spring 2022, Spring 2021
Cloud Computing (ENGR-E516) | Associate Instructor | Fall 2021, Fall 2020, Fall 2019

Professional Services


Awards


Presentations and Talks

  1. 04/24: Guest lectures, “Parallel Computing with GPUs for Distributed ML Applications”, High-Performance Computing (HPC) course, Indiana University Bloomington, USA [pdf1], [pdf2].
  2. 12/23: Paper presentation, “Flexible Communication for Optimal Distributed Learning over Unpredictable Networks.” 2023 IEEE International Conference on Big Data, Sorrento, Italy [pdf].
  3. 11/23: Paper presentation, “Accelerating Distributed ML Training via Selective Synchronization.” 2023 IEEE International Conference on Cluster Computing, Santa Fe, New Mexico, USA [pdf].
  4. 11/23: Poster presentation, “Accelerating Distributed ML Training via Selective Synchronization.” 2023 IEEE International Conference on Cluster Computing, Santa Fe, New Mexico, USA [pdf].
  5. 09/23: Invited talk, “Towards building efficient computation and communication models for distributed deep learning systems.” Mathematics and Computer Science (MCS) division, Argonne National Laboratory, Illinois, USA.
  6. 07/23: Paper presentation, “GraVAC: Adaptive Compression for Communication-Efficient Distributed DL Training.” 2023 IEEE International Conference on Cloud Computing, Chicago, Illinois [pdf].
  7. 05/23: Paper presentation, “Scavenger: A Cloud Service for Optimizing Cost and Performance of ML Training.” 2023 IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing, Bengaluru, India [pdf].
  8. 05/23: Poster presentation, “Scavenger: A Cloud Service for Optimizing Cost and Performance of ML Training.” 2023 IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing, Bengaluru, India [pdf].
  9. 12/22: Paper presentation, “ScaDLES: Scalable Deep Learning over Streaming Data at the Edge.” 2022 IEEE International Conference on Big Data, Osaka, Japan [pdf].
  10. 07/20: Paper presentation, “Taming Resource Heterogeneity in Distributed ML Training with Dynamic Batching.” 2020 IEEE International Conference on Autonomic Computing and Self-Organizing Systems, virtual [pdf].
  11. 11/18: “Real-Time Anomaly Detection from Edge to HPC-Cloud”, Intel Speakerships at SC18 (Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis 2018), Dallas, Texas, USA [pdf].

References

Available upon request.
