Greetings!
I am Jiong (煚), a Ph.D. candidate in Computer Science and Engineering at the University of Michigan, working with Danai Koutra. My research focuses on Graph Representation Learning and Graph Neural Networks (GNNs). I am particularly interested in their limitations, robustness, scalability, fairness, and applications in complex, large-scale environments. A key aspect of my work is enhancing GNN performance on heterophilous graphs (where connected nodes often have dissimilar labels and features). This research includes exploring the interplay between heterophily and other critical aspects of GNN research, such as robustness and distributed GNN training.
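For readers less familiar with heterophily, here is a minimal sketch of the edge homophily ratio, i.e., the fraction of edges whose endpoints share a label (the helper name and the networkx toy graph below are purely illustrative):

```python
import networkx as nx

def edge_homophily(G: nx.Graph, labels: dict) -> float:
    """Fraction of edges whose two endpoints share the same label."""
    same = sum(1 for u, v in G.edges() if labels[u] == labels[v])
    return same / G.number_of_edges()

# Toy example: a 4-node path with alternating labels is strongly heterophilous.
G = nx.path_graph(4)                      # edges: (0,1), (1,2), (2,3)
labels = {0: "A", 1: "B", 2: "A", 3: "B"}
print(edge_homophily(G, labels))          # 0.0 -> low homophily, high heterophily
```

A ratio near 1 indicates a homophilous graph; values near 0 indicate heterophily, the regime much of my work targets.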
Additionally, I have experience with Large Language Models (LLMs), gained during my applied scientist internship at Amazon, where my focus was on training data curation and fine-tuning for semantics-based information retrieval using a two-tower model approach.
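To give a flavor of the two-tower approach, here is a generic sketch only: the PyTorch encoders, class name, and toy token inputs below are illustrative assumptions, not the system I worked on at Amazon.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoTowerRetriever(nn.Module):
    """Toy two-tower retriever: separate encoders map queries and items into
    a shared embedding space, and relevance is scored by cosine similarity."""

    def __init__(self, vocab_size: int, dim: int = 64):
        super().__init__()
        # Bag-of-token-embeddings encoders stand in for the (much larger)
        # language-model towers used in practice.
        self.query_tower = nn.EmbeddingBag(vocab_size, dim)
        self.item_tower = nn.EmbeddingBag(vocab_size, dim)

    def forward(self, query_tokens: torch.Tensor, item_tokens: torch.Tensor) -> torch.Tensor:
        q = F.normalize(self.query_tower(query_tokens), dim=-1)
        x = F.normalize(self.item_tower(item_tokens), dim=-1)
        return q @ x.T  # (num_queries, num_items) cosine-similarity scores

model = TwoTowerRetriever(vocab_size=1000)
queries = torch.randint(0, 1000, (2, 5))  # 2 queries, 5 token ids each
items = torch.randint(0, 1000, (3, 5))    # 3 candidate items
print(model(queries, items).shape)        # torch.Size([2, 3])
```

Because the two towers are independent, item embeddings can be precomputed and indexed, which is what makes this architecture attractive for large-scale retrieval.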
Previously, I was a Master’s student in Electrical and Computer Engineering at UM. Before joining UM, I received my bachelor’s degree from Xi’an Jiaotong University, where I was a member of the Special Class for the Gifted Young.
Apart from my studies, I am a savvy tech user and a go-to friend for coding and tech questions.
Publications
2023
- Jiong Zhu, Aishwarya Reganti, Edward Huang, Charles Dickens, Nikhil Rao, Karthik Subbian, and Danai Koutra. 2023. Simplifying Distributed Neural Network Training on Massive Graphs: Randomized Partitions Improve Model Aggregation. In Workshop on Localized Learning at ICML 2023 (LLW-ICML’23). [ Paper ] [ Code ]
- Jiong Zhu, Yujun Yan, Mark Heimann, Lingxiao Zhao, Leman Akoglu, and Danai Koutra. 2023. Heterophily and Graph Neural Networks: Past, Present and Future. Bulletin of the IEEE Computer Society Technical Committee on Data Engineering 47, 2 (2023), 10–32. [ Paper ]
2022
- Jiong Zhu, Junchen Jin, Donald Loveland, Michael T. Schaub, and Danai Koutra. 2022. How does Heterophily Impact the Robustness of Graph Neural Networks? Theoretical Connections and Practical Implications. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD ’22). [ Paper ] [ Code ]
- Donald Loveland, Jiong Zhu, Mark Heimann, Ben Fish, Michael T. Schaub, and Danai Koutra. 2022. On Graph Neural Network Fairness in the Presence of Heterophilous Neighborhoods. In The 8th International Workshop on Deep Learning on Graphs (DLG-KDD’22). [ Paper ]
2021
- Jiong Zhu, Ryan A. Rossi, Anup Rao, Tung Mai, Nedim Lipka, Nesreen K. Ahmed, and Danai Koutra. 2021. Graph Neural Networks with Heterophily. In Proceedings of the AAAI Conference on Artificial Intelligence, 11168–11176. [ Paper ] [ Code ]
2020
- Jiong Zhu, Yujun Yan, Lingxiao Zhao, Mark Heimann, Leman Akoglu, and Danai Koutra. 2020. Beyond Homophily in Graph Neural Networks: Current Limitations and Effective Designs. Advances in Neural Information Processing Systems 33 (2020). [ Paper ] [ Poster ] [ Slides ] [ Code ]
2019
- Yujun Yan, Jiong Zhu, Marlena Duda, Eric Solarz, Chandra Sripada, and Danai Koutra. 2019. GroupINN: Grouping-based Interpretable Neural Network for Classification of Limited, Noisy Brain Data. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 772–782. [ Paper ] [ Code ]