
2020 Joint ComSoc/ITSoc Paper Award Winners
The Joint Communications Society/Information Theory Society Paper Award recognizes outstanding papers that lie at the intersection of communications and information theory. Any paper appearing in a ComSoc or IT Society publication during the previous three years is eligible for the award.
Jun 1, 2020

The Committee, made up of members from both societies, has decided that the 2020 award will go to:

Kangwook Lee, Maximilian Lam, Ramtin Pedarsani, Dimitris Papailiopoulos and Kannan Ramchandran, "Speeding Up Distributed Machine Learning Using Codes", Ƶ Transactions on Information Theory, vol. 64, no. 3, pp. 1514-1529, March 2018.

Abstract:

Codes are widely used in many engineering applications to offer robustness against noise. In large-scale systems, there are several types of noise that can affect the performance of distributed machine learning algorithms (straggler nodes, system failures, or communication bottlenecks), but there has been little interaction cutting across codes, machine learning, and distributed systems. In this paper, we provide theoretical insights on how coded solutions can achieve significant gains compared with uncoded ones. We focus on two of the most basic building blocks of distributed learning algorithms: matrix multiplication and data shuffling. For matrix multiplication, we use codes to alleviate the effect of stragglers and show that if the number of homogeneous workers is n, and the runtime of each subtask has an exponential tail, coded computation can speed up distributed matrix multiplication by a factor of log n. For data shuffling, we use codes to reduce communication bottlenecks, exploiting the excess in storage. We show that when a constant fraction α of the data matrix can be cached at each worker, and n is the number of workers, coded shuffling reduces the communication cost by a factor of (α + 1/n)γ(n) compared with uncoded shuffling, where γ(n) is the ratio of the cost of unicasting n messages to n users to that of multicasting a common message (of the same size) to n users. For instance, γ(n) ≃ n if multicasting a message to n users is as cheap as unicasting a message to one user. We also provide experimental results corroborating the theoretical gains of the coded algorithms.
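As a concrete illustration of the matrix-multiplication result, here is a minimal sketch (not the authors' implementation) of (n, k)-MDS-coded matrix-vector multiplication: the k row blocks of A are mixed by a generator matrix so that any k of the n worker results suffice to recover A·x, and the n − k slowest workers can be ignored. The block sizes, the random Gaussian generator matrix, and the unit-rate exponential runtimes are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 6, 4              # n workers; any k results suffice to decode
rows, cols = 8, 5        # A is (rows x cols); rows divisible by k
A = rng.standard_normal((rows, cols))
x = rng.standard_normal(cols)

# Encode: split A into k row blocks and mix them with an n x k generator
# matrix G (random Gaussian, so every k x k submatrix is invertible
# with probability 1, i.e. the code is MDS).
blocks = A.reshape(k, rows // k, cols)          # k uncoded row blocks
G = rng.standard_normal((n, k))                 # generator matrix
coded = np.einsum('ik,krc->irc', G, blocks)     # n coded blocks

# Each worker i computes coded[i] @ x; simulate exponential runtimes
# and keep only the k fastest workers, treating the rest as stragglers.
runtimes = rng.exponential(1.0, size=n)
fastest = np.argsort(runtimes)[:k]
partial = np.stack([coded[i] @ x for i in fastest])  # k partial products

# Decode: invert the k x k submatrix of G indexed by the responders
# to recover the k uncoded block products, then reassemble A @ x.
decoded = np.linalg.solve(G[fastest], partial).reshape(rows)

assert np.allclose(decoded, A @ x)
print("recovered A @ x from the fastest", k, "of", n, "workers")
```

The log n factor in the abstract comes from order statistics of exponentials: the slowest of n i.i.d. unit-rate exponential workers takes about H_n ≈ log n in expectation, whereas the k-th fastest takes only H_n − H_{n−k}, which stays bounded when n − k grows proportionally with n.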

Authors:

Kangwook Lee, School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea

Kangwook Lee is a postdoctoral researcher in the School of Electrical Engineering, KAIST. Kangwook earned his Ph.D. in EECS from UC Berkeley in 2016, under the supervision of Kannan Ramchandran. He was a recipient of the KFAS Fellowship from 2010 to 2015. His research interests lie in information theory and machine learning.

Maximilian Lam, Department of Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley, CA, USA

Maximilian Lam is a computer science student at UC Berkeley whose main research interests are systems and machine learning.

Ramtin Pedarsani, Department of Electrical and Computer Engineering, University of California at Santa Barbara, Santa Barbara, CA, USA

Ramtin Pedarsani is an Assistant Professor in the ECE Department at the University of California, Santa Barbara. He received the B.Sc. degree in electrical engineering from the University of Tehran, Tehran, Iran, in 2009, the M.Sc. degree in communication systems from the Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland, in 2011, and his Ph.D. from the University of California, Berkeley, in 2015. His research interests include networks, machine learning, stochastic systems, information and coding theory, and transportation systems. Ramtin is a recipient of the Ƶ International Conference on Communications (ICC) Best Paper Award in 2014.

Dimitris Papailiopoulos, Department of Electrical and Computer Engineering, University of Wisconsin–Madison, Madison, WI, USA

Dimitris Papailiopoulos is an Assistant Professor of Electrical and Computer Engineering at the University of Wisconsin-Madison and a Faculty Fellow of the Grainger Institute for Engineering. Between 2014 and 2016, Papailiopoulos was a postdoctoral researcher in EECS at UC Berkeley and a member of the AMPLab. His research interests span machine learning, coding theory, and distributed algorithms, with a current focus on coordination-avoiding parallel machine learning and the use of erasure codes to speed up distributed computation. Dimitris earned his Ph.D. in ECE from UT Austin in 2014, under the supervision of Alex Dimakis. In 2015, he received the Ƶ Signal Processing Society Young Author Best Paper Award.

Kannan Ramchandran, Department of Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley, CA, USA

Kannan Ramchandran (Ph.D.: Columbia University, 1993) is a Professor of Electrical Engineering and Computer Sciences at UC Berkeley, where he has been since 1999. He was on the faculty at the University of Illinois at Urbana-Champaign from 1993 to 1999, and with AT&T Bell Labs from 1984 to 1990. He is an Ƶ Fellow and a recipient of the 2017 Ƶ Kobayashi Computers and Communications Award, which recognizes outstanding contributions to the integration of computers and communications. His research awards include an Ƶ Information Theory Society and Communications Society Joint Best Paper Award for 2012, an Ƶ Communications Society Data Storage Best Paper Award in 2010, two Best Paper Awards from the Ƶ Signal Processing Society in 1993 and 1999, an Okawa Foundation Prize for outstanding research at Berkeley in 2001, an Outstanding Teaching Award at Berkeley in 2009, and a Hank Magnuski Scholar Award at Illinois in 1998. His research interests are at the intersection of signal processing, coding theory, communications, and networking, with a focus on theory and algorithms for large-scale distributed systems.