Data science relies on scalable resource allocation algorithms to operate data centers consisting of massive numbers of servers, as well as on machine learning algorithms to extract information from data. In the first part of the talk, we will consider a resource allocation problem that arises in data centers and show that there exist low-complexity algorithms that use queue-length information to maximize throughput while ensuring that packet delays do not scale with the size of the data center. This result resolved a long-standing open conjecture at the intersection of probability, algorithms, and networks. In the second part of the talk, we will consider a reinforcement learning algorithm that estimates the Q-function of a given policy. We will present the first comprehensive finite-time performance bounds for the algorithm, which can potentially be used to characterize its sample complexity. The first part is joint work with Siva Theja Maguluri, and the second part is joint work with Lei Ying.
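For readers unfamiliar with the setting of the second part, the sketch below illustrates what "estimating the Q-function of a given policy" means, using a SARSA(0)-style temporal-difference update on a tiny randomly generated MDP. The MDP, the uniform policy, and the step-size schedule are all illustrative assumptions; the talk's actual algorithm and assumptions are not specified here.

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP, used purely for illustration.
rng = np.random.default_rng(0)
n_states, n_actions = 2, 2
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s'] transition kernel
R = rng.standard_normal((n_states, n_actions))                    # expected rewards r(s, a)
pi = np.full((n_states, n_actions), 1.0 / n_actions)              # fixed policy being evaluated
gamma = 0.9                                                       # discount factor

Q = np.zeros((n_states, n_actions))
s = 0
a = rng.choice(n_actions, p=pi[s])
for t in range(50000):
    alpha = 0.5 / (1 + t) ** 0.7            # diminishing step size (Robbins-Monro)
    s2 = rng.choice(n_states, p=P[s, a])    # sample next state from the kernel
    a2 = rng.choice(n_actions, p=pi[s2])    # sample next action from the fixed policy
    # Temporal-difference update toward the one-step bootstrapped target
    Q[s, a] += alpha * (R[s, a] + gamma * Q[s2, a2] - Q[s, a])
    s, a = s2, a2
```

Finite-time bounds of the kind mentioned in the abstract quantify how close iterates like `Q` are to the true Q-function of `pi` after a fixed number of updates `t`, rather than only asymptotically.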
R. Srikant is the Fredric G. and Elizabeth H. Nearing Endowed Professor of Electrical and Computer Engineering and the Coordinated Science Lab at the University of Illinois at Urbana-Champaign. His research interests are in the areas of applied probability, stochastic networks, and control theory, with applications to machine learning, cloud computing, and communication networks. He is the recipient of the 2019 IEEE Koji Kobayashi Computers and Communications Award and the 2015 IEEE INFOCOM Achievement Award. He has also received several Best Paper awards, including the 2017 Applied Probability Society Best Publication Award and the 2015 IEEE INFOCOM Best Paper Award. He was the Editor-in-Chief of the IEEE/ACM Transactions on Networking from 2013 to 2017.