I am an assistant professor in the department of Management Science and Engineering and the department of Computer Science at Stanford University, where I am part of the Operations Research group. My interests are in the intersection of algorithms, statistics, optimization, and machine learning. My CV.

I maintain a mailing list for my graduate students and the broader Stanford community that is interested in the work of my research group. We also organize regular talks; if you are interested and Stanford-affiliated, feel free to reach out (from a Stanford email).

Selected talks:
- Research Institute for Interdisciplinary Sciences (RIIS), SHUFE, Oct. 2022
- Algorithm Seminar, Google Research, Oct. 2022
- Young Researcher Workshop, Cornell ORIE, Apr. 2022
- Learning and Games Program, Simons Institute, Sept. 2021
- Young Researcher Workshop, Cornell ORIE, Sept. 2021
- ACO Student Seminar, Georgia Tech, Dec. 2019

Selected papers:
- Faster Algorithms for Computing the Stationary Distribution, Simulating Random Walks, and More. With Michael B. Cohen, Jonathan A. Kelner, John Peebles, Richard Peng, and Adrian Vladu. In Symposium on Foundations of Computer Science (FOCS 2016). DOI: 10.1109/FOCS.2016.69.
- In Innovations in Theoretical Computer Science (ITCS 2018) (arXiv); Derandomization Beyond Connectivity: Undirected Laplacian Systems in Nearly Logarithmic Space.
- ICML Workshop on Reinforcement Learning Theory, 2021; Variance Reduction for Matrix Games, with Yair Carmon, Aaron Sidford, and Kevin Tian; NeurIPS Smooth Games Optimization and Machine Learning Workshop, 2019.
- Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli, Aaron Sidford; 18(223):1-42, 2018.

A few short summaries of recent work: "Sample complexity for average-reward MDPs?" "A general continuous optimization framework for better dynamic (decremental) matching algorithms." This work presents an accelerated gradient method for nonconvex optimization problems with Lipschitz continuous first and second derivatives that is Hessian free, i.e., it only requires gradient computations, and is therefore suitable for large-scale applications.
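As a rough illustration of what "Hessian free" means here, the following is a minimal sketch of a generic Nesterov-style accelerated gradient loop that uses only gradient evaluations. It is a standard textbook iteration applied to a toy quadratic, not the algorithm from the paper above; the function name, the quadratic, the step size, and the momentum value are all assumptions made purely for this example.

```python
import numpy as np

def accelerated_gradient(grad, x0, step_size, momentum, iters):
    """Generic Nesterov-style accelerated gradient descent.

    Only gradient evaluations are used (no Hessians), which is what makes
    such methods attractive for large-scale problems.
    """
    x, x_prev = x0.copy(), x0.copy()
    for _ in range(iters):
        y = x + momentum * (x - x_prev)   # extrapolation (momentum) step
        x_prev = x
        x = y - step_size * grad(y)       # gradient step at the extrapolated point
    return x

# Toy usage: minimize f(x) = 0.5 * x^T A x - b^T x, whose gradient is A x - b.
A = np.diag([1.0, 10.0, 100.0])
b = np.ones(3)
x = accelerated_gradient(lambda z: A @ z - b, np.zeros(3),
                         step_size=1.0 / 100.0, momentum=0.9, iters=500)
print(np.linalg.norm(A @ x - b))  # gradient norm at the final iterate
```

A plain loop like this carries no guarantees for nonconvex objectives; the sketch is only meant to show the gradient-only structure that the paper's method shares.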
I am broadly interested in mathematics and theoretical computer science. I am particularly interested in work at the intersection of continuous optimization, graph theory, numerical linear algebra, and data structures.

Publications and Preprints.
- The Complexity of Infinite-Horizon General-Sum Stochastic Games. With Yujia Jin, Vidya Muthukumar, and Aaron Sidford. To appear in Innovations in Theoretical Computer Science (ITCS 2023) (arXiv).
- Optimal and Adaptive Monteiro-Svaiter Acceleration. With Yair Carmon, Danielle Hausler, Arun Jambulapati, and Yujia Jin. To appear in Advances in Neural Information Processing Systems (NeurIPS 2022) (arXiv).
- On the Efficient Implementation of High Accuracy Optimality of Profile Maximum Likelihood. With Moses Charikar, Zhihao Jiang, and Kirankumar Shiragur.
- Improved Lower Bounds for Submodular Function Minimization. With Deeparnab Chakrabarty, Andrei Graur, and Haotian Jiang. In Symposium on Foundations of Computer Science (FOCS 2022) (arXiv).
- RECAPP: Crafting a More Efficient Catalyst for Convex Optimization. With Yair Carmon, Arun Jambulapati, and Yujia Jin. International Conference on Machine Learning (ICML 2022) (arXiv).
- Efficient Convex Optimization Requires Superlinear Memory. With Annie Marsden, Vatsal Sharan, and Gregory Valiant. Conference on Learning Theory (COLT 2022).
- Sharper Rates for Separable Minimax and Finite Sum Optimization via Primal-Dual Extragradient Method. Conference on Learning Theory (COLT 2022) (arXiv).
- Big-Step-Little-Step: Efficient Gradient Methods for Objectives with Multiple Scales. With Jonathan A. Kelner, Annie Marsden, Vatsal Sharan, Gregory Valiant, and Honglin Yuan.
- Regularized Box-Simplex Games and Dynamic Decremental Bipartite Matching. With Arun Jambulapati, Yujia Jin, and Kevin Tian. International Colloquium on Automata, Languages and Programming (ICALP 2022) (arXiv).
- Fully-Dynamic Graph Sparsifiers Against an Adaptive Adversary. With Aaron Bernstein, Jan van den Brand, Maximilian Probst, Danupon Nanongkai, Thatchaphol Saranurak, and He Sun.
- Faster Maxflow via Improved Dynamic Spectral Vertex Sparsifiers. With Jan van den Brand, Yu Gao, Arun Jambulapati, Yin Tat Lee, Yang P. Liu, and Richard Peng. In Symposium on Theory of Computing (STOC 2022) (arXiv).
- Semi-Streaming Bipartite Matching in Fewer Passes and Optimal Space. With Sepehr Assadi, Arun Jambulapati, Yujia Jin, and Kevin Tian. In Symposium on Discrete Algorithms (SODA 2022) (arXiv).
- Algorithmic trade-offs for girth approximation in undirected graphs. With Avi Kadria, Liam Roditty, Virginia Vassilevska Williams, and Uri Zwick. In Symposium on Discrete Algorithms (SODA 2022).
- Computing Lewis Weights to High Precision. With Maryam Fazel, Yin Tat Lee, and Swati Padmanabhan.
- With Hilal Asi, Yair Carmon, Arun Jambulapati, and Yujia Jin. In Advances in Neural Information Processing Systems (NeurIPS 2021) (arXiv).
- Thinking Inside the Ball: Near-Optimal Minimization of the Maximal Loss. In Conference on Learning Theory (COLT 2021) (arXiv).
- The Bethe and Sinkhorn Permanents of Low Rank Matrices and Implications for Profile Maximum Likelihood. With Nima Anari, Moses Charikar, and Kirankumar Shiragur.
- Towards Tight Bounds on the Sample Complexity of Average-reward MDPs. In International Conference on Machine Learning (ICML 2021) (arXiv).
- Minimum Cost Flows, MDPs, and ℓ1-Regression in Nearly Linear Time for Dense Instances. With Jan van den Brand, Yin Tat Lee, Yang P. Liu, Thatchaphol Saranurak, Zhao Song, and Di Wang. In Symposium on Theory of Computing (STOC 2021) (arXiv).
- Ultrasparse Ultrasparsifiers and Faster Laplacian System Solvers. In Symposium on Discrete Algorithms (SODA 2021) (arXiv).
- Relative Lipschitzness in Extragradient Methods and a Direct Recipe for Acceleration. In Innovations in Theoretical Computer Science (ITCS 2021) (arXiv).
- Acceleration with a Ball Optimization Oracle. With Yair Carmon, Arun Jambulapati, Qijia Jiang, Yujia Jin, Yin Tat Lee, and Kevin Tian. In Conference on Neural Information Processing Systems (NeurIPS 2020).
- Instance Based Approximations to Profile Maximum Likelihood. In Conference on Neural Information Processing Systems (NeurIPS 2020) (arXiv).
- Large-Scale Methods for Distributionally Robust Optimization. With Daniel Levy*, Yair Carmon*, and John C. Duchi (* denotes equal contribution).
- High-precision Estimation of Random Walks in Small Space. With AmirMahdi Ahmadinejad, Jonathan A. Kelner, Jack Murtagh, John Peebles, and Salil P. Vadhan. In Symposium on Foundations of Computer Science (FOCS 2020) (arXiv).
- Bipartite Matching in Nearly-linear Time on Moderately Dense Graphs. With Jan van den Brand, Yin Tat Lee, Danupon Nanongkai, Richard Peng, Thatchaphol Saranurak, Zhao Song, and Di Wang. In Symposium on Foundations of Computer Science (FOCS 2020).
- With Yair Carmon, Yujia Jin, and Kevin Tian.
- Unit Capacity Maxflow in Almost $O(m^{4/3})$ Time. Invited to the special issue (arXiv before merge).
- Solving Discounted Stochastic Two-Player Games with Near-Optimal Time and Sample Complexity. In International Conference on Artificial Intelligence and Statistics (AISTATS 2020) (arXiv).
- Efficiently Solving MDPs with Stochastic Mirror Descent. In International Conference on Machine Learning (ICML 2020) (arXiv).
- Near-Optimal Methods for Minimizing Star-Convex Functions and Beyond. With Oliver Hinder and Nimit Sharad Sohoni. In Conference on Learning Theory (COLT 2020) (arXiv).
- Solving Tall Dense Linear Programs in Nearly Linear Time. With Jan van den Brand, Yin Tat Lee, and Zhao Song. In Symposium on Theory of Computing (STOC 2020).
- With Jakub Pachocki, Liam Roditty, Roei Tov, and Virginia Vassilevska Williams.

[pdf] We prove that deterministic first-order methods, even applied to arbitrarily smooth functions, cannot achieve convergence rates in $\epsilon$ better than $\epsilon^{-8/5}$, which is within $\epsilon^{-1/15}\log\frac{1}{\epsilon}$ of the best known rate for such methods. With Yair Carmon, John C. Duchi, and Oliver Hinder.

If you see any typos or issues, feel free to email me. Some of these I am still actively improving, and all of them I am happy to continue polishing.
Aaron Sidford is an assistant professor in the departments of Management Science and Engineering and Computer Science at Stanford University. He received his PhD from the Electrical Engineering and Computer Science Department at the Massachusetts Institute of Technology, where he was advised by Jonathan Kelner.

Contact. Email: sidford@stanford.edu. Huang Engineering Center, Stanford University. Full CV is available here.

Additional papers:
- Stability of the Lanczos Method for Matrix Function Approximation. Cameron Musco, Christopher Musco, Aaron Sidford. ACM-SIAM Symposium on Discrete Algorithms (SODA) 2018.
- International Conference on Machine Learning (ICML), 2020; Principal Component Projection and Regression in Nearly Linear Time through Asymmetric SVRG.
- In Symposium on Discrete Algorithms (SODA 2018) (arXiv); Variance Reduced Value Iteration and Faster Algorithms for Solving Markov Decision Processes.
- Efficient $\tilde{O}(n/\epsilon)$ Spectral Sketches for the Laplacian and its Pseudoinverse.

These papers may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission.

A few more short summaries of recent work: "General variance reduction framework for solving saddle-point problems & Improved runtimes for matrix games." "We characterize when solving the max \(\min_{x}\max_{i\in[n]}f_i(x)\) is (not) harder than solving the average \(\min_{x}\frac{1}{n}\sum_{i\in[n]}f_i(x)\)."
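To make the contrast in that last summary concrete, here is a toy sketch (an illustration only, not code from any paper above) comparing a gradient step on the average of a finite family of smooth losses with a subgradient step on their maximum. The specific family \(f_i(x) = \tfrac{1}{2}\|x - c_i\|^2\), the centers, and the step sizes are assumptions made just for this example.

```python
import numpy as np

# Toy family f_i(x) = 0.5 * ||x - c_i||^2, one center c_i per row of C.
rng = np.random.default_rng(0)
C = rng.normal(size=(5, 3))

def grad_average(x):
    # Gradient of the average objective (1/n) * sum_i f_i(x).
    return np.mean(x - C, axis=0)

def subgrad_max(x):
    # A subgradient of the maximal objective max_i f_i(x):
    # the gradient of any f_i attaining the maximum.
    i = np.argmax(0.5 * np.sum((x - C) ** 2, axis=1))
    return x - C[i]

x_avg = np.zeros(3)
x_max = np.zeros(3)
for t in range(1, 201):
    x_avg = x_avg - 0.5 * grad_average(x_avg)                 # smooth: constant step
    x_max = x_max - (1.0 / np.sqrt(t)) * subgrad_max(x_max)   # nonsmooth: decaying step

print(np.mean(0.5 * np.sum((x_avg - C) ** 2, axis=1)))  # average loss at x_avg
print(np.max(0.5 * np.sum((x_max - C) ** 2, axis=1)))   # maximal loss at x_max
```

Generically, the max objective is nonsmooth even when each \(f_i\) is smooth, and only the active component supplies gradient information at each step; this is one source of the potential gap that the quoted characterization studies.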