We are a research group at UC Berkeley that focuses on mathematical optimization and its applications to natural language processing, machine learning and deep learning.
News
-
From Gap safe rules to working sets for faster sparse solvers, Talk by Alexandre Gramfort
On November 15th, Alexandre Gramfort, researcher at INRIA, will give a talk entitled “From Gap safe rules to working sets for faster sparse solvers”. The talk will take place at 10am in Soda Hall, room 405.
-
Invited talk "Stochastic Gradient Descent: Going As Fast As Possible But Not Faster" by Alice Schoenauer-Sebag
On Friday, September 29th, Alice Schoenauer-Sebag will visit us and give a presentation on her recent paper, Stochastic Gradient Descent: Going As Fast As Possible But Not Faster.
-
Breaking the Nonsmooth Barrier gets a spotlight at NIPS 2017
The paper “Breaking the Nonsmooth Barrier: A Scalable Parallel Method for Composite Optimization”, by Fabian Pedregosa, Rémi Leblond and Simon Lacoste-Julien, has been selected for a spotlight presentation at the prestigious NIPS conference, which will be held in Long Beach, California in December. This work will also be presented at the upcoming BayLearn workshop.