diff --git a/_publications/2015-10-01-paper-title-number-3.md b/_publications/2015-10-01-paper-title-number-3.md
index 4dedde557ce00..8b09a3e83342e 100644
--- a/_publications/2015-10-01-paper-title-number-3.md
+++ b/_publications/2015-10-01-paper-title-number-3.md
@@ -19,4 +19,4 @@ authors: 'Tareq Si Salem, Gözde Özcan, Iasonas Nikolaou, Evimaria Terzi
 citation: 'Si Salem, Tareq, Özcan, Gözde, Nikolaou, Iasonas, Terzi, Evimaria, & Ioannidis, Stratis (2024). "Online Submodular Maximization via Online Convex Optimization." Proceedings of the AAAI Conference on Artificial Intelligence.'
 ---
 
-The contents above will be part of a list of publications, if the user clicks the link for the publication than the contents of section will be rendered as a full page, allowing you to provide more information about the paper for the reader. When publications are displayed as a single page, the contents of the above "citation" field will automatically be included below this section in a smaller font.
+We study monotone submodular maximization under general matroid constraints in the online setting. We prove that online optimization of a large class of submodular functions, namely, weighted threshold potential functions, reduces to online convex optimization (OCO). This is precisely because functions in this class admit a concave relaxation; as a result, OCO policies, coupled with an appropriate rounding scheme, can be used to achieve sublinear regret in the combinatorial setting. We show that our reduction extends to many different versions of the online learning problem, including the dynamic regret, bandit, and optimistic-learning settings.