09:00 to 09:50 Laurent Cohen (CNRS & Université Paris-Dauphine)
Geodesic Methods for Interactive Image Segmentation using Finsler metrics
Minimal paths have long been used as an interactive tool to find edges or tubular structures as cost-minimizing curves. The user usually provides start and end points on the image and gets the minimal path as output. These minimal paths are minimal geodesics with respect to a suitably adapted metric, and they provide a way to find a curve (or set of curves) globally minimizing the geodesic active contours energy. The geodesic distance can be computed by solving the Eikonal equation with the fast and efficient Fast Marching method, and different metrics can be adapted to various problems. In recent years we have introduced several extensions of these minimal paths that improve either the interactive aspects or the results. For example, the metric can take into account both the scale and orientation of the path, which leads to solving an anisotropic minimal path problem in a 2D or 3D+radius space. We recently introduced Finsler metrics, which take the local curvature into account in order to smooth the path. The approach can also be adapted to include a region term inside the closed curve formed by a set of minimal geodesics.
Co-authors: Da Chen and J.-M. Mirebeau
INI 1

09:50 to 10:40 Tom Goldstein (University of Maryland)
Automating stochastic gradient methods with adaptive batch sizes
This talk will address several issues related to training neural networks using stochastic gradient methods. First, we'll talk about the difficulties of training in a distributed environment, and present a new method called centralVR for boosting the scalability of training methods. Then, we'll talk about the issue of automating stochastic gradient descent, and show that learning rate selection can be simplified using "Big Batch" strategies that adaptively choose minibatch sizes.
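The adaptive-batch idea can be illustrated with a small NumPy sketch. The growth rule below (double the minibatch whenever the variance of the minibatch gradient dominates the squared norm of its mean) is one plausible reading of the "Big Batch" strategy, not the speaker's actual algorithm; the function name, constants, and the least-squares test problem are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem: minimize the mean of 0.5*(a_i.x - b_i)^2.
n, d = 2000, 5
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true + 0.1 * rng.normal(size=n)

def sgd_big_batch(lr=0.05, batch=16, theta=1.0, steps=300):
    """SGD that doubles the batch size when the minibatch-gradient
    variance dominates the squared norm of the mean gradient."""
    x = np.zeros(d)
    for _ in range(steps):
        idx = rng.choice(n, size=min(batch, n), replace=False)
        # Per-sample gradient of 0.5*(a_i.x - b_i)^2 is (a_i.x - b_i) * a_i.
        r = A[idx] @ x - b[idx]
        g = r[:, None] * A[idx]          # shape (batch, d)
        g_mean = g.mean(axis=0)
        g_var = g.var(axis=0).sum()      # total variance across coordinates
        # Grow the batch if the gradient estimate is too noisy to be trusted.
        if g_var / len(idx) > theta * np.dot(g_mean, g_mean):
            batch = min(2 * batch, n)
        x -= lr * g_mean
    return x, batch

x_hat, final_batch = sgd_big_batch()
```

With a fixed learning rate, the growing batch plays the role that a decaying step size usually plays: the gradient estimate gets less noisy as the iterates approach a minimizer.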
INI 1

10:40 to 11:10 Morning Coffee

11:10 to 12:00 Vladimir Kolmogorov (Institute of Science and Technology (IST Austria))
Valued Constraint Satisfaction Problems
I will consider the Valued Constraint Satisfaction Problem (VCSP), whose goal is to minimize a sum of local terms, where each term comes from a fixed set of functions (called a "language") over a fixed discrete domain. I will present recent results characterizing languages that can be solved using the basic LP relaxation. This includes languages consisting of submodular functions, as well as their generalizations. One such generalization is the class of k-submodular functions. In the second part of the talk I will present an application of such functions in computer vision.
Based on joint papers with Igor Gridchyn, Andrei Krokhin, Michal Rolinek, Johan Thapper and Stanislav Zivny.
INI 1

12:00 to 12:50 Sung Ha Kang (Georgia Institute of Technology)
Efficient Numerical Methods for Variational Inpainting Models
Co-authors: Maryam Yashtini (Georgia Institute of Technology), Wei Zhu (The University of Alabama)
Recent developments of fast algorithms based on operator splitting, augmented Lagrangian, and alternating minimization have enabled us to revisit some of the variational image inpainting models. In this talk, we will present fast algorithms for Euler's elastica image inpainting model and for a variational edge-weighted image colorization model based on chromaticity and brightness. The main ideas of the models and algorithms, together with some analysis and numerical results, will be presented.
INI 1

12:50 to 14:00 Lunch @ Wolfson Court

14:00 to 14:50 Jalal Fadili
Sensitivity Analysis with Degeneracy: Mirror Stratifiable Functions
This talk will present a set of sensitivity analysis and activity identification results for a class of convex functions with a strong geometric structure that we coin "mirror-stratifiable".
These functions are such that there is a bijection between a primal and a dual stratification of the space into partitioning sets, called strata. This pairing is crucial to track the strata that are identifiable by solutions of parametrized optimization problems or by iterates of optimization algorithms. This class of functions encompasses all regularizers routinely used in signal and image processing, machine learning, and statistics. We show that this "mirror-stratifiable" structure enjoys a nice sensitivity theory, allowing us to study the stability of solutions of optimization problems under small perturbations, as well as activity identification for first-order proximal splitting-type algorithms. Existing results in the literature typically assume that, under a non-degeneracy condition, the active set associated to a minimizer is stable to small perturbations and is identified in finite time by optimization schemes. In contrast, our results do not require any non-degeneracy assumption: in consequence, the optimal active set is not necessarily stable anymore, but we are able to track precisely the set of identifiable strata. We show that these results have crucial implications when solving challenging ill-posed inverse problems via regularization, a typical scenario where the non-degeneracy condition is not fulfilled. Our theoretical results, illustrated by numerical simulations, allow us to characterize the instability behaviour of the regularized solutions by locating the set of all low-dimensional strata that can potentially be identified by these solutions.
This is joint work with Jérôme Malick and Gabriel Peyré.
INI 1

14:50 to 15:40 Zuoqiang Shi (Tsinghua University)
Low dimensional manifold model for image processing
In this talk, I will introduce a novel low dimensional manifold model for image processing problems. The model is based on the observation that for many natural images, the patch manifold usually has a low-dimensional structure.
We then use the dimension of the patch manifold as a regularization to recover the original image. Using formulas from differential geometry, this problem is reduced to solving a Laplace-Beltrami equation on the manifold, which we solve with the point integral method. Numerical tests show that this method gives very good results in image inpainting, denoising and super-resolution problems.
This is joint work with Stanley Osher and Wei Zhu.
INI 1

15:40 to 16:10 Afternoon Tea

16:10 to 17:00 Gabriele Steidl (University of Kaiserslautern)
Convex Analysis in Hadamard Spaces
Joint work with M. Bacak, R. Bergmann, M. Montag and J. Persch
The aim of the talk is two-fold:
1. A well known result of H. Attouch states that the Mosco convergence of a sequence of proper convex lower semicontinuous functions defined on a Hilbert space is equivalent to the pointwise convergence of the associated Moreau envelopes. In the present paper we generalize this result to Hadamard spaces. More precisely, while it has already been known that the Mosco convergence of a sequence of convex lower semicontinuous functions on a Hadamard space implies the pointwise convergence of the corresponding Moreau envelopes, the converse implication was an open question. We now fill this gap. Our result has several consequences. It implies, for instance, the equivalence of the Mosco and Frolik-Wijsman convergences of convex sets. As another application, we show that there exists a complete metric on the cone of proper convex lower semicontinuous functions on a separable Hadamard space such that a sequence of functions converges in this metric if and only if it converges in the sense of Mosco.
2. We extend the parallel Douglas-Rachford algorithm to the manifold-valued setting.
INI 1

19:30 to 22:00 Formal Dinner at Emmanuel College
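The Moreau envelope that is central to the Hadamard-space result above is easiest to see in the familiar Hilbert-space setting. The sketch below (an illustration only, not a method from the talk) evaluates the envelope env_lam f(x) = min_y f(y) + |x - y|^2/(2*lam) on the real line by brute-force grid search and checks it against the known closed form for f(y) = |y|, which is the Huber function:

```python
import numpy as np

def moreau_envelope(f, x, lam=1.0):
    """Numerically evaluate env_lam f(x) = min_y f(y) + (x - y)^2 / (2*lam)
    by brute-force minimization over a fine grid around x."""
    grid = np.linspace(x - 10, x + 10, 200001)
    return float(np.min(f(grid) + (grid - x) ** 2 / (2 * lam)))

def huber(x, lam=1.0):
    """Closed-form Moreau envelope of f(y) = |y|: the Huber function."""
    x = abs(x)
    return x ** 2 / (2 * lam) if x <= lam else x - lam / 2

points = [-3.0, -0.4, 0.0, 0.7, 2.5]
vals_num = [moreau_envelope(np.abs, x) for x in points]
vals_ref = [huber(x) for x in points]
```

The envelope is a smoothed lower approximation of f that converges pointwise to f as lam shrinks; Steidl's result shows that such pointwise convergence of envelopes characterizes Mosco convergence beyond the Hilbert setting.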
09:00 to 09:50 Mila Nikolova (CNRS & ENS de Cachan)
Alternating proximal gradient descent for nonconvex regularised problems with multiconvex coupling terms
Co-author: Pauline Tan
There has been an increasing interest in constrained nonconvex regularized block multiconvex optimization problems. We introduce an approach that effectively exploits the multiconvex structure of the coupling term and enables complex application-dependent regularization terms to be used. The proposed Alternating Structure-Adapted Proximal gradient descent algorithm enjoys simple, well-defined updates. Global convergence of the algorithm to a critical point is proved using the so-called Kurdyka-Lojasiewicz property. What is more, we prove that a large class of useful objective functions obeying our assumptions are subanalytic and thus satisfy the Kurdyka-Lojasiewicz property. Finally, we present an application of the algorithm to big-data airborne sequences of images.
INI 1

09:50 to 10:40 Michael Unser (EPFL - Ecole Polytechnique Fédérale de Lausanne)
Representer theorems for ill-posed inverse problems: Tikhonov vs. generalized total-variation regularization
In practice, ill-posed inverse problems are often dealt with by introducing a suitable regularization functional. The idea is to stabilize the problem while promoting "desirable" solutions. Here, we are interested in contrasting the effect of Tikhonov vs. total-variation-like regularization. To that end, we first consider a discrete setting and present two representer theorems that characterize the solution of general convex minimization problems subject to $\ell_2$ vs. $\ell_1$ regularization constraints. Next, we adopt a continuous-domain formulation where the regularization semi-norm is a generalized version of total variation tied to some differential operator L.
We prove that the extreme points of the corresponding minimization problem are nonuniform L-splines with fewer knots than the number of measurements. For instance, when L is the derivative operator, the solution is piecewise constant, which confirms a standard observation and explains why the solution is intrinsically sparse. The powerful aspect of this characterization is that it applies to any linear inverse problem.
INI 1

10:40 to 11:10 Morning Coffee

11:10 to 12:00 Pierre Weiss (Université de Toulouse)
Estimation of linear operators from scattered impulse responses
Co-authors: Paul Escande (Université de Toulouse), Jérémie Bigot (Université de Toulouse)
In this talk, I will propose a variational method to reconstruct operators with smooth kernels from scattered and noisy impulse responses. The proposed approach relies on the formalism of smoothing in reproducing kernel Hilbert spaces and on the choice of an appropriate regularization term that takes the smoothness of the operator into account. It is numerically tractable in very large dimensions and yields a representation that can be used for achieving fast matrix-vector products. We study the estimator's robustness to noise and analyze its approximation properties with respect to the size and the geometry of the dataset; it turns out to be minimax optimal. We finally show applications of the proposed algorithms to the reconstruction of spatially varying blur operators in microscopy imaging.
INI 1

12:00 to 12:50 Olga Veksler (University of Western Ontario)
Adaptive and Move Making Auxiliary Cuts for Binary Pairwise Energies
Co-author: Lena Gorelick (University of Western Ontario)
Many computer vision problems require optimization of binary non-submodular energies. In this context, local iterative submodularization techniques based on trust region (LSA-TR) and auxiliary functions (LSA-AUX) have recently been proposed. They achieve state-of-the-art results on a number of computer vision applications.
We extend the LSA-AUX framework in two directions. First, unlike LSA-AUX, which selects auxiliary functions based solely on the current solution, we propose to incorporate several additional criteria. This results in tighter bounds for configurations that are more likely or closer to the current solution. Second, we propose move-making extensions of LSA-AUX which achieve tighter bounds by restricting the search space. Finally, we evaluate our methods on several applications. We show that for each application at least one of our extensions significantly outperforms the original LSA-AUX. Moreover, the best extension of LSA-AUX is comparable to or better than LSA-TR on four out of six applications.
INI 1

12:50 to 14:00 Lunch @ Wolfson Court

14:00 to 14:50 Thomas Vogt (Universität zu Lübeck)
Optimal Transport-Based Total Variation for Functional Lifting and Q-Ball Imaging
Co-author: Jan Lellmann (Institute of Mathematics and Image Computing, University of Lübeck)
One strategy in functional lifting is to consider probability measures on the label space of interest, which can be discrete or continuous. The considered functionals often make use of a total variation regularizer which, when lifted, allows for a dual formulation introducing a Lipschitz constraint. In our recent work, we proposed to use a similar formulation of total variation for the restoration of so-called Q-ball images. In this talk, we present a mathematical framework for total variation regularization that is inspired by the theory of optimal transport and that covers all of the previous cases, including probability measures on discrete and continuous label spaces and on manifolds. This framework nicely explains the above-mentioned Lipschitz constraint and comes with a robust theoretical background.
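The Lipschitz constraint that appears in such dual formulations is the one from the Kantorovich-Rubinstein dual of the Wasserstein-1 distance: W1(mu, nu) = sup over 1-Lipschitz f of <f, mu - nu>. As a small illustration (assumed setup: histograms on a uniform 1-D label grid, not code from the talk), the sketch below computes W1 both from the cumulative distribution functions and by constructing an optimal Lipschitz dual potential:

```python
import numpy as np

def w1_cdf(mu, nu, dx=1.0):
    """Wasserstein-1 distance between two histograms on a uniform 1-D grid,
    computed as the L1 distance between the cumulative distributions."""
    return dx * np.abs(np.cumsum(mu - nu))[:-1].sum()

def w1_dual(mu, nu, dx=1.0):
    """Kantorovich-Rubinstein dual: sup over 1-Lipschitz f of <f, mu - nu>.
    On a 1-D grid the optimal f has increments dx * sign of the tail mass."""
    diff = mu - nu
    tails = np.cumsum(diff[::-1])[::-1]          # tails[i] = sum_{j >= i} diff[j]
    inc = dx * np.sign(tails[1:])                # optimal Lipschitz increments
    f = np.concatenate(([0.0], np.cumsum(inc)))  # potential, fixed up to a constant
    return float(f @ diff)

mu = np.array([0.5, 0.3, 0.2, 0.0])
nu = np.array([0.1, 0.2, 0.3, 0.4])
primal = w1_cdf(mu, nu)
dual = w1_dual(mu, nu)
```

The two values coincide, which is exactly the duality that turns a transport-based total variation into a formulation with a Lipschitz-constrained dual variable.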
INI 1

14:50 to 15:40 Martin Holler (University of Graz)
Total Generalized Variation for Manifold-valued Data
Co-authors: Kristian Bredies (University of Graz), Martin Storath (University of Heidelberg), Andreas Weinmann (Darmstadt University of Applied Sciences)
Introduced in 2010, the total generalized variation (TGV) functional is nowadays amongst the most successful regularization functionals for variational image reconstruction. It is defined for an arbitrary order of differentiation and provides a convex model for piecewise smooth vector-space data. On the other hand, variational models for manifold-valued data have recently become popular, and many approaches, such as first- and second-order TV regularization, have been successfully generalized to this setting. Despite the fact that TGV regularization is generally considered preferable to such approaches, an appropriate extension for manifold-valued data was still missing. In this talk we introduce the notion of second-order total generalized variation regularization for manifold-valued data. We provide an axiomatic approach to formalize reasonable generalizations of TGV to the manifold setting and present concrete instances that fulfill the proposed axioms. We prove well-posedness results and present algorithms for a numerical realization of these generalizations in the manifold setup. Further, we provide experimental results for synthetic and real data to underpin the proposed generalization numerically and show its potential for applications with manifold-valued data.
INI 1

15:40 to 16:10 Afternoon Tea

16:10 to 17:00 Tammy Riklin Raviv (Ben-Gurion University)
Variational Methods for Image Segmentation
In the talk I will present variational methods for image segmentation, with application to brain MRI tissue classification.
In particular, I will present an "unconventional" use of the multinomial logistic regression function.
This is based on joint work with Jacob Goldberger, Shiri Gordon and Boris Kodner.
INI 1
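The multinomial logistic regression (softmax) function mentioned above can be illustrated on a toy tissue-classification task. This is a minimal sketch, not the method of the talk: the class means, the sharpness parameter, and the intensity-based scores are all invented for illustration.

```python
import numpy as np

def softmax(z, axis=-1):
    """Multinomial logistic function, stabilized by subtracting the max."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Toy "tissue classification": score each voxel intensity against
# hypothetical class means (e.g. CSF, grey matter, white matter) and turn
# the scores into soft memberships with the softmax map.
class_means = np.array([0.1, 0.5, 0.9])   # assumed, not from the talk
beta = 20.0                               # sharpness of the soft assignment
intensities = np.array([0.12, 0.48, 0.85, 0.55])

scores = -beta * (intensities[:, None] - class_means[None, :]) ** 2
memberships = softmax(scores, axis=1)     # each row sums to 1
labels = memberships.argmax(axis=1)
```

The soft memberships, rather than hard labels, are what make such a term convenient inside a variational segmentation energy.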