The Manifold Moving Least-Squares (Manifold-MLS) Framework: estimating manifolds and reconstructing their atlas from discrete data sets
Differentiable manifolds are an indispensable ‘language’ in modern physics and mathematics. As such, there is a plethora of analytic tools designed to investigate manifold-based models (e.g., connections, differential forms, curvature tensors, parallel transport, bundles). However, applying these tools normally assumes access to the manifold’s atlas of charts (i.e., local parametrizations). In recent years, manifold-based modeling has permeated data analysis as well, usually to avoid working in high dimensions. In these data-driven models, however, charts are not accessible and the only information at hand is the samples themselves. As a result, a common practice in Manifold Learning is to project the data into a lower-dimensional Euclidean domain while maintaining some notion of distance (e.g., geodesic or diffusion).
In this talk, we introduce an alternative approach, named the Manifold Moving Least-Squares (Manifold-MLS), that, given a finite set of samples, reconstructs an atlas of charts and provides an approximation of the manifold itself. Under certain (non-restrictive) sampling assumptions, we prove that the Manifold-MLS produces a smooth Riemannian manifold approximating the sampled one, even in the case of noisy samples. We show that the approximation converges to the sampled manifold as the number of samples tends to infinity, and we provide the exact convergence rates.
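For intuition only, the sketch below illustrates the moving least-squares idea underlying such a construction: samples near a query point are weighted by proximity, a local coordinate system is estimated (here by weighted PCA, a simplification of the actual local-coordinate step), and a weighted polynomial fit over those coordinates acts as a local parametrization whose evaluation returns a point on the approximating surface. The function name, the Gaussian weight, the bandwidth h, and the degree-1 fit are illustrative assumptions, not the talk's exact construction.

```python
import numpy as np

def mls_project(r, points, d, h):
    """Simplified moving least-squares projection of a point r onto a
    d-dimensional manifold sampled by `points` (illustrative sketch only,
    not the exact Manifold-MLS procedure)."""
    # Gaussian weights centred at the query point r, bandwidth h (assumed choice)
    w = np.exp(-np.sum((points - r) ** 2, axis=1) / h ** 2)

    # Step 1: weighted local PCA gives approximate local coordinate directions
    mu = (w[:, None] * points).sum(0) / w.sum()
    cov = (points - mu).T @ ((points - mu) * w[:, None])
    _, vecs = np.linalg.eigh(cov)
    U = vecs[:, -d:]                      # top-d directions span the local chart domain

    # Local coordinates of the samples and of the query point
    x = (points - mu) @ U                 # shape (n, d)
    x_r = (r - mu) @ U                    # shape (d,)

    # Step 2: weighted degree-1 fit of the ambient coordinates over the local ones
    # (higher-degree fits give higher approximation order)
    X = np.hstack([np.ones((len(x), 1)), x])
    W = np.diag(w)
    coef, *_ = np.linalg.lstsq(X.T @ W @ X, X.T @ W @ points, rcond=None)

    # Evaluate the local parametrization at the query's local coordinates
    return np.hstack([1.0, x_r]) @ coef

# Usage: noisy samples from a circle (a 1-D manifold in R^2)
t = np.random.uniform(0, 2 * np.pi, 500)
pts = np.c_[np.cos(t), np.sin(t)] + 0.01 * np.random.randn(500, 2)
print(mls_project(np.array([1.05, 0.0]), pts, d=1, h=0.3))  # lands near (1, 0)
```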