Seminars, Colloquia, and Conferences
The colloquium meets on Fridays at 4:00pm in Bromfield-Pearson 101, unless otherwise indicated.
We will present the notion of a CAT(0) space and, more generally, a CAT(k) space: spaces satisfying certain upper curvature bounds. While these curvature properties are in general hard to verify, one can deduce many algebraic properties of a group from its action on such a space. We show that braid groups with at most 6 strands are CAT(0), i.e., act geometrically on a CAT(0) space, using the close connection between these groups, the associated non-crossing partition complexes, and the embeddability of their diagonal links into spherical buildings.
We will describe a construction, called Dehn surgery, which allows one to go from one 3-manifold to another. Then we will use this construction to build an infinite graph whose vertices correspond to 3-manifolds. Along the way, properties of 3-manifolds and properties of infinite metric graphs will be discussed. The new results stated are joint work with Neil Hoffman.
The spherical mean value of a continuous function f on a Riemannian manifold M is the average of f over the sphere of radius r centered at a given point. We'll explore the relation between the spherical mean value operator and harmonic functions and, more generally, eigenfunctions of invariant differential operators. Then we'll look at the spherical mean value operator as an integral transform and examine questions about its kernel and range. Finally, we look at how the mean value operator can be used to solve certain differential equations. We'll show that a common theme throughout is the action of (mostly) compact groups on manifolds.
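The relation between spherical means and harmonic functions mentioned above includes the classical mean value property: for a harmonic function, the average over a sphere equals the value at its center. A minimal numerical sketch of this in the Euclidean plane (my own illustration, not from the talk):

```python
import numpy as np

def spherical_mean(u, center, r, n=2000):
    """Average u over the circle of radius r centered at `center` (R^2 case)."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.mean(u(center[0] + r * np.cos(t), center[1] + r * np.sin(t)))

# u(x, y) = x^2 - y^2 is harmonic: its Laplacian is 2 - 2 = 0.
u = lambda x, y: x**2 - y**2
center = (0.3, -0.7)

# The circular mean equals u(center) for every radius r.
for r in (0.1, 1.0, 5.0):
    assert abs(spherical_mean(u, center, r) - u(*center)) < 1e-9
```

For eigenfunctions of the Laplacian the mean over a sphere is instead proportional to the center value, with a radius-dependent factor, which is one way the integral-transform viewpoint in the talk enters.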
The talk aims to give some motivation(s) for why one should study algebraic and algebro-geometric objects defined "over fields of characteristic p > 0", for a prime number p. The talk will be almost exclusively expository: it will (at least loosely) explain a number of examples, constructions, and perspectives. Here are some examples I hope to discuss or at least mention: the Frobenius morphism and the Galois group of a rational polynomial, finite simple groups, normality of Schubert varieties, the Hasse principle, and possibly the Weil conjectures.
In this talk, I will give an overview of how microlocal analysis is used to understand limited data problems in tomography, in particular, the limited angle problem. All of the challenges of limited data tomography are illustrated by this classical problem.
Diffusive Optical Tomography (DOT) is an emerging technology for breast tumour detection and brain imaging in which the region of interest is illuminated with near-infrared light at a specific wavelength and the data consist of observations of the resulting scattered diffuse fields. The characterization of tumours, as well as the reconstruction of the large-scale structure of the breast, can be mathematically described as an ill-posed, non-linear inverse problem. New technology allows for the collection of hyperspectral data. Although we anticipate that the additional information from multiple wavelengths increases the accuracy of the reconstruction, the use of hyperspectral data poses a significant computational burden in the context of image recovery. In this talk, I will discuss various computational challenges and our approach to tackling the large-scale inverse problem.
The H_∞ norm is a well-known and important measure, arising in many applications, that quantifies the smallest perturbation a linear dynamical system with inputs and outputs can incur before it may no longer be asymptotically stable. Unfortunately, merely calculating it is expensive: it amounts to finding a global optimum of a nonconvex, nonsmooth optimization problem, and the standard algorithm requires cubic cost per iteration.
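For orientation, the H_∞ norm of a stable system (A, B, C, D) is the supremum over frequencies of the largest singular value of the transfer function G(iω) = C(iωI − A)⁻¹B + D. The sketch below approximates it by a naive dense frequency grid; this is only an illustrative lower bound, not the standard level-set/Hamiltonian-eigenvalue algorithm the abstract refers to, and the example system is my own:

```python
import numpy as np

def hinf_grid(A, B, C, D, wmax=100.0, n=2001):
    """Naive grid-search lower bound for the H-infinity norm of (A, B, C, D)."""
    A, B, C, D = map(np.atleast_2d, (A, B, C, D))
    I = np.eye(A.shape[0])
    best = 0.0
    for w in np.linspace(0.0, wmax, n):
        # Transfer function G(i w) = C (i w I - A)^{-1} B + D
        G = C @ np.linalg.solve(1j * w * I - A, B) + D
        best = max(best, np.linalg.svd(G, compute_uv=False)[0])
    return best

# Example: G(s) = 1/(s+1), whose H-infinity norm is 1 (the peak is at w = 0).
A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]);  D = np.array([[0.0]])
print(hinf_grid(A, B, C, D))  # 1.0
```

The cubic per-iteration cost mentioned in the abstract comes from the eigenvalue computations the exact algorithm performs; the grid search above trades that for accuracy.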
The reconstruction of tomographic slices from x-ray CT data is an interesting and challenging problem in medical imaging. In particular, there are many applications, such as digital breast tomosynthesis, dental tomography, electron microscopy, etc., where the data are available over a limited angular range only. In this case the reconstruction problem is severely ill-posed, and traditional reconstruction methods, such as filtered backprojection (FBP), do not perform well. To stabilize the reconstruction procedure, additional prior knowledge about the unknown object has to be integrated into the reconstruction process.
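The mechanism of ill-posedness and prior-based stabilization can be seen already in a toy linear inverse problem (my own illustration, not CT code): a smoothing forward operator with rapidly decaying singular values amplifies data noise catastrophically under naive inversion, while Tikhonov regularization, i.e. adding the prior term λ‖x‖² to the data fit, keeps the reconstruction stable:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
t = np.linspace(0, 1, n)

# Discretized integral-type (smoothing) operator: severely ill-conditioned.
A = np.exp(-50.0 * (t[:, None] - t[None, :])**2) / n

x_true = np.sin(2 * np.pi * t)
b = A @ x_true + 1e-4 * rng.standard_normal(n)   # noisy data

# Naive inversion amplifies the noise enormously.
x_naive = np.linalg.solve(A, b)

# Tikhonov: x = argmin ||Ax - b||^2 + lam ||x||^2  (a simple smoothness prior).
lam = 1e-6
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

err_naive = np.linalg.norm(x_naive - x_true)
err_tik = np.linalg.norm(x_tik - x_true)
print(err_naive, err_tik)  # naive error is far larger
```

In limited-angle CT the same phenomenon occurs because entire directional components of the object are missing from the data, which is why the prior term carries so much weight there.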
A growing number of our technologies (telecommunications, radar, solar energy, etc.) rely on the manipulation of linear waves at the wavelength scale, and advances in numerical modeling continue to be key to such progress. We study the common grating diffraction problem where time-harmonic scalar waves scatter from a periodic medium. We develop new solvers for 2D and 3D cases, for either isolated obstacles or a connected "bumpy surface". After reviewing the integral equation method for solving PDEs, I will explain the two innovations that allow our solvers to be high-order and of optimal O(N) complexity: 1) new surface quadrature schemes compatible with the fast multipole method (extending our recent QBX scheme to 3D), and 2) robust ways to "periodize" the integral equation so that the unknowns live only on a single period of the geometry. (This includes joint work with Leslie Greengard, Zydrunas Gimbutas, Adrianna Gillman, Andreas Klöckner, and Mike O'Neil.)
A famous classical result characterizes harmonic functions on the unit disk by the Poisson integral formula. This can be generalized to the Poisson transform, which is the integral transform by the complex powers of the Poisson kernel. In 1970, Helgason proved that any eigenfunction of the Laplacian on the Poincaré disk is the image under the Poisson transform of a boundary value on the unit circle. It is well known that via the group action of Möbius transformations (which preserve the unit disk), the Poincaré disk can be regarded as the real hyperbolic surface, i.e., a symmetric space of noncompact type. Using Lie group techniques, this can be generalized to a similar characterization problem for symmetric spaces of noncompact type. Helgason (Advances in Math. (1970)) conjectured a characterization of joint eigenfunctions in terms of the Poisson transform in the general case, and this conjecture was proved by six Japanese mathematicians (Ann. of Math. (1978)). After that, from the point of view of spectral theory, Strichartz (J. Funct. Anal. (1989)) formulated a conjecture concerning a different image characterization of the Poisson transform of the L^2-space on the boundary. Except for a special case, the Strichartz conjecture had remained unsolved up to the present. In this talk, we employ techniques in scattering theory to present a positive answer to the Strichartz conjecture.
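For reference, the classical formulas alluded to above can be written as follows (normalization conventions vary by author; the eigenvalue identity is stated for the curvature −1 normalization of the hyperbolic Laplacian):

```latex
% Poisson integral formula: a bounded harmonic u on the unit disk satisfies
u(re^{i\theta}) \;=\; \frac{1}{2\pi}\int_0^{2\pi}
  \frac{1-r^2}{1-2r\cos(\theta-\varphi)+r^2}\, f(\varphi)\, d\varphi .

% Poisson transform with a complex power \lambda of the kernel
% P(z,b) = (1-|z|^2)/|z-b|^2, for z in the disk and b on the circle:
(\mathcal{P}_\lambda f)(z) \;=\; \frac{1}{2\pi}\int_0^{2\pi}
  P\!\left(z, e^{i\varphi}\right)^{\lambda} f(\varphi)\, d\varphi ,

% and each power of the kernel is an eigenfunction of the hyperbolic Laplacian:
\Delta_{\mathbb{H}}\, P(z,b)^{\lambda} \;=\; \lambda(\lambda-1)\, P(z,b)^{\lambda}.
```

Helgason's theorem says every Laplacian eigenfunction on the disk arises as such an image, for a suitable boundary functional; the Strichartz conjecture concerns the image of L^2 boundary data specifically.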
A large part of computational neuroscience (though not all of it) is the study of differential equation models of nerve cells and brain networks. Nobody can know at this point whether this sort of modeling will play an important role in understanding the brain. In this talk, I will give examples from my own work.
Extracting the salient features from a dataset that consists of many datapoints in high-dimensional space is a problem of wide contemporary interest. Many techniques for extracting such information rely on the assumption that the high-dimensional observations are created by a system that depends on a relatively small number of variables. These techniques are collectively referred to as nonlinear dimensionality reduction techniques, and most of the associated theoretical results assume the data lies on a smooth, compact submanifold of R^n.
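The "few underlying variables" assumption is easiest to see in the linear special case (my own sketch, not from the talk): data generated by mixing a handful of latent variables into a high-dimensional space has approximately low-rank structure, which a singular value decomposition reveals directly. The nonlinear techniques the abstract refers to handle the curved, manifold-valued analog of this picture:

```python
import numpy as np

rng = np.random.default_rng(1)

latent = rng.standard_normal((500, 3))    # 3 hidden degrees of freedom
mixing = rng.standard_normal((3, 100))    # linear embedding into R^100
noise = 1e-3 * rng.standard_normal((500, 100))
X = latent @ mixing + noise               # 500 observations in R^100

# Singular values of the centered data: three dominant values, then a
# sharp drop to the noise floor, exposing the intrinsic dimension.
s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
print(s[:5])
```

When the generating map is nonlinear, the singular value spectrum no longer collapses this cleanly, which is exactly where manifold-based methods and their smooth-submanifold hypotheses come in.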
A key assumption of neoclassical economic theory is that economic agents trade to further their own best interests without making mistakes. In the real world, agents make mistakes, resulting in losses for some and gains for others. Because the effect of these mistakes can be to anybody's benefit or detriment, one might think that their net effect would average away, and that the principal results of neoclassical theory would be robust in this regard, albeit with some level of superposed noise. It turns out that this intuition is incorrect, and that the effect of a constant rate of mistakes, no matter how small or infrequent they may be, is a gross distortion in the overall distribution of wealth, tending to concentrate it in the hands of a small minority of agents. This effect can be kept in check by some amount of redistribution – for example by taxation and public spending or by price controls. In this work, we show that the combination of these mechanisms is sufficient to explain the general form of Pareto's Law of wealth distribution, a key empirical macroeconomic result that has resisted a microeconomic explanation for over a century. This is interesting, but the consequences of this observation are far broader than that. In essence, contrary to most Western economic policy of the past three decades – but perfectly consistent with its observed consequences – the more "free" the market, the fewer are the mechanisms for wealth redistribution, and the greater is the tendency toward oligarchy. In this light, it is not surprising that the sudden withdrawal of price controls and state subsidies called for by the "shock therapy" imposed on the states of the former Soviet Union in the early 1990s led directly to the oligarchies that currently prevail in many of those states.
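The abstract does not spell out its model, but a minimal setting commonly studied in this literature is an exchange ("yard-sale") economy, which the following hedged sketch simulates: two random agents repeatedly wager a fraction of the poorer agent's wealth on a fair coin flip, and an optional flat redistribution pulls everyone toward the mean. All parameter values here are my own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(n_agents=100, steps=30_000, bet=0.1, tax=0.0):
    """Yard-sale exchange model with optional redistribution rate `tax`."""
    w = np.ones(n_agents)
    for _ in range(steps):
        i, j = rng.choice(n_agents, size=2, replace=False)
        stake = bet * min(w[i], w[j])          # wager limited by the poorer agent
        if rng.random() < 0.5:
            w[i] += stake; w[j] -= stake
        else:
            w[i] -= stake; w[j] += stake
        if tax > 0.0:
            w += tax * (w.mean() - w)          # flat redistribution toward the mean
    return w

def gini(w):
    """Gini coefficient: 0 = perfect equality, 1 = total concentration."""
    w = np.sort(w)
    n = len(w)
    return (2 * np.arange(1, n + 1) - n - 1) @ w / (n * w.sum())

w_free = simulate(tax=0.0)     # no redistribution: wealth condenses
w_tax = simulate(tax=1e-3)     # small redistribution keeps concentration in check
print(gini(w_free), gini(w_tax))
```

Note that both the exchange step and the redistribution step conserve total wealth exactly; the free-market run nevertheless drives the Gini coefficient toward 1, illustrating the oligarchy tendency described above.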
The study of materials possessing so-called topological order has been an area of remarkable interest in physics over the past decade, notably including the award of a Nobel prize to the discoverers of the most famous of such materials, graphene. It's quite natural to ask: what exactly about these materials is topological? In this talk, I aim to answer that question by presenting a simple toy model of a 1D topological material that has recently been realized experimentally through a photonic analog, and by using it to illustrate more complex cases. Along the way, I will make some hopefully entertaining asides revealing unexpected mathematical links between such apparently disparate things as conformal mapping and cloaking devices, Chebyshev polynomials and butterflies.
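The abstract does not name its toy model, but the standard 1D example in this area is the SSH (Su–Schrieffer–Heeger) chain with alternating hopping amplitudes t1 and t2; whether the speaker uses this exact model is my assumption. Its topological invariant is the winding number of h(k) = t1 + t2·e^{ik} around the origin, which a few lines of numerics can compute:

```python
import numpy as np

def winding_number(t1, t2, n=4001):
    """Winding number of h(k) = t1 + t2*exp(ik) around 0, for k in [-pi, pi]."""
    k = np.linspace(-np.pi, np.pi, n)
    h = t1 + t2 * np.exp(1j * k)
    # Total accumulated phase of h(k) over the Brillouin zone, divided by 2*pi.
    dtheta = np.diff(np.unwrap(np.angle(h)))
    return int(round(dtheta.sum() / (2 * np.pi)))

print(winding_number(1.0, 0.5))  # 0: "trivial" phase, |t1| > |t2|
print(winding_number(0.5, 1.0))  # 1: "topological" phase, |t2| > |t1|
```

The invariant is an integer that cannot change under continuous deformations unless h(k) passes through zero (a gap closing), which is the precise sense in which such materials are "topological".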
The Mixed Finite Element Method shows great potential for the discretization of a large class of partial differential equations (PDEs) that model physical problems of practical relevance in various fields of engineering, including fluid dynamics, solid mechanics, and electromagnetism. Many applications of these models feature a multi-physics and multi-scale nature that poses a substantial challenge to state-of-the-art solvers. Upscaling techniques can reduce computational cost by solving coarse scale models that take into account interactions at different scales.
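As a hedged toy illustration of what "upscaling" means (my own example, not the talk's method): for 1D steady diffusion −(a(x)u′)′ = 0 through layers of equal width in series, the exact effective coarse-scale coefficient is the harmonic mean of the fine-scale values, and the naive arithmetic mean badly overestimates it when the contrast is large:

```python
import numpy as np

# Fine-scale conductivities of four equal-width layers in series.
a_fine = np.array([1.0, 100.0, 1.0, 100.0])

# Harmonic mean: the exact effective (upscaled) coefficient for layers
# in series, since resistances 1/a_i add.
a_harm = len(a_fine) / np.sum(1.0 / a_fine)

# Arithmetic mean: wrong for this configuration; dominated by the
# high-conductivity layers even though flow must cross the low ones.
a_arith = a_fine.mean()

print(a_harm, a_arith)  # ~1.98 vs 50.5
```

Real multi-scale solvers generalize this idea, computing effective coarse operators from local fine-scale problems rather than closed-form averages, but the moral is the same: the coarse model must encode how the scales interact, not just average the coefficients.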
|©2014 Tufts University. All rights reserved. Site designed & maintained by Tufts Technology Services (TTS). Major artwork contributions by Lun-Yi Tsai.|