Past Events

Decomposing Low-Rank Symmetric Tensors

Joe Kileel (The University of Texas at Austin)

In this talk, I will discuss low-rank decompositions of symmetric tensors (a.k.a. higher-order symmetric matrices).  I will start by sketching how results in algebraic geometry imply uniqueness guarantees for tensor decompositions, and also lead to fast and numerically stable algorithms for calculating the decompositions.  Then I will quantify the associated non-convex optimization landscapes.  Finally, I will present applications to Gaussian mixture models in data science, and rigid motion segmentation in computer vision.  Based on joint works with João M. Pereira, Timo Klock and Tammy Kolda.
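To fix ideas, a low-rank symmetric tensor is a sum of symmetric outer powers of vectors. The following toy construction (our own illustration, not code from the talk) builds a rank-2 symmetric order-3 tensor and checks its index symmetry:

```python
import numpy as np

# Illustration: a rank-2 symmetric order-3 tensor
# T = sum_i lambda_i * (a_i ⊗ a_i ⊗ a_i).
rng = np.random.default_rng(0)
d, r = 4, 2
lam = rng.normal(size=r)
A = rng.normal(size=(d, r))

T = np.zeros((d, d, d))
for i in range(r):
    a = A[:, i]
    T += lam[i] * np.einsum("p,q,s->pqs", a, a, a)

# T is invariant under any permutation of its three indices,
# the defining property of a symmetric tensor.
assert np.allclose(T, T.transpose(1, 0, 2))
assert np.allclose(T, T.transpose(2, 1, 0))
```

Decomposition algorithms of the kind discussed in the talk aim to recover the weights `lam` and directions `A` from `T` alone.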

Data-Model Fusion to Predict the Impacts of Climate Change on Mosquito-borne Diseases

Carrie Manore (Los Alamos National Laboratory)

Mosquito-borne diseases are among the many human-natural systems that will be impacted by climate change. All of the life stages and development rates of mosquitoes are impacted by temperature and other environmental factors, and often human infrastructure provides habitat (irrigation, containers, water management, etc.). This poses a very interesting mathematical modeling problem: how do we account for relevant factors, capture the nonlinearities, and understand the uncertainty in our models and in the data used to calibrate and validate the models? I will present several models, ranging from continental to fine scale and from statistical and machine learning to mechanistic, that we are using to predict mosquito-borne diseases and how they will be impacted by climate change. Over 30 people have worked together on this project, including students, postdocs, and staff. Our team is interdisciplinary and tasked with addressing critical national security problems around human health and climate change.

Stability and Generalization in Graph Convolutional Neural Networks

Ron Levie (Ludwig-Maximilians-Universität München)

In recent years, the need to accommodate non-Euclidean structures in data science has brought a boom in deep learning methods on graphs, leading to many practical applications with commercial impact. In this talk, we will review the mathematical foundations of the generalization capabilities of graph convolutional neural networks (GNNs). We will focus mainly on spectral GNNs, where convolution is defined as element-wise multiplication in the frequency domain of the graph. 

In machine learning settings where the dataset consists of signals defined on many different graphs, the trained GNN should generalize to graphs outside the training set. A GNN is called transferable if, whenever two graphs represent the same underlying phenomenon, the GNN produces similar outputs on both graphs. Transferability ensures that GNNs generalize if the graphs in the test set represent the same phenomena as the graphs in the training set. We will discuss different approaches to mathematically modeling the notion of transferability, and derive corresponding transferability error bounds, proving that GNNs have good generalization capabilities.
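The spectral convolution underlying these networks can be sketched in a few lines (a minimal illustration of the general idea, not code from the talk; the function names are ours): filter a graph signal by transforming it into the eigenbasis of the graph Laplacian, multiplying element-wise by a spectral filter, and transforming back.

```python
import numpy as np

def spectral_conv(signal, laplacian, filter_fn):
    # Eigendecomposition of the symmetric graph Laplacian: L = U diag(w) U^T.
    w, U = np.linalg.eigh(laplacian)
    # Graph Fourier transform, element-wise filtering, inverse transform.
    coeffs = U.T @ signal
    return U @ (filter_fn(w) * coeffs)

# Path graph on 4 nodes: L = D - A.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

x = np.array([1.0, 0.0, 0.0, 0.0])          # impulse at node 0
y = spectral_conv(x, L, lambda w: np.exp(-w))  # low-pass (heat-kernel) filter
```

In a spectral GNN the filter function is learned; transferability then asks how the output changes when the same learned filter is applied on a different graph representing the same phenomenon.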

Ron Levie received his Ph.D. in applied mathematics in 2018 from Tel Aviv University, Israel. During 2018-2020, he was a postdoctoral researcher with the Research Group Applied Functional Analysis, Institute of Mathematics, TU Berlin, Germany. Since 2021 he has been a researcher in the Bavarian AI Chair for Mathematical Foundations of Artificial Intelligence, Department of Mathematics, LMU Munich, Germany. Since 2021, he has also been a consultant on the project Radio-Map Assisted Pathloss Prediction at the Communications and Information Theory Chair, TU Berlin. He won excellence awards for his MSc and PhD studies, and a Post-Doc Minerva Fellowship. He is a guest editor at Sampling Theory, Signal Processing, and Data Analysis (SaSiDa), and was a conference chair of the Online International Conference on Computational Harmonic Analysis (Online-ICCHA 2021).

His current research interests are in theory of deep learning, geometric deep learning, interpretability of deep learning, deep learning in wireless communication, harmonic analysis, signal processing, wavelet theory, uncertainty principles, continuous frames, and randomized methods.


Pointers on AI/ML Career Success

Paritosh Desai (Google Inc.)

While there are many commonalities between academic research and industry roles for applied math professionals, there are also important differences. These differences materially shape career outcomes in industry, and we will elaborate on them by focusing on two broad themes for people with academic research backgrounds. First, we will look at the common patterns related to applied AI/ML problems across multiple industries and the specific challenges around them. Second, we will discuss emergent requirements for success in the industry setting. We will share principles and anecdotes related to data, software engineering practices, and empirical research based upon industry experiences.

Intelligent Randomized Algorithms for the Low CP-Rank Tensor Approximation Problem

Alex Gittens (Rensselaer Polytechnic Institute)

In the context of numerical linear algebra algorithms, where it is natural to sacrifice accuracy in return for quicker computation of solutions whose errors are only slightly larger than optimal, the time-accuracy tradeoff of randomized sketching has been well-characterized. Algorithms such as Blendenpik and LSRN have shown that carefully designed randomized algorithms can outperform industry standard linear algebra codes such as those provided in LAPACK.

For numerical tensor algorithms, where the size of problems grows exponentially with the order of the tensor, it is even more desirable to use randomization. However, in this setting, the time-accuracy tradeoff of randomized sketching is more difficult to understand and exploit, as:

  1. tensor problems are non-convex to begin with,
  2. the properties of the data change from iteration to iteration, and
  3. straightforward applications of standard results on randomized sketching allow for the error to increase from iteration to iteration.

On the other hand, the iterative nature of such algorithms opens up the opportunity to learn how to sketch more accurately in an online manner.

In this talk we consider the problem of speeding up the computation of low CP-rank (canonical polyadic) approximations of tensors through regularized sketching. We establish for the first time a sublinear convergence rate to approximate critical points of the objective under standard conditions, and further provide algorithms that adaptively select the sketching and regularization rates.
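To make the setting concrete, here is a hedged sketch of a generic sketched ALS update for one CP factor (our own illustration of the flavor of algorithm involved, with a plain uniform row-sampling sketch and a fixed regularization parameter; it is not the adaptive algorithm from the talk):

```python
import numpy as np

def khatri_rao(B, C):
    # Column-wise Kronecker product: shape (J*K, r).
    return np.einsum("jr,kr->jkr", B, C).reshape(-1, B.shape[1])

def sketched_als_step(T, A, B, C, reg=1e-3, s=64, rng=None):
    """One sketched least-squares update of factor A for a 3-way CP model.

    Solves min_A ||X1 - A M^T|| over a random subset of the rows of the
    Khatri-Rao design matrix M, with ridge term `reg` keeping the sketched
    normal equations well-posed.
    """
    rng = rng or np.random.default_rng()
    I, J, K = T.shape
    r = A.shape[1]
    M = khatri_rao(B, C)           # (J*K, r) design matrix
    X = T.reshape(I, -1)           # mode-1 unfolding; rows are LS targets
    idx = rng.choice(J * K, size=min(s, J * K), replace=False)
    Ms, Xs = M[idx], X[:, idx]     # sketched system
    G = Ms.T @ Ms + reg * np.eye(r)
    return np.linalg.solve(G, Ms.T @ Xs.T).T   # updated factor A
```

Cycling such updates over the three factors gives a sketched ALS iteration; the issues enumerated above (non-convexity, changing data, error growth across iterations) are precisely about how the sampling and regularization should adapt from one such step to the next.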

Alex Gittens is an assistant professor of computer science at Rensselaer Polytechnic Institute. He obtained his PhD in applied mathematics from Caltech in 2013, and BS degrees in mathematics and electrical engineering from the University of Houston. After his PhD, he joined the eBay machine learning research group, then the AMPLab (now the RISELab) at UC Berkeley, before joining RPI. His research interests lie at the intersection of randomized linear algebra and large-scale machine learning, in particular encompassing nonlinear and multilinear low-rank approximations; sketching for nonlinear and multilinear problems; and scalable and data-dependent kernel learning.

New Methods for Disease Prediction using Imaging and Genomics

Eran Halperin (UnitedHealth Group)

Diagnosis and prediction of health outcomes using machine learning have seen major advances over the last few years. Some of the major challenges remaining in the field include the sparsity of electronic health records data and the scarcity of high-quality labeled data. In this talk, I will present a couple of examples where we partially address these challenges. Specifically, I will provide an overview of a new neural network architecture for the analysis of three-dimensional medical imaging data (optical coherence tomography) under scarce labeled data and demonstrate applications in age-related macular degeneration. Then, I will describe in more detail a new Bayesian framework for the imputation of electronic health records (addressing sparsity) using DNA methylation data. Our framework involves a tensor deconvolution of bulk DNA methylation to obtain cell-type-specific methylation from bulk data, which we demonstrate is predictive of many clinical outcomes.

Dr. Eran Halperin is the SVP of AI and Machine Learning in Optum Labs (United Health Group), and a professor in the departments of Computer Science, Computational Medicine, Anesthesiology, and Human Genetics at UCLA. Prior to his current position, he held research and postdoctoral positions at the University of California, Berkeley, the International Computer Science Institute in Berkeley, Princeton University, and Tel-Aviv University. Dr. Halperin's lab developed computational and machine learning methods for a variety of health-related applications, including genomic applications (genetics, methylation, microbiome, single-cell RNA) and medical applications (medical imaging, physiological waveforms, and electronic medical records). He has published more than 150 peer-reviewed papers, and he has received various honors for academic achievements, including the Rothschild Fellowship, the Technion-Juludan prize for technological contribution to medicine, and the Krill prize, and he was elected a fellow of the International Society for Computational Biology (ISCB).

From Perception to Understanding: The Third Wave of AI

Tetiana Grinberg (Intel Corporation)

What is the next big thing in AI? What does one need to know and prepare for to remain relevant as the industry undergoes transformation? Why is this industry transformation a necessity? In this talk, we will discuss the strengths and weaknesses of traditional Deep Learning approaches to knowledge-centric tasks and look at a blueprint hybrid architecture that could offer solutions to the problems of scalability, reliability and explainability faced by large Deep Learning models of today. Finally, we will discuss the relevant skills that are needed for one to participate at the forefront of this research.

Tanya Grinberg is a Machine Learning Data Scientist with the Emergent AI lab at Intel Labs. Previously, she co-founded Symbiokinetics, a startup focused on developing AI-assisted robotic interfaces for medical applications like neurological rehab. Her research interests include embodiment, concept formation, and human-compatible value system design. 

Licensed to Analyze? An In-Depth Look at the Data Science Career: Defining Roles, Assessing Skills

Hamit Hamutcu (Initiative for Analytics and Data Science Standards (IADSS))

As the role of data and analytics expands rapidly in creating new business models or changing existing ones, demand for data science and analytics professionals is growing at an increasing rate. However, almost every company in the industry has a unique way of defining roles and assigning titles in data-related positions. For any given role or title, such as 'Data Scientist' or 'Data Analyst', we see a wide variety of role definitions and expected knowledge and skills. This creates inefficiencies and makes it difficult for companies to find the right match for a given position, leverage analytics skills effectively, and retain talent. It also makes it hard for professionals to understand what a certain position requires and to plan their own professional development. The result is a chaotic market that is confusing to employers, academic and training institutions, and candidates.

To address this, the Initiative for Analytics and Data Science Standards (IADSS) was launched and has kicked off a global-scale research and thought leadership effort. Our goal is to gain insight into the data profession in the industry and help support the development of standards regarding role definitions, required skills, and career advancement paths.

In this presentation we will share our research findings and recommendations on skill requirements for a variety of data science roles, career paths in the industry, and latest practices of organizations for recruiting, training, and managing data science resources.

Hamit brings 25 years of industry and consulting experience in the areas of analytics and data-driven strategy.

In his current role as Senior Advisor for The Institute for Experiential AI at Northeastern University, as part of the leadership team, he focuses on strategy and organizational development to launch programs that contribute to and work with the global AI ecosystem.

Hamit is the co-founder of the Initiative for Analytics and Data Science Standards. IADSS aims to develop industry standards for the knowledge and skills required in data science roles and is a best-practice and research hub for the data science profession. The Initiative is also working on an innovative framework for data literacy through its Data Citizen program.  

He is also co-founder of Analytics Center, a leading platform in EMEA that provides training, advisory services and organizes strategic events on big data, advanced analytics, disruptive technologies, and new business models.

Previously, Hamit was a partner for Peppers & Rogers Group in Stamford, Connecticut, where he headed the Global Analytics Group and oversaw the growth of the analytics practice. He helped his clients develop best-practice analytics organizations, build data infrastructure, and deploy models to support business goals. He was a founding partner for the Europe, Middle East, and Africa offices and grew the firm in the region. He delivered projects and managed teams across the globe in industries such as logistics, financial services, telecom, and government. 

Earlier in his career, Hamit held several marketing analytics and technology positions at FedEx in Memphis, Tennessee, where he led IT and business teams in leveraging the enormous amounts of data the company generates to serve its customers better.

Hamit is a frequent speaker, writer, and board member at various startups and nonprofit organizations. Hamit also volunteers as a mentor with Endeavor to support entrepreneurs and innovation by mentoring startups and acting as a jury member. He earned his Bachelor of Science degree in electronics engineering at Bogazici University in Istanbul. He received his MBA from the University of Florida.

The Scattering Transform for Texture Synthesis and Molecular Generation

Michael Perlmutter (University of California, Los Angeles)

The scattering transform is a wavelet-based feed-forward network originally introduced by S. Mallat to improve our theoretical understanding of convolutional neural networks (CNNs). Like the front end of a CNN, it produces a latent representation of an input signal through an alternating sequence of convolutions and non-linearities. Following Mallat's original paper, subsequent work has shown that this latent representation can be used to synthesize new input signals such as textures. In a somewhat orthogonal extension, a number of papers have also shown how to adapt the scattering transform to graph-structured data.
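The cascade just described can be sketched in a few lines of 1-D code (a toy illustration under our own assumptions: the band-pass filters here are simple differences of Gaussians, not Mallat's wavelets): convolve with band-pass filters, apply the modulus non-linearity, repeat, and average each intermediate signal with a low-pass filter to collect scattering coefficients.

```python
import numpy as np

def gaussian(n, sigma):
    # Normalized Gaussian window of length n.
    t = np.arange(n) - n // 2
    g = np.exp(-t**2 / (2 * sigma**2))
    return g / g.sum()

def scattering_coeffs(x, sigmas=(1, 2, 4), depth=2):
    n = len(x)
    phi = gaussian(n, max(sigmas) * 4)   # low-pass filter for averaging
    # Toy band-pass filters: differences of Gaussians at several scales.
    psis = [gaussian(n, s) - gaussian(n, 2 * s) for s in sigmas]
    layers = [x]
    coeffs = [np.convolve(x, phi, "same").mean()]   # zeroth-order coefficient
    for _ in range(depth):
        nxt = []
        for u in layers:
            for psi in psis:
                v = np.abs(np.convolve(u, psi, "same"))  # modulus non-linearity
                nxt.append(v)
                coeffs.append(np.convolve(v, phi, "same").mean())
        layers = nxt
    return np.array(coeffs)
```

With three filters and depth two, this yields 1 + 3 + 9 = 13 coefficients; the graph scattering transforms mentioned above replace these 1-D convolutions with graph wavelet filters.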

In my talk, I will present a new network which combines these two ideas and uses the graph scattering transform to generate new molecules with the intended application being drug discovery. In order to ensure that the molecules produced by our network satisfy the laws of chemistry and resemble actual drugs, we use a regularized autoencoder to learn a compressed representation of the scattering coefficients of each graph and a generative adversarial network (GAN) to produce new molecules directly from this compressed representation.

Michael Perlmutter is a Hedrick Assistant Adjunct Professor in the Department of Mathematics at the University of California, Los Angeles. Previously, he held postdoctoral positions in the Department of Statistics and Operations Research at the University of North Carolina at Chapel Hill and in the Department of Computational Mathematics, Science and Engineering at Michigan State University. He earned his PhD in Mathematics from Purdue University.

Certified Robustness against Adversarial Attacks in Image Classification

Fatemeh Sheikholeslami (Bosch Center for Artificial Intelligence)

Researchers have repeatedly shown that it is possible to craft adversarial attacks, i.e., small perturbations that significantly change the class label, on deep classifiers and considerably degrade their performance. This fragility can significantly hinder the deployment of deep learning-based methods in safety-critical applications. To address this, adversarial attacks can be defended against either by building robust classifiers or by creating classifiers that can detect the presence of adversarial perturbations. I will talk about a couple of algorithms that we have developed at BCAI which provide certified defenses against different threat models.
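For flavor, one well-known certification technique is randomized smoothing (named here purely as an illustration; the talk's specific defenses may differ): the smoothed classifier returns the class most frequently predicted under Gaussian input noise, and a large noisy-vote margin translates into a certified L2 robustness radius. A minimal sketch:

```python
import numpy as np

def smoothed_predict(classify, x, sigma=0.25, n=1000, rng=None):
    """Majority vote of `classify` over Gaussian perturbations of x.

    Returns the winning label and its empirical vote fraction p; in
    randomized smoothing, a radius like sigma * Phi^{-1}(p) can then be
    certified around x (details omitted in this sketch).
    """
    rng = rng or np.random.default_rng(0)
    noisy = x + sigma * rng.normal(size=(n,) + x.shape)
    votes = np.bincount([classify(z) for z in noisy])
    return int(np.argmax(votes)), votes.max() / n

# Toy base "classifier": label by the sign of the first coordinate.
clf = lambda z: int(z[0] > 0)
label, p = smoothed_predict(clf, np.array([1.0, 0.0]))
```

Because the example point sits well inside class 1, nearly all noisy votes agree, giving a large margin and hence a nontrivial certified radius.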

Fatemeh Sheikholeslami received her PhD in Electrical Engineering from the University of Minnesota in 2019, under the supervision of Professor Georgios Giannakis. She is currently a Machine Learning Research Scientist at the Bosch Center for Artificial Intelligence with the Safe and Robust Deep Learning group.