Events

Upcoming Events

There are no upcoming events at this time.

Past Events

Introduction To New Longitudinal Reproductive Health Data In IPUMS PMA

Last month, IPUMS PMA released the harmonized version of Performance Monitoring for Action's redesigned core family planning survey, which includes a longitudinal panel of childbearing women for analyzing contraceptive and fertility dynamics over time. Join us to learn how to download a dataset with the panel data already linked and how to analyze these new data.

IPUMS provides census and survey data from around the world, integrated across time and space. IPUMS integration and documentation make it easy to study change, conduct comparative research, merge information across data types, and analyze individuals within family and community contexts. Data and services are available free of charge.

IMA Industrial Problems Seminar with Nicholas Johnson (3M)

More information will be added soon. Check the IMA event page for details.

Data Science Poster Fair

As partial fulfillment of their degree requirements, Data Science MS students present their research conducted under the guidance of a faculty advisor at a designated Poster Fair event. A list of presenters and their capstone projects can be found on the Data Science website.

This event is open to the public, and anyone interested in attending is welcome. Please forward this information to anyone you think may be interested. Light refreshments and snacks will be provided.

We ask those who plan to attend to RSVP by filling out this form to assist our event staff with planning.

If you have any questions about the event, please contact Allison Small at small126@umn.edu.

IMA Data Science Seminar with Martin Molina-Fructuoso (NC State)

More information will be added soon. Check the IMA event page for details.

MnRI Seminar: Noah Goldfarb

The potential role of artificial intelligence for hidradenitis suppurativa severity assessment

Hidradenitis suppurativa (HS) is a painful, devastating inflammatory skin condition characterized by inflammatory nodules, abscesses, and draining tunnels that significantly affect patients’ quality of life. HS is a fairly common condition, with a prevalence of 0.1% in the United States. Currently, only one medication is approved by the Food and Drug Administration (FDA) for the treatment of HS.

The overarching goal of the proposed study is to determine whether AI can improve the accuracy and reliability of HS severity and activity assessments for use in clinical trials. The initial stage will train the algorithm on images from a single location, the axilla. If successful, future studies will incorporate other body locations to create an HS AI severity assessment tool that can be validated for clinical trial use.

About Dr. Noah Goldfarb

Dr. Noah Goldfarb graduated from SUNY Stony Brook School of Medicine in 2009 and completed a combined residency in internal medicine and dermatology in 2014. He is currently an Assistant Professor at the University of Minnesota in the Departments of Medicine and Dermatology and a staff physician at the Minneapolis VA Health Care System. Dr. Goldfarb’s clinical interests include autoimmune skin diseases, hidradenitis suppurativa (HS), and complex medical dermatology.

He runs an HS specialty clinic and a combined rheum-derm clinic at the University site and continues to attend as a hospitalist at the VA, working with medical students and residents. Dr. Goldfarb is also passionate about education: he is the medical student coordinator at the VA, the dermatology interest group advisor at the University of Minnesota, the Dermatology Pathophysiology Discipline Director for the Human Health & Disease 3 (HHD3) course for second-year medical students, and the Residency Program Co-Director for the Combined Internal Medicine/Dermatology program.

ECE Colloquium: Prof. Bouman

Plug-and-Play: A Framework for Integrating Physics and Machine Learning in CT Imaging

This talk presents emerging methods for the integration of physics-based and machine learning (ML) models with novel acquisition methods to push CT technology well beyond traditional limits. For example, while ML methods such as deep neural networks offer unprecedented ability to model complex behavior, they typically lack the flexibility and accuracy of traditional physics-based methods for modeling imaging sensors. To address this dilemma, we present plug-and-play methods as a general framework for getting the "best of both worlds" by integrating traditional physics-based models based on probability distributions with action-based ML models. Throughout the talk, we present state-of-the-art examples using imaging modalities including computed tomography (CT), scanning transmission electron microscopy (STEM), synchrotron beam imaging, optical sensing, scanning electron microscopy (SEM), and ultrasound imaging.

Charles A. Bouman is the Showalter Professor of Electrical and Computer Engineering and Biomedical Engineering at Purdue University. His research is in the area of computational imaging, with applications in medical, scientific, and commercial imaging. He received his B.S.E.E. degree from the University of Pennsylvania, his M.S. degree from the University of California, Berkeley, and his Ph.D. from Princeton University in 1989. He is a member of the National Academy of Inventors and a Fellow of the IEEE, AIMBE, IS&T, and SPIE. He is the recipient of the 2021 IEEE Signal Processing Society Claude Shannon-Harry Nyquist Technical Achievement Award, the 2014 Electronic Imaging Scientist of the Year award, and IS&T’s Raymond C. Bowman Award; in 2020, his paper on Plug-and-Play Priors won the SIAM Imaging Science Best Paper Prize.

How Well Can We Generalize Nonlinear Learning Models in High Dimensions?

Modern learning algorithms such as deep neural networks operate in regimes that defy traditional statistical learning theory. Neural network architectures often contain more parameters than training samples, yet despite their huge complexity, the generalization error achieved on real data is small. In this talk, we aim to study the generalization properties of algorithms in high dimensions. We first show that algorithms in high dimensions require a small bias for good generalization, and that this is indeed the case for deep neural networks in the over-parametrized regime. We then provide lower bounds on the generalization error in various settings for any algorithm, calculated using random matrix theory (RMT). We will review the connection between deep neural networks and RMT, along with existing results. These bounds are particularly useful when the analytic evaluation of standard performance bounds is not possible due to the complexity and nonlinearity of the model, and they can serve as a benchmark for testing performance and optimizing the design of actual learning algorithms. Joint work with Ofer Zeitouni; more information at https://arxiv.org/abs/2103.14723.

About Inbar Seroussi
Inbar Seroussi is a postdoctoral fellow in the mathematics department at the Weizmann Institute of Science, hosted by Prof. Ofer Zeitouni. Previously, she completed her Ph.D. in the applied mathematics department at Tel Aviv University under the supervision of Prof. Nir Sochen. Her research interests include modeling of complex and random systems in high dimensions, with applications to modern machine learning, physics, and medical imaging. She develops and uses advanced tools drawn from statistical physics, stochastic calculus, and random matrix theory.

Data Science in Business vs. Academia

This talk discusses similarities and differences between doing data science in academic and business environments. What are the main relevant differences between these environments? Why are the problems of different complexity? What is helpful to know? The talk builds on my years of experience doing both. All questions are welcome.

About Philippe Barbe
Philippe Barbe, PhD, is Senior Vice President of Content Data Science at Paramount (formerly ViacomCBS). In this role, Philippe is responsible for data science modeling to inform content exploitation decisions across Paramount businesses. His team builds predictive models that support highly critical, multi-million-dollar content-related decisions in collaboration with many data science and research groups across Paramount.

Philippe received a PhD in mathematics and statistics from Université Pierre et Marie Curie in Paris, France (now Sorbonne University) and a degree in management and government from ENSAE. He worked for over 20 years at the CNRS as a mathematician specializing in data science and related fields. He has authored or co-authored five books and numerous scientific papers, and he has been an invited professor at many universities worldwide, including Yale and Georgia Tech in the US. He has been working in the media and entertainment industry since 2015.

Deep Neural Networks Explainability: Algorithms and Applications (CS&E Colloquium)

Abstract

Deep neural networks (DNNs) have achieved extremely high prediction accuracy in a wide range of fields such as computer vision, natural language processing, and recommender systems. Despite their superior performance, DNN models are often regarded as black boxes and criticized for their lack of interpretability, since these models cannot provide meaningful explanations of how a certain prediction is made. Without explanations to enhance the transparency of DNN models, it is difficult to build trust and credibility among end users. In this talk, I will present our efforts to tackle the black-box problem and to make powerful DNN models more interpretable and trustworthy. First, I will introduce post-hoc interpretation approaches for predictions made by two standard DNN architectures: the Convolutional Neural Network (CNN) and the Recurrent Neural Network (RNN). Second, I will introduce the use of explainability as a debugging tool to improve the generalization ability and fairness of DNN models.

About Mengnan Du

Mengnan Du is currently a Ph.D. student in Computer Science at Texas A&M University, under the supervision of Dr. Xia Ben Hu. His research is in the broad area of trustworthy machine learning, with a particular interest in explainable, fair, and robust DNNs. He has had around 40 papers published in prestigious venues such as NeurIPS, AAAI, KDD, WWW, NAACL, ICLR, CACM, and TPAMI, and has received over 1,200 citations with an h-index of 11. Three of his papers were selected as Best Paper (Candidates) at WWW 2019, ICDM 2019, and INFORMS 2019, respectively. His paper on explainable AI was also highlighted on the cover of the January 2020 issue of Communications of the ACM. He served as the Registration Chair of WSDM’22 and is a program committee member of conferences including NeurIPS, ICML, ICLR, AAAI, ACL, EMNLP, and NAACL.

Data Science @ Meta

Meta (Facebook) has different business and research organizations for Data Scientists and Machine Learning Engineers/Researchers, each pursuing different goals. It is important to get familiar with these organizations and their goals before applying for any jobs, and to know how best to prepare for each.

Zeinab will present the optimal way to move from academia to a job at a big tech company, covering 1) what to invest in during your last 1-2 years of graduate study or post-graduate career and 2) when to apply. She will then summarize the different Data Scientist jobs across Meta organizations (Product/Eng, the Infrastructure Team, and Marketing Science/Sales), followed by a summary of Machine Learning Researcher (User Value and Advertisers Value) and Artificial Intelligence roles at Meta.