2023 CS&E Research Showcase
The CS&E Research Showcase is a biannual event that features the collective works of students and faculty in the Department of Computer Science & Engineering. The event will feature over 60 posters, as well as keynote addresses from Eugene Spafford, the founder and Executive Director Emeritus of the Center for Education and Research in Information Assurance and Security (CERIAS) at Purdue University, and Ed Chi, CS&E Alumni Award winner and Distinguished Scientist at Google. See below for more information about the speakers.
Additionally, the event will feature the Fall 2023 Data Science Poster Fair. This event is held each semester and features the capstone projects and poster presentations of graduating data science master's students.
This event is open to the public and all interested undergraduate and graduate students, alumni, staff, faculty, and industry professionals are encouraged to attend. To let us know you'll be joining us, please fill out our RSVP form below. We ask those who plan to attend to RSVP by Friday, November 10.
Eugene Spafford - Professor and Executive Director Emeritus of CERIAS
A Perspective on Cybersecurity History and Futures
Cybersecurity is about 60 years old. As such, it is a relatively new field, with much of its early history being centered in computing. As technology and computing uses have advanced, new challenges, threats, and solutions have appeared. Today’s cybersecurity landscape includes issues related to people, laws, privacy, safety, and fundamental questions of ethics, in addition to issues of technology.
In this talk, I will recap some of the history and developments of computing that have had implications for cybersecurity and related areas. I will discuss some of the current challenges and some of what I see as developments and challenges over the next few decades. Many of these are more general issues in computing, developing as we adapt to new technologies and constraints.
Eugene H. Spafford is a professor of Computer Sciences at Purdue University. He is also the founder and Executive Director Emeritus of the Center for Education and Research in Information Assurance and Security (CERIAS). He has worked in computing as a student, researcher, consultant, and professor for more than 45 years. Some of his work is at the foundation of current security practice, including intrusion detection, incident response, firewalls, integrity management, and forensic investigation. His most recent work has been in cybersecurity policy, security of real-time systems, and future threats. He has also been a pioneer in education, including starting and heading the oldest degree-granting cybersecurity program.
Dr. Spafford has been recognized with significant honors from various organizations. These include being elected as a Fellow of the American Academy of Arts and Sciences (AAA&S) and the American Association for the Advancement of Science (AAAS); a Life Fellow of the ACM, the IEEE, and the (ISC)2; a Life Distinguished Fellow of the ISSA; and a member of the Cyber Security Hall of Fame — the only person to ever hold all these distinctions. In 2012 he was named one of Purdue’s inaugural Morrill Professors — the university’s highest award for the combination of scholarship, teaching, and service. In 2016, he received the State of Indiana’s highest civilian honor by being named a Sagamore of the Wabash.
Among many other activities, he is editor-in-chief of the journal Computers & Security, serves on the Board of Directors of the Computing Research Association, and is a member of the National Security Advisory Board for Sandia Laboratories.
Ed Chi - Distinguished Scientist at Google and Alumni Award Winner (Ph.D., 1999; M.S., 1998; B.S., 1994)
The LLM (Large Language Model) Revolution: Implications from Chatbots and Tool-use to Reasoning
Deep learning has been a shock to our field in many ways, yet many of us were still surprised at the incredible performance of Large Language Models (LLMs). LLMs use deep learning techniques and massive data sets to understand, predict, summarize, and generate new content. LLMs like ChatGPT and Bard have seen a dramatic increase in their capabilities: generating text that is nearly indistinguishable from human-written text, translating languages with amazing accuracy, and answering your questions in an informative way. This has led to a number of exciting research directions for chatbots, tool-use, and reasoning:
- Chatbots: LLM chatbots are more engaging and informative than traditional chatbots. First, LLMs can understand the context of a conversation better than ever before, allowing them to provide more relevant and helpful responses. Second, LLMs enable more engaging conversations than traditional chatbots because they can understand the nuances of human language and respond in a more natural way. For example, LLMs can make jokes, ask questions, and provide feedback. Finally, because LLM chatbots can hold conversations on a wide range of topics, they can eventually learn and adapt to the user's individual preferences.
- Tool-use, Retrieval Augmentation, and Multi-modality: LLMs are also being used to create tools that help us with everyday tasks. For example, LLMs can generate code, write emails, and even create presentations. Beyond human-like responses in chatbots, LLM innovators later realized LLMs' ability to incorporate tool-use, including calling search and recommendation engines, which means they can effectively become human assistants that synthesize summaries from web search and recommendation results. Tool-use integration has also enabled multimodal capabilities, meaning a chatbot can produce text, speech, images, and video.
- Reasoning: LLMs are also being used to develop new AI systems that can reason and solve problems. Using Chain-of-Thought approaches, we have shown LLMs' ability to break a problem down into smaller sub-problems, apply logical reasoning to solve each of them, and then combine the solutions to reach the final answer. LLMs can answer common-sense questions by using their knowledge of the world to reason about the problem, and then use their language skills to generate text that is both creative and informative.
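The Chain-of-Thought approach mentioned above is, at its core, a prompting pattern: rather than asking the model for an answer directly, the prompt instructs it to reason through intermediate steps first. The following is a minimal sketch of that pattern; the `build_cot_prompt` helper and the exact prompt wording are illustrative assumptions, not taken from the talk.

```python
# Minimal sketch of chain-of-thought prompting (illustrative only).
# Instead of requesting the final answer directly, the prompt asks the
# model to break the problem into steps, reason through each one, and
# combine the results before answering.

def build_cot_prompt(question: str) -> str:
    """Wrap a question in a chain-of-thought instruction (hypothetical wording)."""
    return (
        "Answer the question below. First break the problem into smaller "
        "steps, reason through each step, and then combine the results "
        "into a final answer.\n\n"
        f"Question: {question}\n"
        "Let's think step by step."
    )

if __name__ == "__main__":
    # The resulting string would be sent to an LLM; no model call is made here.
    prompt = build_cot_prompt(
        "A store sells pens in packs of 12. How many packs are needed for 100 pens?"
    )
    print(prompt)
```

In practice, this prompt string would be passed to whichever LLM API is in use; the pattern itself is model-agnostic.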
In this talk, I will cover recent advances in these three major areas, attempting to draw connections between them and paint a picture of where major advances might still come from. While the LLM revolution is still in its early stages, it has the potential to revolutionize the way we interact with AI and make a significant impact on our lives.
Ed H. Chi is a Distinguished Scientist at Google DeepMind, leading machine learning research teams working on large language models (LaMDA/Bard), neural recommendations, and reliable machine learning. With 39 patents and ~200 research articles, he is also known for research on user behavior in web and social media. As the Research Platform Lead, he helped launch Bard, a conversational AI experiment, and delivered significant improvements for YouTube, News, Ads, and the Google Play Store at Google, with more than 660 product improvements since 2013.
Prior to Google, he was Area Manager and Principal Scientist at Xerox Palo Alto Research Center's Augmented Social Cognition Group, researching how social computing systems help groups of people remember, think, and reason. Ed earned his three degrees (B.S., M.S., and Ph.D.) in 6.5 years from the University of Minnesota. Inducted as an ACM Fellow and into the CHI Academy, he also received a 20-year Test of Time award for research in information visualization. He has been featured and quoted in the press, including the Economist, Time Magazine, the LA Times, and the Associated Press. An avid golfer, swimmer, photographer, and snowboarder in his spare time, he also has a black belt in Taekwondo.