September 23, 2021
Title: Making AI Robust and Versatile: a Path to AGI?
Abstract: Modern AI systems have achieved impressive results in many specific domains, from image and speech recognition to natural language processing and mastering complex games such as chess and Go. However, they often remain inflexible, fragile and narrow, unable to continually adapt to a wide range of changing environments and novel tasks without “catastrophically forgetting” what they have learned before, to infer higher-order abstractions allowing for systematic generalization to out-of-distribution data, and to achieve the level of robustness necessary to “survive” various perturbations in their environment – a natural property of most biological intelligent systems, and a necessary property for successfully deploying AI systems in real-life applications. In this talk, we will provide a brief overview of some modern approaches towards making AI more “broad” (versatile) and robust, including transfer learning, domain generalization, the invariance principle in causality, adversarial robustness and continual learning. Furthermore, we briefly discuss the role of scale, and summarize recent advances in training large-scale unsupervised models, such as GPT-3, CLIP, and DALL-E, which demonstrate remarkable improvements in transfer, both forward (few-shot generalization to novel tasks) and backward (alleviating catastrophic forgetting). We also emphasize the importance of developing an empirical science of AI behaviors, and focus on the rapidly expanding field of neural scaling laws, which allow us to better compare and extrapolate the behavior of various algorithms and models with increasing amounts of data, model size and computational resources.
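The neural scaling laws mentioned above typically take the form of a power law relating test loss to model size, data, or compute. A minimal sketch of the idea, with purely illustrative constants (not fit to any real model):

```python
import numpy as np

# Hypothetical power-law scaling: loss(N) = a * N**(-b) + c,
# where N is the parameter count and c is the irreducible loss.
a, b, c = 5.0, 0.08, 1.7  # illustrative constants only

def loss(n_params):
    return a * n_params ** (-b) + c

# Recover the scaling exponent from two (synthetic) observations,
# then the fitted law can be used to extrapolate to larger models.
n1, n2 = 1e8, 1e9
b_est = -np.log((loss(n2) - c) / (loss(n1) - c)) / np.log(n2 / n1)
print(round(b_est, 3))  # 0.08
```

In practice the exponent and irreducible loss are fit jointly from many runs; the point of such fits, as the abstract notes, is that they let different algorithms and models be compared and extrapolated on a common footing.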
Biography: Irina Rish is an Associate Professor in the Computer Science and Operations Research Department at the Université de Montréal (UdeM) and a core faculty member of MILA – Quebec AI Institute. She holds Canada Excellence Research Chair (CERC) in Autonomous AI and a Canadian Institute for Advanced Research (CIFAR) Canada AI Chair. She received her MSc and PhD in AI from University of California, Irvine and MSc in Applied Mathematics from Moscow Gubkin Institute. Dr. Rish’s research focus is on machine learning, neural data analysis and neuroscience-inspired AI. Before joining UdeM and MILA in 2019, Irina was a research scientist at the IBM T.J. Watson Research Center, where she worked on various projects at the intersection of neuroscience and AI, and led the Neuro-AI challenge. She received multiple IBM awards, including IBM Eminence & Excellence Award and IBM Outstanding Innovation Award in 2018, IBM Outstanding Technical Achievement Award in 2017, and IBM Research Accomplishment Award in 2009. Dr. Rish holds 64 patents, has published over 80 research papers in peer-reviewed conferences and journals, several book chapters, three edited books, and a monograph on Sparse Modeling.
September 30, 2021
Title: Skin-Inspired Organic Electronics
Abstract: Skin is the body’s largest organ, and is responsible for the transduction of a vast amount of information. This conformable, stretchable, self-healable and biodegradable material simultaneously collects signals from external stimuli that translate into information such as pressure, pain, and temperature. The development of electronic materials inspired by the complexity of this organ is a tremendous, unrealized materials challenge. However, the advent of organic-based electronic materials may offer a potential solution to this longstanding problem. Over the past decade, we have developed materials design concepts to add skin-like functions to organic electronic materials without compromising their electronic properties. These new materials and new devices have enabled a range of new applications in medical devices, robotics and wearable electronics. In this talk, I will discuss basic material design concepts for realizing stretchable, self-healable and biodegradable conductive or semiconductive materials. I will show our methods for scalable fabrication of stretchable electronic circuit blocks. Finally, I will show a few examples of applications we are pursuing, uniquely enabled by skin-like organic electronics when interfacing with biological systems, such as low-voltage electrical stimulation, high-resolution large-area electrophysiology, “morphing electronics” that grow with a biological system, and genetically targeted chemical assembly (GTCA).
Bio: Zhenan Bao is Department Chair and K.K. Lee Professor of Chemical Engineering, and by courtesy, a Professor of Chemistry and a Professor of Materials Science and Engineering at Stanford University. Bao founded the Stanford Wearable Electronics Initiative (eWEAR) in 2016 and serves as the faculty director.
Prior to joining Stanford in 2004, she was a Distinguished Member of Technical Staff at Bell Labs, Lucent Technologies from 1995-2004. She received her Ph.D. in Chemistry from the University of Chicago in 1995. She has over 600 refereed publications and over 100 US patents, with a Google Scholar H-Index >175.
Bao is a member of the National Academy of Engineering, the American Academy of Arts and Sciences and the National Academy of Inventors. She is a Fellow of MRS, ACS, AAAS, SPIE, ACS PMSE and ACS POLY.
Bao was selected as one of Nature’s Ten people who mattered in 2015, as a “Master of Materials” for her work on artificial electronic skin. She was awarded the MRS Mid-Career Award in 2021, the inaugural ACS Central Science Disruptor and Innovator Prize in 2020, the Gibbs Medal by the Chicago Section of ACS in 2020, the Wilhelm Exner Medal by the Austrian Federal Ministry of Science in 2018, the ACS Award in Applied Polymer Science in 2017, the L’Oréal-UNESCO For Women in Science Award in the Physical Sciences in 2017, the AIChE Andreas Acrivos Award for Professional Progress in Chemical Engineering in 2014, the ACS Carl Marvel Creative Polymer Chemistry Award in 2013, the ACS Cope Scholar Award in 2011, the Royal Society of Chemistry Beilby Medal and Prize in 2009, and the IUPAC Creativity in Applied Polymer Science Prize in 2008.
Bao is a co-founder and on the Board of Directors of C3 Nano and PyrAmes, both Silicon Valley venture-funded start-ups. She serves as an advising partner for Fusion Venture Capital.
October 7, 2021
Title: Carbon Emissions and Large Neural Network Training
Abstract: The demand for computation by machine learning (ML) has grown rapidly over the past few years. Together with this growth come a number of costs, including energy. Estimating the energy cost is an important step towards measuring the environmental impact of ML and identifying strategies to be more sustainable. Yet, estimating energy costs is challenging without detailed information about how models are trained and used.
Here we take one step further than previous studies on this topic and calculate the energy use and carbon footprint of several recent large models (T5, Meena, GShard, Switch Transformer, and GPT-3). We also refine earlier estimates for the neural architecture search that found Evolved Transformer. We use more precise data quantifying model types, datacenter efficiency, processor efficiency, and energy mix. We also discuss the training and inference life cycles of machine learning models.
We highlight significant opportunities to further improve our energy efficiency and want to share these observations with ML practitioners as well as others interested in the energy consumption and environmental footprint of ML training. The factors we found to be most impactful within Google include:
- Large but sparsely activated DNNs can consume <1/10th the energy of large, dense DNNs without sacrificing accuracy, despite having the same or even a larger number of parameters.
- Geographic location matters for ML workload scheduling since the fraction of carbon-free energy and resulting CO2 emissions vary ~5X-10X, even within the same country and the same organization.
- Specific datacenter infrastructure matters, as Cloud datacenters can be ~1.4-2X more energy efficient than typical datacenters and the ML-oriented accelerators inside them can be ~2-5X more effective than off-the-shelf systems.
Remarkably, the choice of DNN, datacenter, and processor can reduce the carbon footprint by up to ~100-1000X; even for the same DNN, the choice of datacenter and processor can save up to ~10-100X. As a result of these insights, we are now optimizing where and when large models are trained.
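The estimate sketched in the abstract reduces to a short calculation: energy = accelerator-hours × average power × datacenter PUE, and emissions = energy × grid carbon intensity. A hedged sketch with made-up numbers (not the talk’s actual figures):

```python
def training_co2e_tonnes(chip_hours, avg_power_kw, pue, grid_tco2e_per_mwh):
    """Estimate training emissions (tCO2e) from first principles.

    chip_hours:          total accelerator-hours for the training run
    avg_power_kw:        average power draw per accelerator, in kW
    pue:                 datacenter Power Usage Effectiveness (>= 1.0)
    grid_tco2e_per_mwh:  carbon intensity of the local energy mix
    """
    energy_mwh = chip_hours * avg_power_kw * pue / 1000.0
    return energy_mwh * grid_tco2e_per_mwh

# Illustrative only: 1,000 accelerators for 10 days at 0.3 kW each,
# PUE 1.1, on a grid emitting 0.2 tCO2e/MWh.
run = training_co2e_tonnes(1000 * 10 * 24, 0.3, 1.1, 0.2)

# Moving the same run to a cleaner grid (0.02 tCO2e/MWh) and a slightly
# more efficient datacenter (PUE 1.05) cuts emissions roughly 10X,
# consistent with the geographic-variation point above.
greener = training_co2e_tonnes(1000 * 10 * 24, 0.3, 1.05, 0.02)
print(round(run / greener, 1))  # 10.5
```

The multiplicative structure is what makes the combined ~100-1000X reduction plausible: each factor (model sparsity, datacenter, processor, energy mix) contributes its own ratio, and the ratios compound.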
Bio: David Patterson is a UC Berkeley professor, Google distinguished engineer, RISC-V International Vice-Chair, and RISC-V International Open Source Laboratory Director. His best known projects are RISC and RAID. He co-authored seven books, including Computer Architecture: A Quantitative Approach, and shared the 2017 ACM A.M. Turing Award with his co-author John Hennessy.
October 28, 2021
Title: Are Machines Learning?
Abstract: In this talk, I will briefly survey my group’s recent works on building operational systems for face recognition, vehicle re-identification, and action recognition using deep learning. While reasonable success can be claimed, many open problems still remain to be addressed. These include bias detection and mitigation, domain adaptation and generalization, and handling adversarial attacks. Some of our recent work addressing these challenges will be presented.
Bio: Prof. Rama Chellappa is a Bloomberg Distinguished Professor in the Departments of Electrical and Computer Engineering and Biomedical Engineering at Johns Hopkins University (JHU). At JHU, he is also affiliated with CIS, CLSP, the Malone Center and MINDS. Before coming to JHU in August 2020, he was a Distinguished University Professor, a Minta Martin Professor of Engineering, and a Professor in the ECE department and the University of Maryland Institute for Advanced Computer Studies at the University of Maryland (UMD). He holds a non-tenure position as a College Park Professor in the ECE department at UMD. During 1981-1991, he was an assistant and associate professor in the Department of EE-Systems at the University of Southern California. He received the M.S.E.E. and Ph.D. degrees in Electrical Engineering from Purdue University, West Lafayette, IN. His current research interests span many areas in image and signal processing, computer vision, artificial intelligence, and machine learning. Prof. Chellappa is a recipient of many awards from IEEE, IAPR, UMD and USC. Some notable awards are the 2020 Jack S. Kilby Medal for Signal Processing from the IEEE, the K.S. Fu Prize from the International Association for Pattern Recognition (IAPR), the Society and Technical Achievement Awards from the IEEE Signal Processing Society, the Technical Achievement Award from the IEEE Computer Society and the Inaugural Leadership Award from the IEEE Biometrics Council. At UMD, he received numerous college- and university-level recognitions for research, teaching, innovation and mentoring of undergraduate students. Prof. Chellappa served as the EIC of IEEE Transactions on Pattern Analysis and Machine Intelligence, as a Distinguished Lecturer of the IEEE Signal Processing Society, as the President of the IEEE Biometrics Council, and as General and Technical Program Chair/Co-Chair for several IEEE international and national conferences and workshops.
He is a Fellow of AAAI, AAAS, ACM, IAPR, IEEE, NAI and OSA and holds eight patents.
January 27, 2022
Title: Scaling SerDes Beyond 100Gb/s in Advanced CMOS Technologies
Abstract: Over the past two decades, high-speed wireline data rates have doubled every three-to-four years to keep pace with aggregate system bandwidth requirements. Communication standards for networking and storage, like Ethernet and OIF-CEI, tend to be the first to shift to higher data rates in order to support bandwidth density requirements for datacenters, supercomputers, telecom and AI hardware. Today, SerDes IPs up to 116Gb/s are reaching maturity and pathfinding for SerDes transceivers capable of sending data over 200Gb/s is well underway.
Maintaining this exponential bandwidth trend while staying within acceptable die area and system thermal limits has clearly benefitted from continuous CMOS process technology scaling. However, the rate of bandwidth increase and required improvements in energy efficiency have exceeded the benefits of process technology scaling alone. SerDes system and circuit architecture have had to evolve and improve to fill this gap. In addition, the benefits of scaled CMOS process technology come with challenges in transistor and interconnect reliability and in the parasitics of scaled geometries. Increasing data rates and relatively constant link distances have gradually required some longer-reach copper interconnects to be replaced by optical channels. But, for now, electrical signaling continues to be the primary way that data gets on and off the chips, packages and boards at the heart of high-bandwidth systems.
This presentation will start by providing an introduction to SerDes, including standards, basic signal integrity, link equalization and clocking architecture. Next, it will describe system and circuit design techniques that have extended per-lane bandwidth to 100Gb/s, including PAM-4 modulation and ADC/DSP-based receivers, along with the benefits and challenges of designing high-speed transceivers in scaled CMOS technologies. Finally, it will show some recent design and measurement results for the next leap in SerDes data rates, up to 224Gb/s.
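PAM-4, mentioned above, doubles the per-lane bit rate at a fixed symbol rate by encoding two bits per symbol on four voltage levels, typically Gray-coded so that adjacent levels differ in only one bit. A minimal sketch of the mapping (normalized levels; hypothetical, not any particular standard’s coding):

```python
# Gray-coded PAM-4: two bits per symbol, four normalized voltage levels.
# Because adjacent levels differ by one bit, a one-level slicer error
# at the receiver causes only a single bit error.
GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits):
    """Map a bit sequence (even length) to PAM-4 symbols."""
    assert len(bits) % 2 == 0
    return [GRAY_PAM4[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

symbols = pam4_encode([0, 0, 0, 1, 1, 1, 1, 0])
print(symbols)  # [-3, -1, 1, 3]
# Same symbol rate as NRZ, but 2 bits per symbol -> 2X the data rate,
# at the cost of 1/3 the vertical eye opening per level.
```

The reduced eye height per level is one reason the ADC/DSP-based receivers mentioned in the abstract become attractive at these rates: finer equalization is needed to separate four levels reliably.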
Bio: Frank leads the I/O Circuit Technology group within Advanced Design at Intel in Hillsboro, Oregon, where he is a Senior Principal Engineer. His team coordinates circuit-process co-design for wireline I/O at Intel. They also design and test the first I/Os on each new CMOS process technology. From 2003 until 2011 he was a member of the Signaling Research group in Intel Labs. His work at Intel spans high-speed and low-power transceivers, clock generation and distribution, equalization, analog design in scaled CMOS and on-die measurement techniques.
Frank received the BS, MS, and PhD degrees in electrical engineering from Stanford University in 1997, 2000 and 2004, respectively. He has published over 45 papers in peer-reviewed conferences and journals. He has received the ISSCC Jack Kilby Award, IEEE Journal on Solid-State Circuits Best Paper Award and TCAS Darlington Best Paper Award. Frank has been on the ISSCC Technical Program Committee since 2012 including five years as the Wireline Subcommittee chair. He currently serves as the ISSCC 2022 Forums Chair. He is a Senior Member of the IEEE and served as an IEEE Distinguished Lecturer. Frank currently chairs the IEEE SSCS Distinguished Lecturer Program.
February 10, 2022
Title: Control Theory for Synthetic Biology
Abstract: Engineering biology has tremendous potential to impact applications ranging from energy, to environment, to health. As the sophistication of engineered biological networks increases, the ability to predict system behavior becomes more limited. In fact, while a system’s components may be well characterized in isolation, their salient properties often change in rather surprising ways once they interact with other components in the cell or when the intra-cellular environment changes. This context-dependence of biological circuits makes it difficult to perform rational design and often leads to lengthy, combinatorial design procedures, where each module needs to be re-designed ad hoc when other parts are added to a system. Rather than relying on such ad-hoc design procedures, control theoretic approaches may be used to engineer “insulation” of circuit components from context, thus enabling modular composition through specified input/output connections. In this talk, I will give an overview of modularity failures in genetic circuits, focusing on problems caused by loads, and introduce a control-theoretic framework, founded on the concept of retroactivity, to address the insulation question. Within this framework, insulation can be mathematically formulated as a disturbance rejection problem; however, classical solutions are not directly applicable due to bio-physical constraints. I will thus introduce solutions relying on time-scale separation, a key feature of biomolecular systems, which we used to build two classes of devices: load drivers and resource decouplers. These devices aid modularity, facilitate predictable composition of genetic circuits, and show how control theoretic approaches can address pressing challenges in engineering biology.
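The load problem described above can be illustrated with a toy simulation (these are not the talk’s actual device equations): a transcriptional module x' = k(t) - d·x driving downstream binding sites behaves, under fast binding, as if its dynamics were scaled by 1/(1+R), where R is the retroactivity factor. A heavy load makes the module sluggish and attenuates its response to a time-varying input:

```python
import math

# Toy retroactivity sketch: first-order transcriptional module with a
# sinusoidal input, slowed by a downstream load. Under fast binding the
# load scales the dynamics by 1/(1+R), R = retroactivity factor.
def simulate(R, T=100.0, dt=0.01):
    d, x = 1.0, 0.0
    out = []
    for i in range(int(T / dt)):
        t = i * dt
        k = 1.0 + 0.8 * math.sin(0.5 * t)   # periodic input signal
        x += dt * (k - d * x) / (1.0 + R)   # load attenuates the dynamics
        out.append(x)
    return out

unloaded = simulate(R=0.0)
loaded = simulate(R=20.0)   # heavy load: response flattened and delayed

# Half the peak-to-peak swing over the second half of the run (post-transient):
amp = lambda xs: (max(xs[len(xs) // 2:]) - min(xs[len(xs) // 2:])) / 2
print(amp(unloaded) > 3 * amp(loaded))  # True: the load attenuates the signal
```

An insulation device, in this picture, is a feedback element inserted between module and load so that the upstream swing is restored at the output; the time-scale separation mentioned in the abstract is what makes a sufficiently fast (high-gain-like) response biophysically implementable.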
Bio: Domitilla Del Vecchio received the Ph.D. degree in Control and Dynamical Systems from the California Institute of Technology, Pasadena, and the Laurea degree in Electrical Engineering (Automation) from the University of Rome Tor Vergata in 2005 and 1999, respectively. From 2006 to 2010, she was an Assistant Professor in the Department of Electrical Engineering and Computer Science and in the Center for Computational Medicine and Bioinformatics at the University of Michigan, Ann Arbor. In 2010, she joined the Department of Mechanical Engineering at the Massachusetts Institute of Technology (MIT), where she is currently Professor and a member of the Synthetic Biology Center. She is an IEEE Fellow and a recipient of the Newton Award for Transformative Ideas during the COVID-19 Pandemic (2020), the 2016 Bose Research Award (MIT), the Donald P. Eckman Award from the American Automatic Control Council (2010), the NSF CAREER Award (2007), the American Control Conference Best Student Paper Award (2004), and the Bank of Italy Fellowship (2000). Her research focuses on developing techniques to make synthetic genetic circuits robust to context and on applying these to biosensing and cell fate control for regenerative medicine applications.
March 17, 2022
4:00 p.m. EDT
Title: Classical and Quantum Electromagnetics: What is the Difference?
Abstract: Electromagnetics has influenced electrical engineering pervasively since the advent of Maxwell’s equations in 1865. Classical electromagnetics has impacted electrical engineering technologies all the way from statics to optics. It has influenced technologies from nanometer length scales to planetary length scales. What was unbeknownst to Maxwell was that the equations he put together are also valid in the quantum world. This is because photons are electromagnetic in origin: the field associated with photons is the electromagnetic field, which can be quantized to accommodate or carry photons.
In this talk, we will discuss the impact of classical electromagnetics in a wide swath of technologies, especially those related to electrical engineering. The recent advent of quantum technologies calls for new ways to solve the quantum Maxwell’s equations. In addition, a way to track the state of the quantum system is necessary, and this is obtained by solving the quantum state equation. This field of quantum electromagnetics is still in its infancy, but it is hoped that the knowledge base generated for classical electromagnetics can be reused for quantum electromagnetics. Since photons are used in quantum communication, quantum computing, and quantum sensing, many new future technologies can be impacted by quantum electromagnetics.
Bio: W.C. Chew received all his degrees from MIT. His research interests are in wave physics, specializing in fast algorithms for multiple scattering, imaging and computational electromagnetics over the last 30 years. His recent research interest is in combining quantum theory with electromagnetics, and differential geometry with computational electromagnetics. After MIT, he joined Schlumberger-Doll Research in 1981. In 1985, he joined the University of Illinois Urbana-Champaign, where he was director of the Electromagnetics Lab from 1995 to 2007. During 2000-2005, he was the Founder Professor; during 2005-2009, the YT Lo Chair Professor; and during 2013-2017, the Fisher Distinguished Professor. During 2007-2011, he was the Dean of Engineering at The University of Hong Kong. He joined Purdue University in August 2017 as a Distinguished Professor. He has co-authored three books, many lecture notes, over 450 journal papers, and over 600 conference papers. He is a fellow of various societies, and an ISI highly cited author. In 2000, he received the IEEE Graduate Teaching Award; in 2008, he received the IEEE AP-S CT Tai Distinguished Educator Award; in 2013, he was elected to the National Academy of Engineering; and in 2015, he received the ACES Computational Electromagnetics Award. He received the 2017 IEEE Electromagnetics Award. In 2018, he served as the IEEE AP-S President. He is a distinguished visiting professor at Tsinghua University, China, the University of Hong Kong, and National Taiwan University.
March 24, 2022
4:00 p.m. EDT
Title: Let the Data Flow!
Abstract: As the benefits from Moore’s Law diminish, future computing performance improvements will rely on specialized application accelerators. To justify the expense of designing an accelerator, it should accelerate an important set of application areas. In my talk, I will explain how Reconfigurable Dataflow Accelerators (RDAs) can be used to accelerate a broad set of data-intensive applications. RDAs can accelerate Machine Learning (ML) by efficiently executing the hierarchical dataflow that exists in many ML applications and models. I will explain how RDAs can also be used to accelerate irregular applications using a new programming model called Dataflow Threads. I will talk about future research directions for dataflow architectures, including sparse ML applications, networking applications and dataflow architecture compilers.
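The dataflow execution model underlying RDAs can be illustrated in miniature: operators fire as soon as all of their inputs are available, rather than in program order, which is what lets a reconfigurable fabric keep many operators busy at once. A toy software sketch of the idea (not SambaNova’s or any real RDA’s programming model):

```python
from collections import deque

# Toy static dataflow graph computing (a * b) + b.
# graph: node name -> (function, list of input node names)
graph = {
    "a":   (lambda: 3, []),
    "b":   (lambda: 4, []),
    "mul": (lambda a, b: a * b, ["a", "b"]),
    "add": (lambda m, b: m + b, ["mul", "b"]),
}

def run(graph):
    """Fire each node as soon as every one of its inputs has produced a value."""
    values = {}
    ready = deque(n for n, (_, ins) in graph.items() if not ins)  # source nodes
    while ready:
        node = ready.popleft()
        fn, ins = graph[node]
        values[node] = fn(*(values[i] for i in ins))
        # Any node whose inputs are now all available becomes ready to fire.
        for n, (_, nins) in graph.items():
            if n not in values and n not in ready and all(i in values for i in nins):
                ready.append(n)
    return values

print(run(graph)["add"])  # 16
```

In hardware, the “ready” bookkeeping is implicit: tokens flowing between spatially configured operators trigger execution, so no instruction fetch or central scheduler is needed.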
Bio: Kunle Olukotun is the Cadence Design Professor of Electrical Engineering and Computer Science at Stanford University. Olukotun is a pioneer in multi-core processor design and the leader of the Stanford Hydra chip multiprocessor (CMP) research project.
In 2017 Olukotun co-founded SambaNova Systems, a Machine Learning and Artificial Intelligence company, and continues to lead as their Chief Technologist. Prior to SambaNova Systems, Olukotun founded Afara Websystems to develop high-throughput, low-power multi-core processors for server systems. The Afara multi-core processor, called Niagara, was acquired by Sun Microsystems and now powers Oracle’s SPARC-based servers.
Olukotun is the Director of the Pervasive Parallel Lab and a member of the Data Analytics for What’s Next (DAWN) Lab, developing infrastructure for usable machine learning.
Olukotun is an ACM Fellow and IEEE Fellow for contributions to multiprocessors on a chip and multi-threaded processor design. Olukotun recently won the IEEE Computer Society’s Harry H. Goode Memorial Award and was also elected to the National Academy of Engineering.
Kunle received his Ph.D. in Computer Engineering from the University of Michigan.