
Fall 2019 Semiannual Meeting | October 29 & 30, 2019

UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN

GEORGIA INSTITUTE OF TECHNOLOGY

NORTH CAROLINA STATE UNIVERSITY

CENTER PROPRIETARY


WELCOME

Welcome to the Fall 2019 semiannual meeting of CAEML! We are delighted to have you here with us on the Illinois campus. As at all the center’s meetings, a significant portion of our time will be devoted to learning about one another’s recent research. In the spirit of the industry/university cooperative research program, we will applaud each other’s successes and offer assistance to overcome technical hurdles.

The student poster session is always one of the meeting highlights; this time, that session is scheduled for Tuesday evening. We want the students to have the opportunity to get technical advice from the assembled industry experts, and thus we request that the attendees try to visit all or most of the posters and speak with the presenting students.

During our two days together, the industry advisory board and the center leadership will have discussions and set policy; other industry attendees and faculty will be invited to contribute to many of the discussions. The items to be addressed include (but are not limited to) the following.

• Prioritization of new project proposals for funding.

• Topics for the next set of educational webinars.

• Cultivating broad engagement of member companies. (The center should not be overly reliant on a single champion, who might someday change jobs.)

• How to quickly integrate new members into the center (an important endeavor, as a few of the old members are stepping away and new ones are getting ready to join).

The other site directors and I thank each of you for your support and technical contributions to the center.

Elyse Rosenbaum Center Director, University of Illinois at Urbana-Champaign

Paul Franzon Site Director, North Carolina State University

Madhavan Swaminathan Site Director, Georgia Tech

AGENDA

Tuesday, October 29, 2019 | Room 3002, Electrical and Computer Engineering Building (ECEB)

7:45 AM Registration, Breakfast, and Networking

8:30 AM Welcoming Remarks | Rashid Bashir, Dean, Illinois Grainger College of Engineering

8:45 AM Introductions | Chris Cheng, Industry Advisory Board Chair | Around-the-room introductions of meeting attendees

9:05 AM LIFE Form training | Dee Hoffman, NSF IUCRC Assessment Coordinator

9:30 AM State of Center Report | Elyse Rosenbaum, CAEML Director

9:50 AM Closed-door IAB meeting | Room 5070 ECEB | Attendance restricted to IAB and NSF
1. Membership & recruiting discussion with Site Directors
2. Approval of previous meeting minutes
3. Project voting considerations and processes
4. Member hosting of semiannual meeting
5. Any other business
A LIFE Form Review Leader sign-up sheet will be passed around during the meeting.

Concurrent CAEML Faculty/Student meeting | 3002 ECEB

10:40 AM 10 Minute COFFEE BREAK

PROGRESS REPORTS FOR PROJECTS IN FINAL YEAR OF FUNDING [5 minute presentation, 5 minutes Q&A/LIFE forms]

10:50 AM 1A1 Progress Report: Modular Machine Learning for Behavioral Modeling of Microelectronic Circuits and Systems | M. Raginsky & A. Cangellaris

11:00 AM 2A2 Progress Report: Machine Learning for Trusted Platform Design | A. Raychowdhury & M. Swaminathan

11:10 AM 2A4 Progress Report: Machine Learning to Predict Successful FPGA Compilation Strategy | S. Lim & M. Swaminathan

11:20 AM 2A7 Progress Report: Applying Machine Learning to Back End IC Design | R. Davis and P. Franzon

11:30 AM 3A4 Progress Report: High-Dimensional Structural Inference for Non-linear Deep Markov or State Space Time Series Models | D. Baron, R. Davis, P. Franzon



PROGRESS REPORTS & PROPOSALS FOR CONTINUING PROJECTS [10 minute presentation, 5 minutes Q&A/LIFE forms]

11:40 AM 3A1 Progress Report & Y2 Proposal: NL2PPA: Netlist-to-PPA Prediction Using Machine Learning | S. Lim

11:55 AM 3A2 Progress Report & Y2 Proposal: Fast, Accurate PPA Model-Extraction | R. Davis, P. Franzon, D. Baron

12:10 PM 3A3 Progress Report & Y2 Proposal: RNN Models for Computationally-Efficient Simulation of Circuit Aging including Stochastic Effects | E. Rosenbaum, M. Raginsky

12:25 PM LUNCH

1:15 PM 3A5 Progress Report & Y2 Proposal: High Speed Bus Physical Design Analysis through Machine Learning | X. Chen & M. Swaminathan | 15-minute presentation, 5 min Q&A/LIFE forms

1:35 PM 3A6 Progress Report & Y2 Proposal: Enabling Side-Channel Attacks on Post-Quantum Protocols through Machine Learning | A. Aysu, P. Franzon

1:50 PM 3A7 Progress Report & Y2 Proposal: Design Space Exploration using DNN | M. Swaminathan

NEW PROJECT PROPOSALS [20-minute presentation, 5 minutes Q&A, 5 minutes LIFE forms]

2:05 PM Proposal P19-12: Mitigating the Curse of Dimensionality in Electronic Systems Modeling via Physics-Aware Universal Approximation by Dynamic Neural Nets | M. Raginsky

2:35 PM Proposal P19-1: Quantum Computing based Machine Learning for EDA (QCML) | P. Franzon, G. Byrd, D. Stancil

3:05 PM 15 Minute COFFEE BREAK

3:20 PM Proposal P19-5: Inverse Design of Interconnect Using Deep Learning | M. Swaminathan

3:50 PM Proposal P19-8: FPGA Hardware Accelerator for Real Time Security | P. Franzon, A. Aysu

4:20 PM Proposal P19-10: Physical Design Parameter Optimization (PDPPO) using Reinforcement Learning | S. Lim

4:50 PM Presentations Complete

5:00 PM Poster Session at City View, 45 E. University Ave, Champaign

6:45 PM Dinner at City View

AGENDA

Wednesday, October 30, 2019 | Room 3002, Electrical and Computer Engineering Building (ECEB)

7:45 AM Registration, Breakfast, and Networking

8:30 AM LIFE Form review and discussion | Led by C. Cheng & D. Hoffman

Discussion: Broadening engagement of project mentors

Discussion: Redesign of project reports to include mentors, collaborations, and publications

9:50 AM COFFEE BREAK

10:00 AM Discussions and voting for January 1, 2020 allocations | Attendance restricted to NSF & IAB

12:00 Noon IAB report-out, action items, and plans for next meeting | Led by C. Cheng & E. Rosenbaum

12:30 PM Boxed Lunches | Adjourn

CAEML RESEARCH LEADERS

Aydin Aysu | https://www.ece.ncsu.edu/people/aaysu/

Aydin Aysu is an Assistant Professor in the Department of Electrical and Computer Engineering at North Carolina State University. Prior to joining NC State, he was a Post-Doctoral Research Fellow at the University of Texas at Austin from 2016 to 2018. He received his Ph.D. degree in Computer Engineering from Virginia Tech in 2016, and his B.S. and M.S. degrees in Electronics Engineering from Sabanci University, Istanbul, Turkey, in 2008 and 2010, respectively.

Dr. Aysu's research aims to counter cyberattacks that target hardware vulnerabilities. His research interests lie at the intersection of applied cryptography, digital hardware design, and computer architectures. He currently co-chairs the security track at the IEEE International Conference on Reconfigurable Computing and FPGAs, and he is a guest editor for the special issue "Digital Threats of Hardware Security" of ACM Digital Threats: Research and Practice. He is a recipient of the 2019 NSF Research Initiation Initiative award. His papers were nominated for the best-paper award at the IEEE Hardware Oriented Security and Trust Conference in 2018 and 2019, and he won the best paper award at the 2019 ACM Great Lakes Symposium on VLSI.

Dror Baron | ece.ncsu.edu/people/dzbaron

Dror Baron received the B.Sc. (summa cum laude) and M.Sc. degrees from the Technion - Israel Institute of Technology, Haifa, Israel, in 1997 and 1999, and the Ph.D. degree from the University of Illinois at Urbana-Champaign in 2003, all in electrical engineering.

From 1997 to 1999, he was a Modem Designer at Witcom Ltd. From 1999 to 2003, he was a Research Assistant at the University of Illinois at Urbana-Champaign, where he was also a Visiting Assistant Professor in 2003. From 2003 to 2006, he was a Postdoctoral Research Associate in the Department of Electrical and Computer Engineering at Rice University, Houston, TX. From 2007 to 2008, he was a Quantitative Financial Analyst with Menta Capital, San Francisco, CA. From 2008 to 2010, he was a Visiting Scientist in the Electrical Engineering Department at the Technion. Dr. Baron joined the Department of Electrical and Computer Engineering at North Carolina State University in 2010, where he is currently an associate professor. His research interests include information theory and statistical signal processing.

Andreas Cangellaris | ece.illinois.edu/directory/profile/cangella

Dr. Cangellaris is the Vice Chancellor for Academic Affairs and Provost and the M.E. Van Valkenburg Professor in Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. He earned his M.S. and Ph.D. degrees in Electrical Engineering at the University of California, Berkeley, in 1983 and 1985, respectively. From 2013 to 2017, Cangellaris was the Dean of the College of Engineering; prior to that, he served as Head of the Department of Electrical and Computer Engineering.

He is broadly recognized for his research in applied and computational electromagnetics and applications to the signal integrity of integrated electronic circuits and systems. His research has produced several design methods and computer tools that are used widely in the microelectronics industry. He has written or co-written more than 250 papers. He joined the faculty at Illinois in 1997. He was an Associate Provost Fellow on the Urbana campus from 2006 to 2008. He is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE), and the recipient of a Humboldt Foundation Research Award, the U.S. Army Research Laboratory Director’s Coin, and the IEEE Microwave Theory & Techniques Distinguished Educator Award.


Xu Chen | https://ece.illinois.edu/directory/profile/xuchen1

Dr. Chen received his B.S., M.S., and Ph.D. degrees, all in Electrical Engineering, from the University of Illinois Urbana-Champaign in 2005, 2014, and 2018, respectively. He is currently a Teaching Assistant Professor in the Department of Electrical and Computer Engineering at the University of Illinois Urbana-Champaign. His research interests are in stochastic modeling for electromagnetics and circuits, computer-aided design, uncertainty quantification, and machine learning.

From 2005 to 2012, he was an engineer with IBM in Poughkeepsie, NY. He has also worked in the Electronic Design Automation group at Apple. He received the IEEE EDAPS Best Symposium Paper Award in 2017, and is also a recipient of the Raj Mittra Outstanding Research Award, the Mavis Future Faculty Fellowship, the Harold L. Oleson Award, and the Ernest A. Reid Fellowship Award. He is a member of IEEE and SIAM.

Rhett Davis | ece.ncsu.edu/people/wdavis

W. Rhett Davis is a Professor of Electrical and Computer Engineering at North Carolina State University. He received B.S. degrees in electrical engineering and computer engineering from North Carolina State University, Raleigh, in 1994, and M.S. and Ph.D. degrees in electrical engineering from the University of California at Berkeley in 1997 and 2002, respectively. He received the National Science Foundation's Faculty Early Career Development (CAREER) award in 2007, and received the Distinguished Service Award from the Silicon Integration Initiative (Si2) in 2012 for his research in the development of standards for electronic design automation (EDA) and his development of the FreePDK open-source, predictive process design kit. He is working with Si2 to develop standards for system-level power modeling and compact modeling of device reliability. He has been an IEEE member since 1993 and a Senior Member since 2011. He has published over 50 scholarly journal and conference articles. He has worked at Hewlett-Packard (now Keysight) in Boeblingen, Germany, and consulted for Chameleon Systems, Qualcomm, BEECube, and Silicon Cloud International.

Dr. Davis's research is centered on electronic design automation for integrated systems in emerging technologies. He is best known for his efforts in design enablement, 3DIC design, thermal analysis, circuit simulation, and power modeling for systems-on-chip and chip multiprocessors.

Brian Floyd | people.engr.ncsu.edu/bafloyd

Brian Floyd received the B.S. with highest honors, M.Eng., and Ph.D. degrees in electrical and computer engineering from the University of Florida, Gainesville, in 1996, 1998, and 2001, respectively. From 2001 to 2009, he worked at the IBM T. J. Watson Research Center in Yorktown Heights, New York, first as a research staff member and then later as the manager of the millimeter-wave circuits and systems group. His work at IBM included the development of silicon-based millimeter-wave transceivers, phased arrays, and antenna-in-package solutions. In 2010, Dr. Floyd joined the Department of Electrical and Computer Engineering at North Carolina State University as an Associate Professor. His research interests include RF and millimeter-wave circuits and systems for communications, radar, and imaging applications.

Dr. Floyd has authored or co-authored over 90 technical papers and has 25 issued patents. He currently serves on both the steering and technical program committees for the IEEE RFIC Symposium. From 2006 to 2009, he served on the technical advisory board of the Semiconductor Research Corporation (SRC) integrated circuits and systems science area, and currently serves as a thrust leader for the SRC’s Texas Analog Center of Excellence. He received the 2016 NC State Outstanding Teacher Award, the 2015 NC State Chancellor’s Innovation Award, the 2014 IBM Faculty Award, the 2011 DARPA Young Faculty Award, the 2004 and 2006 IEEE Lewis Winner Awards for best paper at the International Solid-State Circuits Conference, and the 2006 and 2011 Pat Goldberg Memorial Awards for the best paper within IBM Research.


Paul Franzon | ece.ncsu.edu/erl/faculty/paulf

Paul D. Franzon is currently a Cirrus Logic Distinguished Professor of Electrical and Computer Engineering and Director of Graduate Programs at North Carolina State University. He earned his Ph.D. from the University of Adelaide, Australia.

He has also worked at AT&T Bell Laboratories, DSTO Australia, Australia Telecom, and three companies he cofounded: Communica, LightSpin Technologies, and Polymer Braille Inc. His current interests center on the technology and design of complex microsystems incorporating VLSI, MEMS, advanced packaging, and nano-electronics. He has led several major efforts and published over 300 papers in those areas. In 1993, he received an NSF Young Investigators Award; in 2001, he was selected to join the NCSU Academy of Outstanding Teachers; and in 2003, he was named an Alumni Undergraduate Distinguished Professor. He received the Alcoa Research Award in 2005, and the Board of Governors Teaching Award in 2014. He served with the Australian Army Reserve for 13 years as an Infantry Soldier and Officer. He is a Fellow of the IEEE.

Chuanyi Ji | jic.ece.gatech.edu

Chuanyi Ji's research is in large-scale networks, machine learning, and big data sets. She received a B.S. degree from Tsinghua University, Beijing, China, in 1983, an M.S. degree from the University of Pennsylvania, Philadelphia, in 1986, and a Ph.D. degree from the California Institute of Technology, Pasadena, in 1992, all in Electrical Engineering. She was an Assistant and then Associate Professor at Rensselaer Polytechnic Institute (RPI), Troy, NY, from 1991 to 2001. She was a visitor/consultant at Bell Labs Lucent, Murray Hill, NJ, in 1999, and a visiting faculty member at the Massachusetts Institute of Technology, Cambridge, in 2000. She joined the Georgia Institute of Technology, Atlanta, in 2001, where she is an Associate Professor. Dr. Ji's awards include a CAREER award from NSF and an Early CAREER award from RPI. She was a co-founder of a startup company in network monitoring and management.

Negar Kiyavash | https://www.ece.gatech.edu/faculty-staff-directory/negar-kiyavash

Negar Kiyavash is a joint Associate Professor in the H. Milton Stewart School of Industrial & Systems Engineering (ISyE) and the School of Electrical and Computer Engineering (ECE) at the Georgia Institute of Technology (Georgia Tech). Prior to joining Georgia Tech, she was a Willett Faculty Scholar and a joint Associate Professor of Industrial and Enterprise Engineering (IE) and Electrical and Computer Engineering (ECE) at the University of Illinois. Her research interests are in the design and analysis of algorithms for network inference and security. She is a recipient of the NSF CAREER and AFOSR YIP awards and the Illinois College of Engineering Dean's Award for Excellence in Research.

Sung Kyu Lim | ece.gatech.edu/faculty-staff-directory/sung-kyu-lim

Sung Kyu Lim received the B.S., M.S., and Ph.D. degrees from the University of California at Los Angeles in 1994, 1997, and 2000, respectively. He joined the School of Electrical and Computer Engineering, Georgia Institute of Technology, in 2001, where he is currently the Dan Fielder Endowed Chair Professor. His current research interests include modeling, architecture, and electronic design automation (EDA) for 3D ICs. His research on 3D IC reliability was featured as a Research Highlight in Communications of the ACM in 2014. His 3D IC test chip, published at the IEEE International Solid-State Circuits Conference (2012), is generally considered the first multi-core 3D processor ever developed in academia. Dr. Lim is a recipient of the National Science Foundation Faculty Early Career Development (CAREER) Award (2006). He received Best Paper Awards from the IEEE Asian Test Symposium (2012) and the IEEE International Interconnect Technology Conference (2014). He has been an Associate Editor of the IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD) since 2013. He received the Class of 1940 Course Survey Teaching Effectiveness Award from the Georgia Institute of Technology (2016).


Maxim Raginsky | csl.illinois.edu/directory/faculty/maxim

Maxim Raginsky received B.S. and M.S. degrees in 2000 and a Ph.D. degree in 2002 from Northwestern University, all in Electrical Engineering. He has held research positions with Northwestern, the University of Illinois at Urbana-Champaign (where he was a Beckman Foundation Fellow from 2004 to 2007), and Duke University. In 2012, he returned to UIUC, where he is currently an Associate Professor and William L. Everitt Fellow in the Department of Electrical and Computer Engineering, and a member of the Decision and Control Group in the Coordinated Science Laboratory. His research interests cover probability and stochastic processes, deterministic and stochastic control, machine learning, optimization, and information theory. Much of his recent research is motivated by fundamental questions in modeling, learning, and simulation of nonlinear dynamical systems, with applications to advanced electronics, autonomy, artificial intelligence, and quantum information science.

Arijit Raychowdhury | ece.gatech.edu/faculty-staff-directory/arijit-raychowdhury

Arijit Raychowdhury (M '07, SM '13) is an Associate Professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology, where he holds the ON Semiconductor Junior Research Professorship. He received his Ph.D. degree in Electrical and Computer Engineering from Purdue University and his B.E. in Electrical and Telecommunication Engineering from Jadavpur University, India. He joined Georgia Tech in January 2013. His industry experience includes five years as a Staff Scientist in the Circuits Research Lab, Intel Corporation, and a year as an Analog Circuit Designer with Texas Instruments Inc. His research interests include digital and mixed-signal circuit design, design of on-chip sensors, memory, and device-circuit interactions.

Dr. Raychowdhury holds more than 25 U.S. and international patents and has published over 100 articles in journals and refereed conferences. He is the winner of the NSF CRII Award (2015); the Intel Labs Technical Contribution Award (2011); the Dimitris N. Chorafas Award for outstanding doctoral research (2007); the Best Thesis Award, College of Engineering, Purdue University (2007); Best Paper Awards at the International Symposium on Low Power Electronic Design (ISLPED) in 2006 and 2012 and at the IEEE Nanotechnology Conference in 2003; the SRC Technical Excellence Award (2005); the Intel Foundation Fellowship (2006); the NASA INAC Fellowship (2004); and the Meissner Fellowship (2002). Dr. Raychowdhury is a Senior Member of the IEEE.

Elyse Rosenbaum | elyse.ece.illinois.edu

Elyse Rosenbaum is the Melvin and Anne Louise Hassebrock Professor in Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. She received the B.S. degree (with distinction) from Cornell University in 1984, the M.S. degree from Stanford University in 1985, and the Ph.D. degree from the University of California, Berkeley in 1992, all in electrical engineering. From 1984 through 1987, she was a Member of Technical Staff at AT&T Bell Laboratories in Holmdel, NJ.

Dr. Rosenbaum’s present research interests include component and system-level ESD reliability, ESD-robust high-speed I/O circuit design, compact modeling, mitigation strategies for ESD-induced soft failures, and machine-learning aided behavioral modeling of microelectronic components and systems. She has authored nearly 200 technical papers. From 2001 through 2011, she was an editor for IEEE Transactions on Device and Materials Reliability. She is currently an editor for IEEE Transactions on Electron Devices. Dr. Rosenbaum was the General Chair for the 2018 International Reliability Physics Symposium.

Dr. Rosenbaum was the recipient of a Best Student Paper Award from the IEDM, as well as Outstanding and Best Paper Awards from the EOS/ESD Symposium. She has received an NSF CAREER award, an IBM Faculty Award, and the ESD Association’s Industry Pioneer Recognition Award. Dr. Rosenbaum is a Fellow of the IEEE.


José Schutt-Ainé | ece.illinois.edu/directory/profile/jesa

José E. Schutt-Ainé received a B.S. degree in electrical engineering from the Massachusetts Institute of Technology, Cambridge, in 1981, and M.S. and Ph.D. degrees from the University of Illinois at Urbana-Champaign (UIUC) in 1984 and 1988, respectively. He was an Application Engineer at the Hewlett-Packard Technology Center, Santa Rosa, CA, where he was involved in research on microwave transistors and high-frequency circuits. In 1983, he joined UIUC in the Electrical and Computer Engineering Department, where he became a member of the Electromagnetics and Coordinated Science Laboratories. He is a consultant for several corporations. His current research interests include the study of signal integrity and the generation of computer-aided design tools for high-speed digital systems. Dr. Schutt-Ainé is the recipient of several research awards, including the 1991 National Science Foundation (NSF) MRI Award, the National Aeronautics and Space Administration Faculty Award for Research (1992), the NSF MCAA Award (1996), and a UIUC National Center for Supercomputing Applications Faculty Fellow Award (2000). He is an IEEE Fellow and is currently serving as Co-Editor-in-Chief of the IEEE Transactions on Components, Packaging and Manufacturing Technology (T-CPMT).

Madhavan Swaminathan | c3ps.gatech.edu; epsilonlab.ece.gatech.edu

Madhavan Swaminathan is the John Pippin Chair in Microsystems Packaging & Electromagnetics in the School of Electrical and Computer Engineering (ECE) and Director of the 3D Systems Packaging Research Center (PRC) at Georgia Tech (GT). He also serves as the Site Director for the NSF Center for Advanced Electronics through Machine Learning (CAEML) and Theme Leader for Heterogeneous Integration in the SRC JUMP ASCENT Center. He formerly held the positions of Founding Director of the Center for Co-Design of Chip, Package, System (C3PS), Joseph M. Pettit Professor in Electronics in ECE, and Deputy Director of the Packaging Research Center (NSF ERC) at GT. Prior to joining GT, he was with IBM, working on packaging for supercomputers. He is the author of 500+ refereed technical publications, holds 30 patents, is the primary author and co-editor of 3 books, is the founder or co-founder of two start-up companies, and is the founder of the IEEE Conference on Electrical Design of Advanced Packaging and Systems (EDAPS), a premier conference sponsored by the EPS society. His research has been recognized with 22 best paper and best student paper awards. His most recent awards include the D. Scott Wills ECE Distinguished Mentor Award (2018), the Georgia Tech Outstanding Achievement in Research Program Development Award (2017), the Distinguished Alumnus Award from the National Institute of Technology Tiruchirappalli (NITT), India (2014), and the Outstanding Sustained Technical Contribution Award from the IEEE Components, Packaging, and Manufacturing Technology Society (2014). He is an IEEE Fellow and has served as a Distinguished Lecturer for the IEEE EMC society. He received his M.S. and Ph.D. degrees in Electrical Engineering from Syracuse University in 1989 and 1991, respectively.


CURRENT PROJECT SUMMARIES

Project Summary Title: Modular Machine Learning for Behavioral Modeling of Microelectronic Circuits and Systems

Date: 9/1/17

Center: Center for Advanced Electronics through Machine Learning (CAEML)
Tracking No.: 1A1
Project Leader(s): Maxim Raginsky (UIUC) and Andreas Cangellaris (UIUC)
Phone(s): (217) 244-1782
E-mail(s): [email protected], [email protected]

Proposed Budget: $96k | Type: Continuing

Other Faculty Collaborator(s): Chuanyi Ji (Georgia Tech)

Project Description: The project focuses on theoretical foundations and modular algorithmic solutions for ML-driven design, simulation, and verification of high-complexity, multifunctional electronic systems. Behavioral system modeling provides a systematic approach to reconciling the variety of physics-based and simulation-based models, expert knowledge, and other possible means of component description commonly introduced in electronic systems modeling. In complex electronic systems, each component model comes with its own sources of errors, uncertainty, and variability, and the same applies to the way components and subsystems are connected and interact with each other in the integrated system. The modularity offered by the behavioral approach will be leveraged to develop mathematical tools for assessing the performance and minimal data requirements for learning a low-complexity representation of the system behavior, one component or subsystem at a time, from measured and simulated data, even in highly complex and uncertain settings. We will develop and implement the full ML algorithmic pipeline and quantify its end-to-end performance in applications pertinent to multifunctional electronic system design, simulation, and verification.

Progress to Date (if applicable): (1) Analyzed local and global stability of gradient descent with backpropagation, the standard method for training complex nonlinear models such as neural networks. (2) Developed a methodology for learning stable recurrent neural network models that compose well with circuit simulators. (3) Developed a methodology for integrating flexible probabilistic generative models into the passive macromodeling pipeline.

Experimental plan (current year only): In the context of behavioral system modeling, both learning algorithms and their outputs are probabilistic programs that consist of deterministic transformations (nominal device models), random variable generators (to capture noise and component/process variability), and probabilistic conditioning (to capture constraints or relations among internal and external variables). Furthermore, the significant structural complexity of realistic electronic systems leads to chaotic behavior of the electromagnetic fields responsible for EMI events. As such, in addition to often being computationally prohibitive, a deterministic approach to computer-aided investigation of performance tradeoffs may not be sufficient to inform design decisions. The use of the probabilistic program formalism will allow us to develop robust and mathematically sound techniques for capturing all sources of noise and variability in behavioral models and for quantifying the concentration of typical system behavior around the mean or median nominal model. In addition, it will pave the way for more efficient and meaningful predictive EMI analysis at the system level.

Related work elsewhere and how this project differs: To date, behavioral modeling of electronic systems in the presence of uncertainty/variability is dominated by approaches that propagate the stochastic attributes of the input parameters to the output. In contrast, our method will produce a low-complexity representation of the system behavior from measured and simulated data that lends itself to expedient, yet accurate, simulation.

Proposed milestones for the current year: (1) Theoretical and algorithmic framework for modular ML. (2) Identify a test structure for system-level EMI modeling.

Proposed deliverables for the current year: (1) Design and characterization of each element of the ML pipeline as a probabilistic program, including tools for uncertainty quantification in behavioral models. (2) Report on the application of probabilistic modeling to expedite EMI modeling and simulation of realistic electronic systems.

Potential Member Company Benefits: System designers are confronting increasing demands on end-to-end system functionality integration and resilience, while facing competitive time-to-market and low-cost constraints. The increased complexity of these systems hinders high-fidelity predictive modeling and performance simulation, which, in turn, may lead to overly conservative designs that unnecessarily sacrifice performance and even increase cost. This project will tackle these pressing industry challenges.

Estimated Start Date: 1/1/18 | Estimated Project Completion Date: 12/31/18
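The probabilistic-program view described in the experimental plan can be sketched compactly: a deterministic nominal model is composed with random variable generators, and Monte Carlo sampling quantifies how tightly typical behavior concentrates around the nominal response. The sketch below is illustrative only; the buffer model, distributions, and parameter values are hypothetical stand-ins, not project deliverables.

```python
import numpy as np

rng = np.random.default_rng(0)

def nominal_buffer(v_in, gain=2.0):
    """Deterministic transformation: hypothetical nominal model of a buffer."""
    return np.tanh(gain * v_in)

def sample_buffer(v_in):
    """Probabilistic program: the nominal model composed with random variable
    generators that capture component/process variability and noise."""
    gain = rng.normal(2.0, 0.1)                       # process variability
    offset = rng.normal(0.0, 0.01)                    # input-referred offset
    noise = rng.normal(0.0, 0.005, size=v_in.shape)   # output noise
    return nominal_buffer(v_in + offset, gain) + noise

# Quantify concentration of typical behavior around the nominal response.
v_in = np.linspace(-1, 1, 101)
samples = np.stack([sample_buffer(v_in) for _ in range(1000)])
deviation = np.abs(samples - nominal_buffer(v_in)).max(axis=1)
print(f"median worst-case deviation: {np.median(deviation):.4f}")
print(f"95th-percentile deviation:   {np.quantile(deviation, 0.95):.4f}")
```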


Project Summary Title: Machine Learning for Trusted Platform Design
Date: 9/1/17

Center: Center for Advanced Electronics through Machine Learning (CAEML)
Tracking No.: 2A2
Project Leader(s): Arijit Raychowdhury (GT) & Madhavan Swaminathan (GT)
Phone(s): (404) 227-0087
E-mail(s): {Arijit.raychowdhury, madhavan}@ece.gatech.edu
Proposed Budget: $54,345 (year 1)
Type: New
Other Faculty Collaborator(s): Chuanyi Ji (GT)

Project Description: Several IoT applications are emerging with RF transceivers and wireless power transfer units integrated in a single module. These are smart modules with embedded signal processing. An example of an architectural embodiment is shown in Figure 1, where a single RF carrier is used for bidirectional RF communication and power delivery. Depending on the application, the module may or may not contain an encryption engine (128b AES) for security. Since many of these systems will be autonomous, trusted platforms are required to keep such objects secure over the product's lifetime. Such IoT devices are prone to three types of attacks: (1) side-channel attack (SCA) through the RF carrier, (2) power channel attack through the power delivery network, and (3) EM channel attack through near- or far-field coupling. Our objective in this project is to use machine learning (ML) to (i) assess whether the system is under attack (e.g., by identifying constructive and/or destructive links), (ii) develop countermeasures (e.g., shut down the system or modify the security key), and (iii) perform (i) and (ii) in very short time periods (e.g., within milliseconds after the attack occurs).

Progress to Date (if applicable): We currently have a prototype of the wireless power transfer (WPT) module functioning at ~1 GHz, along with models. We also have experience in developing models of RF transceivers and AES (Advanced Encryption Standard) engines. We will use the ChipWhisperer board for power SCA evaluation. We plan to use this prior work to generate data for machine learning. In addition, evaluation boards can be used for testing.

Experimental plan (current year only): We recognize that model development is hard and that model-based prediction is computationally difficult due to the high dimensionality of the system. We therefore need to develop models that have high sensitivity to model parameters and short detection latencies. Our approach is to use deep learning techniques that can predict attacks in milliseconds. We will develop ANN-based observers that can be used to monitor internal system states, which allows us to use predictive models and state estimation theory to identify attacks and develop countermeasures. As an example, a state estimator (observer) can be trained to detect variations using current and voltage sensors embedded at different points in the power delivery network loop. Changes in the loop's states can be detected, which the observer can use to detect attacks.

Related work elsewhere and how this project differs: Prior work on security has focused primarily on developing AES engines that are resilient to cyber-attacks. The concept of using an observer is new.

Proposed milestones for the current year: Develop ML methods based on both diagnostic and active learning techniques using ANN and Bayesian inference methods. Apply deep learning techniques for detecting minute changes in the system response in milliseconds.

Proposed deliverables for the current year: Models of the system, including the RF communication, WPT, and security blocks, with near-field coupling through RF coils at ~1 GHz modeled using HFSS, ADS, and MATLAB; a model of the observer; algorithms developed in MATLAB for deep learning based on ANN and Bayesian inference methods; model-based demonstration of the identification of cyber-attacks through RF, power, and EM channels in milliseconds.

Projected deliverables for Year 2 (if applicable): Model-based demonstration of countermeasures; prototype development with the observer; demonstration of the trusted platform using a prototype or evaluation board; software for deep learning in MATLAB.

Potential Member Company Benefits: Designers must anticipate every form of attack to prevent access to embedded systems and data. Along with already existing work on AES and TPM (Trusted Platform Module), we believe that this approach will provide an added level of security for trusted platform design.

Estimated Start Date: 1/1/18 | Estimated Project Completion Date: 12/31/19

Figure 1: Trusted Platform for IoT
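As a rough illustration of the observer idea described above, the sketch below trains a one-step predictor of power-delivery sensor states on normal-operation traces and flags an attack when the prediction residual exceeds a learned threshold. It is a minimal stand-in, not the project's method: the sensor data are synthetic, a fixed random-feature layer with a ridge-regression readout substitutes for a trained deep ANN, and the threshold rule is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical normal-operation data: current/voltage sensor readings sampled
# around the power-delivery network loop.
T, n_sensors, window = 10_000, 4, 8
normal = 1.0 + 0.05 * rng.standard_normal((T, n_sensors))

def make_windows(x, w):
    """Sliding windows of past readings paired with the next reading."""
    X = np.stack([x[i:i + w].ravel() for i in range(len(x) - w)])
    return X, x[w:]

X, y = make_windows(normal, window)

# One-hidden-layer observer: random tanh features + ridge-regression readout,
# a lightweight stand-in for the ANN state estimator described above.
W = rng.standard_normal((X.shape[1], 64)) / np.sqrt(X.shape[1])
H = np.tanh(X @ W)
beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(64), H.T @ y)

def residual(past, nxt):
    """Distance between the observer's predicted state and the measured one."""
    return np.linalg.norm(nxt - np.tanh(past.ravel() @ W) @ beta)

# Detection threshold derived from residuals seen during normal operation.
res = np.array([residual(normal[i:i + window], normal[i + window])
                for i in range(500)])
threshold = res.mean() + 6 * res.std()

# A simulated power-channel disturbance shifts the loop state; the observer
# flags it because the residual exceeds the learned threshold.
print(residual(normal[:window], normal[window] + 0.5) > threshold)  # True
```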


Project Synopsis Title: Machine Learning to Predict Successful FPGA Compilation Strategy

Date: 9/1/2017

Center: Center for Advanced Electronics through Machine Learning (CAEML)
Tracking No.: 2A4
Project Leader(s): Sung Kyu Lim (Georgia Tech)
Phone(s): (404) 894-0373
E-mail(s): [email protected]
Proposed Budget: $50,000
Type: New
Other Faculty Collaborator(s): none

Project Description: The goal of our project is to build machine learning (ML) models that produce FPGA compilation recipes with a high success rate and fast compilation time. We assume the following are given as inputs: (1) the RTL and its timing constraints, (2) the target FPGA device, (3) the FPGA compilation toolset (synthesizer, mapper, placer, router), and (4) a compilation recipe (optimization goal, effort level, logic restructuring, random number seed, etc.). Our ML model predicts whether a given recipe leads to compilation success (i.e., the RTL fits the FPGA and its timing goals are met). The model also predicts compilation time. For a given set of candidate recipes, we use this model to select the successful ones and order them by predicted compilation time. Because ML-based prediction is quick, we can afford to examine many candidate recipes and choose the best one.

The input to our ML model includes circuit structure-related parameters such as the number of LUTs, FFs, global signals, and IOs, the net-size distribution, etc. Using these inputs, our ML model will predict compilation success rate and runtime. Our special focus will be on the impact of local congestion on compilation failure. We seek toolset parameters and recipes that effectively avoid local congestion and thus improve success rate and compilation time. We believe that congestion can be considered during all major steps of FPGA compilation. During synthesis, we can choose options that minimize the number of nets and pins. During mapping, the connections among different logic elements can be minimized. Placement can be guided to minimize local congestion, while routing is the step that actually decides which routing switches to use and thus has a more direct impact on routability. However, congestion avoidance may come at the cost of runtime, timing, and power degradation. Our goal is to seek recipes (and potentially improvements to the compilation engines themselves) that strike a balance between congestion, compilation time, and other key metrics.

We will also build models for silicon interposer-based multi-FPGA systems, where FPGA partitioning becomes a key step in deciding the number of IOs required, the demand for interposer interconnects, and ultimately the overall compilation success. FPGA partitioning is performed under a strict pin constraint, which is non-trivial to satisfy. In addition, depending on how partitioning is done, on-interposer routing demand may exceed supply. Our goal is to seek related ML-model parameters and recipes that help alleviate the burdens imposed on partitioning.

Progress to Date (if applicable): none

Experimental plan (current year only): We will write code to access the FPGA mapping database, extract key parameters that affect compilation success and runtime, build and train ML models, and process and optimize recipes.

Related work elsewhere and how this project differs: Researchers from Tianjin University used ML to optimize FPGA architecture parameters, while Argonne National Lab developed an ML-based tool for FPGA timing and power closure. ML tools from Plunify produce FPGA compilation recipes for timing closure, and the University of Guelph specifically targeted FPGA placement as the key enabler for ML prediction. However, none of these works addresses the impact of congestion or targets multi-FPGA systems.

Proposed milestones for the current year: Successful recipe construction and accurate runtime prediction for single- and multiple-FPGA systems.

Proposed deliverables for the current year: Our ML models, scripts, code, and an additional compilation database built in our lab.

Projected deliverables for Year 2 (if applicable): Our Year 2 plan will target ML models and toolsets that help fix/enhance the original RTL code to improve compilation success and runtime.

Potential Member Company Benefits: Our tool will help Synopsys strengthen their FPGA compilation toolset. In addition, FPGA designers in other member companies will save time searching for good recipes.

Estimated Start Date: 1/1/18 | Estimated Project Completion Date: 12/31/19
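As a rough sketch of how such a model could be used, the following trains a success classifier and a runtime regressor on a hypothetical (design, recipe) database and then ranks candidate recipes, mirroring the select-then-order flow described above. All data, features, and model choices are synthetic placeholders rather than the project's actual tooling.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(2)

# Hypothetical training database: one row per (design, recipe) compilation run.
# First four columns: circuit features (e.g., LUT count, FF count, IO count,
# a local-congestion proxy); last two: recipe knobs (e.g., effort, seed bucket).
X = rng.random((500, 6))
success = (0.7 * X[:, 3] + 0.3 * X[:, 5] < 0.6).astype(int)   # synthetic label
runtime = 10 + 50 * X[:, 0] + 20 * X[:, 4] + rng.normal(0, 2, 500)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, success)
ok = success == 1
reg = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[ok], runtime[ok])

def rank_recipes(circuit, recipes):
    """Keep recipes predicted to compile successfully, ordered by predicted time."""
    cand = np.array([np.concatenate([circuit, r]) for r in recipes])
    keep = np.flatnonzero(clf.predict_proba(cand)[:, 1] > 0.5)
    times = reg.predict(cand[keep])
    order = np.argsort(times)
    return [(int(keep[i]), float(times[i])) for i in order]

circuit = np.array([0.3, 0.5, 0.2, 0.4])     # features of one design
recipes = [rng.random(2) for _ in range(20)]  # candidate recipe knob settings
print(rank_recipes(circuit, recipes))         # [(recipe index, predicted time), ...]
```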


Project Summary Title: Applying Machine Learning to Back End IC Design
Date: 9/1/17
Center: Center for Advanced Electronics through Machine Learning (CAEML)
Tracking No.: 2A7
Project Leader(s): Davis, Franzon (NCSU)
Phone(s): (919) 515-7351
E-mail(s): {wdavis, paulf}@ncsu.edu

Proposed Budget: $53,925 | Type: Follow-up

Other Faculty Collaborator(s): Baron (NCSU)

Project Description: In 2017, we proposed a one-year project to investigate automating back-end flows for ASICs. After the proposal was recommended by the IAB, we were asked to break it into three projects: (1) back-end flows, (2) FPGA flows, and (3) CNN-based DRC investigation. This project has two major objectives: (1) determine how to set up a synthesis and physical design flow to meet specific goals, and (2) determine the trade-offs for a design between this setup and the design goals.

The first goal towards achieving these objectives will be to determine how to set up the tools for a specific design with specific goals. Both the Cadence and Synopsys back-end tools have many options that have a strong impact on the achievable speed, resource allocation, and compile time. Surrogate modeling will be used to capture these relationships in a global fashion. The outcome of this step will satisfy both objectives for a specific design. The second goal will be to determine how to achieve this mapping for a variety of designs. Classification techniques will be used to classify designs so that these objectives can be met for a specific design without having to run that design through the tool flow. Instead, the design will be run through the classifier, and the resulting classifications will be used to determine the setup and tradeoffs for that design. Possible classification vectors include net/gate ratio by region, net-span distribution, timing criticality coming out of synthesis, slack distribution, etc. A key objective will be to work out the suitability of various classification vectors. Past work focused on the complete flow, including placement and routing. This work will add details on power and clock insertion.

Progress to Date (if applicable): Though this project is not a renewal, a trial investigation found that we could produce surrogate models with sufficient accuracy for Cadence flows. The production of these models has been automated.

Experimental plan (current year only): We will start by producing surrogate models for a range of designs, both those sourced at NCSU and those obtained from CAEML member companies. These models will be used to show that setup automation and tradeoff automation can be readily achieved for a specific design. Then we will start evaluating possible classification vectors that can be used to characterize designs. Vectors will be evaluated for their correlation to design objectives and tool setup alternatives. We will also investigate detailed flows for automated power rail and clock insertion.
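A minimal sketch of the surrogate-modeling step, under the assumption of a handful of characterized flow runs: fit a cheap regression model from tool knobs to a quality-of-results metric, then query it densely to suggest a setup. The knobs, synthetic response, and Gaussian-process choice here are illustrative assumptions; the project's actual models and tool parameters may differ.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(3)

# Hypothetical characterization runs: normalized tool knobs -> achieved slack.
# Knobs: synthesis effort, target utilization, clock uncertainty margin.
knobs = rng.random((40, 3))
slack = (0.5 * knobs[:, 0] - 0.8 * knobs[:, 1] ** 2 - 0.3 * knobs[:, 2]
         + rng.normal(0, 0.02, 40))          # synthetic flow response

# Surrogate model of the flow: cheap to query once fit to a few real runs.
surrogate = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
surrogate.fit(knobs, slack)

# Explore the setup space without invoking the actual tools: sample densely
# and pick the knob setting with the best predicted slack.
candidates = rng.random((10_000, 3))
pred = surrogate.predict(candidates)
best = candidates[np.argmax(pred)]
print("suggested setup (effort, utilization, uncertainty):", np.round(best, 2))
```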

Related work elsewhere and how this project differs: We are not aware of any similar work.

Proposed milestones for the current year: (1) Establish the viability of using surrogate modeling to guide tool setup for a specific design and specific design goals, including understanding the impact on tradeoffs; (2) establish the viability of using surrogate models for power rail and clock insertion; and (3) develop an understanding of how to classify designs so that this goal can be achieved for a class of designs, not just individually characterized ones.

Proposed deliverables for the current year: Reports on above.

Projected deliverables for Year 2 (if applicable): Ability to setup the tools to achieve specific objectives for a design, without the need to characterize that design first.

Potential Member Company Benefits: Improved design convergence for ASIC back end flows.

Estimated Start Date: 1/1/18 Estimated Project Completion Date: 12/31/19


I/UCRC Executive Summary - Project Synopsis
Date: 8/19/18
Center: Center for Advanced Electronics through Machine Learning (CAEML)
Title: NL2PPA: Netlist-to-PPA Prediction Using Machine Learning

Tracking No.: 3A1
Project Leader: Sung Kyu Lim, Georgia Tech
Co-investigator(s): none

Phone(s): 404-894-0373 E-mail(s): [email protected]

Type: New
Thrust(s): T2

Industry Need and Project's Potential Benefit to Member Companies: This proposal addresses the SLA2PPA (system-level architecture to power, performance, area) topic requested by the IAB. Design time and cost savings are the most obvious benefit to all member companies.

Project Description: This project aims to build machine learning models, and develop the associated tools, to predict the PPA results of a given architecture without having to undergo a lengthy physical design process. We assume that the given RTL is already synthesized, so we focus on predicting the PPA impact of the physical design process. Using the predicted PPA results, designers can in turn fix and/or improve the RTL.

The input to our ML model includes the target technology specs (technology node, Vdd, target frequency, etc.), netlist information (number of IPs/gates/nets, connectivity, etc.), physical design options (footprint, placement density, P&R algorithms, clock/power network options, etc.), and other key features that will help improve prediction accuracy. We will implement and compare popular ML approaches to achieve our goal of single-digit percentage error. Until the member companies provide us with a design database, we will build our own by conducting physical design with various meaningful design settings.
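As an illustration of the prediction task (not the project's actual model or database), the sketch below fits a gradient-boosted regressor from early design features to one PPA target and reports cross-validated percentage error, the metric implied by the single-digit accuracy goal. All data are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n = 300

# Hypothetical design database: one row per completed physical-design run.
# Columns: tech node (nm), Vdd (V), target freq (GHz), gate count, net count,
# placement density.
X = np.column_stack([
    rng.choice([7.0, 14.0, 28.0], n),
    rng.uniform(0.6, 1.0, n),
    rng.uniform(0.5, 3.0, n),
    rng.integers(10_000, 1_000_000, n).astype(float),
    rng.integers(10_000, 1_000_000, n).astype(float),
    rng.uniform(0.5, 0.9, n),
])
# Synthetic stand-in for one sign-off PPA target (total power).
power = 1e-6 * X[:, 3] * X[:, 1] ** 2 * X[:, 2] * (1 + rng.normal(0, 0.05, n))

model = GradientBoostingRegressor(n_estimators=300, max_depth=3)
err = -cross_val_score(model, X, power, cv=5,
                       scoring="neg_mean_absolute_percentage_error")
print(f"cross-validated MAPE: {100 * err.mean():.1f}%")  # single-digit % target
```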

Progress to Date (if applicable): We already own open-source benchmark RTLs, their 2D and 3D IC layouts, and sign-off PPA simulations. We also have access to various commercial EDA tools.

Work Plan (year 2019 only): We will focus on the feasibility of our approach, i.e., whether accurate prediction is even feasible for the SLA2PPA problem.

Related work elsewhere and how this project differs: Initial efforts have been made elsewhere, such as predicting routing congestion from placement. Our model does not require a partial physical design as input; instead, it accepts early design options, including netlist information and physical design options.

Proposed deliverables for the current year: Design database, ML model, and associated tools.

Projected deliverables for Year 2 (if applicable): Once our feasibility study proves promising, we will concentrate on improving the models with better feature extraction, better ML models, better database, etc.

Budget Request and Justification: $60K (one graduate student and travel support)

Start Date: 1/1/19 | Proposed project completion date: 12/31/2020



I/UCRC Executive Summary - Project Synopsis
Date: 8/17/2018
Center: Center for Advanced Electronics through Machine Learning (CAEML)
Title: Fast, Accurate PPA Model-Extraction
Tracking No.: 3A2
Project Leader: Prof. W. Rhett Davis (NCSU)

Co-investigator(s): Profs. Paul Franzon and Dror Baron (NCSU)
Phone(s): (919) 515-5857
E-mail(s): {wdavis, paulf, dzbaron}@ncsu.edu
Type: New
Thrust(s): T2

Industry Need and Project's Potential Benefit to Member Companies: Fast and accurate estimation of the impact of system-level architecture on power, performance, and area (SLA2PPA). Specifically, this project focuses on eliminating the complicated gate-level simulations needed to make accurate predictions of power, which typically occur very late in the design process. Extraction of system-level power models is extremely difficult, because the data points are few and noisy, while the number of possible model parameters is huge. This project will develop a comprehensive data-mining methodology to maximize the accuracy of PPA predictions while minimizing the data-collection effort.

Project Description: This work will demonstrate a methodology to extract and validate a PPA model from a parameterized system-level design block. We assume that each block has parameterized RTL source code and a test bench with embedded architectural event counters. The physical-design flow prediction framework developed at NCSU (project 2A7) will manage the generation of physical-design data from RTL and the extraction of performance, area, and static-power prediction models. This work will focus on dynamic power prediction, which is the missing piece. Switching activity will be gathered from gate-level simulations with varying types of parameters, including architectural (e.g., execution lanes), system-environment (e.g., latency in cache fills), and event-count (e.g., branch mis-predicts) parameters, as well as time granularity. A machine-learning flow wrapped around the simulation environment will train a model that predicts dynamic power in terms of the varied parameters. We will explore advanced model formulations, including multivariate probability density functions expressed with a sparse polynomial chaos (PC) expansion, which would allow fast determination of the combinations of coefficients that yield the best accuracy. Adaptive sampling techniques will be explored (including the LOLA-Voronoi algorithm) to minimize the number of simulations. If time permits, we will demonstrate this approach in conjunction with a fast, parallel gate-level power calculator, developed in partnership with Si2 and IEEE SA P2416, that models temperature variation and a full range of PVT parameters.

Progress to Date (if applicable): We have executed detailed gate-level simulations to gather power data for a parameterized super-scalar processor developed at NCSU called AnyCore RISC-V. We expect to use this framework to demonstrate the PPA model-extraction environment.

Work Plan (year 2019 only): (First 6 months) We will show our first data set, demonstrating the full range of parameters to be fit. (End of first year) We will demonstrate an initial model-training environment with random sampling.

Related work elsewhere and how this project differs: Most PPA research is currently focused on extraction of higher-level power models from predictive models such as McPAT. This effort differs in that it seeks to create high-level models from data gathered either from gate-level simulation or from measurement. It makes use of numerical techniques that have been successfully applied to electronic design, but not yet to PPA prediction.

Proposed deliverables for the current year: Data set, training flow, a tutorial on how to train the model, and an analysis of model quality and project outcomes.

Projected deliverables for Year 2 (if applicable): An enhanced model-training environment will be demonstrated that includes process, voltage, and temperature (PVT) variation and advanced techniques to handle adaptive sampling of larger numbers of parameters (>100).

Budget Request and Justification: $54K/year, including support for 1 student ($30K salary & benefits, $16K tuition), $4K travel, and $4K indirect costs.

Start Date: 1/1/19 | Proposed project completion date: 12/31/20

 


I/UCRC Executive Summary - Project Synopsis
Date: 8/17/2018
Center: Center for Advanced Electronics through Machine Learning (CAEML)
Title: RNN Models for Computationally-Efficient Simulation of Circuit Aging Including Stochastic Effects

Tracking No.: 3A3
Project Leader: Elyse Rosenbaum (Univ. of Illinois at Urbana-Champaign)
Co-investigator(s): Max Raginsky (Univ. of Illinois at Urbana-Champaign)

Phone(s): (217) 333-6754
E-mail(s): [email protected], [email protected]
Type: New
Thrust(s): T3 & T5

Industry Need and Project's Potential Benefit to Member Companies: Design guard-banding is employed widely to prevent transistor aging due to HCI and BTI from unduly limiting the long-term yield of an integrated circuit. Performance and cost penalties are incurred from guard-banding. The proposed project will facilitate the use of smaller guard bands by allowing aging to be addressed as part of design-technology co-optimization (DTCO). To achieve this, a method for accurate and efficient simulation of circuit aging will be developed. For DTCO, the simulations must cover the range of use conditions, i.e., the "mission profile," which includes the input vector. Furthermore, both the deterministic and the stochastic aspects of aging must be simulated. Physical considerations suggest that transistor degradation will show greater variance as transistor dimensions are scaled down [1], and this claim is well validated by experimental data [2]. Today, aging simulations are too slow to be practical, and the accuracy of yield projections is questionable because tool flows support neither sampling from non-Gaussian distributions nor the correlation between time-zero variance and the variance arising from aging.

Project Description: Circuit reliability simulators (e.g., Cadence RelXpert) perform transient simulation to obtain the voltage waveform at every terminal of the component transistors. Using either built-in or user-supplied models of HCI- and BTI-induced degradation, the effective age of each transistor is calculated based on the waveforms and the specified operating time. Finally, a new set of model parameters is generated for each transistor, and the aged circuit is simulated. This procedure is repeated for many different inputs (e.g., input vectors) as well as across process corners, resulting in a long turn-around time. Furthermore, if the circuit being simulated is large, a single SPICE (transistor-level) simulation can be slow. In this work, recurrent neural network (RNN) models will be used to reduce the overall simulation time, with the ultimate goal being a 100x reduction. The circuits to be modeled range from small library cells to large IP blocks.

In the previous project 1A6, RNNs were used to construct circuit models for transient simulation. Because the RNN is a behavioral model, it will be straightforward to include operating time as a model input. The RNN output will be a snapshot of the circuit response at a given time. The objective is to learn a single RNN model that can be used to predict the output of the circuit in response to an arbitrary input, both when the circuit is fresh and after it has been operated for a user-specified amount of time. As part of 1A6, the researchers developed a Verilog-A implementation of RNN models so that those models can be used with commercial circuit simulators (e.g., Spectre, HSPICE, and ADS).

A circuit's input-output relationship has a stochastic component due to manufacturing variations ("process variations") as well as to the non-deterministic trap creation and charge trapping processes that underlie transistor aging. This project will develop a method to encode the stochastic aspects of the input-output relationship in the RNN model. One potential approach is to treat the elements of the weight matrices as random variables rather than fixed deterministic constants. It will need to be established which structured classes of distributions can yield a good match to the measurement data. Project 1A1 uses a variational autoencoder (VAE) for generative modeling, in which the output of a neural network provides the mean and variance of normally distributed latent variables; however, a VAE can work with any other parametric class of distributions. In particular, since it is well known that transistor parameter shifts (e.g., ΔVTH) due to BTI do not follow a Gaussian distribution [1-3], we will need to consider alternative probabilistic models for the RNN parameters that will provide a good description of the population of aged circuits.

The input and output of the RNN may not be what the circuit designer considers to be the input and output signals. To illustrate, consider the simple case of a circuit that implements the Boolean function Z = NOT(A OR B), i.e., a NOR gate. A completely general RNN model would take VA, VB, VDD and VZ as its inputs and IA, IB, IDD and IZ as its outputs. The required amount of training data and the model complexity would be reduced if, instead, the only model inputs were VA and VB and its output were VZ. However, that reduced-size model will be suitable only if the following conditions are met: (i) fixed load at node Z; (ii) the slew rate at nodes A and B is independent of the circuit's input impedance; (iii) negligible supply voltage droop or simultaneous switching noise. This project will develop procedures for training data selection; the objective will be to limit the amount of data needed even in the case of a "general" model. The training data are obtained from time-consuming transistor-level SPICE simulations of fresh and aged circuits. Fortunately, a full factorial design of experiments is neither necessary nor appropriate. For example, in the case of the NOR gate, the voltage waveform at node Z (which is an input to the RNN) is not independent of the voltage waveforms at nodes A and B.

The most commonly used formulation of the RNN is in discrete time. In project 1A6, we convert the trained RNN from discrete time to continuous time, for compatibility with commercial (SPICE-type) circuit simulators, which use a variable time step. As a result of the transformation from discrete to continuous time, the final set of equations includes a term proportional to 1/Δt [4], where Δt is the time step of the training data, and the numerical stability of the model is affected by the choice of Δt. Here, it is proposed to investigate the use of continuous-time RNNs. This will eliminate the need both to identify a suitable value of Δt and to provide training data with a constant time step.
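As a sketch of what a continuous-time RNN circuit model looks like when handed to a variable-step integrator (the weights here are random placeholders; in the project they would be learned from SPICE data):

```python
# Minimal continuous-time RNN sketch (illustrative only). W, U, b and the
# time constant tau are placeholders for quantities learned from data.
import numpy as np
from scipy.integrate import solve_ivp

n_state, n_in = 8, 2
rng = np.random.default_rng(1)
W = rng.standard_normal((n_state, n_state)) / np.sqrt(n_state)
U = rng.standard_normal((n_state, n_in))
b = rng.standard_normal(n_state)
tau = 1e-10  # state time constant (seconds); learned in practice

def u(t):  # input waveform, e.g. terminal voltages vs. time
    return np.array([np.sin(2e9 * np.pi * t), 1.0])

def ctrnn_rhs(t, x):
    # No fixed training time step Δt appears here, so a commercial
    # simulator's variable-step integrator can be used directly.
    return (-x + np.tanh(W @ x + U @ u(t) + b)) / tau

sol = solve_ivp(ctrnn_rhs, (0, 5e-9), np.zeros(n_state), max_step=1e-11)
print(sol.y.shape)  # state trajectory; an output layer maps state to currents
```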

During RNN training, one may encounter the so-called vanishing gradient problem, which prevents the learning of an accurate model for transient simulation. The problem arises if the ratio τ/Δt is too large, where τ is the time constant or response time of the system being modeled. We have observed the vanishing gradient problem to occur during circuit modeling only if the system being modeled contains the die as well as off-chip passive elements; therefore, it is not expected to pose a significant problem for modeling of library cells and IP blocks. This claim is made even though aging proceeds on a much slower time scale than the transistor switching delay, because operating time will be a scalar input to the model, similar to, say, temperature. In the event that the vanishing gradient problem arises, Raginsky has suggested adding one new (scalar) term to the recurrence equation that should enhance the model's ability to capture long-term dependencies; for additional flexibility, we can add a tunable coefficient for every state-space dimension, thus allowing for learning of multiple (and possibly widely separated) time constants.

There is a body of literature on stability analysis of continuous-time RNNs. We propose to develop a method to ensure that a stable model is learned during the training process. In 1A6, a regularization term (penalty term) was added to the cost function, and that remains our favored approach.
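For illustration, one common form such a penalty can take (an assumption for this sketch, not necessarily the 1A6 formulation) is a hinge on the spectral norm of the recurrent weight matrix, which encourages a contraction condition sufficient for stability of the discrete-time recurrence:

```python
# Sketch of a stability-encouraging regularizer added to the training loss.
# Penalizing spectral norm ||W||_2 above 1 encourages a contraction, one
# sufficient condition for stability; the 1A6 regularizer may differ.
import numpy as np

def stability_penalty(W, margin=1.0, weight=10.0):
    spectral_norm = np.linalg.norm(W, 2)     # largest singular value of W
    return weight * max(0.0, spectral_norm - margin) ** 2

def total_loss(data_loss, W):
    # Data-fit loss plus the stability penalty, as in penalty-based training.
    return data_loss + stability_penalty(W)

print(total_loss(0.02, np.eye(4) * 1.2))  # penalized: spectral norm 1.2 > 1
```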

If a parameter's mean degradation is independent of its time-zero variability, then the variances will be additive and the generative models for time-zero variability and aging variability can be sampled independently. Degradation due to BTI is widely assumed to be independent of time-zero variability, i.e., parameter drift is independent of the parameter's initial value [3]. However, there are data that suggest a weak correlation [5]. In contrast, there is clear evidence that degradation due to HCI is correlated with the initial parameter values for a given sample; in fact, the distribution of parameters such as VTH may be narrowed by aging [6]. We will initially assume that the two processes are independent (the error introduced will result in conservative projections of reliability) and later develop a procedure to handle the correlation.

The RNN model of each library cell will be validated during the training process. When the various RNNs are connected together to emulate a larger circuit, the simulated overall response must be correct. If the per-cell model errors are additive, the simulated response of the full circuit could be highly inaccurate. Project 1A1 is using techniques from nonlinear system analysis to identify conditions under which various interconnections of learned RNN models will provide a stable and accurate approximation to an unknown composite system. We will build on this toolbox to address the above problem of accurately predicting the overall system response.

[3] D. Angot et al., 2013 IEDM. [4] Z. Chen et al., 2017 EPEPS. [5] A. Kerber and T. Nigam, 2014 IRPS. [6] C. Zhou et al., 2018 IRPS.

Progress to Date: In project 1A6, a novel regularization term was introduced to ensure that the learned (discrete-time) RNN is Lyapunov stable. A Verilog-A implementation of the RNN was developed. Work Plan (year 2019 only): Train a continuous-time RNN (CTRNN) to emulate a circuit. Enforce stability of the CTRNN circuit model. Introduce stochastic parameters into the RNN circuit model. Related work elsewhere and how this project differs: The proposed work uniquely seeks to address two challenges simultaneously: (1) eliminate the need to run transistor-level simulation for every input waveform of interest; (2) model the stochastic component of aging. To our knowledge, only Maricau and Gielen [7] have addressed both challenges; they construct a response surface model to map from a transistor parameter distribution to a circuit performance distribution. However, in that prior work, very simple statistical distributions for just a small number of transistor parameters are assumed a priori. [7] E. Maricau and G. Gielen, IEEE Trans. CAD, Dec. 2010. Proposed deliverables for the current year: How-to guide for training a CTRNN using data obtained by circuit simulation. Manuscript on stability of the CTRNN circuit model. Projected deliverables for Year 2: Report or manuscript on modeling the distribution of aged circuits, accounting for dependency on time-zero variability. Procedures for ensuring end-to-end accuracy of concatenated RNN models. Budget Request and Justification: $76,000. 1.5 graduate research assistants; one student will be half-time on 1A1. Travel to the CAEML meetings and 1 conference. Computing charges. Start Date: 1/1/19 Proposed project completion date: 12/31/2020

Page 19: Fall 2019 Semiannual Meeting | October 29 & 30, 2019publish.illinois.edu/caeml-industry/files/2019/10/CAEML...Semiaual eeti ctoer 29 30 2019 1 WELCOME Welcome to the Fall 2019 semiannual

CAEML Semiannual Meeting • October 29 & 30, 2019 Project Summaries | 18

I/UCRC Executive Summary - Project Synopsis Date: 8/17/2018 Center: Center for Advanced Electronics through Machine Learning (CAEML)

Title: High-dimensional structural inference for non-linear deep Markov or state space time series models

Tracking No.: 3A4 Project Leader: Dror Baron (NCSU) Co-investigator(s): W. Rhett Davis (NCSU), Paul Franzon (NCSU)

Phone(s): (919) 513-7974 E-mail(s): [email protected]

Type: New Thrust(s): T1

Industry Need and Project’s Potential Benefit to Member Companies: In many applications, a time series of high-dimensional latent vector variables is observed indirectly from noisy measurements. As a motivating application, consider an array of hard drives that are prone to possible failure, where data about the drives’ performance is collected. The data is later used to predict possible future failures, and the system can respond accordingly. A common challenge in these systems is that the data is very high-dimensional, and conventional machine learning approaches suffer from the so-called curse of dimensionality.

Project Description: One approach to reducing the dimensionality of time series data uses conditional mutual independence (CMI) to prune variables that seem redundant. That said, CMI-based processing may still result in hundreds or even thousands of variables. While approaches such as recurrent neural networks (RNNs) have been very successful at low-to-moderate dimensionality levels, it is unclear whether they scale in a computationally tractable manner. Our goal is to evaluate whether other approaches might scale better in high-dimensional settings. One possible approach, by Krishnan et al., uses deep Markov models (DMMs), in which an inference network approximates a posterior probability for the time-dynamics of the latent variables by running a multi-layer perceptron (MLP) neural network. Moreover, the Markovian property of the DMM data helps simplify the analysis, leading to a computationally tractable approach to optimizing a Kullback-Leibler divergence term. We will implement and develop a DMM system that can cope with various types of statistical structure among the features, paying close attention to scaling the computation as the dimensionality increases.
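To make the structure concrete, the following minimal sketch (not Krishnan et al.'s code) shows the two ingredients named above: an MLP transition network defining the Gaussian prior p(z_t | z_{t-1}), and the closed-form KL term that the inference network's approximate posterior is trained against at each time step:

```python
# Schematic of a deep Markov model's Gaussian transition and per-step KL
# term; a minimal numpy sketch with random placeholder weights.
import numpy as np

rng = np.random.default_rng(0)
d = 16  # latent dimension
W1, W2m, W2s = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))

def transition(z_prev):
    """MLP giving the prior p(z_t | z_{t-1}) = N(mu, diag(sigma^2))."""
    h = np.tanh(W1 @ z_prev)
    return W2m @ h, np.exp(0.5 * (W2s @ h))  # mean, std

def kl_diag_gauss(mu_q, sig_q, mu_p, sig_p):
    """Closed-form KL(q || p) between diagonal Gaussians; this is the term
    optimized at each time step when maximizing the ELBO."""
    return 0.5 * np.sum(
        2 * np.log(sig_p / sig_q) + (sig_q**2 + (mu_q - mu_p) ** 2) / sig_p**2 - 1
    )

mu_p, sig_p = transition(rng.standard_normal(d))
print(kl_diag_gauss(rng.standard_normal(d), np.ones(d), mu_p, sig_p))
```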

Progress to Date (if applicable): We are currently performing a literature review on DMM, and plan to carry out preliminary testing using the algorithm by Krishnan et al.

Work Plan (year 2019 only): After implementing Krishnan et al.’s DMM approach, we will optimize and benchmark its performance against other applicable methods, allowing us to transition to higher-dimensional data. Extra steps we envision include: (1) testing the data for the Markov property and (2) exploring data structures that are amenable to fast computation, possibly including GPU implementation.

Related work elsewhere and how this project differs: While the ongoing project 2A6 has used conditional mutual independence (CMI) to prune variables that seem redundant, it has not considered how to process data sets that are still high-dimensional after the pre-processing pruning step.

Proposed deliverables for the current year: (1) Develop an algorithm designed for high dimensional sequential data analysis; (2) integrate the algorithm with the CMI-based approach; and (3) publish the results and evaluate real-world applications.

Projected deliverables for Year 2 (if applicable): To be determined based on our progress. Ideally, we want to integrate our approach with CMI-based approaches and other ongoing CAEML projects.

Budget Request and Justification: $54K/year, including support for 1 student ($30K salary & benefits, $16K tuition), $4K travel, and $4K indirect costs.

Start Date: 1/1/2019 Proposed project completion date: 12/31/2019


I/UCRC Executive Summary - Project Synopsis Date: 08/30/2018 Center: Center for Advanced Electronics through Machine Learning (CAEML) Title: High-Speed Bus Physical Design Analysis through Machine Learning

Tracking No.: 3A5 Project Leader: Prof. Madhavan Swaminathan (Georgia Tech) and Prof. Xu Chen (UIUC) Co-investigator(s): Andreas Cangellaris (UIUC), Jose Schutt-Aine (UIUC)

Phone(s): (404) 894-3340 E-mail(s): [email protected], [email protected] Type: New Thrust(s): T2

Industry Need and Project's Potential Benefit to Member Companies: The ML ecosystem to be developed in this project will provide a fast and accurate way to perform a thorough analysis of complex high-speed PCB, package, or interposer designs. In addition, individual blocks in the proposed ecosystem can be very beneficial for tasks other than worst-case net detection, such as design space exploration and optimization at the topological, geometrical, or circuit level of high-speed channels.

Project Description: The Problem: The emerging demand for high-performance computing has led to the need for high-bandwidth chip-to-chip communication channels. As data rates increase, these high-speed channels must be simulated by full-wave EM solvers over a large frequency bandwidth to accurately characterize the crosstalk between different signal paths and the reflections caused by discontinuities, which are collectively represented by multiport S-parameters. Typically, a high-speed PCB contains hundreds to thousands of such channels, which makes the use of full-wave EM solvers infeasible in practice, as such a large-scale simulation at the board level has unacceptable CPU time and memory requirements. Hence, designers tend to investigate the board file to determine a "worst-case net," the channel that is likely to contain the highest levels of reflections and crosstalk, and analyze it to ensure that the final system supports the required data rates. This worst-case representation of the system may result in over-design, as other nets in the PCB can have better electrical performance than the worst-case net, and the overall system performance can be increased substantially by improving the design rules of only a few nets.

Proposed Solution: In this project, we plan to build a machine-learning (ML) based ecosystem to perform board-level analysis of a given PCB, characterize the electrical performance of each net, and rank the nets in descending order of eye opening so as to identify the worst-case scenario, as well as the relative performance of every other net in the PCB with respect to the identified worst-case net. We anticipate that the ML ecosystem will contain three main blocks, namely a translator and frequency- and time-domain predictors.

As the input to the ecosystem is intended to be a board file directly, it needs to be converted into a structure that can be interpreted by mathematical models. This translation is anticipated to be done directly using a commercial EDA tool. The identified structures in each net will then be used by the frequency-domain predictor. This block is anticipated to contain a library of parameterized predictive models of transmission lines and via arrays, used to generate the S-parameters of each component in the signal path and cascade them according to the structure provided by the translator block. Along with geometrical parameters, we will include topological parameters in the model library, such as the stack-up structure, signal-to-ground ratio, number of aggressors, etc. The output of this block will be the end-to-end S-parameters of each net.
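A minimal sketch of the cascading step, using the open-source scikit-rf package; the per-component Touchstone files named in the comments are hypothetical stand-ins for outputs of the parameterized model library:

```python
# Sketch of the cascading step in the frequency-domain predictor using
# scikit-rf; component networks are hypothetical stand-ins.
import skrf as rf

# In the proposed flow these would come from the trained component library:
#   seg = rf.Network('tline_w5_s7.s2p')   # parameterized trace segment
#   via = rf.Network('via_array_A.s2p')   # parameterized via transition
# Here a bundled example network stands in so the sketch runs as-is.
seg = rf.data.ring_slot
via = rf.data.ring_slot

# The ** operator cascades two-port networks, yielding the end-to-end
# S-parameters of one net from its component models.
channel = seg ** via ** seg
print(channel)  # end-to-end 2-port over the common frequency band
```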

Once the S-parameters of each net have been generated, frequency-domain features that represent this large S-matrix will be extracted, combined with TX/RX driver settings (including equalization parameters), and then used by the time-domain predictor model to predict the eye height and width at a certain BER contour. By deriving a predictive model whose inputs are frequency-domain features rather than geometrical and topological parameters, we plan to "abstract" the physical structure of the channel to make the model more generic. We will build upon the frequency-domain features that IBM currently uses to estimate whether a channel satisfies the eye-opening margins without actually doing the time-domain simulation [1]. We will then compare the performance of the time-domain predictor to another model that maps geometry directly to eye opening. The output of this block will be the predicted eye width, height, and jitter of every signal path in the PCB file, which will be ranked to determine the worst-case net.
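The sketch below illustrates the intended interface of the time-domain predictor: a few frequency-domain features of the channel plus one equalization setting in, an eye metric out. The features, the toy channel model, and the gradient-boosted regressor are illustrative assumptions, not the features of [1] or the project's final model:

```python
# Sketch of the time-domain predictor interface: frequency-domain features
# plus TX/RX settings in, eye metrics out. Features and data are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def channel_features(freq, s21_db, f_nyquist):
    """Two simple features of the through path (illustrative only)."""
    il_nyq = np.interp(f_nyquist, freq, s21_db)              # loss at Nyquist
    slope = (il_nyq - np.interp(f_nyquist / 2, freq, s21_db)) / (f_nyquist / 2)
    return [il_nyq, slope]

rng = np.random.default_rng(0)
freq = np.linspace(1e8, 4e10, 400)
X, y = [], []
for _ in range(300):                      # stand-in training corpus
    loss = rng.uniform(0.1, 0.8)
    s21 = -loss * freq / 1e9              # toy channel: loss linear in f
    eq = rng.uniform(0, 1)                # one TX equalization knob
    X.append(channel_features(freq, s21, 14e9) + [eq])
    y.append(0.4 + 0.01 * s21[-1] + 0.1 * eq)  # toy "eye height" label (UI)

model = GradientBoostingRegressor().fit(np.array(X), np.array(y))
print(model.predict(np.array(X[:1])))     # predicted eye height for one net
```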

In addition, we will build upon our previous work [2] on non-intrusive stochastic collocation techniques to quantify the uncertainties in determining the worst-case net, which can be caused by variability in the trace cross-sectional geometry, aggressor spacing, via geometry, and layout.


[1] S. T. Win et al. (IBM), "A frequency-domain high-speed bus signal integrity compliance model: Design methodology and implementation," 2015 IEEE ECTC.
[2] X. Ma, X. Chen, A. Rong, J. E. Schutt-Ainé, and A. C. Cangellaris, "Stochastic electromagnetic-circuit simulation for system-level EMI analysis," 2017 IEEE International Symposium on Electromagnetic Compatibility & Signal/Power Integrity (EMCSI), Washington, DC, 2017.
[3] H. M. Torun, M. Swaminathan, et al., "A Global Bayesian Optimization Algorithm and its Application to Integrated System Design," IEEE TVLSI, 2018.
[4] S. T. Win, J. A. Hejase, W. D. Becker, et al. (IBM), "A frequency-domain high-speed bus signal integrity compliance model: Design methodology and implementation," 2015 IEEE ECTC.

Progress to Date (if applicable): (GT) We have developed a fast & accurate Bayesian Active Learning algorithm and an effective hierarchical sampling strategy [3] as part of CAEML to be used for efficient generation of library of models in Frequency Domain Predictor. We will also leverage the work done by IBM [4] for abstracting the physical channel structure to operate directly on frequency domain features of a channel. (UIUC) Not applicable.

Work Plan (year 2019 only): (GT) We will start building the ML ecosystem with the Time Domain Predictor block for a fixed TX/RX driver topology but with parameterized equalization settings. Then, we will define features for abstracting the TX/RX drivers, similar to the features derived in [4], and include these in our model to obtain a generic predictor that takes the circuit topology and the S-parameters of the channel as input to predict eye characteristics. (UIUC) 1) Quantification of the sensitivity of channel BER to various signal-degradation-inducing parameters and their variability. 2) Demonstration of SC and VAE capabilities in HSSCDR or another high-speed link simulator. 3) Proof of concept of the predictive capability of the proposed approach.

Related work elsewhere and how this project differs: Board-level analysis can't currently be done with 3D EM accuracy. Commercial "parasitic extractors" use various approximations, which are not acceptable for high-speed channels. The proposed board-level analysis ecosystem is new; it is expected to be fast and to have substantially higher accuracy, accompanied by confidence intervals around the predictions that can be used to assess model/prediction quality. Proposed deliverables for the current year: (GT) 1) Matlab/Python code for the Time Domain Predictor that can predict eye width, height, and jitter directly from S-parameters and TX/RX settings, without time-domain simulations. 2) A preliminary version of the Frequency Domain Predictor for proof of concept. (UIUC) 1) A tool for translating a PCB layout to a format compatible with an ANN. 2) Implementation of SC capability in HSSCDR. 3) A neural network trained to predict eye width and BER based on S-parameters. Projected deliverables for Year 2 (if applicable): (GT) 1) Python code for the Frequency Domain Predictor with a rich library to handle geometrical and topological control parameters. 2) Matlab/Python code demonstrating board-level analysis on an example file. (UIUC) 1) An ANN capable of predicting high-speed link performance from layout and geometry information. 2) An ANN for fast identification of worst-case nets. Budget Request and Justification: (GT) Year 1: $58,324 (1 full-time GRA for 12 months; $5K travel). (UIUC) Grad students, supplies, travel, tuition waiver, indirect: $45,000.

Start Date: 1/1/19 Proposed project completion date: 12/31/2020

Figure 1: Overview & flow of the proposed ML ecosystem for board-level analysis.


I/UCRC Executive Summary - Project Synopsis Date: 08/29/2018 Center: Center for Advanced Electronics through Machine Learning (CAEML) Title: Enabling Side-Channel Attacks on Post-Quantum Protocols through Machine-Learning Classifiers Tracking No.: 3A6 Project Leader: Aydin Aysu (NCSU)

Co-investigator(s): Paul Franzon (NCSU) Phone(s): (919) 515-7907 E-mail(s): [email protected] Type: New Thrust(s): T5

Industry Need and Project's Potential Benefit to Member Companies: Side-channel attacks are a major threat in cyberspace. The best known attacks on quantum-secure encryption, however, cannot yet scale to practical devices. This project illustrates how they can scale, using machine learning and an adaptation of an existing attack. This proposal responds to research need 6, machine learning in secure and trusted designs.

Project Description: The primary purpose of this project is to enable single-trace power side-channel attacks on post-quantum key-exchange protocols using machine learning, and to quantify the strength of timing obfuscation defenses against these attacks. The central questions we address are (1) whether machine-learning approaches can provide stronger attacks than conventional ones in the context of lattice-based cryptosystems, and (2) to what extent obfuscation methods can hide the vulnerability.

Public-key cryptosystems of today are vulnerable to quantum cryptanalysis because they rely on problems, such as integer factorization and the (elliptic curve) discrete logarithm, that can be solved by a quantum computer in polynomial time. Post-quantum cryptography seeks alternative, quantum-resistant cryptographic systems that can survive the quantum threat. These cryptosystems are still classical algorithms that execute on classical computers, but they rely on different problems that, so far, are not vulnerable to quantum cryptanalysis. Among the potential proposals, lattice-based cryptosystems have been a predominant class and have even been deployed in commercial products. Although the theoretical cryptanalytic strength of lattice algorithms is being thoroughly analyzed, practical attacks on their implementations are largely unexplored. Side-channel attacks are a broad category of such attacks; they can extract secret cryptographic keys by analyzing an algorithm's execution behavior in a computing device.

Power-based side-channel attacks are a fundamental threat for CMOS technology because switching activity (i.e. power consumption) is inherently data dependent. Therefore, when the secret key is being processed, there is some correlation between its value and the power measurement. This correlation is extracted conventionally through Differential Power Analysis (DPA) requiring repeated measurements to apply a covariance or difference-of-means test. A straightforward adaptation of this attack is infeasible for key-exchange protocols because the secret key changes after each execution—i.e., there exists a single power measurement to analyze when attacking a key.

The project leader of this proposal recently demonstrated the first successful side-channel attacks on lattice-based key-exchange protocols that extract the entire secret key from a single power measurement trace. The proposed method applies a horizontal DPA that combines the small correlations observed "within" a single execution (Fig. 1(a)). The underlying lattice arithmetic—matrix and polynomial multiplications—indeed has a large number of intermediate computations that depend on the same part of the secret key; e.g., a secret-key coefficient is multiplied with all coefficients of the other polynomial. This attack breaks the implementations of the Frodo (CCS'16) and NewHope (USENIX'16) key-exchange protocols.

Although the proposed attack is successful, its application is limited to a small subset of implementations. The critical deficiency in the attack procedure is that it underutilizes the information available in the correlation trace (Fig. 1(b)). The attack focuses on a single point in time where the maximum leak occurs and estimates the correct key purely from that information, omitting other data points (Fig. 1(c)). As a result, the attack can only succeed on a fully serialized hardware design that processes a small chunk of information (8 bits) per clock cycle (Fig. 1(d)). A parallelized hardware implementation, by contrast, will add algorithmic noise and reduce the number of distinct tests, rendering the attack useless.

Our hypothesis is that machine-learning classifiers can improve the best-known attack and can therefore extend the threat to microcontroller-based designs. We furthermore argue that, in this application context, a machine-learning attack can even surpass the template attack, which is the best conventional method. It has recently been shown in simulation that random forest classifiers can theoretically outperform template attacks when the attacker has access to only a few traces, each with a large number of data points [2]. Post-quantum key-exchange protocols, with their long lattice arithmetic computations and one-time keys, are unique instances of this scenario. Therefore, if successful, our proposal will be the first practical demonstration of machine-learning classifier superiority over the prior methods that have been used for the last two decades.


Figure 1: Applying the horizontal DPA attack and the results. (a) Partitioning the power trace into sub-traces for targeted key computations and running a hypothesis test; (b) correlation-trace results of the attack; (c) analyzing the leak at the maximum leak point; (d) estimating the success rate. The success rate for the entire key would be 0% if 4 coefficients were processed in parallel, because the number of sub-traces, i.e., tests, would be reduced to 256.

The proposed approach will use a supervised classifier to pre-characterize the target device under a set of known key values. The problem of estimating a secret key then becomes that of successfully classifying a given power trace by comparing it to the a priori power profiles. A number of classifiers can be used towards this end. To provide a more comprehensive analysis (beyond random forests [2]), we will train more powerful classifiers based on neural networks and compare their success to conventional attacks.
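A minimal sketch of the profiled single-trace classification setup, with synthetic traces standing in for measurements and a random forest standing in for the neural networks to be evaluated:

```python
# Sketch of the profiled attack: classify single power traces into key-chunk
# values. Traces are synthetic placeholders, not real measurements.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_traces, n_samples, n_classes = 2000, 500, 8
labels = rng.integers(n_classes, size=n_traces)       # known key chunks
# Toy leakage model: the processed key value weakly shifts a few samples.
traces = rng.standard_normal((n_traces, n_samples))
traces[:, 100:105] += 0.3 * labels[:, None]

X_tr, X_te, y_tr, y_te = train_test_split(traces, labels, random_state=0)
clf = RandomForestClassifier(n_estimators=200).fit(X_tr, y_tr)
print("single-trace recovery rate:", clf.score(X_te, y_te))
```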

A straightforward mitigation of power side-channel attacks is to insert random dummy states during cryptographic execution. While these countermeasures are effective against DPA/template-style single-trace attacks, neural networks have the potential to defeat such simple defenses. Indeed, neural network classifiers such as long short-term memory (LSTM) networks are known for being robust to temporal noise, and hence are used for time-series classification with noisy data. Later in the project, we envision applying random delay insertion techniques for defense and analyzing their effect on classification performance for LSTM-based attacks vs. conventional techniques. Work Plan (year 2019 only): The 2019 work plan is to realize parallel implementations of key-exchange protocols, demonstrate machine-learning side-channel attacks, and compare them with the DPA/template attack results. Related work elsewhere and how this project differs: The project leader demonstrated the only successful side-channel attack on lattice-based key-exchange protocols, which is limited to 8-bit serial hardware implementations [1]. Through machine-learning classifiers, this project will enable the attack on a broad class of architectures. The potential of machine learning for side-channel attacks has been theoretically hinted at by Lerman et al. [2]. We will reveal its practical value by demonstrating it in the context of post-quantum key-exchange protocols.

[1] Aysu, A., Tobah, Y., Tiwari, M., Gerstlauer, A., & Orshansky, M. (2018, April). Horizontal side-channel vulnerabilities of post-quantum key exchange protocols. In 2018 IEEE International Symposium on Hardware Oriented Security and Trust (HOST) (pp. 81-88). IEEE. Best Paper Runner-Up. [2] Lerman, L., Poussier, R., Bontempi, G., Markowitch, O., & Standaert, F. X. (2015, April). Template attacks vs. machine learning revisited (and the curse of dimensionality in side-channel analysis). In International Workshop on Constructive Side-Channel Analysis and Secure Design (pp. 20-33). Springer, Cham. Proposed deliverables for the current year: April 2019: Algorithm implementation and power measurement data. December 2019: Empirical validation of machine-learning side-channel attacks and comparison. Projected deliverables for Year 2 (if applicable): April 2020: Application of dummy-state insertion. December 2020: Evaluation of time-series classification with neural networks and comparison. Budget Request and Justification: $54.5K/year, including support for 1 student ($30K salary & benefits, $17K tuition), $4K travel, and $3.5K indirect costs. Start Date: 1/1/19 Proposed project completion date: 12/31/2020


I/UCRC Executive Summary - Project Synopsis Date: 08/17/2018 Center: Center for Advanced Electronics through Machine Learning (CAEML) Title: Design Space Exploration using DNN Tracking No.: 3A7 Project Leader: Prof. Madhavan Swaminathan

Co-investigator(s): Collaborations with Prof. Paul Franzon and Prof. Rhett Davis (NCSU), Prof. Elyse Rosenbaum (UIUC), and Prof. Sungkyu Lim (GT)

Phone(s): (404) 894-3340 E-mail(s): [email protected] Type: New Thrust(s): T2/T4

Industry Need and Project's Potential Benefit to Member Companies: Advanced semiconductor manufacturing processes bring area, speed, power, and other benefits, but also new performance challenges that result from the physics of running current through tiny wires. Often there are post-tape-out escapes, at both the silicon and packaging levels, due to inadequate analysis at an early design stage. This is sometimes due to a lack of time, or to poor assumptions made by the designer that may be inaccurate. We address these challenges in this project by focusing on early Design Space Exploration (DSE). We believe such a solution would be applicable at various levels of the system hierarchy. Research Need: Corner tightening and LLE.

Proposed Solution: In this project we propose the use of Deep Neural Networks (DNN) to model LLE residual on the fly to predict the compact modeling calibration macros and compare predictions with hardware results. The DNN search for the best solution space of calibration macros will be from samples in a variety of directions in parallel. Based on the data samples, the DNN discards low-value optimization directions from its search space and chooses the most valuable optimization directions to progress towards a solution. Our proposed DNN, will use distributional optimization from samples (DOPS), and will circumventthe difficulties to model the objective function for example LLE residual, eye diagram etc. on demand [1, 2]. The DNN we propose “learns to optimize” from the samples only, without a predefined objective function as most traditional machine learning algorithms do. For example, in the problem of modelling the LLE residual on demand, the goal is to choose an “appropriate datasets” of calibration macros such that, we maximize objective function, which the circuit designer has given before.Formulating targeted objective functions for a circuit designer (for complex circuits) in most cases can be impossible. Our proposed DNN formulates the objective function from the data and chooses those samples that maximizes the objective function and discards others. Our proposed DNN model learns from the data by an innovative procedure called “distributional optimization from samples (DOPS)” [3, 4] that gets rid of noise in the datasets by squeezing the information through a bottleneck, while retaining only the features most relevant to generalization concepts as shown in Figure 1. This not only gives generalization over the new hardware, but also gives the necessary information of features required like additional LLE features, the distribution

3A7

Page 25: Fall 2019 Semiannual Meeting | October 29 & 30, 2019publish.illinois.edu/caeml-industry/files/2019/10/CAEML...Semiaual eeti ctoer 29 30 2019 1 WELCOME Welcome to the Fall 2019 semiannual

CAEML Semiannual Meeting • October 29 & 30, 2019 Project Summaries | 24

invariance of input/output for generating RLGC models etc [3]. Once the DNN models are generated, we can apply earlier work done through CAEML for optimization [5].
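The DOPS training procedure itself is beyond a short sketch, but the residual-modeling idea can be illustrated in the Tensorflow framework the project already uses; the layout-context feature names and the synthetic residual below are hypothetical:

```python
# Sketch of a residual-model surrogate: an MLP predicting the LLE correction
# to a compact-model output. Features and data are hypothetical placeholders.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
# Columns (hypothetical): [gate_to_neighbor_spacing, well_proximity, active_area_ratio]
X = rng.uniform(size=(4096, 3)).astype("float32")
y = (0.02 * X[:, 0] - 0.01 * X[:, 1] * X[:, 2]).astype("float32")  # toy residual

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(3,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),  # predicted LLE residual, added to compact model
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=128, verbose=0)
print(model.predict(X[:2], verbose=0))
```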

Figure 1: (a) Deep learning using an information bottleneck pipeline for design space exploration; (b) an example application in which the interconnect dimensions are predicted during the routing phase to achieve 16 Gbps performance with minimum jitter and maximum eye opening.

[1] N. Rosenfeld, E. Balkanski, A. Globerson, and Y. Singer, "Learning to Optimize Combinatorial Functions," Proc. 35th International Conference on Machine Learning, 2018.
[2] E. Balkanski and Y. Singer, "The Sample Complexity of Optimizing a Convex Function," Proceedings of Machine Learning Research, 2018.
[3] N. Tishby, F. C. Pereira, and W. Bialek, "The Information Bottleneck Method," CoRR physics/0004057, 2000.
[4] R. Schwartz-Ziv and N. Tishby, "Opening the Black Box of Deep Neural Networks via Information," CoRR abs/1703.00810, 2017.
[5] H. M. Torun et al., "A Global Bayesian Optimization Algorithm and its Application to Integrated System Design," IEEE TVLSI, 2018.

Progress to Date (if applicable): We have built our DNN on the Tensorflow framework and trained it on Amazon EC2 instances using batch processing. Each EC2 instance is created on the fly for training and deleted on completion. We have applied the DNN to the frequency response of the RLGC parameters of transmission lines.

Work Plan (year 2019 only): The work plan consists of developing a scalable deep neural network (DNN) model using the information bottleneck for predicting the frequency response. This requires building the scalable DNN model in the Tensorflow framework on a distributed cloud platform. We will then validate the DNN predictions against outputs from a physics-based EM solver and circuit simulators for different circuit configurations and instances, along with optimization.

Related work elsewhere and how this project differs: We are unaware of other work in this area of DSE.

Proposed deliverables for the current year: 01/19-04/19: Development and implementation of the DNN with Information Bottleneck. 05/19-12/19: Application and validation of DNN predictions with physics-based EM solvers and circuit simulators on designs from industry.

Projected deliverables for Year 2 (if applicable): 01/20-12/20: Development of a mapper and application to industrial-strength problems.

Budget Request and Justification: Year 1: $75,328 (1 full-time GRA for 12 months; 25% post-doc; $5K travel); Year 2: $77,416 (1 full-time GRA for 12 months; 25% post-doc; $5K travel).

Start Date: 1/1/19 Proposed project completion date: 12/31/2020

NEW PROJECT PROPOSALS

I/UCRC Executive Summary - Project Synopsis Date: 7/31/2019 Center: Center for Advanced Electronics through Machine Learning (CAEML) Title: Quantum Computing based Machine Learning for EDA (QCML) Tracking No.: P19-1 Project Leader: Paul Franzon (NCSU)

Co-investigator(s): Greg Byrd (NCSU), Dan Stancil (NCSU)

Phone(s): (919) 515-7351 E-mail(s): [email protected], [email protected], [email protected] Type: New Thrust(s): T1

Industry Need and Project's Potential Benefit to Member Companies: Quantum computing holds the promise of quickly solving problems that otherwise require NP-hard or NP-complete algorithms to solve exactly. Quantum computing technology is the subject of substantial investment by several large and small companies. This project will give Center members a head start on how quantum computing can benefit Electronic Design Automation.

Project Description: Quantum computing offers the promise of solving a range of NP-hard and NP-complete problems, i.e., problems that cannot be solved efficiently on classical computers. Quantum computing is receiving massive investment from the federal government and from several large and small companies. When quantum computers reach suitable scales (a suitable number of qubits), the implications for EDA will be profound: problems that are solved approximately today, often with heuristics, will be solvable exactly. The goals of the proposed research are (1) to identify the range of quantum computing algorithms that are relevant to EDA; (2) to identify the range of EDA problems that can potentially be solved with quantum computers; and (3) to demonstrate one such problem being solved on real quantum hardware.

Current quantum computing systems are relatively small. Known as Noisy Intermediate-Scale Quantum (NISQ) systems, they are constrained by the number of qubits and by the inherent noise that causes qubits to lose coherence. NC State University is a member hub of IBM's Q Network. The currently available platform has 20 qubits, and a 50-qubit platform is expected around the end of the year. Note that the largest machine that can be simulated on a supercomputer is around 50 qubits (with 2^50 distinct states, it takes on the order of a petabyte to store the state classically). Noise management will be part of the proposed effort; initially, we will cope with noise by using the simulator at a low-noise setting and, on the actual quantum computer, by using the error mitigation techniques enabled by the Ignis component of IBM's Qiskit software. Additionally, it is expected that the number of qubits, and the stability of the qubits, will continue to improve with time. Note that many of the proposed problems have some degree of error tolerance; exact solutions are not needed. However, other solutions will be sought if this combination turns out to be inadequate.

A number of algorithms that could be useful in machine learning are well suited for execution on a quantum computer. The most established is the Quantum Approximate Optimization Algorithm (QAOA), which iteratively applies parameterized quantum operations to find an assignment of binary variables (qubits) that minimizes a cost function, known as the Hamiltonian [1][2]. The choice of the parameters and the number of iterations provide lower bounds on the quality of the solution. With a few iterations the solution is approximate, but with added iterations it comes increasingly close to the exact solution.
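To make the cost function concrete, the snippet below evaluates the Maxcut objective that the corresponding QAOA Hamiltonian encodes, by brute force on a toy graph; QAOA would search the same objective with parameterized quantum operations rather than enumeration:

```python
# Illustration of the cost function QAOA minimizes for Maxcut: each edge
# (i, j) contributes when the two binary variables (qubits) differ. Brute
# force is used here only to show the objective on a toy graph.
from itertools import product

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # toy 4-node graph

def cut_value(bits):
    return sum(1 for i, j in edges if bits[i] != bits[j])

best = max(product([0, 1], repeat=4), key=cut_value)
print(best, cut_value(best))  # best assignment and its cut size
```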

A classical computer can run in tandem with a quantum computer (“Hybrid computing”) in order to perform a number of optimization algorithms that are potentially useful in EDA and engineering in general. These include the following:

Quadratic Assignment Problem (QAP). The fundamental problem in QAP is the assignment of n resources to n locations so as to minimize distance or some other cost function. The problem is NP-hard, and there is no known algorithm to solve it in polynomial time. Heuristic algorithms that give approximate solutions, such as simulated annealing, are often used. In EDA, QAP could be used to solve problems such as floorplanning, cell placement, clock tree planning, power grid planning, PCB placement, etc., with much better results than today's very approximate solutions. QAOA can be used as a basis for a QAP solver.

Maxcut. Another NP-hard problem; the objective in Maxcut is to find a cut in a graph that is at least the size of any other cut. It has application in problems such as routing (via minimization) and potentially floorplanning. It is also related to the k-coloring problem, which is potentially useful in channel routing. QAOA can be used as a basis for a Maxcut solver [1][3].

Max-Flow/Min-Cut is solved with a small modification to the Maxcut algorithm. It is used in weighted graph partitioning. It could be the basis for network partitioning (replacing Kernighan-Lin), floorplanning, placement, FPGA place and route, power planning, clock tree design, etc. [4].

Boolean Satisfiability (SAT). The SAT problem asks whether a Boolean formula can have its variables bound so that the formula evaluates to true. SAT is used in EDA for equivalency checking, automatic test pattern generation, logic optimization, and other problems [5]. It has been used in FPGA routing [9] and crosstalk noise analysis [10]. The SAT problem is NP-complete. Quantum SAT solvers have been presented in [6], in which QAOA is used as the basis of the solver. [7] presents a SAT solver based on Ising spin models and thus has potential for a quantum implementation.

Other potential algorithms include quantum neural networks [8]. There is early discussion suggesting that quantum computers can also be used to solve systems of partial differential equations. (Airbus has a "challenge" out right now on this topic.) One possible approach is to leverage recent work on using neural networks to solve differential equations [11].

1. https://arxiv.org/abs/1411.4028
2. https://arxiv.org/abs/1709.03489
3. https://nbviewer.jupyter.org/github/Qiskit/qiskit-tutorials/blob/master/qiskit/optimization/max_cut_and_tsp.ipynb
4. Kahng, A., "VLSI physical design: from graph partitioning to timing closure," Springer, 2011
5. http://www.ecs.umass.edu/ece/labs/vlsicad/ece667/reading/SATtutorial.pdf
6. https://journals.aps.org/pra/pdf/10.1103/PhysRevA.94.022309
7. http://guava.physics.uiuc.edu/~nigel/courses/563/Essays_2017/PDF/chertkov.pdf
8. https://arxiv.org/pdf/1802.06002.pdf
9. http://www.cecs.uci.edu/~papers/compendium94-03/papers/1999/fpga99/pdffiles/07_2.pdf
10. https://dl.acm.org/citation.cfm?id=339606
11. Yadav, Yadav and Kumar, "An introduction to neural network methods for differential equations," Springer, 2015.

Progress to Date (if applicable): We have implemented QAOA with the Qiskit/Aqua toolkit and run it on a quantum computer emulator using our IBM Q Hub membership.

Work Plan (year 2020 only): The conduct of the project will be as follows. In year 1, we will investigate the range of hybrid algorithms enabled by quantum computing, formulate the algorithms, and formulate potential VLSI applications of those algorithms. We will demonstrate feasibility for at least one EDA problem. In year 2, we will take a deep dive into one or more hybrid algorithms and their application in EDA, demonstrating that solution (at an appropriate scale) on a quantum computer, probably at 50 qubits. In each year there will be a half-way milestone of reporting on progress and on the current view of the potential, and at the 12-month point a tutorial and demonstration. Related work elsewhere and how this project differs: We are not aware of any work elsewhere applying quantum computing to EDA.

Proposed deliverables for the current year: Month 6: A report (using slides) on the potential for leveraging QAOA in VLSI and PCB design. Month 12: Tutorial and basic demonstration.

Projected deliverables for Year 2 (if applicable): Month 18: A report on current status. Month 24: Deeper tutorial and demonstration.

Budget Request and Justification:
Year 1: Graduate Student $51,000 (includes benefits, tuition); Travel $5,000; Overhead $4,000; Total $60,000.
Year 2: Graduate Student $52,000 (includes benefits, tuition); Travel $5,000; Overhead $4,000; Total $61,000.
An undergraduate student will be supported from other funds.

Start Date: 1/1/20 Proposed project completion date: 12/31/2021


I/UCRC Executive Summary - Project Synopsis Date: 07/31/2019 Center: Center for Advanced Electronics through Machine Learning (CAEML) Title: Inverse design of interconnects using deep learning Tracking No.: P19-5 Project Leader: Prof. Madhavan Swaminathan

Co-investigator(s): Phone(s): (404) 894-9959 E-mail(s): [email protected] Type: New Thrust(s): T2

Industry Need and Project's Potential Benefit to Member Companies: Designing structures that support high-speed signaling and tuning their parameters can be prohibitively time consuming. An inverse-problem approach goes in the opposite direction, determining the design parameters from the characteristics of the desired output. In this work, we propose a novel inverse design approach using a deep learning architecture to save time and resources at an industrial scale.

Project Description: The Problem: With advances in fabrication technology and the advent of more complex systems, the design of interconnects has become very challenging. Several design parameters cannot be determined on paper from equations; therefore, the designer needs to run several simulations and tune such parameters, one by one, to achieve the desired quality of the output signal. As the number of parameters increases and the system grows in size and complexity, this task becomes more challenging and time consuming. It is worth noting that, because of the correlations between design parameters, tuning them one by one might fail to detect several higher-quality designs. Currently, experienced designers and industry gurus need to spend an excessive amount of time to achieve a certain signal quality and eye diagram. Therefore, in this project we propose an inverse-problem approach that finds the design parameters from the characteristics of the desired eye diagram, using deep learning. Note that multiple combinations of design parameters can result in the same eye opening; this constitutes an ill-posed problem, which makes the inverse problem more complex and in need of a general scheme. The proposed deep learning approach yields multiple combinations that result in the same eye diagram, so the designer has several options to choose from based on other constraints.

Proposed Solution: The primary proposed algorithm is as follows. We solve an inverse problem of (i) finding the range of dependent design parameters for a given eye opening; (ii) unfolding the probabilistic distributions of the design parameters (the sharpest, lowest-variance geometry) instead of a single optimization point; and (iii) evaluating the computational complexity, in time and memory, of the inverse design [1]. In this project, we develop a machine learning system for solving the inverse design of parameters for specified eye characteristics. The eye diagram is a popular tool for evaluating signal quality. We propose to develop an inverse design approach that searches for optimum design parameters based on the required eye characteristics using deep learning. Our proposed algorithm consists of two steps: (i) predicting an optimum geometry parameter configuration from eye characteristics with a deep neural network and a knowledge base (KB); (ii) recommending multiple geometry configurations that have similar eye characteristics using differentiable tracing and edge sampling. In part (i), our proposed learning architecture is a large-scale coupled training system, in which multiple predictions and classifications are done jointly for the inverse mapping of microwave systems from eye characteristics. Our current focus is on the inverse design of transmission lines (similar to equalization techniques). The deep neural network performs a sequence of multiple supervised learning tasks, predicting dependent geometry parameters and performing classification, and retains the learned knowledge in the knowledge base (KB). Rather than building a separate neural network for each individual task (prediction, classification), we build a common deep neural network for all the tasks. Our common neural network uses the same input layer for all tasks and a separate output for each individual task. The shared hidden layers are trained in parallel using backpropagation on all the tasks to minimize the combined error. This way, our deep neural network learns better and predicts the inverse design of geometry parameters from eye characteristics. In part (ii), once we obtain an optimum geometry configuration, we perturb that configuration using triangle meshes. We then compute the derivatives of the perturbed geometry parameters of the transmission lines/interconnects with respect to the eye characteristics. The derivatives of these iterative perturbations are stored in a data structure that we call the ray tracing database. We access elements of the ray tracing database by index; the index is a look-up table that gives multiple inverse designs. Not all cells of the ray tracing data structure are filled; some are kept empty because we cannot run infinitely many perturbation experiments. We can predict the values of the unfilled ray tracing cells from their neighborhoods and thus predict the eye characteristics. Therefore, with a small number of perturbations of the geometry parameters, we can estimate the eye characteristics. We split the perturbation gradient into smooth and discontinuous regions [2]. For the smooth part of the integrand, we employ traditional area sampling with automatic differentiation. For the discontinuous part, we use a novel edge sampling method to capture the changes at the boundaries of the perturbation. We integrate our ray tracing with the automatic differentiation library Tensorflow for efficient integration with optimization and learning approaches.
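For concreteness, a minimal sketch of the shared-trunk, multi-head network described in part (i) is given below; the layer sizes and loss weights are illustrative assumptions, not the project's actual architecture:

```python
# Sketch of the shared-trunk multi-task network: one input of eye
# characteristics, shared hidden layers, and separate regression and
# classification heads trained jointly on a combined loss.
import tensorflow as tf

n_eye_features = 15          # eye characteristics (inputs)
n_geometry = 4               # lw, tc, s, hsub (regression outputs)
n_bins = 10                  # discretized classes for the induced task

inp = tf.keras.Input(shape=(n_eye_features,))
h = tf.keras.layers.Dense(128, activation="relu")(inp)
h = tf.keras.layers.Dense(128, activation="relu")(h)   # shared hidden layers
reg = tf.keras.layers.Dense(n_geometry, name="geometry")(h)
cls = tf.keras.layers.Dense(n_bins, activation="softmax", name="geom_class")(h)

model = tf.keras.Model(inp, [reg, cls])
model.compile(
    optimizer="adam",
    loss={"geometry": "mse", "geom_class": "sparse_categorical_crossentropy"},
    loss_weights={"geometry": 1.0, "geom_class": 0.5},  # combined error
)
model.summary()
```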

[1] Z. Liu, D. Zhu, S. P. Rodrigues, K.-T. Lee, and W. Cai, "Generative Model for the Inverse Design of Metasurfaces," Nano Letters, 18(10), pp. 6570-6576, 2018. [2] T.-M. Li, M. Aittala, F. Durand, and J. Lehtinen, "Differentiable Monte Carlo Ray Tracing Through Edge Sampling," ACM Trans. Graph., November 2018.

Progress to Date (if applicable): So far, a primary algorithm has been developed and tested on a set of 3 coupled microstrip lines. Our current inverse design model is trained with 15 input eye characteristics and 4 output geometrical parameters of the transmission lines: line width lw, line thickness tc, spacing s, and substrate thickness hsub. We have used a multi-task learning paradigm and train our deep learning model jointly for prediction (by regression) and classification. The classification problem is artificially induced: we discretize the geometry parameters into 10 different classes. We trained our model with 4315 samples of eye characteristics and output geometry, and tested it on 100 unseen samples [3].

[3] K. Roy, M. Ahadi, H. Torun, R. Trinchero, and M. Swaminathan, "Inverse Design of Transmission Lines with Deep Learning," IEEE Conference on Electrical Performance of Electronic Packaging and Systems, submitted July 2019.

Work Plan (year 2020 only): 1- Inverse design for a single response. 2- Inverse design for the range of possibilities near the original response. 3- Build the differentiable database of ray tracing.

Related work elsewhere and how this project differs: Inverse design of electromagnetics with neural networks has been suggested previously, in works such as [4] and [5]. However, those papers developed basic neural networks. In this project, we propose to solve the inverse design problem with a scalable deep learning architecture for accurate and definite solutions in a general scheme. [4] I. Elshafiey, L. Udpa, and S. S. Udpa, "Application of neural networks to inverse problems in electromagnetics," IEEE Transactions on Magnetics, vol. 30, no. 5, pp. 3629-3632, 1994. [5] D. Cherubini, A. Fanni, A. Montisci, and P. Testoni, "Inversion of MLP neural networks for direct solution of inverse problems," IEEE Transactions on Magnetics, vol. 41, no. 5, pp. 1784-1787, 2005.

Proposed deliverables for the current year:
1. Software package for the proposed inverse design.

Projected deliverables for Year 2 (if applicable):
1. Inverse design for multiple responses and ranges in different areas of the design space.
2. Expanding to numerous variables.
3. Expanding to other areas of microwave systems.
4. Integrate our differentiable ray tracer with the automatic differentiation library TensorFlow/PyTorch, and deliver the software.

Budget Request and Justification: The expected budget is $57K/year, which includes support for a PhD student and $3550 for travel.

Start Date: 01/01/20 Proposed project completion date: 12/31/2021


I/UCRC Executive Summary - Project Synopsis
Date: 8/14/2019
Center: Center for Advanced Electronics through Machine Learning (CAEML)
Title: FPGA Hardware Accelerator for Real-Time Security
Tracking No.: P19-8
Project Leader: Franzon (NCSU)

Co-investigator(s): Aysu (NCSU)

Phone(s): 919 515 7351 E-mail(s): [email protected]

Type (New, Continuing1, or Sequel2): New
Thrust(s): T5

Industry Need and Project’s Potential Benefit to Member Companies: Member companies will get a prototype real-time defense hardware reference design that is maintainable. While targeting crypto attacks, the hardware structure could form the basis of defenses against other attacks, including other network attacks and possibly Trojan detection.

Project Description: Many network-based attack types unfold in real time. For example, a crypto (ransomware) attack encrypts files in real time, or a network intruder gains root access and starts downloading sensitive files. While there are machine-learning-trained models to detect such attacks, e.g., random forest, random tree, or decision tree ensembles, Bayes networks, etc. (e.g., [1][2]), in order to stop the attack before it completes, the model needs to recognize the attack with low latency. Hence the interest in FPGA accelerators. We propose to develop such an accelerator and the infrastructure for maintaining it.

The proposed research will be conducted over three phases toward three related goals. Phase 1: model determination and testing. Phase 2: FPGA accelerator prototype and testing. Phase 3: build a hardware generator or reconfiguration generator.

In phase 1, we will survey the state of the art in machine-learning-based network security detection engines in order to select one or two for prototyping. We will review (1) publicly available engines and benchmarks, such as those in [1], and (2) models otherwise reported in the literature (we will also reach out to the authors of key papers such as [2]). We will also develop a specification for the hardware engine, covering not just high-level requirements but intermediate ones such as the PC interface (what is being monitored on the PC and how), the required packet throughput rate, etc. At the end of phase 1, we will have a representative machine learning model with demonstrated utility in detecting crypto attacks.

In phase 2, the objective will be to map the multi-classifier detector to an FPGA and demonstrate faster-than-real-time detection. We will use a high-end FPGA integrated with a PC. Network traffic will be fed to the FPGA for packet-based and flow-based analysis. Key host CPU parameters will be communicated to the FPGA for host behavior analysis.

An appropriate ensemble network will be implemented on an FPGA. FPGAs are very good at implementing fast, deep Boolean logic, and are thus well suited to implementing fast decision trees. The DSP engines and arithmetic blocks are very good at performing fast calculations without using LUTs. In a modern high-end FPGA, a single combinational logic stage has a delay of around 0.3 ns, while routing the signal adds about 1 ns of delay. Thus every branch of the decision tree is expected to take on the order of 10 ns or less for its decision. The deepest set of branches in any one decision tree is likely to be less than 50 deep (actually much less, but we will assume this worst case for this calculation). A 50-deep tree will thus take around 500 ns (0.5 µs) of latency to come to a decision. The tree has to keep up with the packet throughput at the input, so some pipelining will be needed to achieve this balance. Even with careful, balanced assignment of logic to pipeline stages, the total latency will probably increase by around 20%, i.e., the latency is expected to be less than 0.6 µs. The different trees in the ensemble will be implemented in parallel. With multiple trees being evaluated in parallel, memory traffic can become a bottleneck unless it is coordinated between the trees, e.g., as in [3]. This requires not just coordination but regularization of the tree architectures.
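A quick back-of-envelope check of these numbers (the per-branch budget, tree depth, and pipelining overhead are the assumptions stated above):

```python
# Latency estimate for one decision tree, using the figures from the text.
LOGIC_NS = 0.3        # one combinational stage
ROUTING_NS = 1.0      # interconnect delay per stage
PER_BRANCH_NS = 10.0  # rounded budget per tree branch (several stages)
DEPTH = 50            # assumed worst-case tree depth

raw_latency_ns = DEPTH * PER_BRANCH_NS   # 500 ns = 0.5 us
pipelined_ns = raw_latency_ns * 1.2      # ~20% pipelining overhead
print(f"{raw_latency_ns:.0f} ns raw, {pipelined_ns:.0f} ns pipelined")
# -> 500 ns raw, 600 ns pipelined (i.e., < 0.6 us)
```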

Reference [2] provides a good summary of the state of the art in ransomware detection, datasets, etc. The authors suggest using the CTU dataset as well as real malware. They summarize the main behavioral features of attacks, and show how to combine host information, packet information, and packet-flow information to achieve very good false-positive and false-negative rates in detection. They compare different ML training schemes, including Random Forest, SVM, Bayes Net, and Random Tree. The Random Tree approach performed very well, though for the packet-flow-based classifier the Bayes Net implementation performed slightly better. They do not provide a link to their model, but we will contact the authors to see if they are willing to share it; if they are not, we will consider re-creating it from the data provided in the paper.



It is expected that the ML-based classifiers that are best at this problem will evolve over time as attack vectors change. Thus the FPGA hardware must periodically be reprogrammed to keep the defenses up to date. For this reason, in year 2 we propose to build an ensemble decision-tree generator or otherwise support reconfigurability. For the generator, a template-based approach will most likely be used, though we will investigate the potential for behavioral synthesis using member company tools. In a template-based approach, fragments of highly parameterized RTL are written, along with code that programs and assembles these fragments into a complete RTL implementation of a specific tree; the idea is to regenerate the hardware from scratch when the model is modified. However, we will consider an alternative approach: incorporating reconfigurability so that the model can be modified in situ. (A toy sketch of the template idea is given below.)
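As a toy illustration of the template idea (not the planned generator; the tree encoding, signal names, and widths are invented for the example), a few lines of Python can assemble a Verilog expression for one decision tree:

```python
# Emit a nested-ternary Verilog assign for a (hypothetical) decision tree.
# Each internal node compares one feature against a threshold; leaves give
# the attack/no-attack verdict. Real trees would be pipelined in RTL.
TREE = {"feat": 3, "thresh": 42,
        "lo": {"leaf": 0},
        "hi": {"feat": 7, "thresh": 9, "lo": {"leaf": 1}, "hi": {"leaf": 0}}}

def emit(node):
    if "leaf" in node:
        return f"1'b{node['leaf']}"
    return (f"(features[{node['feat']}] < 16'd{node['thresh']} ? "
            f"{emit(node['lo'])} : {emit(node['hi'])})")

print(f"assign attack_flag = {emit(TREE)};")
```

Regenerating the RTL is then just re-running the generator on a newly trained tree, which is the maintainability property the project targets.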

[1] https://github.com/topics/ransomware-detection
[2] A. O. Almashhadani, M. Kaiiali, S. Sezer, P. O’Kane, "A Multi-Classifier Network-Based Crypto Ransomware Detection System: A Case Study of Locky Ransomware," IEEE Access, vol. 7, pp. 47053-47067, 2019.
[3] M. Kang, S. K. Gonugondla, S. Lim, N. R. Shanbhag, "A 19.4-nJ/Decision, 364-K Decisions/s, In-Memory Random Forest Multi-Class Inference Accelerator," IEEE Journal of Solid-State Circuits, vol. 53, no. 7, pp. 2126-2135, 2018.

Progress to Date (if applicable): We have conducted a literature review. We have experience using Random Forests in other projects.

Work Plan (year 2020 only):
Phase 1 (6 months): Model determination, building, and testing.
Phase 2 (9 months): FPGA accelerator prototype and testing.
Phase 3 (9 months): Design, implement, and test a hardware generator, or insert reconfiguration support.

Related work elsewhere and how this project differs: No work was found in IEEE Xplore under the keyword combinations "ransomware" and "FPGA" or "ASIC". However, there has been work on accelerating random forests using FPGAs and ASICs. For example, the authors of [4] detail how FPGAs can be used to implement random forest classifiers efficiently, outperforming GPUs and multi-core CPUs in terms of classification throughput, cost, and power. This will be a source of considerable general guidance for the project, including the use of embedded DSPs, etc. They also indicate that large high-end FPGAs will be needed; since 2012, high-end FPGAs have grown considerably in capability, making them even more suitable for these tasks. Reference [3] above describes an ASIC implementation of a Random Forest chip and contains information that is useful to our implementation.

[4] B. Van Essen, C. Macaraeg, M. Gokhale, R. Prenger, "Accelerating a Random Forest Classifier: Multi-Core, GP-GPU, or FPGA?," 2012 IEEE 20th International Symposium on Field-Programmable Custom Computing Machines, Toronto, ON, 2012, pp. 232-239.

Proposed deliverables for the current year:
Month 6: Presentation on the state of the art in ML-based models for network security, with a focus on ransomware detection.
Month 12: Progress report on the project.

Projected deliverables for Year 2 (if applicable):
Month 15: Demonstration of FPGA acceleration of a ransomware attack detector.
Month 24: Presentation and demonstration of the hardware generator.

Budget Request and Justification:

                                               Year 1     Year 2
Graduate Student (includes benefits, tuition)  $51,000    $52,000
Travel                                         $5,000     $5,000
Overhead                                       $4,000     $4,000
Total                                          $60,000    $61,000

Member company will provide CPU and high-end FPGA.

Start Date: 1/1/20 Proposed project completion date: 12/31/2021


I/UCRC Executive Summary - Project Synopsis
Date: 7/31/2019
Center: Center for Advanced Electronics through Machine Learning (CAEML)

Title: Physical Design Parameter Optimization (PDPO) using Reinforcement Learning

Tracking No.: P19-10

Project Leader: Sung Kyu Lim, Georgia Tech
Co-investigator(s): none

Phone(s): 404-894-0373
E-mail(s): [email protected]
Type (New, Continuing1, or Sequel2): New
Thrust(s): T2

Industry Need and Project’s Potential Benefit to Member Companies: Tool parameter optimization is a time-consuming process in physical design, especially for large-scale designs done at advanced nodes. Inevitably, this process relies heavily on designer expertise. Industry will benefit from better final PPA quality as well as savings in design turnaround time and in human and computing resources.

Project Description: Given a synthesized netlist and PPA goals, we formulate the reinforcement learning (RL)-based Physical Design (PD) Parameter Optimization (PDPO) problem as follows: (1) objective: tune PD tool parameters so that the final GDS layouts meet the PPA goals; (2) state: each PD parameter is tuned to a specific setting; (3) action: change the settings of a subset of parameters; (4) reward: positive if the gap between desired and achieved PPA is reduced.

PD consists of several steps, including floorplanning, power routing, placement, clock routing, signal routing, and timing closure in between. The outcome of a step depends heavily on what was done previously and affects subsequent steps accordingly. Thus, it is not wise, or even feasible, to optimize all of the parameters at once. Instead, we will take a step-by-step approach, where we perform RL to optimize floorplanning parameters first, then power routing based on the floorplanning, and so on.

Our RL-based solution is based on two neural networks: a policy network and a value network. The former takes the current state of the optimization process and decides which action to take next. The latter takes the new PD parameters given by the policy network and maps them to PPA without performing any actual PD runs. We will pre-train the value network using a layout database and supervised learning (we will leverage our 3A1 project for this purpose). The policy network will be trained by backpropagation during optimization. Our first choice is a DNN for both networks; we will study alternatives, including CNNs, RNNs, etc., in which case our definitions of the RL state, action, and reward function will change accordingly.

Our RL is orchestrated by a Markov Decision Process (MDP) and the Policy Gradient (PG) algorithm: in the beginning, we start at a random state (= random settings for all PD parameters). Next, our policy network samples an action stochastically (= probabilistically selects and tunes a subset of PD parameters). The PPA evaluation is done using our value network. The gap between the model-predicted and desired PPA becomes the gradient: if the gap decreases, we reward the policy network by performing backpropagation and increasing the related weights; if it worsens, we penalize the related weights. The optimization process terminates when the gap becomes zero. We repeat the entire RL process multiple times (= play multiple games, in RL terminology) so that the quality of our networks and their optimization capability mature. (An illustrative policy-gradient loop is sketched below.)

We will investigate various mechanisms to tackle well-known shortcomings of RL. First, sampling inefficiency (the actions taken during the early phase of RL are useless or even detrimental) will be addressed by intelligent sampling. Next, we will tackle the credit assignment problem (it is not clear which set of actions taken in the past was more useful) with an RNN that tracks the history. We will address the alignment problem (greedy actions that maximize reward but never achieve the overall goal of PPA closure) using hill-climbing actions. Lastly, we will compare RL with other popular approaches, including Simulated Annealing, Genetic Algorithms, DNNs, GANs, etc.
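The sketch below illustrates the policy-gradient machinery in TensorFlow; the parameter and setting counts, network sizes, and the use of a pretrained value network as the reward surrogate are illustrative assumptions, and for simplicity it re-samples every parameter each step rather than a subset:

```python
# REINFORCE-style loop for the PDPO formulation above (a sketch, not the
# project's implementation; no baseline subtraction, for brevity).
import tensorflow as tf
from tensorflow.keras import layers

N_PARAMS, N_SETTINGS = 20, 8   # hypothetical PD parameters and settings

# Policy network: state -> logits over the next setting of each parameter.
policy = tf.keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(N_PARAMS,)),
    layers.Dense(N_PARAMS * N_SETTINGS),
])

# Value network: parameter settings -> predicted PPA gap (assumed to be
# pretrained on a layout database, as described above).
value = tf.keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(N_PARAMS,)),
    layers.Dense(1, activation="softplus"),
])

opt = tf.keras.optimizers.Adam(1e-3)
state = tf.random.uniform((1, N_PARAMS), maxval=N_SETTINGS)

for step in range(200):
    with tf.GradientTape() as tape:
        logits = tf.reshape(policy(state), (N_PARAMS, N_SETTINGS))
        action = tf.random.categorical(logits, 1)[:, 0]    # sampled settings
        new_state = tf.cast(action, tf.float32)[tf.newaxis, :]
        # Reward: negative predicted PPA gap (no actual PD run needed).
        reward = tf.stop_gradient(-value(new_state)[0, 0])
        logp = tf.reduce_sum(tf.nn.log_softmax(logits) *
                             tf.one_hot(action, N_SETTINGS))
        loss = -reward * logp                              # policy gradient
    grads = tape.gradient(loss, policy.trainable_variables)
    opt.apply_gradients(zip(grads, policy.trainable_variables))
    state = new_state
```

Here the reward is the negative predicted PPA gap, matching the formulation above in which a shrinking gap reinforces the sampled actions.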

Progress to Date (if applicable): Our project 3A1 is solving physical design problems, but its focus is on PPA prediction, not PD parameter optimization. The proposed project and 3A1 are thus complementary.

Work Plan (year 2020 only): We will focus on the backend of PD, e.g. routing. In 2021, we will tackle the front-end (floorplanning and placement).

Related work elsewhere and how this project differs: RL-based PD optimization is in its infancy; no well-accepted method exists today.

Proposed deliverables for the current year: RL setup and reward-function design; survey of deep RL methods; results of applying deep RL to the chosen problem on open-source designs. Focus on placement prediction.


Projected deliverables for Year 2 (if applicable): RL applied to other physical design problems such as routing, timing closure, etc.

Budget Request and Justification: $65K (one graduate student and travel support)

Start Date: 1/1/2020 Proposed project completion date: 12/31/2021


I/UCRC Executive Summary - Project Synopsis
Date: 8/15/2019
Center: Center for Advanced Electronics through Machine Learning (CAEML)
Title: Mitigating the Curse of Dimensionality in Electronic Systems Modeling via Physics-Aware Universal Approximation by Dynamic Neural Nets
Tracking No.: P19-12
Project Leader: Maxim Raginsky (UIUC)

Co-investigator(s):

Phone(s): (217) 244-1782
E-mail(s): [email protected]
Type (New, Continuing1, or Sequel2): Sequel (building on 1A1)
Thrust(s): T1, T2, T3

Industry Need and Project’s Potential Benefit to Member Companies: As the EDA industry is starting to embrace state-of-the-art machine learning methods, particularly deep learning (DL), one should be aware that the prevailing trend in mainstream applications of DL (such as computer vision or speech/language modeling) is to train a complex network architecture in an end-to-end fashion, without paying much attention to domain-relevant specifics. This trend is worrisome: the energy cost of training such state-of-the-art models is increasing, and the training data requirements are rising exponentially with model complexity [1, 2]. The goal of this project, which builds on the theoretical findings of 1A1, is to tackle this challenge and to develop a domain-specific toolbox of ML models and training algorithms that can mitigate the curse of dimensionality in the context of electronic systems modeling.
[1] E. Strubell, A. Ganesh, A. McCallum, "Energy and policy considerations for deep learning in NLP," Proc. ACL, 2019.
[2] R. Schwartz, J. Dodge, N. A. Smith, O. Etzioni, "Green AI," preprint, 2019.

Project Description: The project, intended as a sequel to 1A1 (Modular Machine Learning for Behavioral Modeling of Microelectronic Circuits and Systems), focuses on developing a systematic, physics-aware approach to trading off model complexity, data requirements, and predictive accuracy when using neural nets to learn behavioral models of highly complex electronic systems. This is particularly important since the high flexibility and approximation power of deep neural nets and recurrent neural nets come with a heavy price tag in terms of computational resources and training data requirements. For instance, it was recently reported that training a state-of-the-art neural net architecture for natural language processing emits an order of magnitude more CO2 than an average American does in one year [1]. These are sobering numbers, especially if one takes into account the curse of dimensionality: the number of neurons needed to approximate a continuous function on a compact set grows exponentially with the dimension of the function's domain [2]. Mainstream ML researchers and practitioners in application domains such as computer vision or natural language processing have largely ignored this issue due to the abundance of computational resources and training data. This is not the case in EDA (for instance, it is time-consuming to collect input-output measurements on a device under test), so a more principled, domain-aware approach is needed.

A broad class of analog electronic circuits can be modeled by nonlinear state-space dynamical models [3]. Thus, an n-port device can be represented in the form dx/dt = f(x,u), y = Cx, where u and y are the n pairs of conjugate port variables (currents and potentials), x is the multidimensional internal state, f is a smooth nonlinearity, and C is a state-to-output matrix. The dimension of the internal state and the form of the nonlinearity depend on the topology of the circuit and on the elements present in the circuit. We seek a class of models that satisfy the following two requirements:

1. They should serve as universal approximators for nonlinear dynamical systems of the above type, i.e., any nonlinear circuit admitting a smooth state-space model, under suitable stability conditions, can be approximated to a desired accuracy by some model from our class.

2. They should be learnable from observed waveforms by gradient descent.

In our earlier work, we have chosen recurrent neural nets (or RNNs) as our model class: they have the universal approximation property [4,5] and can be trained using gradient descent with backpropagation [6]. Moreover, because RNNs are formed by composing analytic nonlinearities and linear time-invariant maps, they are particularly amenable to analysis and simulation in SPICE/Verilog-A.
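For concreteness, here is a minimal sketch of such a model (our own toy construction: a continuous-time RNN with randomly initialized weights and a forward-Euler rollout, standing in for a trained behavioral model):

```python
# Continuous-time RNN surrogate of dx/dt = f(x, u), y = Cx, with f
# approximated by a one-hidden-layer net: dx/dt = A @ tanh(W x + B u + b).
# Shapes, weights, and the Euler integrator are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_state, n_port = 6, 2
A = rng.normal(scale=0.1, size=(n_state, n_state))
W = rng.normal(scale=0.1, size=(n_state, n_state))
B = rng.normal(scale=0.1, size=(n_state, n_port))
b = np.zeros(n_state)
C = rng.normal(scale=0.1, size=(n_port, n_state))

def simulate(u_seq, dt=1e-3):
    """Forward-Euler rollout; returns the port outputs y(t) = C x(t)."""
    x = np.zeros(n_state)
    ys = []
    for u in u_seq:
        x = x + dt * (A @ np.tanh(W @ x + B @ u + b))
        ys.append(C @ x)
    return np.array(ys)

y = simulate(np.ones((100, n_port)))  # response to a step at both ports
```

Training would backpropagate the mismatch between simulated and measured waveforms through the unrolled loop to update (A, W, B, b, C), which is the gradient-descent learnability required above.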

However, despite these desirable properties, the curse of dimensionality still manifests itself. The key step in the proof of universal approximation for RNNs is to approximate the right-hand side f(x,u) by a feedforward neural net with one hidden layer, and it is known that, in order to approximate a continuous function of d real variables on a compact subset up to accuracy ε, we need at least (1/ε)^d neurons. In our case, the dimension d is equal to the total number of ports and internal state variables. On the other hand, the approximation complexity can be reduced to (1/ε)^2 (which is dimension-free) if the function f belongs to the so-called Barron class, which contains analytic functions and is closed under a wide variety of operations [7]. The Barron class is defined in terms of a certain integrability condition on the multidimensional Fourier transform of f. Thus, one of the major goals of this project is to investigate the following question: given a nonlinear dynamic multiport in which the constitutive relations of every component can be modeled by Barron functions, does the overall state-space model belong to the Barron class? We conjecture that this is true: since any dynamic nonlinear multiport can be realized as an interconnection of linear capacitors (or inductors) and nonlinear resistors [3], we only need to verify that commonly used models of nonlinear resistors belong to the Barron class. Most such models used in practice, such as the Ebers-Moll transistor model, are given by analytic functions (e.g., compositions and sums of exponential and polynomial functions). This theoretical result will have an important practical implication: any circuit model amenable to SPICE/Verilog-A simulation can be efficiently approximated by an RNN. (The Barron condition and the resulting dimension-free rate are stated after the references below.)

We will then build on these results to develop efficient methods for learning generative models of dynamic nonlinear multiports in the presence of stochastic variability. State-of-the-art generative models, such as Generative Adversarial Nets (GANs), are used to learn and sample from probability distributions supported on differentiable submanifolds (smooth surfaces) of a high-dimensional space. Recently, it was shown that one can generate these geometric objects using deep neural nets [8]. Now, a natural description of a nonlinear dynamic multiport is also geometric: it is given by a high-dimensional object determined by the constitutive relations of all the components and by the constraints imposed by the Kirchhoff current and voltage laws [9]. Under the so-called transversality condition, this object is a smooth submanifold [9], and therefore one can learn it and sample from it using neural nets. The practical usefulness of this comes from the fact that such a model takes into account physics-based constraints, such as KCL and KVL, and its intrinsically geometric nature does not require one to partition the ports into inputs and outputs. One can also naturally formulate circuit dynamics as a state-space model on this manifold, which is the natural setting for nonlinear dynamical systems [10]. Once again, if one only assumes continuity and differentiability, the approximation complexity will be exponential in the dimension; we plan to build on our results on the compositional properties of Barron functions to show that we can avoid, or at least mitigate, the curse of dimensionality. As with 1A1, we will emphasize synergy and integration with other ongoing CAEML projects.

[1] E. Strubell, A. Ganesh, A. McCallum, "Energy and policy considerations for deep learning in NLP," Proc. ACL, 2019.
[2] D. Yarotsky, "Optimal approximation of continuous functions by very deep ReLU networks," Proc. COLT, 2018.
[3] L. O. Chua, "Device modeling via basic nonlinear circuit elements," IEEE Trans. on Circuits and Systems, vol. CAS-27, no. 11, pp. 1014-1044, November 1980.
[4] E. D. Sontag, "Neural nets as systems models and controllers," Proc. Seventh Yale Workshop on Adaptive and Learning Systems, Yale University, pp. 73-79, 1992.
[5] J. Hanson and M. Raginsky, "Approximate simulation of incrementally stable state-space systems by recurrent neural nets," working paper, 2019.
[6] R. J. Williams and D. Zipser, "Gradient-based learning algorithms for recurrent networks and their computational complexity," in Backpropagation, pp. 433-486, L. Erlbaum Associates Inc., 1995.
[7] A. R. Barron, "Universal approximation bounds for superpositions of a sigmoidal function," IEEE Transactions on Information Theory, vol. 39, no. 3, pp. 930-945, May 1993.
[8] V. Khrulkov and I. Oseledets, "Universality theorems for generative models," preprint, 2019.
[9] T. Matsumoto, L. O. Chua, H. Kawakami, H. Ichiraku, "Geometric properties of dynamic nonlinear networks: Transversality, local-solvability, and eventual passivity," IEEE Trans. on Circuits and Systems, vol. CAS-28, no. 5, pp. 406-428, May 1981.
[10] H. Nijmeijer and A. J. van der Schaft, Nonlinear Dynamical Control Systems, Springer, 1990.
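For reference, Barron's condition and approximation bound from [7] can be stated as follows (our paraphrase in display form; f_n ranges over one-hidden-layer sigmoidal networks with n neurons, and μ is any probability measure on the ball B_r = {‖x‖ ≤ r}):

```latex
% Barron's integrability condition and dimension-free approximation rate [7]
C_f = \int_{\mathbb{R}^d} \|\omega\| \, \bigl|\hat{f}(\omega)\bigr| \, d\omega < \infty
\quad \Longrightarrow \quad
\inf_{f_n} \int_{B_r} \bigl( f(x) - f_n(x) \bigr)^2 \, \mu(dx) \;\le\; \frac{(2 r C_f)^2}{n}
```

Thus accuracy ε in L²(μ) requires only n = O(1/ε²) neurons, independent of the dimension d, whereas a generic continuous function requires on the order of (1/ε)^d.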

Work Plan (year 2020 only): Identify the class of circuit elements that can be modeled using Barron functions. Analyze compositional properties of dynamic nonlinear circuits composed of such elements, establish quantitative approximation theorems, and analyze the sample complexity of learning from input-output data.

Related work elsewhere and how this project differs: To the best of our knowledge, there is no existing theoretical work on physics-aware, efficient universal approximation of behavioral models of electronic circuits.

Proposed deliverables for the current year: Report or manuscript on efficient universal approximation of dynamic nonlinear circuits. Procedures for ensuring end-to-end accuracy of learned RNN models. Project report and Python code.

Projected deliverables for Year 2 (if applicable): Develop theory and algorithms for efficient, physics-aware generative modeling of nonlinear dynamic multiports using differential-geometric methods. Carry out empirical evaluation based on lab measurements. Project report and Python code.

Budget Request and Justification: $56,920/year. 1 graduate research assistant. Travel to the CAEML meetings and 1 conference. Computing charges.

Start Date: 1/1/2020 Proposed project completion date: 12/31/2021


MEETING ATTENDEES

ANALOG DEVICES
Brian Swahn, Staff CAD R&D Engineer, [email protected]

GEORGIA INSTITUTE OF TECHNOLOGY
Anthony Agnesina, Graduate Student, Antic Grad Fall 2021, [email protected]
Golder, Graduate Student, Antic Grad August 2023, [email protected]
Sung Kyu Lim, Professor, Projects 2A4 & 3A1, [email protected]
-Chen Lu, Graduate Student, Antic Grad May 2021, [email protected]
Arijit Raychowdhury, Professor, Project 2A2, [email protected]
K. Roy, Post Doc, [email protected]
Madhavan Swaminathan, CAEML GaTech Site Director, Projects 3A5 & 3A7, [email protected]
H. Torun, Graduate Student, Antic Grad May 2021, [email protected]
Waqar Bhatti, Graduate Student, Antic Grad May 2023, [email protected]

HEWLETT PACKARD ENTERPRISE
Chris Cheng, Distinguished Technologist, [email protected]

IBM CORPORATION
Dale Becker, IBM Systems, [email protected]
Hejase, Senior Engineer, High Speed Bus Signal Integrity, [email protected]
Mozipo, Principal Engineer, [email protected]

LOCKHEED-MARTIN
Guy Chriqui, Sr Research Scientist, [email protected]

NORTH CAROLINA STATE UNIVERSITY
Furkan Aydin, Graduate Student, Antic Grad May 2023, [email protected]
Aydin Aysu, Asst. Professor, Project 3A6, [email protected]
Dror Baron, Assoc. Professor, Project 3A4, [email protected]
Rhett Davis, Professor, Projects 2A7 & 3A2, [email protected]
Francisco, Graduate Student, Antic Grad May 2021, [email protected]
Paul Franzon, CAEML NCSU Site Director, [email protected]
Turtletaub, Undergraduate Student, Antic Grad May 2022, [email protected]
Marvin, Principal Engineer, [email protected]

SANDIA NATIONAL LABORATORIES
Kurt Brenning, Member of Technical Staff, [email protected]
Chu, Senior Manager, Emerging Cyber Capabilities, [email protected]
Paskaleva, Staff Tech, [email protected]
Reza, Principal Member of Technical Staff, [email protected]

SYNOPSYS, INC
Siddhartha Nath, Sr. Staff R&D, [email protected]
Obilisetty, Group Dir, R&D, [email protected]

UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN
Artie Balakir, Undergraduate Student, Antic Grad May 2022, [email protected]
Andreas Cangellaris, Vice Chancellor for Academic Affairs and Provost, [email protected]
Chen, Teaching Asst. Professor, Project 3A5, [email protected]
J. Hanson, Graduate Student, Antic Grad May 2024, [email protected]
-Han Huang, Graduate Student, Antic Grad June 2024, [email protected]
Konduru, Masters, Antic Grad May 2020, [email protected]
Ma, Graduate Student, Antic Grad December 2019, [email protected]


Maxim Raginsky, Professor, Project 1A1, [email protected]
Elyse Rosenbaum, CAEML Director, Project 3A3, [email protected]
Jose Schutt-Aine, Professor, [email protected]
Shangguan, Graduate Student, Antic Grad June 2024, [email protected]
Wang, Graduate Student, Antic Grad May 2020, [email protected]
Xiong, Graduate Student, Antic Grad May 2021, [email protected]
Yang, Graduate Student, Antic Grad May 2020, [email protected]

Hoffman, Evaluator, [email protected]
Costello, IC Design Director, [email protected]
Peckham, Operations Manager, [email protected]

ANSYS
Fred German, Sr. R&D Manager, [email protected]

APPLE
Mayur Joshi, Senior Digital Circuit Designer, [email protected]
Qumsieh, Physical Design Methodology Engineer, [email protected]

ASE GROUP
Calvin Shiao, VP of Corporate R&D, [email protected]
Tseng, Dept Manager of Corporate R&D, [email protected]

ASUS
Hank Lin, Senior Manager, [email protected]
Bin-Chyi Tseng, Deputy Division Director, Bin-[email protected]


publish.illinois.edu/advancedelectronics