Keynote speakers

Keynote #1

Dr. Yervant Zorian is a Chief Architect and Fellow at Synopsys, as well as President of Synopsys Armenia. Formerly, he was Vice President and Chief Scientist of Virage Logic, Chief Technologist at LogicVision, and a Distinguished Member of Technical Staff at AT&T Bell Laboratories. He is currently the President of the IEEE Test Technology Technical Council (TTTC), the founder and chair of the IEEE 1500 Standardization Working Group, the Editor-in-Chief Emeritus of IEEE Design & Test of Computers, and an Adjunct Professor at the University of British Columbia. He served on the Board of Governors of the IEEE Computer Society and CEDA, was the Vice President of the IEEE Computer Society, and was the General Chair of the 50th Design Automation Conference (DAC) and of several other symposia and workshops.

Dr. Zorian holds 35 US patents, has authored four books, published over 350 refereed papers and received numerous best paper awards. A Fellow of the IEEE since 1999, Dr. Zorian was the 2005 recipient of the prestigious Industrial Pioneer Award for his contribution to BIST, and the 2006 recipient of the IEEE Hans Karlsson Award for diplomacy. He received the IEEE Distinguished Services Award for leading the TTTC, the IEEE Meritorious Award for outstanding contributions to EDA, and in 2014, the Republic of Armenia’s National Medal of Science.

He received an MS degree in Computer Engineering from the University of Southern California, a PhD in Electrical Engineering from McGill University, and an MBA from the Wharton School of the University of Pennsylvania.

Title: Challenges and Opportunities of Silicon Lifecycle Management for Chiplets & 3DICs

Abstract: With increasing system complexity and stringent runtime requirements for AI accelerators, high-performance computing, and autonomous vehicles, the reliable, safe, and secure operation of electronic systems remains a major challenge, particularly with the increased use of third-party chiplets and multi-die systems. This keynote will focus on optimizing silicon health by using advanced solutions throughout the silicon lifecycle stages, from chiplet design to bring-up, volume production, mid-stack, 3D packaging, and in-field operation. The advanced silicon lifecycle management (SLM) solutions to be discussed start with embedding a range of monitoring engines at different levels of the design, continue with access mechanisms and solutions for on-chip and cross-chip networks, and extend to data analytics on the edge and in the cloud for fleet optimization.


Keynote #2

Ana Cavalcanti is a Professor at the University of York, UK, and holds a Royal Academy of Engineering Chair in Emerging Technologies. In that role, she is Director of the RoboStar centre on Software Engineering for Robotics. She previously held a Royal Society Industry Fellowship, which provided her with the ideal opportunity to understand and contribute to the practice of formal methods, working with QinetiQ. Her main scientific achievements have been in the design and justification of sound refinement-based program development and verification techniques. She has covered both theory and its practical integration with industry-strength technology (concurrency, object-orientation, and testing), and now deals with mobile and autonomous robots. She has led the development and justification of refinement theories, notations, techniques, and tools to cope with control systems. Her work provides support for graphical notations popular with engineers and for mainstream programming languages. It is distinctive in its comprehensive coverage of practical languages, rather than idealised notations, and in supporting high degrees of automation to enable usability and scalability. She has chaired the Programme Committees of leading conferences and has been a member of numerous Programme Committees. Currently, she is the Chair of the Formal Methods Europe Board.

Title: Systematic testing of a drone for emergency relief

Abstract: Recent surveys suggest that, within the field of robotics, there is a prevailing tendency to employ a manual ad hoc testing approach, heavily reliant on the expertise of developers. However, this method proves to be costly and comes with various drawbacks, including the inability to assess the fault-detection capabilities of the test set, potential errors in test specification and execution, and the possibility of expert disagreement on test outcomes. In this presentation, we share our experience with the adoption of the innovative RoboStar systematic testing approach for a firefighting UAV, developed using the widely adopted ROS middleware. The RoboStar framework advocates a model-based approach to control software development for robotics, offering domain-specific, tool-independent notations for modeling and simulation, along with techniques for the automatic generation of artifacts. Our focus in this talk centers on the RoboStar techniques for automated test generation. Through our approach, we effectively reduce testing expenses and can provide guarantees regarding the absence of faults within specified classes.


Keynote #3

Philippe Notton is the CEO and Founder of SiPearl, the French company designing the European high-performance, low-power microprocessor. His original vision of SiPearl came in 2015 while he was leading a division of 2400 engineers at STMicroelectronics. In 2017, he joined Atos to set up the European Processor Initiative (EPI) consortium, which aimed to foster the return of high-performance microprocessor design to Europe. In June 2019, he launched SiPearl as a spin-off of the EPI with the support of the European Union. He assembled a team of experts and managers from Atos, STMicroelectronics, Marvell, Intel, Nokia and MediaTek; the company now employs more than 170 engineers in France (Maisons-Laffitte, Grenoble, Massy, Sophia Antipolis), Germany (Duisburg) and Spain (Barcelona). SiPearl’s first-generation microprocessor, named Rhea1, will equip JUPITER, the first European exascale supercomputer.
As a senior executive, Philippe Notton has built an outstanding track record in the multimedia, semiconductor and security fields. Passionate about high technology and fast-moving environments, he has worked in France, the UK and the US for market-leading groups (Thomson, Canal+, LSI Logic, STMicroelectronics, Atos), as well as for a successful startup (MStar Semiconductor, sold to MediaTek in 2012 for US$4B).
Philippe Notton is a Supélec engineer (1993) and has an Executive MBA from ESSEC & Mannheim (2008).

Title: The European high-performance low-power microprocessor, ultimate solution for LLM

Abstract: The Large Language Model (LLM) is a new form of generative AI designed to understand, process, and generate human-like language. Nowadays, its integration into enterprise operations is crucial for developing business and improving efficiency.

Until now, most LLMs have been run on GPUs or dedicated accelerators. But their cost, combined with their low availability on the market and their level of energy consumption, is prompting us to turn to other solutions. In this context, the European high-performance, low-power microprocessor with built-in High Bandwidth Memory (HBM) will be the ultimate solution for LLM workloads.

The LLM workflow can be divided into three steps: (1) sanitizing data and extracting features, (2) building and training foundation models, and (3) fine-tuning and using models. Step (1) collects, identifies, and extracts relevant features from the raw data. Step (2) provides generic foundation models, a task done by the ecosystem of AI start-ups and hyperscalers. The last step, (3), integrates these models into enterprises’ daily business. It requires specializing the model for specific tasks (fine-tuning) and querying it efficiently to produce actionable outputs (inference).
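The three workflow steps above can be sketched end to end with a deliberately tiny stand-in model. Everything here (the linear "model", the toy data, the learning rate) is invented purely for illustration and bears no relation to actual LLM tooling:

```python
# Toy sketch of the three-step workflow: feature extraction,
# "foundation" model building, then fine-tuning and inference.

def extract_features(raw_records):
    """Step 1: sanitize raw data and extract numeric features."""
    return [float(r.strip()) for r in raw_records if r.strip()]

def train_foundation_model(features):
    """Step 2: build a generic model -- here just a mean-based baseline."""
    bias = sum(features) / len(features)
    return {"weight": 1.0, "bias": bias}

def fine_tune(model, task_pairs, lr=0.05, epochs=1000):
    """Step 3a: specialize the model on task-specific (x, y) pairs."""
    w, b = model["weight"], model["bias"]
    for _ in range(epochs):
        for x, y in task_pairs:
            err = (w * x + b) - y   # prediction error on this sample
            w -= lr * err * x       # gradient step on the weight
            b -= lr * err           # gradient step on the bias
    return {"weight": w, "bias": b}

def infer(model, x):
    """Step 3b: query the specialized model for an actionable output."""
    return model["weight"] * x + model["bias"]

raw = [" 1.0 ", "2.0", "", "3.0"]
feats = extract_features(raw)                        # step 1
base = train_foundation_model(feats)                 # step 2
tuned = fine_tune(base, [(1.0, 3.0), (2.0, 5.0)])    # step 3: learn y = 2x + 1
print(round(infer(tuned, 4.0), 2))                   # close to 9.0
```

The point of the sketch is the division of labor: steps 1 and 2 produce a generic artifact once, while step 3 (fine-tuning plus inference) is the part repeated in enterprises' daily business.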

While step (1) is already done on CPUs, the other steps are still performed on GPUs. This talk describes why and how these tasks can be carried out more advantageously on the European microprocessor with built-in HBM. The talk covers inference, fine-tuning, and training. Among other things, it demonstrates the resilience of the European microprocessor, which adapts to model changes more flexibly than the solutions currently in use.

Keynote #4

Albert Cohen is a research scientist at Google DeepMind. He leads a team at the forefront of the acceleration and energy-efficiency of machine learning models. An alumnus of École Normale Supérieure de Lyon and the University of Versailles (Paris Saclay), he first joined INRIA as a research scientist, then also held a part-time (teaching) associate professor position at École Polytechnique. He has been a visiting scholar at the University of Illinois, an invited professor at Philips Research and then NXP as a recipient of a Marie Curie technology transfer fellowship, and a visiting researcher at Facebook Artificial Intelligence Research. Albert’s work spans the theory and practice of programming languages, parallelism, and high-performance and power-efficient computing, as well as safety-critical embedded control, resulting in 250 peer-reviewed publications together with 28 PhD students and international collaborators. Some of this work led to technology transfer, including contributions to the industry-standard GCC and LLVM compilers. Since joining Google, Albert has played an essential role in the design and adoption of the MLIR platform for scalable and efficient machine learning.

Title: Compilers for Performance Engineers: Oxymoron or Revolution?

Abstract: Closing the gap with the peak performance of modern computing systems involves a combination of skills, from high-performance computing to computer architecture, with a dose of statistics and significant reverse-engineering hackery. This is nothing new and unlikely to change: when it comes to performance, power, and efficiency, it is the fundamental nature of computation, communication, and storage, set by the laws of physics, that breaks the composable abstractions of high-level programming languages. Interestingly, many of the reverse-engineering tricks involve compilers, though unfortunately not in a good way: they stem from frustrations with the software stack, and with compilers in particular. Performance engineers often ditch higher-level abstractions because of the chaotic behavior of compiler optimizations. As a result, providing optimizing compilers with better control or feedback has long been an important research area. We will survey some of the challenges, partial successes, and research on controlling optimizations and code generation, with a focus on compilers for machine learning acceleration. We will highlight ongoing work on constraint-guided methods and scheduling languages showing the most promising impact on the effectiveness and productivity of performance engineers.


Keynote #5

Coral Calero is Professor at the University of Castilla-La Mancha in Spain and has a PhD in Computer Science. She is a member of the Alarcos Research Group, where she is responsible for the “Green and Sustainable Software” research line, in which two main lines of work are developed. The first addresses issues such as measuring the impact that software and information systems have on the environment and how to improve their energy efficiency, as well as human and economic aspects related to software sustainability. The second supports all the group’s dissemination activities to raise awareness of the impact that software has on the environment. Since its creation in 2023, she has been one of the 12 members of the Spanish Research Ethics Committee.

Title: Always look on the green side of software

Abstract: That software moves the world is a clear fact, and it is becoming more and more important. Three aspects have led to an increase in the intensity with which software is used: the Internet and social networks, data, and artificial intelligence. However, not everything is positive in the support that software provides to our daily lives. Estimates suggest that ICT will be responsible for 20% of global energy consumption by 2030, part of which will be due to software. And precisely these three aspects require large amounts of energy.

In this keynote we will review different concepts related to software sustainability and show some results of the software consumption measurements we have carried out: on the one hand, case studies conducted to raise awareness in society at large of the impact that software has on the environment; on the other, examples related to the consumption of data and artificial intelligence, aimed at creating a set of best practices for software professionals.

Our ultimate goal is to make you aware of the consumption problem associated with software and to ensure that, if at first we were concerned with the “what” and later with the “how”, now it is time to focus on the “with what”.


Keynote #6

Danilo Pau is Technical Director, IEEE AAIA & ST Fellow, and APSIPA Life Member at STMicroelectronics. Danilo (h-index 28, i10-index 74) graduated from Politecnico di Milano. He has worked on memory-reduced HDMAC hardware design, MPEG2 video memory reduction, video coding, transcoding, embedded (Khronos) 2D/3D graphics, and computer vision (ISO/IEC MPEG CDVS and CDVA, with Leonardo Chiariglione). Currently, his work focuses on the ST Unified AI Core Technology, a software environment for deploying machine learning workloads on sensors and microcontrollers. He supervises many students and enjoys publishing papers.

Title: nW-Range Tiny Reconfigurable and Programmable Inference for Pascal-Accurate Pressure Sensor Re-Calibration

Abstract: MEMS pressure sensors are widely used in application fields such as industrial, consumer, medical, and automotive. High-volume, cost-effective manufacturing procedures severely constrain these sensors. Exposure to increases in temperature, dust, humidity, and many other kinds of stress ultimately leads the sensor to drift in its pressure measurements. Thermal stress is a particularly important cause of sensor drift. Exposure to high temperatures, for example due to soldering, can cause deviations in the sensor’s response lasting hours (short term) or even several days (long term). This talk will address two state-of-the-art, low-cost MEMS pressure sensors under several relevant stress studies representing real-world application scenarios. For example, one case study is dedicated to the drift caused by the soldering process during manufacturing, with up to five reflow cycles. Another induced drift by exposing the sensor to a thermal stress of 150 °C for 1000 hours. An aging test left the sensor at an ambient temperature of 25 °C. The pressure measurements resulting from these cases were acquired from several manufactured devices and compared to the pressure measured by a golden-reference barometer. In each case study, the data were grouped to achieve more accurate compensation. Drift compensation was studied through several artificial neural networks, among which an ultra-tiny temporal convolutional neural network was selected to predict the pressure error for compensation. The model was implemented in software on both a low-power microcontroller and programmable sensors, as well as in reconfigurable hardware on a manufactured test chip. Quantitative accuracy and implementation performance will be discussed. This work also paves the way for future on-device learning technologies.
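The compensation idea described in the abstract can be illustrated with a minimal sketch. This is not the ST model: it is a single causal 1D convolution, the building block of a temporal convolutional network (TCN), predicting pressure drift from the recent temperature history so that the prediction can be subtracted from the raw reading. All weights and data below are invented for illustration:

```python
# Illustrative sketch of TCN-style drift compensation (invented weights/data).

def causal_conv1d(series, kernel):
    """Causal 1D convolution: output[t] depends only on inputs up to t."""
    k = len(kernel)
    padded = [0.0] * (k - 1) + list(series)  # left-pad with zeros
    return [sum(kernel[j] * padded[t + j] for j in range(k))
            for t in range(len(series))]

def predict_drift(temperature, kernel, scale, ambient=25.0):
    """Toy drift model: drift grows with recent temperature above ambient."""
    excess = [max(0.0, t - ambient) for t in temperature]  # ReLU-style input
    return [scale * h for h in causal_conv1d(excess, kernel)]

# True pressure (hPa), a thermal-stress profile, and the drift it induces.
true_pressure = [1013.25] * 6
temperature   = [25.0, 25.0, 80.0, 150.0, 150.0, 150.0]
drift         = [0.0, 0.0, 0.5, 1.5, 2.0, 2.0]
raw_readings  = [p + d for p, d in zip(true_pressure, drift)]

# Hand-set weights standing in for a trained ultra-tiny network.
kernel, scale = [0.2, 0.3, 0.5], 0.015
predicted = predict_drift(temperature, kernel, scale)

# Re-calibration: subtract the predicted drift from each raw reading.
compensated = [r - e for r, e in zip(raw_readings, predicted)]
```

Against the "golden reference" (`true_pressure`), the compensated readings have a smaller total absolute error than the raw ones, which is the essence of the re-calibration scheme; in the actual work the kernel weights are learned, not hand-set.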