Friday, 2021-08-13

 

 


Keynote Speakers

 

Micro-Benchmarking Considered Harmful: When the Whole is Faster or Slower Than the Sum of its Parts

Measuring the time spent in small individual fractions of program code is a common technique for analyzing performance behavior and detecting performance bottlenecks. The benefits of the approach include detailed attribution of performance to individual code regions and understandable feedback loops when experimenting with different code versions. There are, however, severe pitfalls in this approach that can lead to vastly misleading results. Modern optimizing compilers use complex optimization techniques that take a large part of the program into account. There can therefore be unexpected side effects when combining different code snippets, or even when running a presumably unrelated part of the code. This talk will present performance paradoxes with examples from the domain of dynamic compilation of Java programs. Furthermore, it will discuss an alternative approach to modeling code performance characteristics that takes the challenges of complex optimizing compilers into account.
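
To give a flavor of the kind of pitfall the abstract describes (this is an illustrative sketch, not an example taken from the talk; the class and method names are hypothetical), consider a naive Java timing loop whose measured work the JIT compiler can prove to be dead code and remove entirely, so the "fast" version measures an empty loop:

    // Illustrative only: a micro-benchmark distorted by dead-code elimination.
    public class DeadCodePitfall {

        static double compute(double x) {
            return Math.sqrt(x) * Math.log(x + 1.0);
        }

        public static void main(String[] args) {
            // Warm-up so the JIT compiles and optimizes the loop body.
            for (int i = 0; i < 100_000; i++) {
                compute(i);
            }

            // Misleading measurement: the result is discarded, so the
            // optimized code may contain no computation at all.
            long start = System.nanoTime();
            for (int i = 0; i < 10_000_000; i++) {
                compute(i); // unused result: candidate for elimination
            }
            System.out.printf("unused result:   %.1f ms%n",
                    (System.nanoTime() - start) / 1e6);

            // Safer measurement: consuming the result keeps the work alive.
            double sink = 0.0;
            start = System.nanoTime();
            for (int i = 0; i < 10_000_000; i++) {
                sink += compute(i);
            }
            System.out.printf("consumed result: %.1f ms (sink=%f)%n",
                    (System.nanoTime() - start) / 1e6, sink);
        }
    }

Frameworks such as JMH exist precisely to defend against this class of distortion (via blackholes and forked, warmed-up measurement), which is one reason hand-rolled timing loops like the one above are unreliable.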

Thomas Wuerthinger is a Senior Research Director at Oracle Labs, leading programming language implementation teams for languages including Java, JavaScript, Ruby, and R. He is the architect of the Graal compiler and the Truffle self-optimizing runtime system. Previously, he worked on the Crankshaft optimizing compiler of V8 at Google, and on the Maxine research virtual machine at Sun Microsystems. He received a PhD degree from JKU Linz for his research on dynamic code evolution.

 

Performance is Also a Matter of Where You Live

Nowadays, a plethora of techniques and methods is available for optimizing the runtime behavior of complex applications, ranging from modeling/prediction tools to the use of recognized patterns and/or knowledge bases about the expected performance under specific workloads. In common scenarios, however, an application's ultimate behavior may depend on features that are scarcely predictable, or difficult to take into account, when designing the application and its runtime optimizers. Among them are the actual structure of the underlying hardware and/or virtualized platforms, as well as specific runtime dynamics such as how threads correlate on data and synchronization: not so much the average behavior as punctual effects. We believe that the environments where applications live, such as operating systems and user-space runtime libraries, play a central role in coping with these features. We similarly believe that such environments must be re-staged in order to be truly effective in pursuing the performance-optimization goal. In this talk, we discuss specific guidelines for re-staging these environments, based on real experience, and we also point to challenges that are still untackled and deserve attention from the research community.
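
As one concrete illustration of a "punctual" effect of the underlying hardware structure and of thread correlation on data (a minimal sketch under our own assumptions, not an example from the talk), the Java program below contrasts two threads updating adjacent counters, which likely share a cache line, with two threads updating padded counters. The class names are hypothetical, actual field layout is JVM-dependent, and 64-byte cache lines are assumed:

    // Illustrative only: false sharing between logically independent counters.
    public class FalseSharingSketch {

        // Counters packed next to each other: likely on the same cache line.
        static final class Packed {
            volatile long a;
            volatile long b;
        }

        // Padding intended to push b onto a different cache line
        // (64-byte lines assumed; the JVM may reorder fields).
        static final class Padded {
            volatile long a;
            long p1, p2, p3, p4, p5, p6, p7; // padding
            volatile long b;
        }

        static long run(Runnable w1, Runnable w2) throws InterruptedException {
            Thread t1 = new Thread(w1);
            Thread t2 = new Thread(w2);
            long start = System.nanoTime();
            t1.start(); t2.start();
            t1.join(); t2.join();
            return (System.nanoTime() - start) / 1_000_000;
        }

        public static void main(String[] args) throws InterruptedException {
            final int N = 50_000_000;
            Packed packed = new Packed();
            Padded padded = new Padded();

            long tPacked = run(
                    () -> { for (int i = 0; i < N; i++) packed.a++; },
                    () -> { for (int i = 0; i < N; i++) packed.b++; });
            long tPadded = run(
                    () -> { for (int i = 0; i < N; i++) padded.a++; },
                    () -> { for (int i = 0; i < N; i++) padded.b++; });

            System.out.println("same cache line: " + tPacked + " ms");
            System.out.println("padded:          " + tPadded + " ms");
        }
    }

No explicit synchronization appears in either version; the slowdown in the packed case comes purely from cache-coherence traffic, which is exactly the kind of effect that application-level models tend to miss.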

Francesco Quaglia received the Laurea degree (MS level) in Electronic Engineering in 1995 and the PhD degree in Computer Engineering in 1999 from the University of Rome "La Sapienza". From summer 1999 to summer 2000 he held an appointment as a Researcher at the Italian National Research Council (CNR). Since January 2005 he has worked as an Associate Professor at the School of Engineering of the University of Rome "La Sapienza", where he previously worked as an Assistant Professor from September 2000 to December 2004. His main research interests are in the areas of high-performance computing, dependable computing, transactional systems, operating systems, automatic code parallelization, and performance analysis and optimization. Currently, he is the director of the HPDCS (High Performance and Dependable Computing Systems) Research Lab at the University of Rome "La Sapienza".

 

Autonomic Storage Management at Scale

Cloud data centers use enormous amounts of storage, and it is critical to monitor, manage, and optimize that storage autonomically. Optimally configuring storage is difficult because storage workloads are highly diverse and change over time. Data centers measure their running workloads, but this measurement data stream is itself quite large. We present some real-world case studies in the use of big-data techniques, sampling, and optimization to manage storage in data centers.
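
To make the sampling idea concrete (a minimal sketch under our own assumptions, not a description of the actual pipeline discussed in the talk), reservoir sampling keeps a fixed-size, uniformly random subset of an unbounded measurement stream, which bounds the data volume that downstream analysis must handle. The class and field names below are hypothetical:

    import java.util.Random;

    // Illustrative only: Algorithm R reservoir sampling over a telemetry stream.
    public class ReservoirSampler {
        private final long[] reservoir;   // sampled measurements (e.g., I/O latencies)
        private final Random rng = new Random();
        private long seen = 0;            // total measurements observed so far

        ReservoirSampler(int capacity) {
            this.reservoir = new long[capacity];
        }

        // Offer one measurement from the stream.
        void offer(long measurement) {
            seen++;
            if (seen <= reservoir.length) {
                reservoir[(int) (seen - 1)] = measurement;  // fill phase
            } else {
                // Replace a random slot with probability capacity/seen,
                // which keeps every element equally likely to be retained.
                long j = (long) (rng.nextDouble() * seen);
                if (j < reservoir.length) {
                    reservoir[(int) j] = measurement;
                }
            }
        }

        public static void main(String[] args) {
            ReservoirSampler sampler = new ReservoirSampler(1000);
            // Simulate a large telemetry stream of latency-like values.
            Random workload = new Random(42);
            for (int i = 0; i < 10_000_000; i++) {
                sampler.offer((long) (workload.nextGaussian() * 100 + 500));
            }
            System.out.println("observed " + sampler.seen
                    + " measurements, kept " + sampler.reservoir.length);
        }
    }

The appeal of this family of techniques is that memory use is fixed by the reservoir capacity regardless of stream length, while the retained subset remains statistically representative of the whole stream.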

Arif Merchant is a Research Scientist at Google and leads the Storage Analytics group, which studies interactions between components of the storage stack. His interests include distributed storage systems, storage management, and stochastic modeling. He holds a B.Tech. from IIT Bombay and a Ph.D. in Computer Science from Stanford University. He is an ACM Distinguished Scientist.