Monday, 2018-12-17

 

 


ICPE 2017 Tutorials

April 22, 2017

Track 1

  • Application Performance Management: State of the Art and Challenges for the Future
    Christoph Heger (NovaTec Consulting GmbH), André van Hoorn (University of Stuttgart), Mario Mann (NovaTec Consulting GmbH), Dušan Okanović (University of Stuttgart)

     

    9:00 -> 10:30 break 11:00 -> 12:30

     

    Application performance strongly influences business success: in case of performance problems, a business may lose customers and revenue. Continuous monitoring and analysis of an application's performance parameters is therefore required in production. Data is collected on all system levels (hardware, operating system, application), as well as from users and the business. It is then combined for analysis, and the results serve different goals, e.g., improving performance, finding performance problems and their causes, capacity planning, or auto-scaling. In this tutorial, we will present the state of the art in the field of application performance management (APM) in industrial practice and academic research. We will show how to collect data and perform analyses using available commercial and open-source APM tools. Our goal is to introduce APM practices to both researchers and industry professionals and to encourage them to work on APM interoperability and to exchange APM data, experiences, and approaches. The talk will also cover future directions in APM, including its use in new environments such as mobile and DevOps.
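    To make the idea of collecting and combining data across system levels concrete, here is a minimal, hypothetical sketch (all class and metric names are ours, not from the tutorial or any APM tool) of a collector that tags each sample with the level it came from and merges samples of one metric for joint analysis:

```python
import time
from collections import defaultdict

# Hypothetical in-memory APM collector: each sample is tagged with the
# system level it came from (hardware, OS, application, user).
class MetricCollector:
    def __init__(self):
        # (level, metric) -> list of (timestamp, value)
        self.samples = defaultdict(list)

    def record(self, level, metric, value, ts=None):
        self.samples[(level, metric)].append((ts or time.time(), value))

    def combine(self, metric):
        """Merge samples for one metric across all levels for joint analysis."""
        merged = []
        for (level, name), points in self.samples.items():
            if name == metric:
                merged.extend((ts, level, v) for ts, v in points)
        return sorted(merged)

collector = MetricCollector()
collector.record("os", "cpu_util", 0.42, ts=1.0)
collector.record("application", "cpu_util", 0.30, ts=1.5)
combined = collector.combine("cpu_util")
print(combined)  # samples from both levels, ordered by timestamp
```

    A real APM tool would persist such samples and attach richer context (host, service, trace ID), but the combine-then-analyse pattern is the same.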



  • SPEC Cloud IaaS 2016 Benchmark
    Salman A. Baset and Marcio Silva (IBM Research), Nicholas Wakou (Dell)

     

    14:00 -> 15:30 break 16:00 -> 17:30

     

    In May 2016, the Standard Performance Evaluation Corporation (SPEC) released SPEC Cloud IaaS 2016, the first industry-standard benchmark that measures the performance of infrastructure-as-a-service (IaaS) clouds. The benchmark measures the scalability, elasticity, mean instance provisioning time, and provisioning and run-time success of both public and private clouds.

    The tutorial will:

    • give an overview of the benchmark, its metrics, and the underlying harness;
    • present a demo of how to run the benchmark;
    • show the underlying harness, CloudBench, in action.

 

Track 2

  • Software Performance Analytics in the Cloud
    Kingsum Chow and Wanyi Zhu (Alibaba Infrastructure Services)

     

    9:00 -> 10:30 break 11:00 -> 12:30 lunch 14:00 -> 15:30 break 16:00 -> 17:30

     

    The emergence of large-scale software deployments in the cloud has led to two key challenges: (1) measuring software performance in the data center, and (2) optimizing software for resource management. This tutorial addresses both challenges by bringing knowledge of software performance monitoring in the data center to the application of performance analytics. It introduces data transformations for software performance metrics that enable the effective application of analytics.
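    As an illustration of such transformations (our own example; the metric names and numbers are hypothetical, not from the tutorial), cumulative counters can be turned into per-interval rates and then min-max normalised so that analytics such as clustering treat all metrics on a common scale:

```python
# Two common transformations for performance metrics: cumulative
# counters become per-interval rates, and each metric is min-max
# normalised into [0, 1] before analytics are applied.

def counter_to_rate(samples):
    """samples: [(timestamp, cumulative_value)] -> [(timestamp, rate)]."""
    rates = []
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        rates.append((t1, (v1 - v0) / (t1 - t0)))
    return rates

def min_max_normalise(values):
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0] * len(values)
    return [(v - lo) / (hi - lo) for v in values]

# A cumulative request counter sampled every 10 seconds ...
counter = [(0, 0), (10, 500), (20, 1500), (30, 1800)]
rates = counter_to_rate(counter)                 # requests per second
norm = min_max_normalise([r for _, r in rates])  # comparable scale
print(rates)  # [(10, 50.0), (20, 100.0), (30, 30.0)]
print(norm)   # [~0.286, 1.0, 0.0]
```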

    The tutorial starts with software performance in the small and ends with applying analytics to software performance in the large. For performance in the small, it summarizes performance tools, data collection, and manual analysis. It then describes monitoring tools that help with performance analysis in the large.

    The tutorial will guide the audience in applying analytics to performance data obtained by common tools, with two hands-on exercises to engage the audience. It also describes how to select analytical methods and what precautions to take to obtain effective results.

 

April 23, 2017

 

Track 1

  • An Introduction to Systems and Control Theory for Computer Scientists and Engineers
    Alberto Leva (Politecnico di Milano)

     

    9:00 -> 10:30 break 11:00 -> 12:30 lunch 14:00 -> 15:30 break 16:00 -> 17:30

     

    The tutorial introduces the basics of systems and control theory so as to foster their utilisation in the management and design of computing systems. Besides presenting mathematical results and tools, the focus is on systems and control theory as a forma mentis rather than as a source of algorithms.

    The tutorial is divided into three parts. The first introduces the two fundamental concepts of dynamic systems and feedback, and provides an overview of the properties a control system must possess, together with the main techniques to prescribe and assess those properties formally. The second part discusses a couple of application examples, revisiting the addressed problems from scratch with a system-centric viewpoint and comparing the solutions - and, most importantly, the way the system is viewed and designed - with state-of-the-art alternatives. This highlights the potential of control-based computing systems design, but also identifies open problems, from both the technological and the methodological standpoints. A short overview of these aspects is the subject of the third part of the tutorial, which ends with a discussion.
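    To give a flavour of the two fundamental concepts, the following self-contained sketch (our illustration; the plant model and all numbers are made up, not from the tutorial) closes a feedback loop around a toy "plant" whose response time depends on the allocated capacity, using a discrete-time integral controller:

```python
# A minimal discrete-time feedback loop: an integral (I) controller
# steers a simulated plant (response time = load / capacity) to a
# setpoint by adjusting the allocated capacity each step.

def simulate(load=100.0, setpoint=2.0, gain=5.0, steps=50):
    capacity = 10.0                      # initial resource allocation
    history = []
    for _ in range(steps):
        response_time = load / capacity  # plant model (static, for clarity)
        error = response_time - setpoint # feedback: measured minus desired
        capacity += gain * error         # integral action accumulates error
        history.append(response_time)
    return history

history = simulate()
print(round(history[-1], 3))  # response time converges to the setpoint 2.0
```

    The point of the exercise is the forma mentis: the same loop structure applies whether the "plant" is a web server, a thermal model, or a scheduler, once its dynamics are written down.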

 

Track 2

  • Design and Evaluation of a Proactive, Application-Aware Auto-Scaler
    André Bauer, Nikolas Herbst, Samuel Kounev (University of Würzburg)

     

    9:00 -> 10:30 break 11:00 -> 12:30 lunch 14:00 -> 15:30 break 16:00 -> 17:30

     

    Simple, threshold-based auto-scaling mechanisms, as often used in practice, offer no means of coping with resource provisioning delays or with interdependencies between the layers of a software service. In this tutorial, we guide the audience step by step through the design and evaluation of a proactive, application-aware auto-scaling mechanism.

    First, we introduce the building blocks for such an auto-scaling mechanism: (i) an on-demand arrival rate forecasting method, (ii) resource demand estimation at run-time, (iii) a descriptive, continuously updated performance model of the deployed software, and (iv) an intelligent adaptation planner that incorporates a threshold-based mechanism as a fallback.
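    The interplay of the four building blocks can be sketched as follows (a deliberately simplified illustration of ours: each block is reduced to a trivial stand-in, and all function names and numbers are hypothetical, not the tutorial's methods):

```python
import math

def forecast_arrival_rate(history):
    """(i) forecast: stand-in using the mean of recent arrival rates."""
    return sum(history[-3:]) / min(len(history), 3)

def estimate_resource_demand(cpu_seconds, requests):
    """(ii) run-time demand estimate: CPU seconds per request."""
    return cpu_seconds / requests

def required_instances(arrival_rate, demand, capacity_per_instance=1.0):
    """(iii) performance model: instances needed for the predicted load."""
    return math.ceil(arrival_rate * demand / capacity_per_instance)

def plan(history, cpu_seconds, requests, current_instances, cpu_util):
    """(iv) planner: proactive decision with a threshold-based fallback."""
    target = required_instances(forecast_arrival_rate(history),
                                estimate_resource_demand(cpu_seconds, requests))
    if cpu_util > 0.9:                  # fallback: reactive scale-out
        target = max(target, current_instances + 1)
    return target

print(plan(history=[80, 100, 120], cpu_seconds=50.0, requests=1000,
           current_instances=4, cpu_util=0.95))  # -> 5 instances
```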

    In the second part of the tutorial, we cover the auto-scaler evaluation steps. The preparation steps are the definition of a scenario and workload problem and an automated scalability analysis. We then show how auto-scaler experiments can be conducted and analysed, using metrics that allow a detailed and fair comparison of alternative auto-scaling mechanisms and their respective configurations.
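    Metrics of this kind can be illustrated as follows (our example, in the spirit of under-/over-provisioning accuracy; the exact metric definitions used in the tutorial may differ):

```python
# demanded[t] is the resource units an ideal auto-scaler would allocate
# at step t; supplied[t] is what the auto-scaler under test allocated.
# Over- and under-provisioning are averaged relative deviations.

def provisioning_accuracy(demanded, supplied):
    over = sum(max(s - d, 0) / d for d, s in zip(demanded, supplied))
    under = sum(max(d - s, 0) / d for d, s in zip(demanded, supplied))
    n = len(demanded)
    return over / n, under / n   # lower is better for both

demanded = [2, 4, 6, 4]
supplied = [2, 3, 8, 4]
over, under = provisioning_accuracy(demanded, supplied)
print(over, under)  # one over-provisioned step, one under-provisioned step
```

    Separating the two directions matters because an auto-scaler can hide poor elasticity behind permanent over-provisioning; a fair comparison penalises both.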

    Tools available online for constructing the auto-scaler building blocks and for the evaluation are introduced with demonstrations and hands-on exercises.