Tutorials
The Palladio Component Model
by Steffen Becker, Department Manager at the Forschungszentrum Informatik (FZI)
The Palladio Component Model (PCM) has been developed over the last five years. Today it is a mature language for modeling component-based or service-oriented software systems, with a special focus on predicting extra-functional properties of a system from its constituent components. For this, the PCM relies heavily on model-driven software development techniques and uses automated transformations into well-known prediction models or simulation systems. It is supported by a mature, industry-proven tool set based on the Eclipse platform.
The tutorial presents the PCM's foundations in component-based and service-oriented software development, its analysis capabilities, its tool support, and possible extension points.
In the component-based foundations, the tutorial defines the term component and presents components in the different phases of their life-cycle. The discussion is completed by showing the PCM's understanding of a typical component-based software development process and the developer roles involved in it. The way these roles collaborate strongly influences how components are modeled and parameterized in the PCM.
The following part of the tutorial focuses on performance predictions and the annotations they require. It introduces the stochastic expression language (StoEx), which is used in the PCM to specify generally distributed stochastic and/or parametric performance annotations, and shows how these annotations are interpreted by the PCM's analysis transformations. The last part of the tutorial introduces the PCM's tool set and shows how to use it to create and analyze PCM models.
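To make the idea of a parametric, stochastically specified annotation concrete, the following minimal Python sketch shows what such a specification expresses: a resource demand that depends on an input parameter plus a discretely distributed random part. The names and numbers are invented for illustration and this is not actual PCM/StoEx notation.

```python
# Illustrative sketch only (not actual StoEx syntax): a parametric,
# stochastically specified resource demand of a component service.
import random

random.seed(1)  # reproducible sampling

def sample_demand_ms(number_of_elements):
    """CPU demand = 0.5 ms per input element plus a random overhead
    of 1 ms (probability 0.2) or 2 ms (probability 0.8) -- the kind
    of probability mass function a StoEx annotation can express."""
    overhead = random.choices([1.0, 2.0], weights=[0.2, 0.8])[0]
    return 0.5 * number_of_elements + overhead

# A simulation-based analysis draws many such samples per service call:
print([sample_demand_ms(number_of_elements=100) for _ in range(5)])
```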
Steffen Becker has been Department Manager in the Software Engineering division at the Forschungszentrum Informatik (FZI) in Karlsruhe since January 2008. Before that, he received a PhD in computer science from the University of Oldenburg. From July 2003 he was a member of Palladio, a young investigators excellence program of the German Research Foundation (DFG). He received his combined diploma in business administration and computer science from the Technical University of Darmstadt in 2003.
He regularly gives presentations, tutorials, and panel contributions at conferences. He is known as one of the initiators of the Palladio Component Model, a meta-model for describing the performance aspects of component-based software. His further interests include model-driven software development, software architectures, and model-driven quality predictions.
Benchmarking Event Processing Systems: Current State and Future Directions
by Marcelo Mendes, Pedro Bizarro, Paulo Marques, University of Coimbra
Complex Event Processing (CEP) has attracted a lot of interest from academia and industry in recent years. It has been employed in a variety of domains (e.g., finance, healthcare, military) as a way of promptly detecting and reacting to the occurrence of events or situations of interest. As a relatively new area, however, CEP is still unknown or unfamiliar to many. The goal of this tutorial is therefore twofold: first, to give researchers and practitioners of the performance engineering community a broad view of CEP; second, to share our experiences over the last months within BiCEP, a research project at the University of Coimbra that aims at devising standard benchmarks for CEP. We present the general principles behind the definition of benchmarks, the specific challenges and novelties encountered when benchmarking CEP systems, and the current state of the BiCEP project and its future directions. We also provide hands-on instruction on the FINCoS framework, a set of tools we have developed for carrying out experimental performance evaluation of CEP engines.
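As a flavor of what even the simplest CEP measurement involves, the self-contained Python sketch below pushes synthetic events through a trivial filter "query" and reports throughput. It is purely illustrative and is not part of the FINCoS framework; all names and numbers are invented.

```python
# Purely illustrative micro-benchmark (not the FINCoS framework):
# measure the throughput of a trivial continuous query over a
# synthetic event stream.
import random
import time

random.seed(7)

# Synthetic stock-tick events: (symbol, price) pairs.
events = [("ACME", random.uniform(10.0, 100.0)) for _ in range(1_000_000)]

start = time.perf_counter()
# The "query": count ticks whose price exceeds a threshold.
matches = sum(1 for _, price in events if price > 90.0)
elapsed = time.perf_counter() - start

print(f"{len(events) / elapsed:,.0f} events/s, {matches} matching events")
```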
Marcelo Mendes is a PhD student at the University of Coimbra and a member of BiCEP, a research project aimed at analyzing, comparing, and improving the performance and scalability of event processing systems. Marcelo previously worked as a Software/Performance Engineer at the CIn/Itautec Performance Lab, where he was involved in several studies and activities concerning benchmarking, system sizing, and capacity planning. His main research areas are management of data, complex event processing, and performance engineering.
Regression Techniques for Performance Parameter Estimation
by Murray Woodside, Carleton University, Ottawa
This tutorial describes how to use nonlinear regression techniques to fit the parameters of any kind of performance model to performance data measured at the boundaries of the system. The advantage of this approach, which has never been standard practice in performance work, is that it avoids the need for intrusive monitoring of execution paths, such as profiling. An illustrative fitting example follows the topic list below.
The topics covered will include:
- The estimation problem
- Regression basics: normal equations, confidence intervals
- Non-linear regression using iteration
- Fitting a performance model into non-linear regression
- Significance of model details (pruning insignificant details)
- Examples
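As a small taste of the approach, this sketch fits the single parameter of a textbook open-queue response-time model, R(lam) = S / (1 - lam*S), to hypothetical boundary measurements (arrival rate and mean response time) by iterative nonlinear least squares, and derives a confidence interval from the estimated covariance. The model choice, data, and numbers are invented for illustration.

```python
# Minimal sketch: estimating the service demand S of an M/M/1-style
# response-time model R(lam) = S / (1 - lam*S) from measurements taken
# at the system boundary. Data values are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def response_time(lam, S):
    """Mean response time of an open single queue with demand S."""
    return S / (1.0 - lam * S)

# Hypothetical measurements: arrival rates (req/s), response times (s).
lam = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
R = np.array([0.0112, 0.0126, 0.0144, 0.0166, 0.0199, 0.0251])

# Iterative nonlinear least squares, started from an initial guess.
popt, pcov = curve_fit(response_time, lam, R, p0=[0.005])
S_hat = popt[0]
S_err = np.sqrt(np.diag(pcov))[0]  # standard error of the estimate

# 95% confidence interval under the usual normality assumption.
print(f"S = {S_hat:.4f} s  (95% CI: +/- {1.96 * S_err:.4f} s)")
```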
Murray Woodside does research into performance modeling of software, often based on the layered queueing model, which he originated. He and his students have elaborated this model to describe enterprise service systems, embedded distributed systems, systems with speculative operations, and parallel computing, often with industrial partners. He is a past Chairman of ACM SIGMETRICS, a Fellow of the IEEE, and an Associate Editor of Performance Evaluation, and since his retirement has held a position as Distinguished Research Professor at Carleton University.
Automatic Generation of Benchmark and Test Workloads
by Jozo J. Dujmović
Motivation:
It is realistic to expect that the majority of future benchmark and test workloads will be generated automatically by benchmark program generators. According to Moore's law, memory capacity and the performance/price ratio grow exponentially, doubling every 12-18 months. Industrial benchmarks should therefore follow the exponential growth of computer performance, and this cannot be achieved with natural workloads that are updated only once in several years. In addition, automatic workload generators are crucial for software testing and for performance analysis of modern language processors.
Tutorial goals:
The goal of this tutorial is to present methods and tools for the automatic generation and use of benchmark and test workloads. The tutorial includes live demos of the automatic generation of large programs (2 million LOC), benchmark program calibration, and the use of generated programs for the performance analysis of computers and language processors. The techniques presented are of interest to software engineers, developers, performance analysts, and researchers.
Tutorial Outline:
1. Static and dynamic characterization of computer workloads
2. Machine dependence and independence in workload characterization
3. White-box and black-box program difference metrics
4. The concept of program cloning
5. A recursive expansion (REX) method for program generation
6. A kernel insertion (KIN) method for program generation
7. Design and calibration of kernel libraries
8. Generators of random source programs
9. Generators of programs for computer performance measurement
10. Live demo of generating very large programs
11. Compiler performance measurement, modeling, and comparison
12. Analysis of code density, compilation and execution times
13. Generators of network workload
The emphasis of this presentation will be on sections 5-10; a toy sketch of grammar-based program generation is shown below.
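To make the flavor of such generators concrete, here is a toy Python sketch of program generation by recursive expansion. It is not the REX method itself; the grammar, names, and size knobs are invented, and a real generator would additionally calibrate the emitted code against reference workloads.

```python
# Toy sketch of recursive-expansion program generation (not the actual
# REX method): emit a synthetic C program whose size is controlled by
# an expansion depth and a function count.
import random

random.seed(42)  # reproducible output

def gen_expr(depth):
    """Recursively expand an arithmetic expression of the given depth."""
    if depth == 0:
        return random.choice(["a", "b", "c", str(random.randint(1, 9))])
    op = random.choice(["+", "-", "*"])
    return f"({gen_expr(depth - 1)} {op} {gen_expr(depth - 1)})"

def gen_function(name, depth):
    return (f"double {name}(void) {{\n"
            f"  double a = 1, b = 2, c = 3;\n"
            f"  return {gen_expr(depth)};\n"
            f"}}\n")

def gen_program(n_functions, depth):
    funcs = [gen_function(f"f{i}", depth) for i in range(n_functions)]
    calls = " + ".join(f"f{i}()" for i in range(n_functions))
    return "\n".join(funcs) + f"\nint main(void) {{ return (int)({calls}); }}\n"

# Each extra depth level roughly doubles the expression size, so very
# large programs are reachable with modest parameter settings.
print(gen_program(n_functions=3, depth=4))
```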
Jozo J. Dujmović was born in Dubrovnik, Croatia, and received the Dipl.-Ing. degree in electronic and telecommunication engineering in 1964, and the M.Sc. and Sc.D. degrees in computer engineering in 1973 and 1976 respectively, all from the University of Belgrade, Serbia.
Since 1994 he has been Professor of Computer Science at San Francisco State University, where he served as Chair of the Computer Science Department from 1998 to 2002. His teaching and research activities are in the areas of soft computing, software metrics, and computer performance evaluation. In 1973 he introduced the soft computing concepts of andness and orness and logic aggregators based on a continuous transition from conjunction to disjunction. He used these concepts to develop the Logic Scoring of Preference (LSP) method for the evaluation, selection, and optimization of complex systems. He is the author of more than 130 refereed publications, including 13 books and book chapters. Before his current position at San Francisco State University, he was Professor of Computer Science at the University of Belgrade, the University of Florida (Gainesville), the University of Texas (Dallas), and Worcester Polytechnic Institute. In addition, he taught in the graduate Computer Science programs at the National Universities of San Luis and Jujuy (both in Argentina). At the University of Belgrade, where he taught from 1968 to 1992, he also served as Chairman of the Computer Science Department and as founding Director of the Belgrade University Computing Center. His industrial experience includes work at the Institute "M. Pupin" in Belgrade and consulting in the areas of decision methods, performance evaluation, and software design.
Prof. Dujmović is the recipient of three best paper awards and a Senior Member of the IEEE. He is an editor of Informatica, and served as General Chair of the Eighth IEEE International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS 2000) and as General Chair of the Fourth ACM International Workshop on Software and Performance (WOSP 2004).