
Why Existing Approaches to Parallel Programming Fail

Parts of the complexity of metacomputers (such as security and coarse-grain task assignment) are best handled by suitable software infrastructure at the middleware layer. To achieve reasonable efficiency, however, important tasks concerned with the heterogeneity and dynamic behaviour of networks and computing nodes must still be dealt with by the application itself. From a software engineering viewpoint, this additional complexity must not be fully exposed to the core application code, because doing so entangles application properties and algorithms with concerns of the heterogeneous computation substrate and of dynamically changing network behaviour. As a consequence, application software becomes highly complex and extremely hard to maintain, analyze, and optimize, not to mention reusing its algorithms or data structures. Unfortunately, most existing parallel applications are written in exactly this way. They work only because they either sacrifice portability or assume homogeneous computing nodes and highly efficient network connections, or both.

Experience with programming systems that aim to completely hide system complexity from application code shows that even in simple settings (e.g. homogeneous, dedicated parallel computers), programmers have to tune communication behaviour manually [11, 12]. Hence, the application has to know which kinds of behaviour are ``expensive''. These problems become even worse in metacomputers and have led, for example, to the introduction of application-level scheduling facilities [5].
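
To make this entanglement concrete, consider the following sketch (our illustration, not code from the cited systems; all names and the latency threshold are hypothetical). Without a suitable abstraction layer, knowledge about which communication is expensive leaks directly into the application's core logic:

  // Sketch of the entanglement problem: cost knowledge inside app code.
  #include <cstdio>
  #include <vector>

  struct Link { double latency_ms; };            // probed link property

  // Hypothetical runtime probe; stubbed here to simulate a WAN link.
  Link probe_link(int /*peer*/) { return {25.0}; }

  void send_batch(int peer, const std::vector<double>& v) {
      std::printf("peer %d: one aggregated message, %zu items\n",
                  peer, v.size());
  }
  void send_single(int peer, double /*item*/) {
      std::printf("peer %d: one small message\n", peer);
  }

  // The algorithmic intent is just "give this work to that peer", yet
  // the cost decision is tangled into the same function.
  void distribute(int peer, const std::vector<double>& work) {
      if (probe_link(peer).latency_ms > 10.0) {
          send_batch(peer, work);                      // WAN: aggregate
      } else {
          for (double w : work) send_single(peer, w);  // LAN: stream
      }
  }

  int main() { distribute(1, std::vector<double>(8, 1.0)); }

Every such decision point multiplies with every algorithm that communicates, which is precisely the maintenance problem described above.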

As a consequence, all existing approaches to parallel programming fall short when applied to metacomputing. This includes message-passing systems [17], automatic parallelization [11], distributed shared memory [3, 6], and distributed object computing [20]. The only way out of this dilemma seems to be the construction of applications that are resource-aware and at the same time resource-independent, making them both efficient and capable of adapting to changing system properties. In the following, we outline how this goal can be achieved by constructing toolkits of reusable components that provide suitable abstractions of all sources of complexity, ranging from low-level communication and synchronization, via design patterns like managers and workers, up to parallel algorithm schemata. In our vision, building parallel applications for metacomputers becomes a process of gluing together given components, thereby orthogonally integrating resource-awareness with resource-independent application code.
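
A minimal sketch of this separation follows, assuming a hypothetical scheduler interface of our own invention (not the actual toolkit API). The resource-aware placement decision lives behind a reusable component, while the application contributes only resource-independent task code:

  // Sketch of the component idea: placement policy as a reusable,
  // exchangeable component; application code stays resource-independent.
  #include <cstddef>
  #include <cstdio>
  #include <functional>

  // Resource-aware component interface: decides which node gets a task.
  struct Scheduler {
      virtual int pick_node(std::size_t task) = 0;
      virtual ~Scheduler() = default;
  };

  // One simple policy; a metacomputing-aware variant could instead
  // consult measured node speeds and link costs.
  struct RoundRobin : Scheduler {
      int nodes;
      explicit RoundRobin(int n) : nodes(n) {}
      int pick_node(std::size_t task) override {
          return static_cast<int>(task % static_cast<std::size_t>(nodes));
      }
  };

  // Manager/worker schema as glue: iterates tasks, delegates placement.
  void run_manager(Scheduler& sched, std::size_t ntasks,
                   const std::function<void(int, std::size_t)>& run_task) {
      for (std::size_t t = 0; t < ntasks; ++t)
          run_task(sched.pick_node(t), t);
  }

  int main() {
      RoundRobin policy(4);
      // Resource-independent application code: only says what to compute.
      run_manager(policy, 10, [](int node, std::size_t task) {
          std::printf("task %zu -> node %d\n", task, node);
      });
  }

Swapping in a different Scheduler adapts the application to changed system properties without touching its task code, which is what we mean by orthogonal integration.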


