(alphabetically by author)

Norbert Attig
Jülich Supercomputing Centre (JSC), John von Neumann Institute for Computing (NIC),
Forschungszentrum Jülich GmbH, D-52425 Jülich, Germany

IBM Blue Gene for Scientific Computations


Driven by technology, Scientific Computing is rapidly entering the PetaFlops era. The Jülich Supercomputing Centre (JSC), one of three German national supercomputing centres, is focusing on the IBM Blue Gene architecture to provide computer resources of this class to its users. The Blue Gene systems at JSC are used as Leadership-class systems and host only a small number of projects, giving selected researchers the opportunity to gain new insights into complex problems that were previously out of reach.

JSC started testing a single Blue Gene/L rack with 2,048 processors as early as summer 2005 [1]. It soon became obvious that many more applications than initially expected could be ported to run efficiently on the Blue Gene architecture. Therefore, in January 2006 the system was expanded to 8 racks with 16,384 processors. The 8-rack system has now been in successful operation for almost two years. Today, about 30 research groups, carefully selected with respect to their scientific quality, run their applications on the system using job sizes between 1,024 and 16,384 processors. Following this operational success, the Forschungszentrum Jülich commissioned a powerful next-generation Blue Gene system. In October 2007, a 16-rack Blue Gene/P with 65,536 processors was installed [2]. With its peak performance of 222.8 TFlop/s, Jülich's Blue Gene/P – alias JUGENE – is currently the biggest supercomputer in Europe and ranked No. 2 worldwide. A key feature of this architecture is its scalability towards PetaFlops computing based on low power consumption, small footprint and an outstanding price-performance ratio.
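The quoted peak performance follows from simple arithmetic. The clock rate and flops-per-cycle figures below are published Blue Gene/P hardware data (PowerPC 450 cores at 850 MHz with a dual FPU), not numbers taken from the abstract itself; a minimal sanity check:

```python
# Peak-performance check for the 16-rack Blue Gene/P (JUGENE).
# Clock rate and flops/cycle are published Blue Gene/P hardware figures.
cores = 16 * 4096          # 16 racks x 4,096 cores per rack = 65,536
clock_hz = 850e6           # PowerPC 450 at 850 MHz
flops_per_cycle = 4        # dual FPU with fused multiply-add

peak_tflops = cores * clock_hz * flops_per_cycle / 1e12
print(f"{peak_tflops:.1f} TFlop/s")   # 222.8 TFlop/s
```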

Because the Blue Gene systems are well balanced in terms of processor speed, memory latency, and network performance, applications scale well up to large numbers of processors. Moreover, surprisingly many applications from different research fields can be ported to and run efficiently on this new architecture, whose forerunner version was mainly designed to perform lattice quantum chromodynamics (LQCD) computations. Blue Gene applications at JSC cover a broad spectrum ranging from LQCD to MD codes like CPMD and VASP, materials science, protein folding codes, fluid flow research, quantum computing and many others.

The performance and scaling behaviour of the applications are being continuously improved in close collaboration between the user support team at JSC and the corresponding computational scientists. For example, JSC organizes Blue Gene Scaling Workshops, where experts from the Blue Gene Consortium, comprising Argonne National Laboratory, IBM and Jülich, help to further optimise important applications. Computational scientists from many research areas take this opportunity to improve their codes and then later apply for significant shares of Blue Gene computer time to tackle challenging phenomena that come within reach with this architecture.

[1] Attig, N., Wolkersdorfer, K.: IBM Blue Gene/L in Jülich: A First Step to Petascale Computing; inSiDE 3(2), 2005, pp. 18-19
[2] Stephan, M., Wolkersdorfer, K.: IBM Blue Gene/P in Jülich: The Next Step towards Petascale Computing; inSiDE 5(2), 2007, pp. 46-47


Jaap A. Kaandorp
Section Computational Science, Faculty of Science, University of Amsterdam, The Netherlands

Computational modelling of corals: from genes to colony


Within the metazoans, sponges and cnidarians represent the phyla with the simplest body plans and relatively simple regulatory networks controlling development. This makes these organisms an excellent case study for understanding morphogenesis and the physical translation of genetic information into a growth form, using a combination of biomechanical models of growth and form and a model of the spatial and temporal expression of developmental genes. In this talk we will give an overview of ongoing work at the Section Computational Science on modelling and simulation of growth and form in scleractinian corals. Modelling coral growth and form requires combining models at very different spatio-temporal scales: gene regulation and the cellular level, calcification, physiology (respiration and photosynthesis), growth of the coral skeleton, and the impact of the physical environment (water movement and availability of light). We will briefly discuss:

  • Modelling spatio-temporal gene expression patterns and cell movement in early development of the cnidarians Nematostella vectensis and Acropora millepora and methods for inferring gene networks from gene expression data based on mathematical models of early development in Drosophila melanogaster,
  • Some preliminary results on modelling genetic regulation of calcification in Acropora millepora and the analysis of micro-Computer Tomography scans of corallites of Madracis sp,
  • Modelling of calcification (calcium and carbonate physiology, photosynthesis),
  • (if time permits) Modelling accretive growth of Madracis sp. colonies and the impact of the physical environment (advection-diffusion and light), methods for the morphological analysis and comparison of three-dimensional images of coral colonies obtained with Computer Tomography scanning and simulated morphologies, and methods for the genetic comparison of different coral colonies.
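The environmental component mentioned above couples growth to nutrient transport by water movement. As a purely illustrative sketch (not the authors' coral-growth model), a one-dimensional advection-diffusion equation for a nutrient concentration can be advanced with an explicit upwind/central finite-difference step; all parameters here are arbitrary values chosen only to satisfy the scheme's stability conditions:

```python
import numpy as np

# Illustrative 1-D advection-diffusion step for a nutrient field c(x):
#   dc/dt = -u dc/dx + D d2c/dx2
# Explicit scheme: upwind advection (u > 0), central diffusion,
# periodic boundaries via np.roll. A sketch only, not the authors' model.
def step(c, u=0.5, D=0.1, dx=1.0, dt=0.5):
    adv = -u * (c - np.roll(c, 1)) / dx
    dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    return c + dt * (adv + dif)

c = np.zeros(100)
c[50] = 1.0                 # point release of nutrient
for _ in range(50):
    c = step(c)
# The pulse drifts downstream and spreads, while total mass is conserved
# on the periodic domain.
```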


  1. Y. Fomekong Nanfack, J.A. Kaandorp and J.G. Blom: Efficient parameter estimation for spatio-temporal models of pattern formation: the case study of early embryogenesis of Drosophila melanogaster, Bioinformatics, 2007 (in press).
  2. J. Cui, J.A. Kaandorp and D. Allemand: Mathematical modeling of coral calcification in Stylophora pistillata, under submission.
  3. J.A. Kaandorp, E.A. Koopman, P.M.A. Sloot, R.P.M. Bak, M.J.A. Vermeij and L.E.H. Lampmann: Simulation and analysis of flow patterns around the scleractinian coral Madracis mirabilis (Duchassaing and Michelotti), Phil. Trans. R. Soc. Lond. B 358(1437): 1551-1557, 2003.
  4. J.A. Kaandorp, P.M.A. Sloot, R.M.H. Merks, R.P.M. Bak, M.J.A. Vermeij and C. Maier: Morphogenesis of the branching reef coral Madracis mirabilis, Proc. Roy. Soc. B 272: 127-133, 2005.
  5. K. Kruszynski, J.A. Kaandorp and R. van Liere: A computational method for quantifying morphological variation in scleractinian corals, Coral Reefs, 2007 (in press).


Andrzej Olszewski
Institute of Nuclear Physics PAN, Kraków

High-Energy Physics Experiments on Grid Computing Networks


The computational needs of High-Energy Physics experiments grow steadily as research into rare physical processes moves to ever higher collision energies of elementary particles. Higher collision energies increase the complexity of the particle detector systems and of the methods for reconstructing physics events, while studies of rare processes increase the demand for storing enormous amounts of data. Such high computational resource requirements can currently be met only by Grid computing networks managed by international coordinating organizations such as EGEE, or its variant WLCG for the experiments at the LHC accelerator. In this talk I will present how HEP experiments use the Grid, with particular emphasis on those experiments that have performed, or plan to perform, computations at the ACK Cyfronet computing centre in Kraków.


Irena Roterman
Dept. of Bioinformatics and Telemedicine, Collegium Medicum – Jagiellonian University, Krakow

Large Scale Computing in Pharmacology


Introduction: Proteins are the molecules responsible for chemical (biological) processes in living organisms. Their ability to perform their function is determined by their highly specific three-dimensional structure, which enables the relevant chemical reactions as well as adaptation to different external conditions. Many pathological processes have their source in protein malfunction, and a correct process must be understood before a damaged one can be repaired. The problem is that the mechanism responsible for generating the three-dimensional structure remains unknown despite long-standing efforts in this field. A new model, oriented toward simulating the protein folding process rather than predicting the final structure, has been developed and applied to standard proteins. Verifying the model requires large-scale computing to test as many different protein structures as possible.

Description of the problem solution: The model assumes that protein folding is a multi-step process. Two steps are taken into consideration: an early stage responsible for backbone conformation, and a late stage of folding driven mostly by hydrophobic interactions. The model also allows recognition of biological function, particularly for small proteins.

Applied algorithms: Large-scale calculations for the database of 107 proteins were performed on a grid platform (the EuChinaGrid project). Never Born Proteins were the object of calculation. Two methods, one stochastic and one heuristic, were applied so that their results could be compared. The stochastic method faced some difficulties due to the lack of a reference database, while the heuristic method produced structures with well-defined biological function. NMR experiments (synthesis and structural analysis) will verify the reliability of the calculations.

Results: The proteins under consideration do not exist in nature. This is why some sequences (about 15%) generated structures not applicable for computations. Some proteins appeared to demonstrate biological activity, using the irregularity of the hydrophobic distribution as the criterion.
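One way such an irregularity criterion might be quantified is to compare the observed per-residue hydrophobicity profile against an idealized, centred profile and measure the divergence between the two. The sketch below assumes a Gaussian idealized profile and a simple divergence measure; both are illustrative choices, not necessarily the authors' exact criterion:

```python
import math

# Sketch: score the irregularity of a hydrophobicity distribution by
# comparing an observed per-residue profile against an idealized one
# (here: a discretized Gaussian centred on the chain). The divergence
# measure is illustrative, not claimed to be the authors' criterion.
def gaussian_profile(n, sigma=2.0):
    centre = (n - 1) / 2
    t = [math.exp(-((i - centre) / sigma) ** 2) for i in range(n)]
    s = sum(t)
    return [x / s for x in t]          # normalised "theoretical" profile

def divergence(observed, theoretical, eps=1e-12):
    o_sum = sum(observed)
    o = [x / o_sum for x in observed]  # normalise observed profile
    return sum(oi * math.log((oi + eps) / (ti + eps))
               for oi, ti in zip(o, theoretical))

theo = gaussian_profile(9)
regular = theo                 # perfectly ordered hydrophobic core
irregular = [1 / 9] * 9        # flat distribution, no core
print(divergence(regular, theo))    # ~0: ordered
print(divergence(irregular, theo))  # clearly positive: irregular
```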

Conclusions and future work: The challenge for current pharmacology is to design individual therapy for each patient, taking genetic polymorphism into account. Only computer calculations will be fast enough to simulate and construct drugs that correct protein function appropriately for the individual case. The lack of a general model still makes computer simulation uncertain; practical pharmacology awaits tools for protein structure generation that will make successful treatment possible.

Keywords: individual therapy, pharmacology, protein structure

[1] Jurkowski et al. (2004) Proteins: Structure, Function and Bioinformatics 55, 115-127
[2] Konieczny et al. (2006) In Silico Biology 6, 15-22
[3] Bryliński et al. (2006) Bioinformation 1, 127-129


Paweł Russek, Ernest Jamro, Maciej Wielgosz, Marcin Janiszewski,
Marcin Pietroń, Kazimierz Wiatr

Department of Electronics, AGH UST, Kraków

Computation Acceleration on SGI RASC: FPGA Based Reconfigurable Computing Hardware


The authors will present a novel method of computation using FPGA (Field Programmable Gate Array) technology. In several cases this method provides a computation speed-up with respect to general-purpose processors. The main concept of the approach is to design dedicated hardware that fits the dataflow of the accelerated algorithm and best exploits well-known computing techniques such as pipelining and parallelism. Configurable hardware is used as the implementation platform for the custom-designed hardware.
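The benefit of pipelining can be illustrated with a toy throughput model: a k-stage pipeline delivers one result per cycle once filled, so n items take k + n - 1 cycles instead of k * n. The numbers below are illustrative, not measurements from the SGI RASC platform:

```python
# Toy model of why a pipelined datapath beats sequential execution.
# A k-stage pipeline finishes one result per cycle once filled.
def sequential_cycles(n, stages):
    return n * stages            # each item passes all stages in turn

def pipelined_cycles(n, stages):
    return stages + n - 1        # fill latency, then 1 result/cycle

n, k = 10_000, 8
speedup = sequential_cycles(n, k) / pipelined_cycles(n, k)
print(f"speedup ~ {speedup:.2f}x")   # approaches k for large n
```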

The paper will present implementation results for algorithms from areas such as cryptography, data analysis and scientific computation. Other promising application areas of the new technology, for instance bioinformatics, will also be mentioned. The algorithms were designed, tested and implemented on the SGI RASC platform; the RASC module is part of Cyfronet's SGI Altix 4700 SMP system. We will also present RASC's modern architecture: in principle it consists of FPGA chips and very fast 128-bit-wide local memory. The design tools available to developers, both Hardware Description Languages and High-Level Languages, will also be presented.


ACK Cyfronet AGH
ul. Nawojki 11
30-950 Kraków
tel: 012 634 1766
fax: 012 633 8054