NCF, The Hague, The Netherlands
The Netherlands Science Grid
In the past five to ten years, the ICT infrastructure for scientific research in the Netherlands has developed from a fast network with individual large (computational) resources into a grid infrastructure, with coupled compute and storage resources, supported by a fast hybrid network. The main acting parties in the field are SURFnet (responsible for the networking infrastructure), NCF (responsible for large HPC and storage facilities), VL-e (responsible for generic e-science software) and SARA (housing and operational support).
Following early grid projects in 2004 and 2006, a large-scale grid effort was launched in 2007 under the name BiG Grid. The BiG Grid project aims to realise a major grid for science and research in the Netherlands. The project is targeted at a broad range of scientific research disciplines which can take advantage of the ICT resources available in a science grid infrastructure.
The BiG Grid project is funded for the period 2007-2012 with a total budget of M€ 28.8. It will cover a full range of hardware, but a significant share of the budget is also reserved for manpower, partly to attract and bring a wide range of new scientific disciplines to the science grid. The project has been initiated by the Netherlands National Computing Facilities Foundation (NCF), the National institute for subatomic physics (Nikhef) and the Netherlands Bio-Informatics Centre (NBIC).
This presentation will cover the e-infrastructure strategy in the Netherlands in general, and the role of the grid therein in particular.
ICM, Warsaw University &
Faculty of Mathematics and Computer Science, N. Copernicus University, Torun, Poland
Grid Computing at ICM
The Interdisciplinary Centre for Mathematical and Computational Modelling (ICM) at Warsaw University, founded in 1993 by the Senate of Warsaw University, is a research centre in computational sciences with a multidisciplinary profile. The research of the centre focuses on the mathematical, natural and computational sciences, as well as on networking and information technology.
ICM has longstanding experience in Grid technology. For the past few years it has been involved in software development, parallelisation and adaptation of applications to Grid middleware (see biogrid.icm.edu.pl). In particular, it has been involved in the EU projects EUROGRID, GRIP, CROSSGRID, EGEE, UniGrids and Chemomentum. Its particular area of expertise in Grid technology is the development of application-specific interfaces to a number of biomolecular applications, including commercial software, academic packages and its own developments in quantum-classical molecular dynamics and bioinformatics.
ICM staff is familiar with different grid middlewares, including Globus, gLite and UNICORE. In particular, within the Chemomentum project, ICM has been developing key components of the UNICORE 6 middleware such as the security framework, the UNICORE Virtual Organization Services and web access. These components have been integrated with the official releases and are widely used all over the world. Another area of expertise is the adaptation of grid software to the vector systems available at ICM.
WCSS, Wroclaw, Poland
Presentation of the WCSS and its Activities
Wroclaw Centre for Networking and Supercomputing (WCSS) is an organizational unit of Wroclaw University of Technology. It was founded in 1995. The main objectives of WCSS are: maintenance and development of the Wroclaw Academic Computer Network (WASK), providing HPC services for science, and providing network services for all academic institutions in the city of Wroclaw. WCSS has taken part in several projects, namely Clusterix (a Polish grid project), LDAP, NASTEC, POSITIF and SGIgrid. Now we are a partner in the following ventures: EGEE, PRACE, KMD and eduroam. We are also developing a security application, ACARM, which is an integrated platform for attack recognition and risk assessment. The most important of our duties is to provide high quality HPC services for science. WCSS offers access to clusters and a supercomputer with a total of almost 2000 computing cores. Our staff is doing its best to make these resources as easy as possible for our users to use.
INRIA, CNRS, IUF, France
Grid Component Model and ProActive Parallel Suite for Science:
Bridging Distributed and Multi-Core Computing
ProActive (proactive.inria.fr) is a GRID Java library (source
code under the GPL license) for parallel, distributed, and concurrent computing,
also featuring mobility and security in a uniform framework. ProActive aims at
simplifying the programming of applications that are distributed on a Local
Area Network (LAN), on clusters of workstations, or on the GRID. ProActive
promotes a strong Network on Chip (NoC) approach to cope seamlessly with
both distributed and shared-memory multi-core machines. A theoretical
foundation ensures consistent behavior, whatever the environment.
The Grid Component Model (GCM) was initially defined in the CoreGRID NoE EU-funded
project. The GridCOMP STREP EU-funded project is refining this
definition and providing the reference implementation of the GCM specification based on
the ProActive library. The Grid Plugtests organized by the European Telecommunications
Standards Institute (ETSI) and INRIA since 2004 have made it possible to test GCM on
up to 3888 nodes. GCM is currently being standardized by the ETSI Technical
Committee (TC) on Grid.
The GCM includes four related standards, namely:
1. GCM Interoperability Deployment,
2. GCM Application Description,
3. GCM Architecture ADL (Architecture Description Language),
4. GCM Management (Java, C, WSDL API)
Interactive graphical tools and a GUI will also be presented during the
talk. Overall, the ProActive/GCM features are the following:
- an Eclipse Integrated Development Environment (IDE)
- a set of parallel programming frameworks in Java, including:
- Branch & Bound
- Active Objects (Actors)
- Legacy Code Wrapping
- a component framework as a reference implementation of the GCM.
- a resource acquisition, virtualization and deployment framework
implementing the GCM standard
ProActive research and developments are conducted with all the great
researchers and developers from the OASIS Team: www-sop.inria.fr/oasis
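The active-object model at the heart of ProActive can be illustrated with a few lines of plain Java. The sketch below shows only the general pattern (one thread per object, queued method requests, futures, wait-by-necessity); it is an illustrative assumption on my part and not the actual ProActive API, which creates active objects through its own factory methods and transparent proxies:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical illustration of the active-object idea (NOT the ProActive API):
// the object owns a single thread, method calls are queued as requests,
// and callers immediately receive a future for the result.
class ActiveCounter {
    // The single-threaded executor plays the role of the object's own activity.
    private final ExecutorService activity = Executors.newSingleThreadExecutor();
    private long value = 0;

    // An asynchronous method call: enqueue the request, return a future at once.
    Future<Long> add(long delta) {
        return activity.submit(() -> {
            value += delta;   // runs only in the object's own thread, so no locks needed
            return value;
        });
    }

    void terminate() {
        activity.shutdown();
    }

    public static void main(String[] args) throws Exception {
        ActiveCounter counter = new ActiveCounter();
        Future<Long> a = counter.add(10);  // returns immediately
        Future<Long> b = counter.add(5);   // queued after the first request
        // "Wait-by-necessity": the caller only blocks when the result is needed.
        System.out.println(a.get());
        System.out.println(b.get());
        counter.terminate();
    }
}
```

In ProActive itself the future is transparent: the caller manipulates what looks like an ordinary result object and only blocks when the value is actually accessed.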
STFC e-Science Centre, UK
The UK's National Grid Service
I will give a brief introduction and status review of the UK's National
Grid Service and its relation to other grid activities in the UK.
Current progress towards a sustainable model for e-infrastructure
support will be summarized.
DEISA and OGF, Germany
Science Applications on the DEISA Infrastructure
This presentation addresses the state-of-the-art of applications running
on distributed, high performance and large scale computing Grids such as
DEISA. We will look into application modeling and simulation, design and
use, and their impact on science. We will present selected compute and
data intensive applications which have been ported to Grids recently,
especially to the Distributed European Infrastructure for Supercomputing
Applications (DEISA) which connects the 11 most powerful supercomputers in
Europe into a supercomputing grid. We will highlight the main DEISA
middleware developments which today give scientists seamless,
transparent and remote access to supercomputing cycles in Europe. The
presentation will conclude with lessons learned and recommendations on
porting and running applications on this distributed supercomputing
infrastructure.
LIP, Lisbon, Portugal
The Portuguese Grid Initiative
Launched in 2006, the Portuguese National Grid Initiative aims to support the development of resource sharing for demanding computing applications and ensure the enhancement of strategic competences and capacities of special interest for this type of computing in Portugal.
In this presentation we provide an overview of grid computing in Portugal and we describe the Portuguese National Initiative activities, current status and future plans. Among these is the deployment of a grid infrastructure for scientific computing that is now being prepared within the context of the Initiative.
ACC Cyfronet AGH and Institute of Computer Science AGH, Kraków, Poland
Structure and Status of National Grid Initiative in Poland
As a response to the needs of Polish scientists, and in view of the ongoing grid activities in Europe within the framework of the EGI Design Study project, an agreement concerning the creation of the Polish Grid (PL-Grid) Consortium was approved in January 2007. The Consortium is made up of the five largest Polish supercomputing and networking centers. The main aim of the Consortium is to create and develop a stable Polish Grid infrastructure, fully compatible and interoperable with European and worldwide Grids. The intention of the proposed infrastructure is to provide scientific communities in Poland with Grid services, enabling realization of the e-Science model of research in various scientific fields and making international collaboration easier and more profitable. It is expected that the Consortium's activity will be financially supported by the Operational Programme on Innovative Economy, ongoing in Poland.
In the presentation, the motivation and detailed aims will be outlined, together with the structure and a description of the planned activities.
Acknowledgements are due to: M. Turala, K. Wiatr, M. Bubak, A. Kusznir, J. Niwicki, T. Szepieniec, M. Radecki, A. Ozieblo, Z. Mosurska, (from Cyfronet), P. Bala, W. Wislicki (from ICM Warsaw), N. Meyer, K. Kurowski (from PCSS Poznan), J. Janyszek, A. Kwiecien, J. Pankiewicz,
B. Balcerek (from WCSS Wroclaw), M. Nakonieczny, R. Tylman, M. Bialoskórski (from TASK Gdansk).
LMU, Munich and LRZ, Garching near Munich, Germany
The European Grid Initiative – Rationale for a Sustainable Grid Infrastructure in Europe
The European Grid Initiative represents an effort to establish a sustainable grid infrastructure in Europe. Driven by the needs of the research community, it is expected to enable the next leap in research infrastructures, thereby supporting collaborative scientific discoveries in the European Research Area (ERA).
Within this effort, major key stakeholders in grid infrastructures in Europe are working together to establish a long-term perspective for the co-ordination and operation of European grid infrastructures. This involves identifying possibilities for transforming the current project-based funding of grid infrastructures into a more sustainable one, such that continuity and interoperation of grids in Europe is ensured.
The core foundation of EGI is the set of National Grid Initiatives (NGIs), which operate the grid infrastructures in each country. These NGIs therefore have a fundamental role within EGI and must be in place for the effort to succeed. EGI intends to link existing NGIs and will actively support the set-up and initiation of new ones. The NGIs are the recognized national bodies, each with a single point of contact, operating the national grid infrastructure. They support user communities, mobilise national funding and resources, and contribute and adhere to international standards and policies.
The three principal strategic objectives of EGI are to ensure the long-term viability of the European e-infrastructure, to co-ordinate the integration and interaction between national grid infrastructures, and to operate the European level of the production grid infrastructure for a wide range of scientific disciplines. In addition, a series of other objectives is necessary for EGI to function properly: supporting virtual organizations and security; co-ordinating middleware development and software packaging; encouraging consultation with relevant standards organizations and collaborating closely with technology and service providers; and initiating training to promote the rapid and successful uptake of grid technology by European industry.
EGI is an important initiative for science and research in Europe. The current project-based funding of grid infrastructures has reached its limits. Infrastructure users require a long term perspective, such that their investments for bringing their applications to the grid are protected. At the same time, Europe and its member states have invested heavily in grid infrastructures and these investments need to be leveraged for the future. The grid of today should be available seamlessly as a future service.
The EGI organisation is the next logical step towards grid service sustainability. However, this organisation is expected to evolve over time to take onboard new technologies and changed user needs. EGI should become the driving force of tomorrow's European research and technology, enabling science to remain at the cutting edge and industry competitive.
The goal of the EGI Design Study (EGI_DS) project is to evaluate cases of grid usage to identify processes and mechanisms for establishing EGI, to define the structure and ultimately to initiate the construction of the EGI organization.
More information: www.eu-egi.eu
Jesus Marco de Lucas
Instituto de Física de Cantabria, IFCA, CSIC-UC, Santander, Spain
Origin and Evolution of the Spanish NGI
In early 2003, around 20 Spanish research groups with an interest in Grid technologies, several of them participating in the DATAGRID and CROSSGRID projects, started a networking activity called IRISGRID, whose main objective was to analyse, define and foster the consolidation of a scientific community of users, infrastructure providers, and application and middleware developers interested in Grids. This community started promoting the concept of a Spanish NGI at the Ministry of Science. In 2004, the Ministry selected a set of experts (several of them from this initial group) to develop both a green and a white paper on e-Science, covering among its topics the interest in an NGI in Spain. This work was coordinated and published by the FECYT (Spanish Foundation for Science and Technology). Concurrently, the National Middleware Experts Research Network and the Spanish Supercomputing Network were started up. In 2006, the Ministry selected a group of experts and nominated, with the consensus of the group, Vicente Hernández as the person responsible for the Spanish Network for e-Science. This network has received several mandates, such as the creation and support of the Spanish Joint Research Unit for Grid projects (namely ES-GRID), whose secretariat is under the coordinator of the Network, and the set-up of the Spanish NGI under the Spanish Network for e-Science. The network has also created a strong link with the Portuguese NGI through the IBERGrid agreement.
The core of the Spanish NGI mainly comprises those Spanish institutions participating in research and development projects on Grid research infrastructures, acting as an attractor for many other smaller-scale centres. It currently integrates about 18 resource provider centres and 2400 cores, and it supports 340 direct users from the communities of High Energy Physics, Biomedicine, Fusion, Computational Chemistry, Astrophysics and Earth Sciences, among the most relevant communities. It is also the administrative structure for projects such as EGEE and EELA, participating through JRUs (led by IFAE in EGEE and by CIEMAT in EELA) and giving a coherent umbrella to the whole Spanish Grid community.
The Spanish NGI has two additional features worth commenting on. Firstly, the strong connection with the Portuguese NGI has led to the creation of a joint Commission that organises a yearly congress and two six-monthly coordination meetings focused on interoperability, joint participation in projects and the exchange of research interests. A joint research programme is also being developed by the Spanish and Portuguese authorities.
Another important aspect of the Spanish NGI is its relation with the Spanish Supercomputing Network (RES). This network comprises several Spanish research centres that operate a common supercomputing infrastructure. The links between supercomputing and Grid are one of the most important challenges of the Spanish Network for e-Science, and many collaborations in terms of middleware, applications and even resources are expected to develop.
The Spanish Network for e-Science (www.e-science.es) organised its first plenary meeting on 21-22 February 2008, in which 120 researchers participated to exchange experiences in middleware, Grid and supercomputing infrastructures and applications; a second meeting will be held in October in Sevilla. The Spanish Network for e-Science is an activity funded by the Spanish Ministry of Science and Education that aims at fostering the development of science in Spain through the use of e-Infrastructures. The Spanish NGI is organised in the frame of this network.
Norbert Meyer, Maciej Stroinski
Poznan Supercomputing and Networking Center, Poland
e-Infrastructure and Related Projects in PSNC
PSNC is one of the HPC centres in Poland, the operator of the Polish National Research and Education Network PIONIER and of the Poznan Metropolitan Area Network POZMAN. PSNC is a Centre of Excellence of Sun Microsystems Inc., a Microsoft Innovation Center and an official Cisco Regional Networking Academy. Each of its four departments has an active computer science research group working on aspects such as: middleware, tools and methods for Grid computing, resource management for Grids and large-scale Grid applications (Application Department); user accounting on the Grid, Grid security mechanisms and policies, Grid fabric management tools, distributed storage management and Data Center issues (Supercomputing Department). Our institution participates in numerous European projects, e.g. RINGrid, DORII, PRACE, Euforia, EGEE, gEclipse, BalticGrid/2, e-IRGSP2 and GN2, former projects like GridLab and CrossGrid, and national projects (funded by the Ministry of Science and Higher Education): Clusterix (National Cluster of Linux Systems), PROGRESS, Virtual Laboratory, and SGIgrid (High Performance Computing and Visualisation with the SGI Grid for Virtual Laboratory Applications). PSNC leads research in the area of new generation networks, contributing to European projects such as Phosphorus, SEQUIN (IST-1999-20841), 6NET (IST-2001-32603) and ATRIUM (IST-1999-20675). PSNC takes an active part in many international conferences and forums, including the Global Grid Forum, e-IRG and NESSI.
Other grid-related projects include HPC Europa, Gridcoord, ACGT and InteliGrid.
Finally, several portals and information systems are developed and managed by PSNC: the Joshua portal content management framework, the dLibra digital library framework, Multimedia City Guide, Poznan Education Service, Polish Educational Portal (Interkl@sa), WBC Regional Digital Library, EBSCO Publishing, LDAP project and InteractiveTV.
The presentation will give an overview of the current state of available hardware resources, their relation to European e-Infrastructure activities, and related R&D projects which aim to build a sustainable e-Infrastructure in Poland and Europe.
P. Nowakowski, M. Kasztelnik, D. Harezlak
ICS and ACC Cyfronet AGH, Kraków, Poland
Environment for Collaborative e-Science Applications: CGW08 Tutorial
The recent emergence of scientific investigations carried out on a holistic ("system") level requires new e-research environments. Such environments aim to support integration of various sources of data and computational tools to help investigators acquire understanding of a given phenomenon through modeling and simulation processes.
During this interactive tutorial, an environment and a new approach to development and execution of e-Science applications will be presented. The environment is an integrated system of dedicated tools and services which provide a common space for building, improving and executing distributed applications.
Following a brief introduction of the core concepts (in-silico experiments, the experiment pipeline, experiment substrates and results) and explanation of the ecosystem of tools and services which constitute the environment, the main hands-on session will begin, according to the following plan:
- Start by running some demonstration experiments from virology, bioinformatics, chemistry and mathematics domains, and learn how the runtime layer works.
- Become an experiment developer who plans and implements a new experiment. Explore the programming tools provided for such a task and learn the different stages of the lifecycle of your experiment.
- Experience the creator-user collaboration through publication, versioning and registering feedback related to your experiment. Use specialized tools to send feature requests and then to refine the solution to meet additional expectations.
- Augment the simple application with powerful techniques for integrated, multi-source data acquisition and access to computational testbeds. Use dedicated tools to create new elements (gems) of the experiment, to register them in the environment and to leverage them in the experiment.
- Enjoy the fruits of your work by watching the experiment run and produce valuable results within your “virtual workbench”.
For active participation in this tutorial you need your own notebook running either Linux or Windows XP/Vista. Additional software required by the virtual experiments (Java JRE, the JRuby interpreter and the virtual laboratory tools) will be installed during the tutorial. Tutorial participation is free of charge. If you wish to learn more before attending, please consult the following website: virolab.cyfronet.pl.
If you would like to participate in the tutorial, please visit the tutorial registration page and fill in a simple registration form. We need this data to choose a suitable room and to set up user accounts. The webpage also provides information on the basic requirements for participation in the hands-on part of the tutorial.
Department of Physics, University of Oslo, Oslo, Norway
The ARC Middleware – An International Grid Initiative
The NorduGrid collaboration and its middleware product, ARC (the Advanced Resource Connector), span institutions in the Nordic countries and several other countries in Europe and the rest of the world.
The innovative nature of the ARC design and flexible, lightweight distribution make it an ideal choice to connect heterogeneous distributed resources. ARC has been used by scientific projects for many years and through experience it has been hardened and refined to a reliable, efficient software product. The simple architecture and design of ARC eases application integration and facilitates taking advantage of distributed resources, such as the SRM and distributed dCache-based storage service provided by the Nordic Data Grid Facility (NDGF) for the so-called Nordic Tier-1 for the Large Hadron Collider (LHC) experiments at CERN. Some results from the simulation production and analysis for the ATLAS experiment at the LHC are shown as an illustration of ARC's common usage today.
The EU FP6-funded KnowARC project is creating a next generation of the ARC middleware based on open standards and a Web Services interface. The plan is to provide a smooth upgrade path for the large number of pre-WS ARC sites currently in production, and to attract additional user communities through the industrial quality of next-generation ARC, its ease of use for inexperienced users, and its intelligent, high-level know-how-sharing services. Two additional grid projects which contribute to the hardening of ARC and the development of new ARC services are NDGF and Innovative Tools and Services for NorduGrid (NGIn), the latter having its focus on training programmes.
In a recent major breakthrough, the ARC middleware has been included in the list of three major European middlewares considered as a basis for the future FP7 European Grid Initiative (EGI).
Robotics Research Institute, University Dortmund, Germany
The Structure of D-Grid
In 2004, researchers from various German institutions came together to discuss the requirements and process of further establishing Grid technology in Germany. Although at that time some German institutions already had a long history of Grid activities, including European projects like EDG and the middleware UNICORE, the use of Grid technology was largely restricted to the high energy physics and astrophysics communities.
Then in 2005, the German Federal Ministry of Education and Research (BMBF) started the official D-Grid program by funding projects in the academic research communities of high energy physics, astrophysics, climate research, medical research and engineering, as well as a dedicated integration project that addressed structures common to the communities. Later that year, TextGrid, a community from the humanities, and the energy meteorology community joined this first group of D-Grid communities. The integration project covers basic software components, Grid operation and deployment, network and security, as well as coordination and sustainability. The communities can choose between three different well-established middlewares: the US-based Globus, the European gLite, and UNICORE, which is particularly used in high performance computing installations. Further, the integration project also provides installation and training support for data management and access tools like dCache, OGSA-DAI and SRB/iRODS, the GAT/SAGA toolbox and the GridSphere portal software.
Thanks to an additional hardware support of the BMBF, Grid resources are now widespread in Germany. Together with the Grid resources directly provided by the various centres, D-Grid communities have access to more than 20000 cores and more than 2000 TB of storage. These resources form the distributed backbone of D-Grid which is complemented by the operational concept and the above-mentioned basic software components provided by the integration project.
The supported communities have been successful in establishing an infrastructure to operate their community-oriented Grids and have developed various new Grid tools, such as schedulers, monitoring and information systems. A second call of the BMBF in 2006 was targeted towards higher-level services in the Grid and towards communities with commercially oriented partners, in order to open D-Grid to industrial users. Further, some projects addressed gaps discovered during the execution of the various projects. The integration project has also been extended until 2010 to establish a sustainable basic infrastructure that provides basic operation services for the communities. While the D-Grid communities are free to define most properties of their Grids, there are security and operation issues that are better addressed in a centralised approach in order to assure sharing of resources by different communities. Moreover, it is more efficient to concentrate support for basic software components at some centres instead of requiring each community to set up a separate support structure.
The increased number of D-Grid projects requires a structure to assure seamless cooperation between these projects. To this end, the D-Grid Corporation was established in 2008. As well as coordinating project interaction, this corporation will particularly address sustainability issues and represent D-Grid on the international level. German institutions interested in the use of Grid technology are invited to become stockholders of this corporation. All current D-Grid projects are also directly involved in this corporation as they are members of its advisory council. This advisory council addresses problems of comprehensive relevance across the individual projects.
Director of the Grids Institute, CNRS, France
Presentation of Grid Activities in France
I will present the various Grid activities in France, ranging from fundamental research on Grids, through grids developed for research in computing sciences, to production grids. The present effort to form a National Grid Initiative will be described, as well as the results of a national prospective exercise to assess the needs for Grids across all scientific domains.