RESERVOIR Stand at CGW'10
Manned by: Michael Van de Borne
The RESERVOIR stand will be manned throughout the CGW'10 event. It will provide project brochures, feature RESERVOIR Cloud demos, and offer a platform for informal discussions about Cloud deployments with the project engineers. The stand will also serve as a connecting point for computing enthusiasts wishing to engage with the thriving Cloud community.
Standards-Based Energy-Efficient Grid-Resources: Implementations and Solutions
HP's HPC product portfolio, which has always been based on standards at the processor, node and interconnect level, has led to a successful penetration of the High Performance Computing market across all application segments. The rich portfolio of compute, storage and workstation blades comprises a family of components called the ProLiant BL-series, complementing the well-established rack-based ProLiant DL family of nodes. To address additional challenges at the node and systems level, HP recently introduced the ProLiant SL-series, which became the architectural base for HP's first Peta-scale system, to be delivered later this year with new nodes specifically designed for highest density and energy efficiency.
Most systems in that performance range will end up in Grid and Cloud environments and will ultimately compete for users not only in terms of performance but also in terms of operational cost. Certain aspects will therefore gain significantly in importance.
Beyond acquisition cost, a major factor is power and cooling efficiency. This is primarily a matter of the cost of power, but also of the power and thermal density that can be managed in a data center. To leverage economies of scale, established HPC centers as well as providers of innovative services are evaluating new concepts that have the potential to make classical data center designs obsolete. These new concepts provide significant advantages in terms of energy efficiency, deployment flexibility and manageability. Examples of this new approach, often dubbed POD for Performance Optimized Datacenter, including a concept to scale to multiple PFLOPS at highest energy efficiency, will be shown.
Department of Scientific Computing,
University of Vienna, Austria
Construction, Execution, and Reproducibility
of Data-Intensive Scientific Workflows
Data-intensive research offers powerful methods for innovative new uses of
computing, storage and network devices in capturing, managing, analysing,
and understanding data produced by modern science (e-Science) and business
at volumes and rates that push the frontiers of current technologies. Such
methods are essential for the success of a large class of applications.
Scientific workflows play an important role in data-intensive research.
This talk presents approaches to the composition (languages and tools),
execution (gateways) and reproducibility (provenance systems) of
data-intensive scientific workflows developed within the European
project ADMIRE and the Austrian project GridMiner. These workflows
focus especially on the mining and integration of large-scale scientific data.
AlmereGrid and EnterTheGrid, The Netherlands
Desktop Grids take their place in the e-Science distributed computing infrastructure
During the past years, volunteer Desktop Grids have become a regular
part of the computational infrastructure for e-Science. Although they
already provide impressive computational power (the EDGeS infrastructure,
for instance, connects about 150,000 computers in Desktop Grids to the
European Grid Infrastructure, EGI), this is only the beginning, as there
are hundreds of millions of computers in Europe alone that could be
connected. Much experience has also been gained on how to port and
optimise applications for scientific Desktop Grids. The presentation
will describe the current status and activities in Desktop Grid
computing for e-Science, such as the EDGI and DEGISCO projects, and the
formation of the International Desktop Grid Federation.
Cees de Laat
Informatics Institute, Universiteit van Amsterdam, The Netherlands
e-Infrastructure aware Topology handling in the Global Lambda Integrated Facility
To support the very high demands of the data intensive e-Science
applications the Research and Education Network organizations have
created a system of optical photonic connections amongst each other. This system
is well known under the name: the Global Lambda Integrated Facility (GLIF,
http://www.glif.is/). Large experimental facilities such as the Large Hadron
Collider use GLIF to distribute data to the Tier-1 processing facilities
all over the world. Technology and topology descriptions shared amongst
domains are necessary to automate the handling of connections in this
infrastructure. We study the application of the resource description
framework (RDF) from W3C's semantic web activity as a basis for a
distributed infrastructure information system. This contribution will
present results of pathfinding using declarative programming
implementations that operate on RDF repositories containing simulated
multi-layer network topologies. Moreover, we studied the effects of
different features of multi-layer networks, such as the ratio of Ethernet
to DWDM devices, on the pathfinding success rate.
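The idea of pathfinding over topology descriptions can be illustrated with a minimal sketch. This is not the GLIF or RDF-repository implementation described above; the triples, device names, and predicates (`connectedTo`, `hasLayer`, `adaptsBetween`) are hypothetical, and plain tuples stand in for an RDF store. A link is only traversable when both endpoints support a common layer, natively or via adaptation:

```python
# Hypothetical sketch: multi-layer pathfinding over RDF-style triples.
# Ethernet switches (sw*) attach to DWDM devices (dwdm*) that can adapt
# Ethernet frames onto lambdas.
from collections import deque

TRIPLES = [
    ("sw1", "connectedTo", "dwdm1"), ("dwdm1", "connectedTo", "dwdm2"),
    ("dwdm2", "connectedTo", "sw2"),
    ("sw1", "hasLayer", "ethernet"), ("sw2", "hasLayer", "ethernet"),
    ("dwdm1", "hasLayer", "dwdm"), ("dwdm2", "hasLayer", "dwdm"),
    ("dwdm1", "adaptsBetween", "ethernet"), ("dwdm2", "adaptsBetween", "ethernet"),
]

def objects(s, p):
    return {o for (s2, p2, o) in TRIPLES if s2 == s and p2 == p}

def neighbours(node):
    # links are undirected, so look in both triple directions
    out = objects(node, "connectedTo")
    out |= {s for (s, p, o) in TRIPLES if p == "connectedTo" and o == node}
    return out

def layers(node):
    # a device operates on its native layer plus any layer it adapts to
    return objects(node, "hasLayer") | objects(node, "adaptsBetween")

def find_path(src, dst):
    """Breadth-first search that only crosses a link when both endpoints
    share a layer (natively or through adaptation)."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in neighbours(node):
            if nxt not in seen and layers(node) & layers(nxt):
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no multi-layer-consistent path exists

print(find_path("sw1", "sw2"))  # ['sw1', 'dwdm1', 'dwdm2', 'sw2']
```

A declarative implementation over a real RDF repository would express the same reachability constraint as rules or queries rather than an explicit search loop; the layer-compatibility check is the part that distinguishes multi-layer from single-layer pathfinding.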
Security Challenges of Cloud Deployments - Lessons learned from the RESERVOIR Project
The scope of IT security services has broadened significantly as the computing pendulum swings from physical resources towards virtual infrastructures. From the users' point of view (individuals as well as businesses), this loss of control over physical resources should be adequately compensated by reliable security mechanisms. The European Framework 7 funded Cloud Computing project RESERVOIR (Resources and Services Virtualization without Barriers) has explored a number of security challenges of Cloud deployments that should be addressed to fully tap the potential of the Cloud paradigm. This talk will present the findings of the RESERVOIR Security Activity, ranging from securing externally hosted applications and services to meeting regulatory compliance requirements. It will also provide insight into the range of solutions worked out by the RESERVOIR project and their deployment status.
EGI-InSPIRE Project Director
A virtualised European Grid Infrastructure?
EGI.eu was established in February 2010 and began operation
in May 2010. It is supported by its participants, the European
National Grid Initiatives and European International Research
Organisations, and by the European Commission through the
EGI-InSPIRE project. The presentation will cover the current status
of EGI, and the challenges it faces in the immediate future as it
continues to evolve into a federated pan-European production
infrastructure.
West University of Timisoara and Research Institute e-Austria Timisoara, Romania
From Grid computing towards Sky computing. Case study for Earth Observation
The promise that Grid computing made to the Earth observation community at the beginning of the last decade was to provide a shared environment for accessing a wide range of resources: instrumentation, data, high-performance computing resources, and software tools. Current systems in production, like G-POD provided by the European Space Agency or our smaller-scale GiSHEO system, show that this promise has been kept, but they also reveal the limitations encountered in current Data Grids. To support these statements, an overview of GiSHEO (On-demand Grid services for Earth observation, gisheo.info.uvt.ro) will be provided.
With the recent advent of Cloud computing and storage, experimental efforts in distributed computing for Earth observation have moved towards the Infrastructure as a Service paradigm. Use cases like reprocessing huge collections of data or responding to environmental crises match the current Cloud computing concept and Cloud providers' offerings very well, but the implementations show that, from the user and application point of view, there is (fortunately) no considerable difference from using a cluster environment.
We argue that the real benefits of exploiting Cloud offerings are obtained through the dynamic provisioning of resources from distributed domains spanning several Clouds (a paradigm recently labelled Sky computing). Moreover, domain-specific applications should be redesigned on top of a Platform as a Service that is independent of providers and hides the Clouds from the application developer. In this context, the status of mOSAIC's platform development (www.mosaic-cloud.eu) will be presented.
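The notion of a platform layer that hides individual Clouds from the application can be sketched as follows. This is a hypothetical illustration, not the mOSAIC API: the class names, driver methods, and round-robin placement policy are all invented for the example.

```python
# Hypothetical sketch of a provider-independent platform layer.
# The application asks the platform for resources; pluggable drivers
# map each request onto a concrete Cloud provider.
from abc import ABC, abstractmethod

class CloudDriver(ABC):
    """One driver per provider; the application never sees these."""
    @abstractmethod
    def start_vm(self, cpus: int, ram_gb: int) -> str: ...

class ToyDriverA(CloudDriver):  # stands in for provider A's API
    def start_vm(self, cpus, ram_gb):
        return f"A-vm-{cpus}c{ram_gb}g"

class ToyDriverB(CloudDriver):  # stands in for provider B's API
    def start_vm(self, cpus, ram_gb):
        return f"B-vm-{cpus}c{ram_gb}g"

class Platform:
    """Provider-independent facade: chooses a Cloud per request, so the
    same application code transparently spans several Clouds."""
    def __init__(self, drivers):
        self.drivers = drivers
        self._next = 0

    def provision(self, cpus, ram_gb):
        # Toy round-robin placement; a real platform would weigh cost,
        # locality, and availability across the participating Clouds.
        driver = self.drivers[self._next % len(self.drivers)]
        self._next += 1
        return driver.start_vm(cpus, ram_gb)

platform = Platform([ToyDriverA(), ToyDriverB()])
vm1 = platform.provision(2, 4)  # placed on provider A
vm2 = platform.provision(2, 4)  # placed on provider B
```

The key design point is that the application depends only on the `Platform` facade, so adding or swapping a Cloud means adding a driver, not rewriting the application.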