Norbert Attig
Jülich Supercomputing Centre, Institute for Advanced Simulation, Germany
Experiences with Support & Research Units for the Computational Sciences
Abstract:
With the increasing prevalence of architectures based on massively parallel and multi-core processor topologies, simulation scientists are now compelled to take scalability into account when developing new models or when porting long-established codes to new machines. To meet this challenge, the Jülich Supercomputing Centre (JSC) in Germany, one of the major European players in supercomputing, has pioneered the introduction of community-oriented support units for scientific computing, so-called Simulation Laboratories. Progress in establishing these units and first success stories are reported ( www.fz-juelich.de/ias/jsc/EN/Expertise/SimLab/simlab_node.html ).
Besides its efforts in strengthening the computational sciences, JSC is actively investigating future architectures and their application, primarily in cooperation with hardware vendors. Since 2010, JSC has established joint Exascale Laboratories with IBM, Intel, and NVIDIA, to give such collaborations a more formal framework with binding resource commitments, multi-year work plans, and agreements on intellectual property rights. First-hand experiences with this new kind of collaboration and some highlights are presented
( www.fz-juelich.de/ias/jsc/EN/Research/HPCTechnology/ExaScaleLabs/_node.html ).
Robert Begier
Research Programme Officer, European Commission,
DG CONNECT, Unit H.1 – Health & Well-being, Brussels, Belgium
Funding possibilities for e-Science in Horizon 2020
Abstract:
This talk will address:
- What is Horizon 2020 (H2020)? – an introduction [1],
- General information about funding research in H2020 [2],
- Presentation of topics related to eHealth [3],
- Funding by grants, or accepting the challenge? – the Prize instrument.
References
[1] ec.europa.eu/programmes/horizon2020/h2020-sections
[2] ec.europa.eu/programmes/horizon2020/en/h2020-section/health-demographic-change-and-wellbeing
[3] ec.europa.eu/research/participants/data/ref/h2020/wp/2014_2015/main/h2020-wp1415-health_en.pdf
Wlodzislaw Duch
Department of Informatics, Nicolaus Copernicus University, Torun, Poland
Computational neurophenomics: grand challenge for understanding people
Abstract:
Phenomics is concerned with detailed description of all aspects of
organisms, from their physical foundations at genetic, molecular and
cellular level, to behavioural and psychological traits.
Neuropsychiatric phenomics tries to understand mental disease from
such a broad perspective. It is clear that the learning sciences also
need a similar approach, one that integrates efforts to understand
cognitive processes from the perspective of brain development in its
temporal, spatial, psychological and social aspects. A new branch of
science called neurocognitive phenomics, or neurophenomics, is
proposed, treating the brain as a substrate shaped by genetic,
epigenetic, cellular and environmental factors, in which learning
processes driven by individual experience, social contacts, education
and culture take place. A brief review of selected phenomena relevant to
neurophenomics will be presented, from genes to learning styles.
Central, peripheral and motor processes in the brain will be linked to
learning styles and mental disorders, such as autism and ADHD.
Thomas Fahringer
Institute of Computer Science, University of Innsbruck, Austria
INSIEME: a multiple-objective auto-tuning compiler for parallel programs
Abstract:
Efficient parallelization and optimization for modern parallel architectures is a time-consuming and error-prone task that requires numerous iterations of code transformations, tuning and performance analysis, which in many cases have to be redone for every target architecture.
In this talk we introduce an innovative auto-tuning compiler named
INSIEME which consists of a compiler component featuring a
multi-objective optimizer and a runtime system. The multi-objective
optimizer derives a set of non-dominated solutions, each of them
expressing a trade-off among different conflicting objectives such as execution time, efficiency, and energy consumption. This set is commonly known as the Pareto set.
Our search algorithm, which explores code transformations and their
parameter settings, dynamic concurrency throttling (DCT), and dynamic
voltage and frequency scaling (DVFS) is based on differential
evolution. Additionally, rough sets are employed to reduce the search
space, and thus the number of evaluations required during
compilation. We demonstrate our approach by tuning loop tiling, the
number of threads, and clock frequency in cache-sensitive parallel
programs, optimizing for runtime, efficiency and energy.
www.insieme-compiler.org
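The central notion in the abstract above is the Pareto set of non-dominated solutions. As a minimal illustrative sketch (not INSIEME's implementation; the function names and the numbers are invented for illustration), filtering candidate configurations down to the non-dominated ones can be written as:

```python
# Illustrative sketch of Pareto dominance for multi-objective tuning.
# Each candidate configuration (e.g. a choice of tile size, thread count
# and clock frequency) is scored on conflicting objectives to minimise,
# here represented simply as tuples such as (runtime, energy).

def dominates(a, b):
    """True if a is at least as good as b in every objective and
    strictly better in at least one (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def pareto_set(candidates):
    """Return the non-dominated subset of candidate objective tuples."""
    return [c for c in candidates
            if not any(dominates(other, c)
                       for other in candidates if other != c)]

# Hypothetical (runtime in s, energy in J) scores for four configurations:
configs = [(10.0, 50.0), (12.0, 40.0), (11.0, 60.0), (9.0, 70.0)]
print(pareto_set(configs))
# -> [(10.0, 50.0), (12.0, 40.0), (9.0, 70.0)]
# (11.0, 60.0) is dominated by (10.0, 50.0): slower AND more energy.
```

A search algorithm such as the differential evolution mentioned in the abstract would repeatedly generate new candidate configurations, evaluate them, and keep refining this non-dominated front.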
Yannick Legre and Sergio Andreozzi
EGI.eu Foundation
Amsterdam, The Netherlands
www.egi.eu/about/EGI.eu/
Towards an Open Science Commons
Abstract:
The new EGI vision statement for an ‘Open Science Commons’ reads:
“Researchers from all disciplines have easy and open access to the
advanced digital services, data, knowledge and expertise they need to
collaborate to achieve excellence in science, research and
innovation”.
Open Science refers to the opening of knowledge creation and
dissemination towards a multitude of stakeholders, including society
at large. Nowadays, open science, sometimes called Science 2.0, is
driven by the digitalization of the research process and by the
globalization of the scientific communities.
A commons is a resource-management principle by which a resource is
shared within a community in a way that allows non-discriminatory
access, while ensuring adequate controls to avoid congestion or
depletion when the capacity is limited.
The next step is to recognize that e-Infrastructures are a component of
the whole scientific process that should be managed as a commons.
The new Open Science Commons vision will be enabled by the richness of
the EGI ecosystem, in which different actors participate in different
roles to deliver value to the processes involved in research and
innovation.
Marcel Kunze
Karlsruhe Institute of Technology, Germany
Big Data Architectures and Technologies
Abstract:
This talk addresses the technical foundations and the non-technical framework of Big Data. A new era of data analytics
promises tremendous value on the basis of cloud computing technology.
Can we perform predictive analytics in real-time?
Can our models scale to rapidly growing data? How do we
efficiently deal with bad data? Some practical examples as well as open
research questions are discussed.
Jakub Moscicki
IT/DSS CERN, Geneva, Switzerland
File synchronization and sharing for scientific and engineering workflows
Abstract:
Scientific data repositories have been growing larger than ever. Yet it is still surprisingly cumbersome to share files between researchers or to access the data from mobile devices and personal desktop computers. At the same time, in the open consumer market, simple file sharing and synchronisation services have been extremely successful: Dropbox, Box and Wuala are just a few of the big names. At CERN we have recently looked into extending the idea of a simple file sharing and synchronisation service to help researchers do their science in easier and sometimes innovative ways. High Energy Physics (HEP) is an interesting target, as it is a highly distributed and fairly large user community (more than 10K users in over 100 countries) with very large scientific data repositories (above 50PB). As is often the case in other sciences, the data analysis workflows in HEP are tightly integrated with processing and data services – batch farms, analysis clusters and storage systems provided by computing centres. This creates an additional challenge but also offers an interesting opportunity to explore file sharing and synchronisation in this context. In this talk we review a recent initiative at CERN in the area of file sharing and synchronisation; the talk is also an invitation to collaborate.
Syed Naqvi
School of Computing, Telecommunications and Networks,
Faculty of Computing, Engineering and the Built Environment,
Birmingham City University, United Kingdom
Cybercrime and Forensic investigations of e-Infrastructures
Abstract:
Significant advances in the scope of e-Science during the last two decades, and the resulting sophistication of e-Infrastructures, have established them as critical information infrastructures for our society. Providing reasonable security services to these infrastructures is no longer an optional feature. Each e-Infrastructure's operations team has specialised security professionals to ensure effective protection of its assets. They also have CERT/CSIRT members who are tasked with maintaining a vigilant lookout to discover and remediate security breaches. These cyber firefighters play a vital role in the smooth and secure functioning of e-Infrastructures. Their task is so overwhelming that in most situations they have to limit their scope to
the detection and removal of security flaws, instead of pursuing these incidents as cybercrime investigations. However, post-incident analysis of these attacks is becoming a necessity with the ever-increasing reliance of scientific and industrial communities on e-Infrastructures.
This talk will describe best practices and guidelines for the e-Infrastructures' cyber incident response teams to cope with the inevitable challenges posed by cybercriminals. Benchmarks for evaluating the operational readiness of these teams will also be presented.
Philippe Trautmann
HPC & POD Sales Director Europe, Hewlett-Packard
Pawel Gepner, Intel Technology Poland
HP and Intel (re)invest in supercomputing
Abstract:
For decades, HP has provided its HPC users with a set of
traditional, yet innovative server technologies. The emergence of
blade-based designs has allowed HP to reach a very significant
position in the TOP500; today HP is the clear leader amongst all
vendors in terms of the number of installed systems. With the recent
announcement of its densest and easiest-to-deploy hyperscale
products, HP sees a massive opportunity to develop a set of
dedicated offerings for its most demanding customers. We will also
touch on HP's current directions for HPC and the trends HP sees in
this market. Finally, we will mention the R&D directions that HP is
pursuing.