Infrastructure Clouds & Virtualisation
Conveners
- Ludek Matyska (CESNET)
Description
This track will focus on the development of cloud infrastructures and on the use of cloud computing and virtualization technologies in large-scale (distributed) computing environments in science and technology. We solicit papers describing underlying virtualization and cloud technology, including the integration of accelerators and support for the specific needs of AI/ML and deep neural networks; scientific applications and case studies on the use of such technology in large-scale infrastructures; and solutions that overcome challenges and leverage opportunities in this setting. Of particular interest are results exploring the usability of virtualization and infrastructure clouds from the perspective of machine learning and other scientific applications, the performance, reliability, and fault tolerance of the solutions used, and data management issues. Papers dealing with cost, pricing, and cloud markets, with security and privacy, and with portability and standards are also most welcome.
The National Institute for Nuclear Physics (INFN) has been managing and supporting Italy’s largest distributed research and academic infrastructure for decades. In March 2021, INFN introduced "INFN Cloud," a federated cloud infrastructure offering a customizable service portfolio designed to meet the needs of the scientific communities it serves. This portfolio includes standard IaaS solutions...
In 2021, the National Institute for Nuclear Physics (INFN) launched the INFN Cloud orchestration system to support Italy’s largest distributed research and academic infrastructure. The orchestration system is an open-source middleware designed to seamlessly federate heterogeneous computing environments, including public and private resource providers, container platforms, and more....
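To make the federation workflow concrete, here is a minimal sketch of how a client might submit a declarative deployment request to such an orchestrator over REST. The endpoint, token handling, and template body are illustrative assumptions, not the actual INFN Cloud API.

```python
# Illustrative sketch only: the endpoint path, field names, and template body
# are hypothetical stand-ins, not the actual INFN Cloud orchestrator API.
import requests

ORCHESTRATOR_URL = "https://orchestrator.example.org"  # hypothetical endpoint
TOKEN = "..."  # an OAuth2/OIDC bearer token would typically be required

# A declarative template describing the desired deployment; middleware of
# this kind commonly accepts TOSCA or a similar description language.
template = """
topology_template:
  node_templates:
    vm:
      type: tosca.nodes.Compute
"""

resp = requests.post(
    f"{ORCHESTRATOR_URL}/deployments",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"template": template},
    timeout=30,
)
resp.raise_for_status()
print("Deployment accepted:", resp.json())
```

The key design point the abstract highlights is that the client only states *what* to deploy; the middleware decides *where*, across the federated providers.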
The ability to ingest, process, and analyze large datasets within minimal timeframes is a cornerstone of big data applications. In the realm of High Energy Physics (HEP) at CERN, this capability is especially critical, as the upcoming high-luminosity phase of the LHC will generate vast amounts of data, reaching scales of approximately 100 PB/year. Recent advancements in resource management and...
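As a quick sanity check of the scale involved, the following sketch converts the abstract's ~100 PB/year figure into the sustained data rate it implies; only the 100 PB/year number comes from the source, the rest is arithmetic.

```python
# Back-of-the-envelope check of the sustained rate implied by ~100 PB/year.
PB = 1e15                      # bytes in a petabyte (decimal convention)
SECONDS_PER_YEAR = 365.25 * 24 * 3600

yearly_volume = 100 * PB       # ~100 PB/year expected at the HL-LHC
rate_gb_s = yearly_volume / SECONDS_PER_YEAR / 1e9

print(f"Sustained average rate: {rate_gb_s:.1f} GB/s")  # ~3.2 GB/s
```

A sustained average of roughly 3.2 GB/s, before accounting for burstiness, replication, or reprocessing, is what makes resource management at this scale a genuinely hard problem.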
High-Energy Physics (HEP) experiments each have a unique detector signature, in terms of detector efficiency, geometric acceptance, and software reconstruction, that distorts the original observable distributions with smearing and biasing stochastic terms. Unfolding is a statistical technique used to reconstruct these original distributions, bridging the gap between experimental data and...
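For readers unfamiliar with the technique, here is a minimal sketch of one common unfolding approach, iterative (D'Agostini-style) Bayesian unfolding. The abstract does not say which method the contribution uses, and the toy response matrix and counts below are invented for illustration.

```python
# Minimal sketch of iterative (D'Agostini-style) Bayesian unfolding:
# estimate the true spectrum from measured counts and a response matrix.
import numpy as np

def unfold(response, measured, n_iter=4):
    """response[i, j] = P(measured bin i | true bin j); column sums below 1
    encode detector inefficiency. Returns an estimate of the true spectrum."""
    n_true = response.shape[1]
    efficiency = response.sum(axis=0)                  # P(observed | true bin j)
    truth = np.full(n_true, measured.sum() / n_true)   # flat starting prior
    for _ in range(n_iter):
        folded = response @ truth                      # expected measured spectrum
        # Bayes' theorem: posterior[i, j] = P(true bin j | measured bin i)
        posterior = response * truth / folded[:, None]
        truth = (posterior.T @ measured) / efficiency  # updated truth estimate
    return truth

# Toy example: 2 true bins, mild bin-to-bin migration, 90% efficiency
R = np.array([[0.8, 0.1],
              [0.1, 0.8]])
m = np.array([450.0, 350.0])
print(unfold(R, m))
```

Each iteration refolds the current truth estimate through the response matrix, compares it with the data via Bayes' theorem, and corrects for efficiency; the smearing and biasing terms mentioned above live entirely in the response matrix.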
In computer science, monitoring and accounting involve tracking and managing the usage of system resources in IT environments by users, applications, or processes. These activities typically encompass monitoring CPU usage, memory allocation, disk space, network bandwidth, and other critical resources. The insights obtained through activity tracking and analysis serve several purposes. Resource...
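As an illustration of the kind of metric collection described above, here is a minimal sketch that samples CPU, memory, disk, and network counters on a single host with the third-party psutil library; the field names and sampling loop are illustrative choices, not a reference implementation.

```python
# Minimal per-host resource sampling sketch (pip install psutil).
import time
import psutil

def sample():
    """Collect one snapshot of the resources listed in the paragraph above."""
    net = psutil.net_io_counters()
    return {
        "cpu_percent": psutil.cpu_percent(interval=1),   # averaged over 1 s
        "mem_used_mb": psutil.virtual_memory().used / 2**20,
        "disk_used_gb": psutil.disk_usage("/").used / 2**30,
        "net_sent_mb": net.bytes_sent / 2**20,
        "net_recv_mb": net.bytes_recv / 2**20,
        "timestamp": time.time(),
    }

if __name__ == "__main__":
    # Periodic snapshots like these are the raw input that accounting
    # pipelines aggregate per user, application, or process.
    for _ in range(3):
        print(sample())
```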