Description
The KM3NeT Collaboration is building two Cherenkov neutrino detectors in the depths of the Mediterranean Sea to study both the intrinsic properties of neutrinos and cosmic high-energy neutrino sources. Neutrinos are elementary particles with no electric charge and almost no mass, interacting only through the weak force. These characteristics allow neutrinos to travel undeflected across vast cosmic distances, but they also make them extremely difficult to detect. In the rare case that an interaction occurs, neutrinos can create charged secondary particles, such as muons. These particles may travel faster than light moves in the medium, and when this occurs they emit a cone of photons known as Cherenkov radiation. Neutrino detectors exploit this effect, finding the best conditions in large, transparent media, typically deep water or ice, where this light can be recorded by optical sensors.
KM3NeT addresses these challenges with two detectors: ORCA (Oscillation Research with Cosmics in the Abyss), optimised for atmospheric neutrinos in the GeV range to study neutrino oscillations and determine the neutrino mass ordering; and ARCA (Astroparticle Research with Cosmics in the Abyss), designed for TeV–PeV astrophysical neutrinos, targeting cosmic accelerators and the sources of high-energy cosmic rays.
Both detectors share the same fundamental component, the Digital Optical Module (DOM), a glass sphere instrumented with 31 photomultipliers and the electronics needed to transmit the detected light signals. The DOMs are arranged in vertical strings of 18 units called Detection Units (DUs), which are anchored to the seabed. Communication between the underwater detectors and shore-based computing centres is ensured by electro-optical cables. At completion, ORCA will consist of a single modular block of 115 DUs, while ARCA will comprise two such blocks, instrumenting a water volume of about 1 km³.
Thanks to the modular architecture of the detector, data acquisition, processing, and analysis are already in progress.
This contribution focuses on the KM3NeT data and computing infrastructure, describing the current computing model and its evolution toward distributed Grid-based resources as the experiment approaches full deployment. The expected data volume grows roughly linearly with the number of DUs, reaching hundreds of terabytes per year at full deployment. To handle this load, the Rucio data management system will manage distributed data storage and replication, and the DIRAC workload manager will orchestrate large-scale data processing and Monte Carlo simulations. This infrastructure allows the collaboration to scale efficiently with detector size while supporting real-time data acquisition, processing, and analysis workflows.
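The linear scaling stated above can be illustrated with a minimal sketch. The per-DU data rate used here is an assumed placeholder chosen only so that full deployment lands in the hundreds-of-terabytes range quoted in the abstract; it is not an official KM3NeT figure.

```python
# Illustrative sketch: annual data volume grows roughly linearly
# with the number of deployed Detection Units (DUs).
# TB_PER_DU_PER_YEAR is an assumed, illustrative rate, NOT an
# official KM3NeT number.

TB_PER_DU_PER_YEAR = 1.0


def annual_data_volume_tb(n_dus: int, rate_tb: float = TB_PER_DU_PER_YEAR) -> float:
    """Estimate the yearly data volume in TB for n_dus deployed DUs."""
    return n_dus * rate_tb


# Full deployment: one ORCA block (115 DUs) plus two ARCA blocks (2 x 115 DUs).
full_detector_dus = 115 + 2 * 115  # 345 DUs
print(annual_data_volume_tb(full_detector_dus))  # → 345.0
```

Under this assumed rate, full deployment (345 DUs) corresponds to roughly 345 TB per year, consistent with the "hundreds of terabytes" scale quoted above.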