International Symposium on Grids & Clouds 2017 (ISGC 2017)

Asia/Taipei
BHSS, Academia Sinica
No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
Description

The International Symposium on Grids and Clouds (ISGC) 2017 will be held at Academia Sinica in Taipei, Taiwan, from 5 to 10 March 2017, with co-located events and workshops. The main theme of ISGC 2017 is “Global Challenges: From Open Data to Open Science”.

The unprecedented progress in ICT has transformed the way education is conducted and research is carried out. The emerging global e-Infrastructure, championed by global science communities such as high energy physics, astronomy and biomedicine, must now permeate the other sciences as well. Many areas, such as climate change, disaster mitigation, and human sustainability and well-being, represent global challenges where collaboration over e-Infrastructure can help resolve problems common to the people affected. Access to global e-Infrastructure also helps the less globally organized, long-tail sciences, which face collaboration challenges of their own.

Open data is not only a political phenomenon serving government transparency; it also creates an opportunity to eliminate access barriers to all scientific data, in particular data from the global sciences and regional data concerning natural phenomena and people. In this regard, the purpose of open data is to improve science, accelerating especially research that may benefit people. Nevertheless, eliminating barriers to open data is itself a daunting task, and the barriers facing individuals, institutions and large collaborations are manifold.

Open science is a step beyond open data: the tools for, and understanding of, scientific data must be made available to anyone interested in participating in such research. The promotion of open science may change academic traditions practiced over the past few hundred years. This change of dynamics may contribute to resolving common challenges of human sustainability, where the current pace of scientific progress is not fast enough.

The goal of ISGC 2017 is to create a face-to-face venue where individual communities and national representatives can present and share their contributions to the global puzzle and thus contribute to the solution of global challenges. We cordially invite and welcome your participation!

    • ICT-Enhanced Educational Workshop (Media Conf. Room)
      Convener: Tosh Yamamoto (Kansai University)
      • 1
        Introduction: Goal Setting
        Speaker: Tosh Yamamoto (Kansai University)
      • 2
        PBL through TBL in Active Learning Way
        Speaker: Tosh Yamamoto (Kansai University)
        Slides
      • 3
        Writing Program (ESL, the mirror of the actively learning mind)
        Speaker: Dr Yuri Kite (Kansai University)
      • 4
        Writing Program (JSL, the mirror of the learning mind)
        Speaker: Dr Tomoki Furukawa (Kansai University)
      • 5
        Fundamental Math Programs (symbol manipulation and higher order thinking)
        Speakers: Dr Kunio Hamamoto, Tosh Yamamoto (Kansai University)
      • 6
        ePortfolio for Learning Goal Settings & Artifacts of Learning Processes
        Speakers: Dr Ti-Chuang Timothy Chiang (NTU), Tosh Yamamoto (Kansai University)
      • 7
        ePortfolio for career development (Scenario Planning for the future)
        Speaker: Tosh Yamamoto (Kansai University)
    • Security Workshop
      Convener: Dr David Kelsey (STFC-RAL)
      • 8
        Introduction
        Speaker: Dr David Kelsey (STFC-RAL)
        Slides
      • 9
        Security Incident handling in Federated Clouds
        Speaker: Dr Sven Gabriel (Nikhef/EGI)
        Slides
      • 10
        Security intrusions and their detection
        Speaker: Mr Fyodor Yarochkin (Academia Sinica)
        Slides
    • 10:30 AM
      Coffee Break
    • ICT-Enhanced Educational Workshop (Media Conf. Room)
      Convener: Tosh Yamamoto (Kansai University)
      • 11
        Hands-on session for the participants
        Speaker: Tosh Yamamoto (Kansai University)
      • 12
        Summary and wrap-up session
        Speaker: Tosh Yamamoto (Kansai University)
    • Security Workshop
      Convener: Dr David Kelsey (STFC-RAL)
      • 13
        How to perform forensic analysis?
        Speaker: Mr Vincent Brillault (CERN/EGI)
        Slides
      • 14
        Introduction to the hands-on exercises
        Speaker: Dr Sven Gabriel (Nikhef/EGI)
    • 12:30 PM
      Lunch (4F Recreation Hall)
    • Security Workshop (Conf. Room 2)
      Convener: Dr David Kelsey (STFC-RAL)
      • 15
        Hands-on session I
        Speaker: Dr Sven Gabriel (Nikhef/EGI)
    • Workshop on Linux Containers in Grids & Clouds (Media Conf. Room)
      Convener: Dr Christophe HAEN (CERN)
      • 16
        Docker and dCache
        Speaker: Dr Paul Millar (DESY)
        Slides
      • 17
        Container Technology and Software Delivery
        Speaker: Dr Jakob Blomer (CERN)
        Slides
    • 3:30 PM
      Coffee Break
    • Security Workshop (Conf. Room 2)
      Convener: Dr David Kelsey (STFC-RAL)
      • 18
        Hands-on session II
        Speaker: Dr Sven Gabriel (Nikhef/EGI)
      • 19
        Wrap-up and conclusions
        Speaker: Dr David Kelsey (STFC-RAL)
    • Workshop on Linux Containers in Grids & Clouds (Media Conf. Room)
      Convener: Dr Christophe HAEN (CERN)
      • 20
        Running LHC jobs using Kubernetes
        Speaker: Dr Andrew Lahiff (RAL)
        Slides
      • 21
        Docker for tests and software preservation in LHCb
        Speaker: Dr Ben Couturier (CERN)
        Slides
    • APGridPMA/IGTF Meeting (Conf. Room 901)
      Convener: Eric Yen (ASGC)
      • 22
        Introduction
        Speaker: Dr Eric Yen (ASGC)
      • 23
        EUGridPMA Update
        Speaker: Dr David Groep (Nikhef)
        Slides
      • 24
        TAGPMA Update
        Speaker: Dr Derek Simmel (Pittsburgh Supercomputing Center)
      • 25
        APGridPMA Update
        Speaker: Dr Eric Yen (ASGC)
    • Cryo-EM Workshop (Conf. Room 1)
      Convener: Dr Danny Hsu (Academia Sinica)
      • 26
        General introduction to biological cryoEM
        Speaker: Dr Chi-Yu Fu (Academia Sinica)
        Slides
      • 27
        Three key breakthroughs that enable cryo-EM to become a routine density map generator
        Speaker: Dr Wei-Hau Chang (Academia Sinica)
    • ECAI Workshop: Maritime Buddhism Project (Media Conf. Room)
      Convener: Prof. Wayne de Fremery (Sogang University)
      • 28
        Atlas of Maritime Buddhism: Reports on the Initial 3D Filming Projects in Myanmar, India, and Sri Lanka and Plans for Data Collection
        Speaker: Prof. Lewis Lancaster (ECAI, UC Berkeley)
        Slides
      • 29
        Exploring the Bujang Valley - A Pre-production Survey
        Speaker: Prof. Hal Thwaites (Sunway University)
      • 30
        Maritime Religious Networks: Aesthetic Sources of Flora and Fauna Motifs from India and China on Tomb Elaborations in Taiwan
        Speaker: Prof. David Blundell (National Chengchi University)
        Slides
    • 10:30 AM
      Coffee Break
    • APGridPMA/IGTF Meeting (Conf. Room 901)
      Convener: Eric Yen (ASGC)
      • 31
        New NAREGI CA Software System
        Speaker: Dr Eisaku SAKANE (National Institute of Informatics)
      • 32
        KISTI CA Report and Review
        Speaker: Dr Sang Un AHN (Korea Institute of Science And Technology Information)
        Slides
    • Cryo-EM Workshop (Conf. Room 1)
      Convener: Dr Danny Hsu (Academia Sinica)
      • 33
        Relion (Part 1)
        Speaker: Dr Wei-Hau Chang (Academia Sinica)
        Slides
    • ECAI Workshop: Digital Humanities, Archives, and Supporting Technology (Media Conf. Room)
      Convener: Howie Lan (ECAI, UC Berkeley)
      • 34
        Orphan Works and Collective Memories: On Setting Up The Sunflower Movement Archive
        Speaker: Dr Tyng-Ruey Chuang (Academia Sinica)
        Slides
      • 35
        Metadata Development and Documentation for a Research Data Repository
        Speaker: Dr Huang-Sin Syu (Academia Sinica)
        Slides
      • 36
        Regional Religious System and Social Changes in China
        Speaker: Jiang Wu (University of Arizona)
      • 37
        Automatic Collective Commentaries with Corpus Positioning System
        Speaker: Mr Cheah Shen Yap
        Slides
    • Environmental Computing Workshop (Conf. Room 2)
      Convener: Mr Matti Heikkurinen (LMU)
      • 38
        Introduction
        Speaker: Mr Matti Heikkurinen (LMU)
        Slides
      • 39
        Numerical Analysis on Mesoscale Dynamics of the Extreme Rainfall and Flood Event (May 2016) over Sri Lanka
        Speaker: Dr Chuan-Yao Lin (Academia Sinica)
        Slides
      • 40
        Q&A
    • 12:30 PM
      Lunch (4F Recreation Hall)
    • APGridPMA/IGTF Meeting (Conf. Room 901)
      Convener: Eric Yen (ASGC)
      • 41
        Remote Vetting
      • 42
        Member Report
    • Cryo-EM Workshop (Conf. Room 1)
      Convener: Dr Danny Hsu (Academia Sinica)
      • 43
        Relion (Part 2)
        Speaker: Dr Wei-Hau Chang (Academia Sinica)
    • Environmental Computing Workshop (Conf. Room 2)
      Convener: Mr Matti Heikkurinen (LMU)
      • 44
        Parameterization Study of Chemically Reactive Pollutant Dispersion using Large-Eddy Simulation
        Speaker: Dr Zhangquan WU (The University Of Hong Kong)
        Slides
      • 45
        Parallel taxonomic classification algorithm for metagenomic sequences
        Speaker: Prof. Nam THOAI (Ho Chi Minh City University of Technology)
        Slides
      • 46
        Detecting Incidents with Limited Linguistic Knowledge for Low Resource Languages
        Speaker: Dr Chao-Hong Liu (ADAPT Centre Dublin City University Dublin)
        Slides
    • ECAI Workshop: Text Analysis and Translation Systems (Media Conf. Room)
      Convener: Prof. Lewis Lancaster (ECAI, UC Berkeley)
      • 47
        Mining Textual Image Data with Cloud Technologies: Challenges and Opportunities
        Speaker: Prof. Wayne de Fremery (Sogang University)
      • 48
        Challenges and Possibilities: Translating and Building Buddhist Lexicography in the Modern Era
        Speaker: Ven. Miao Guang (FGS Institute of Humanistic Buddhism)
        Slides
      • 49
        Translation of Buddhist Texts: Building of Translation Memories, Terminology Databases and Corpus Analysis Tools
        Speaker: Ven. Shih You Zai (FGS Institute of Humanistic Buddhism)
        Slides
      • 50
        Dictionary Translation Project
        Speaker: Ven. Xianchao (LongQuan Monastery)
    • 3:00 PM
      Coffee Break
    • APGridPMA/IGTF Meeting (Conf. Room 901)
      Convener: Eric Yen (ASGC)
    • Cryo-EM Workshop (Conf. Room 1)
      Convener: Dr Danny Hsu (Academia Sinica)
      • 51
        Leginon and Appion
        Speaker: Dr Chi-Yu Fu (Academia Sinica)
    • Environmental Computing Workshop (Conf. Room 2)
      Convener: Dr Eric YEN (ASGC)
      • 52
        Malaysia
        Speaker: Dr Suhami Napis (Universiti Putra Malaysia)
      • 53
        Indonesia
        Speaker: Dr Basuki SUHARDIMAN (Institut Teknologi Bandung)
        Slides
      • 54
        Philippines
        Speaker: Dr Peter BANZON (ASTI)
        Slides
      • 55
        Vietnam
        Speaker: Prof. Nam Thoai (Bach Khoa University)
        Slides
      • 56
        Taiwan
        Speaker: Dr Eric Yen (ASGC)
      • 57
        Thailand
        Speaker: Dr Veerachai TANPIPAT (HAII)
      • 58
        Panel Discussion: State of the Art and new opportunities/challenges
        Slides
    • ECAI Workshop: Community Updates (Media Conf. Room)
      Convener: Prof. Lewis Lancaster (ECAI, UC Berkeley)
      • 59
        Report on Technical Developments
        Speaker: Dr Howie Lan (ECAI, UC Berkeley)
      • 60
        Affiliate Updates: Short reports on work in progress
    • Opening Ceremony & Keynote Session I (Conf. Room 2)
      Convener: Simon C. Lin (ASGC)
      • 61
        Opening Remarks
      • 62
        On-the-fly Capacity Planning in Support of High Throughput Workloads
        Speaker: Dr Miron LIVNY (OSG)
        Slides
    • 10:30 AM
      Coffee Break & Photo-taking
    • Cryo-EM Workshop (Conf. Room 1)
      Convener: Dr Chi-Yu Fu (Academia Sinica)
      • 63
        Image Processing in cryoEM: Open problems and current perspectives
        Speaker: Dr Jose Maria Salvador Carazo
        Slides
      • 64
        Applications of cryo-electron microscopy to understand complex structures
        Speaker: Dr Sunny Wu
    • e-Science Activities in Asia Pacific I (Conf. Room 2)
      Convener: Simon C. Lin (ASGC)
      • 65
        e-Science Activities in Japan - Building Academic Inter-Cloud Infrastructure
        Speaker: Dr Kento AIDA (National Institute of Informatics)
      • 66
        e-Science Activities in China
        Speaker: Dr Gang Chen (Institute Of High Energy Physics)
        Slides
      • 67
        GSDC activities for scientific computing
        GSDC is a nationally funded project to promote fundamental research in Korea by providing computing and networking infrastructure at KISTI. Currently we support six global and domestic experiments. In this talk, we present the status of GSDC experiment-support activities, in particular the WLCG Tier-1 centre for the ALICE experiment, together with the underlying system architecture and its operations.
        Speaker: Dr Sang-Un Ahn (KISTI)
        Slides
      • 68
        e-Science Activities in Taiwan
        Speaker: Dr Eric Yen (ASGC)
      • 69
        e-Science Activities in MAS
        We focus mainly on our cultural heritage: old Mongolian scripts, birch bark, paintings, and many other cultural and agricultural treasures. The Mongolian Academy of Sciences has been working on projects to preserve these national treasures, and we have experience cooperating with foreign institutes as well as local museums and libraries. Archiving and digitizing our cultural heritage required delicate technology and research; after a long period of research and refinement of procedures, we have digitized hundreds of sutras, and many more objects are planned.
        Speaker: Mr Batzaya E. (Mongolian Academy of Sciences)
      • 70
        Q&A
    • Lunch (4F Recreation Hall)
    • Cryo-EM Workshop (Conf. Room 1)
      Convener: Dr Chi-Yu Fu (Academia Sinica)
      • 71
        EMAN 2 (Part 1)
        Speaker: Dr Sunny Wu
    • Data Management & Big Data (Media Conf. Room)
      Convener: Dr David Groep (Nikhef)
      • 72
        Towards a cloud-based computing and analysis framework to process environmental science big data
        Environmental sciences are quickly and increasingly adopting research methodologies based on data coming from satellites, from large networks of sensors installed on the ground or on sea-floating stations, and from devices installed on balloons or aircraft. These networks produce large amounts of data that need to be appropriately processed and analyzed to extract information useful for scientists investigating natural phenomena. This requires, for example, the capacity to collect and store huge amounts of data together with space and time information, and the availability of large and powerful computing resources to run analysis and visualization codes. However, it is often not practical for the environmental science community to develop or host in house the infrastructure needed to support all of these steps efficiently, because it demands a non-negligible maintenance effort. On the other hand, in recent decades communities in other scientific fields, such as high-energy physics, have developed and acquired extensive experience with computing grid infrastructures to store and analyze large amounts of data coming from particle accelerator experiments. Computing grids have recently evolved into clouds, which are more user-friendly and allow access by new and more heterogeneous communities with less computing expertise [1,2]. In this contribution we discuss how to bridge the experience acquired in using grids for high-energy physics experiments and the needs of the environmental sciences. In particular, we apply these strategies in the context of the interdisciplinary EU-ERASMUS+ TORUS project, which includes European and Southeast Asian partners with strong expertise in distributed and cloud computing and in earth and environmental sciences. The TORUS project aims to soon make available to environmental scientists a cloud-based computing and analysis framework to manage and process big data. This includes the ability to access clouds to virtualize the computing resources, and the knowledge to use software tools to process and analyze data coming from different data sources. We also describe how to store data together with metadata related to time and space, and how to present data at a high level so that they can be easily used and interpreted by user scientists (an illustrative sketch of such space/time-indexed data selection follows this entry). Finally, we also discuss how to integrate high-performance computing into this framework to accelerate, for example, satellite image processing, which for its intrinsic computational complexity may require recently developed accelerators such as GP-GPUs or many-core processors [3,4].
        References:
        [1] Fella, A., Luppi, E., Manzali, M., Tomassetti, L., A general purpose suite for Grid resources exploitation (2012) IEEE Nuclear Science Symposium Conference Record, art. no. 6154459, pp. 99-103.
        [2] Roiser, S., et al., The LHCb Distributed computing model and operations during LHC runs 1, 2 and 3 (2015) Proceedings of Science, art. no. 005.
        [3] Calore, E., et al., Massively parallel lattice Boltzmann codes on large GPU clusters, Parallel Computing 58 (2016), pp. 1-24.
        [4] Adinetz, A. V., et al., Performance evaluation of scientific applications on POWER8, Lecture Notes in Computer Science 8966 (2015), pp. 24-45.
        Speakers: Prof. Eleonora Luppi (University of Ferrara and INFN), Dr Luca Tomassetti (University of Ferrara and INFN), Dr Sebastiano Fabio Schifano (University of Ferrara and INFN)
        Slides
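        A minimal sketch of the space/time-indexed data selection the abstract describes, assuming pandas is available; the field names (time, lat, lon, value) are illustrative only, not the TORUS project's actual schema:

          # Illustrative only: sensor records carrying space/time metadata,
          # and the kind of space/time window selection the framework
          # is meant to support. Field names are assumptions.
          import pandas as pd

          records = pd.DataFrame({
              "time": pd.to_datetime(["2016-05-14", "2016-05-15", "2016-05-16"]),
              "lat": [6.9, 7.2, 8.1],
              "lon": [79.8, 80.1, 80.5],
              "value": [120.5, 310.2, 88.0],  # e.g. rainfall in mm
          })

          window = records[
              (records["time"] >= "2016-05-14") & (records["time"] <= "2016-05-15")
              & records["lat"].between(6.5, 7.5)
              & records["lon"].between(79.5, 80.5)
          ]
          print(window)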
      • 73
        Data storage accounting at RAL
        Accounting for storage available and used is important to many communities using grids (and clouds). Several large grids came together to define the GLUE schema (Grid Laboratory Uniform Environment) in order to promote interoperation and cross-grid use of the infrastructures. Ensuring consistency (across implementations and grids) and usefulness of the information published was a significant task for every storage system, and the consistency exercise had to be repeated for the second version of GLUE, GLUE2. Creating the schema alone is not sufficient, so collaborations defined their "interpretations" of the schemas and guidelines for publishing metadata. The present paper describes the work involved in developing and deploying such an information system for the CASTOR and CEPH systems at the RAL Tier-1. In part 1 of the paper, we outline some of the important design decisions taken throughout the development process, focus on how we obtain the required information from several disparate parts of the systems, and describe the difficulties associated with accounting when files can be compressed on tape, have ephemeral copies on disk, live in a filesystem versus an object store, etc. (a toy illustration of this logical-versus-physical bookkeeping follows this entry). We finish part 1 with an outlook towards dynamic ("cloudy") use of storage resources. In part 2, we look at how to format the information to be standards-compliant and future-proof.
        Speaker: Mr Rob Appleyard (STFC)
        Slides
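        A toy illustration (not RAL's production code) of why such accounting is tricky: a file's logical size differs from its physical footprint once tape compression and ephemeral disk copies enter the picture. The numbers are invented:

          # (vo, logical_size_bytes, tape_compression_ratio, disk_copies)
          files = [
              ("atlas", 4_000_000_000, 0.8, 1),
              ("cms",   2_500_000_000, 0.9, 0),
              ("atlas", 1_000_000_000, 1.0, 2),
          ]

          usage = {}
          for vo, size, ratio, disk_copies in files:
              rec = usage.setdefault(vo, {"logical": 0, "tape": 0, "disk": 0})
              rec["logical"] += size             # what the VO "owns"
              rec["tape"] += int(size * ratio)   # physical bytes on tape
              rec["disk"] += size * disk_copies  # ephemeral disk replicas

          for vo, rec in sorted(usage.items()):
              print(vo, rec)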
      • 74
        dCache, managing Quality of Service in Cloud Storage Infrastructures
        For the past decade, high-performance, high-capacity Open Source storage systems have been designed and implemented to accommodate the demanding needs of the LHC experiments. However, with the general move away from local computer centres supporting their associated communities, towards large infrastructures providing Cloud-like solutions to a wide variety of scientific groups, storage systems have had to adjust their capabilities in many areas, such as federated identities, non-authenticated delegation to portals or platforms, modern sharing, and user-defined quality of storage. This presentation will give an overview of how dCache is keeping up with modern Cloud storage requirements by partnering with EU projects, which provide the necessary contact with a large set of scientific communities.

        Regarding authentication, there is no longer a strict relationship between the individual scientist, the scientific community and the infrastructure providing resources. Federated identity systems like SAML or "OpenID Connect" are growing into the method of choice for new scientific groups and are even finding their way into HEP. Therefore, under the umbrella of the INDIGO-DataCloud project, dCache is implementing those authentication mechanisms in addition to the already established ones, like username/password, Kerberos and X.509 certificates. To simplify the use of dCache as the back-end of scientific portals, dCache is experimenting with new anonymous delegation methods, like "Macaroons", which the dCache team would like to introduce in order to start a discussion targeting their broader acceptance in portals and at the level of service providers.

        As the separation between managing scientific mass data and scientific semi-private data, like publications, is no longer strict, large data management systems are expected to provide a simple interface to easily share data among individuals or groups. While some systems offer that feature through web portals only, dCache will show that this can be provided uniformly for all protocols the system supports, including NFS and GridFTP.

        Furthermore, in modern storage infrastructures, the storage media, and consequently the quality and price of the requested storage space, are no longer negotiated with the responsible system administrators but dynamically selected by the end user or by automated computing platforms. The same is true for data migration between different qualities of storage. To accommodate this conceptual change, dCache is exposing its entire data management interface through a RESTful service and a graphical user interface (a hypothetical sketch of such a call follows this entry). The implemented mechanisms follow the recommendations of the corresponding working groups in RDA and SNIA and are agreed with the INDIGO-DataCloud project to be compatible with similar functionalities of other INDIGO-provided storage systems.
        Speaker: Dr Patrick Fuhrmann (DESY/dCache.org)
        Slides
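        A hedged sketch of what a RESTful quality-of-storage request could look like; the endpoint, JSON fields and token below are assumptions for illustration, not dCache's published API:

          import json
          import urllib.request

          # Hypothetical request asking the storage system to move a file
          # to a different quality of storage (e.g. disk+tape).
          req = urllib.request.Request(
              "https://dcache.example.org/api/qos/files/%2Fdata%2Fsample.root",
              data=json.dumps({"target_qos": "disk+tape"}).encode(),
              headers={
                  "Content-Type": "application/json",
                  # A macaroon-style bearer token delegated by a portal:
                  "Authorization": "Bearer <macaroon>",
              },
              method="POST",
          )
          # urllib.request.urlopen(req)  # would submit the QoS transition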
      • 75
        Machine Learning analysis of CMS data transfers
        Tens of petabytes of collision and simulated data have been collected and distributed across WLCG sites in Run-1 and Run-2 of the LHC. Low latency in transfers among dozens of computing centres is crucial for efficient use of the computing resources. Although, on average, the desired level of throughput has been successfully achieved to serve the LHC physics programs, it is not uncommon to observe transfer latencies caused by a large variety of issues, from file corruption to site problems, most of which require operator intervention. To improve on this front, the CMS experiment equipped the PhEDEx dataset replication system with a system to collect latency data, and a mechanism to categorise and analyse the data promptly, matching them to quick and focused operator intervention. The transfer latency data have also been the target of machine learning techniques, already used in CMS to study and predict dataset popularity, and preliminary results on the predictive potential of this approach will be presented and discussed. (A minimal sketch of this kind of supervised classification follows this entry.)
        Speaker: Prof. Daniele Bonacorsi (University of Bologna)
        Slides
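        A minimal sketch of the kind of supervised model such a study might train, assuming scikit-learn; the features and labels are synthetic stand-ins, not the PhEDEx latency schema:

          import numpy as np
          from sklearn.ensemble import RandomForestClassifier
          from sklearn.model_selection import train_test_split

          rng = np.random.default_rng(42)
          # Invented features: file size, retries, source load, dest load
          X = rng.random((500, 4))
          # Synthetic label: 1 if the transfer ends up "late", else 0
          y = (X[:, 1] + 0.5 * X[:, 3] > 0.9).astype(int)

          X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
          clf = RandomForestClassifier(n_estimators=100, random_state=0)
          clf.fit(X_tr, y_tr)
          print("held-out accuracy:", clf.score(X_te, y_te))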
      • 76
        Q&A
    • ECAI Workshop: Cultural Mapping (Conf. Room 802)
      Convener: Oliver Streiter (National Kaohsiung University)
      • 77
        Stopping the flow - The Yellow River and China's Grand Canal after 1855
        Speaker: Thomas Hahn
      • 78
        Volunteered Geographic Information and Arches
        Speaker: Prof. Jihn-Fa (Andy) Jan (National Chengchi University)
        Slides
      • 79
        Digital Economy and Asian Production Network – A Reality Check for Humanities
        Speaker: Janet Tan (National Chengchi University)
    • Network, Security, Infrastructure & Operations I (Conf. Room 2)
      Convener: Dr Gang Chen (Institute Of High Energy Physics)
      • 80
        Can R&E federations trust Research Infrastructures?
        Research Infrastructures increasingly use national and global "Research and Education" (R&E) authentication federations to provide access to their services. Collaborators log on using their home organization credentials, the Research Infrastructure (RI) enriches these with community information, and access decisions are made on the combined assertions. Studies in the AARC project have shown that research communities connect to the R&E federations using an 'SP-IdP proxy' design pattern: a single logical component makes the RI appear as a single entity towards the R&E federations. The RI can also augment the 'R&E identity' of its users with membership information. Thus the RI shields itself from heterogeneity in the global R&E federations, and it eases service deployment by 'hiding' all services behind a single proxy identity provider (IdP) that itself needs to be registered only once in the R&E federations. The AARC Blueprint Architecture recommends this model for engaging research collaborations with R&E federations.

        The use of a proxy in itself poses policy challenges: services 'internal' to the community see only a single IdP that they ultimately have to trust. Generic service providers that span multiple research communities will have to trust many of these proxies: there are over a hundred multi-national RIs in the world, and many multi-purpose e-Infrastructures as well. And towards the R&E federations, the SP-IdP proxy hides all of the research services: home organisations and R&E federations see just a single service provider, even if the services behind it are provided in hundreds of different administrative domains.

        Building on the Security for Collaboration among Infrastructures (SCI) framework, the "Security Networked-Community Trust-framework for Federated Identity" (Snctfi) proposes a policy framework that allows determination of the 'quality' of such SP-IdP proxies and the research services behind them. For example, an SP-IdP proxy for EGI, proxying for all its compute and storage services, would be able to express to the R&E federation space that it has an internally consistent policy set, that it can make collective statements about all its constituent services and resource providers, and that it will abide by best practices in the R&E community, such as adherence to the Data Protection Code of Conduct (DPCoCo), the REFEDS Research and Scholarship (R&S) entity category, and Sirtfi, the security incident response trust framework that is itself a development from the SCI structure.

        By addressing the structure of the security policy set that binds services 'hiding' behind the SP-IdP proxy, Snctfi allows comparison between proxies, assigns trust marks for meeting requirements, and provides a scalable way to negotiate and filter based on such policies. It eases authentication and attribute release by R&E federations as well as service providers (through easier enrolment in federations, and because R&E IdPs may be more willing to release attributes if the proxy can convincingly assert DPCoCo and R&S), and it also aids assessment by generic e-Infrastructure providers that know the RI proxy meets their trust requirements. We will describe the requirements on the Snctfi framework and show how it applies to research and generic e-Infrastructures.
        Speaker: Dr David Kelsey (STFC-RAL)
        Slides
      • 81
        WLCG Security Operations Centres Working Group
        Security monitoring is an area of considerable interest for sites in the Worldwide LHC Computing Grid (WLCG), particularly as we move as a community towards the use of a growing range of computing models and facilities. There is an increasingly large set of tools available for these purposes, many of which work in concert and use concepts drawn from Big Data analytics. The integration of these tools into what is commonly called a Security Operations Centre (SOC), however, can be a complex task; the open source project Apache Metron (at the time of writing in incubator stage, and an evolution of the earlier OpenSOC project) is a popular example of one such integration. At the same time, the necessary scope and rollout of such tools can vary widely for sites of different sizes and topologies. Nevertheless, the use of such platforms could be critical for security in modern Grid and Cloud sites across all scientific disciplines.

        In parallel, the use of and need for threat intelligence sharing is at a key stage. Grid and Cloud security is a global endeavour: modern threats can affect the entire community, and trust between sites is of utmost importance. Threat intelligence sharing platforms are a vital component for building this trust as well as propagating useful threat data. The MISP software (Malware Information Sharing Platform) is a very popular and flexible tool for this purpose, in use at a wide range of facilities in different domains across the world.

        In this context we present the work of the WLCG Security Operations Centres Working Group, created to coordinate activities in these areas across the WLCG. The mandate of this group includes the development of a scalable SOC reference design applicable to a range of sites, by examining current and prospective SOC projects and tools. In particular we report on the first work on the deployment of MISP and the Bro Intrusion Detection System at a number of WLCG sites, including areas of integration between these tools (a minimal sketch of one such integration point follows this entry). We also report on our future roadmap and framework, which includes the Apache Metron project.
        Speakers: Dr David Crooks (University of Glasgow), Mr Liviu Vâlsan (CERN)
        Slides
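        One concrete integration point mentioned above is feeding shared indicators into the Bro intrusion detection system. A minimal sketch of writing Bro's tab-separated intel input format; the indicators are invented, and fetching them from a MISP instance is left out:

          indicators = [
              ("198.51.100.23", "Intel::ADDR"),
              ("badhost.example.net", "Intel::DOMAIN"),
          ]

          # Bro/Zeek Intel framework input: tab-separated with a #fields header
          header = "#fields\tindicator\tindicator_type\tmeta.source"
          lines = [header] + [
              "%s\t%s\tmisp-export" % (ind, ind_type)
              for ind, ind_type in indicators
          ]
          with open("intel.dat", "w") as f:
              f.write("\n".join(lines) + "\n")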
      • 82
        Collaborating for WISEr Information Security
        As most are fully aware, cybersecurity attacks are an ever-growing problem as larger parts of our lives take place online. Distributed digital infrastructures are no exception, and action has to be taken both to reduce the security risk and to handle security incidents when they inevitably happen. These activities are carried out by the various e-Infrastructures, and it has become very clear in recent years that collaboration with others both improves security and makes the work more efficient. The WISE (Wise Information Security for collaborating E-infrastructures) community was born as the result of a workshop in October 2015, jointly organised by the GÉANT group SIG-ISM (Special Interest Group on Information Security Management) and SCI, the 'Security for Collaboration among Infrastructures' group of staff from several large-scale distributed computing infrastructures. All agreed at the workshop that collaboration and trust are the key to successful information security in the world of federated digital infrastructures for research. WISE provides a trusted global framework where security experts can share information on topics such as risk management, experiences with certification processes, and threat intelligence. With participants from e-Infrastructures such as EGI, EUDAT, PRACE, XSEDE, NRENs and more, WISE focuses on standards, guidelines and practices, and promotes the protection of critical infrastructure. This talk will focus on ongoing work in the WISE Working Groups, each tackling a different aspect of collaborative security and trust: Security Training and Awareness; Risk Assessment; Review and Audit; Security in Big and Open Data; and SCI version 2. This final Working Group aims to produce a whitepaper detailing the community's requirements. We will report on progress made and highlight the challenges expected to face participating e-Infrastructures in the coming years. Details on the WISE community and our Working Groups can be found at https://wise-community.org
        Speaker: Ms Hannah Short (CERN)
        Slides
      • 83
        EGI-CSIRT: Coordinating Operational Security in evolving distributed IT-Infrastructures
        Operational security in scientific distributed IT infrastructures like EGI is challenging: existing computation frameworks are being extended, and new technologies implemented. In this evolving environment new policies have to be developed, and existing policies and procedures constantly have to be extended to meet new requirements. To efficiently enforce new policies, the security monitoring infrastructure has to be further developed to cover all elements of the evolving infrastructure. Finally, the incident response (IR) tool set has to be extended to efficiently handle security incidents affecting new technologies. In this presentation we will discuss EGI-CSIRT's way towards extending its portfolio to also provide all aspects of operational security in a Cloud environment. This covers the developments around the Virtual Machine Endorsement Policy and the related technical aspects of offering a trustworthy set of Virtual Machine Images (VMIs) to the user community through an Application Database. VMIs with vulnerable configurations have already been involved in incidents handled by EGI-CSIRT's Incident Response Task Force (IRTF). There it became apparent that the existing procedures and tools, successfully applied to IR in EGI, exposed deficiencies when applied to the FedCloud services. This triggered the development of a central user and virtual machine management for frameworks deployed in EGI FedCloud. The status of these tools will be demonstrated and their integration with the existing IR tools discussed. In EGI, the policies and procedures are put to the test in so-called Security Service Challenges (SSCs), to check whether they indeed help security operations prevent and respond to incidents. An SSC addressing EGI FedCloud interfaces and IR procedures will be described.
        Speakers: Dr Sven Gabriel (Nikhef/EGI), Mr Vincent Brillault (CERN)
        Slides
      • 84
        Q&A
    • Coffee Break
    • Cryo-EM Workshop (Conf. Room 1)
      Convener: Dr Chi-Yu Fu (Academia Sinica)
      • 85
        EMAN 2 (Part 2)
        Speaker: Dr Sunny Wu
    • ECAI Workshop: Mapping and Culture in Taiwan (Conf. Room 802)
      Convener: David Blundell (National Chengchi University)
      • 86
        Endangered Languages and Flow of Identities: State Policies and Ethnic Boundaries of the Thao
        Speaker: Prof. Yayoi Mitsuda (National Chi Nan University)
        Slides
      • 87
        History and Projections of Tombs Research in the Taiwan Area
        Speaker: Prof. Oliver Streiter (National Kaohsiung University)
        Slides
      • 88
        Earth Deity Mapping and Community Networks in Taiwan
        Speaker: Dr James Morris (National Chengchi University)
        Slides
    • Network, Security, Infrastructure & Operations II (Conf. Room 2)
      Convener: Dr Chalee Vorakulpipat (NECTEC)
      • 89
        Identifying Suspicious Network Activities in Grid Network Traffic: Finding the needle in a stack of needles
        In this presentation we will share our experience of analysing a year of grid network flow data. Network flow data provide only limited information about the nature of the traffic that travelled through the network segments. Researchers therefore need to come up with additional methods of anomaly detection, data enrichment and cross-referencing in order to effectively identify 'true positives': the subset of network flows that could be of interest to security officers, from denial-of-service attacks to malware operations, network scanning and attackers' lateral movements. In this study we also had access to other network data feeds (such as honeypot networks) and full packet payload monitoring, and we demonstrate how such sources can be effectively leveraged to identify and verify suspicious network activities. (A toy example of one such flow heuristic follows this entry.)
        Speaker: Mr Fyodor Yarochkin (Academia Sinica)
        Slides
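        A toy version of one heuristic from this class of analysis: flag sources that touch many distinct destination ports, a common signature of network scanning. The flow tuples and threshold are invented:

          from collections import defaultdict

          # (src_ip, dst_ip, dst_port) triples extracted from flow records
          flows = [("10.0.0.5", "192.0.2.1", p) for p in range(20, 1045)]
          flows += [("10.0.0.7", "192.0.2.1", 443),
                    ("10.0.0.7", "192.0.2.2", 443)]

          ports_per_src = defaultdict(set)
          for src, dst, dport in flows:
              ports_per_src[src].add(dport)

          THRESHOLD = 1000  # tune against known-good traffic
          for src, ports in sorted(ports_per_src.items()):
              if len(ports) > THRESHOLD:
                  print("possible scanner:", src, len(ports), "distinct ports")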
      • 90
        Modern Monitoring Systems
        Monitoring your infrastructure is vital to ensure your services keep running for your users in a Cloud or Grid environment. Modern monitoring systems can check not only the health of your hardware but also the status of the services you provide, as well as issues relevant to security. This talk gives an overview of modern monitoring systems and of how they interact, scale and integrate with your infrastructure. (A minimal sketch of a service health probe follows this entry.)
        Speaker: Mr Aleksander Paravac (University of Wuerzburg)
        Slides
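        A minimal sketch of the kind of service health probe such systems schedule and aggregate; the hosts and ports are placeholders:

          import socket

          def check_tcp(host, port, timeout=3.0):
              """Return True if a TCP connection succeeds within timeout."""
              try:
                  with socket.create_connection((host, port), timeout=timeout):
                      return True
              except OSError:
                  return False

          for host, port in [("se.example.org", 2811), ("ce.example.org", 22)]:
              status = "OK" if check_tcp(host, port) else "CRITICAL"
              print("%s:%d %s" % (host, port, status))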
      • 91
        Status of Network Security Operations at IHEP
        The Institute of High Energy Physics (IHEP) is an institute of the Chinese Academy of Sciences that explores elementary particle physics. The network of the IHEP campus and data centre connects about 1200 servers and 3000 PC clients. It supports the IPv4 and IPv6 protocols, both with 10 Gbps internet access. This report gives a brief introduction to the status of network security operations at IHEP. First, an architecture overview is presented; then four subsystems are described in detail: a monitoring and early-warning system, an intranet proactive defense system based on user behavior, a network security self-service platform, and a cyber-security federation for the HEP community in China.

        A network security monitoring and early-warning system was developed and deployed based on a 10 Gbps network probe. By monitoring network traffic and analyzing the logs of network devices and servers, it can identify malicious IP addresses and block attacks via the associated firewall. Affected hosts in the local network can be located too. An integrated warning system sends short messages to administrators when a high-risk attack appears.

        An intranet proactive defense system was designed as a supplement to the traditional strategy of network boundary protection. It is based on user behavior analysis with the help of an artificial neural network. Suspicious user behavior can be detected and visualized, enabling early warning and even proactive attack blocking.

        To help users eliminate host vulnerabilities, we developed and deployed a network security self-service platform. This system presents a straightforward, quantized fuzzy evaluation of host security risk, obtained by the analytic hierarchy process and cloud model theory. We improved the multi-level index system of the analytic process with a dynamic weighting method, which increased the adaptability and objectivity of the index system (a toy illustration of the analytic-hierarchy-process weighting step follows this entry).

        In June 2016 we proposed a Chinese cyber-security federation for high energy physics (CSFHEP); about ten universities and institutes have joined. The constitution of CSFHEP is complete and the federation is now in a test run. Within the CSFHEP framework, a cooperative security response centre and a cyber-security study group have been founded. All members will benefit from this federation through security incident response support, threat information sharing, training on secure operations and other related services.
        Speaker: Dr Tian Yan (Institute of High Energy Physics, CAS, China)
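        A toy illustration of the analytic hierarchy process (AHP) weighting step referred to above: index weights are derived from a pairwise-comparison matrix via its principal eigenvector. The matrix values are invented:

          import numpy as np

          # A[i, j] = how much more important criterion i is than j
          A = np.array([
              [1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0],
          ])

          eigvals, eigvecs = np.linalg.eig(A)
          principal = np.argmax(eigvals.real)
          w = np.abs(eigvecs[:, principal].real)
          w /= w.sum()  # normalized criterion weights
          print("weights:", np.round(w, 3))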
    • VRE (Media Conf. Room)
      Convener: Dr Kento Aida (NII)
      • 92
        Data Provenance Tracking as the Basis for a Biomedical Virtual Research Environment
        In complex data analyses it is increasingly important to capture information about the usage of data sets, in addition to their preservation over time, in order to ensure reproducibility of results, to verify the work of others, and to ensure that data have been used under appropriate conditions for specific analyses. Scientific workflow-based studies are beginning to realize the benefit of capturing this provenance [1] of their data and of the activities used to process, transform and carry out studies on those data. This is especially true in biomedicine, where the collection of data through experiment is costly and/or difficult to reproduce, and where the data need to be preserved over time. There is a clear requirement for systems that handle data over extended timescales, with an emphasis on preserving the analysis procedures themselves and the environment in which the analyses were conducted, alongside the processed data. One way to support the development of workflows and their use in (collaborative) biomedical analyses is through a Virtual Research Environment. However, the dynamic and geographically distributed nature of Grid/Cloud computing makes the capturing and processing of provenance information a major research challenge. In addition, most workflow provenance management services are designed only for data-flow-oriented workflows, but researchers are now realising that tracking data alone is insufficient to support the scientific process [2]. What is required for collaborative research is traceable and reproducible provenance support in a Virtual Research Environment (VRE) that enables researchers to define their analyses in terms of the datasets and processes used, to monitor and visualize the outcome of their analyses, and to log their results so that other users can call upon that acquired knowledge to support subsequent analyses. We have extended the work carried out in the neuGRID and N4U projects in providing a virtual laboratory [3] to provide the foundation for a generic VRE in which sets of biomedical data (images, laboratory test results, patient records, epidemiological analyses etc.) and the workflows (pipelines) used to process those data, together with their provenance data and result sets, are captured in the CRISTAL software [4]. This paper outlines the functionality provided for a VRE by the Open Source CRISTAL software and examines how it can provide the foundations for a practice-based knowledge base for biomedicine and, potentially, for a wider research community. (A minimal sketch of step-by-step provenance capture follows this entry.)
        References:
        [1] Y. Simmhan et al., "A Survey of Data Provenance in e-Science". In SIGMOD RECORD, Vol 34, pp. 31-36. ACM, 2005.
        [2] S. Bechhofer et al., "Why Linked Data is not Enough for Scientists". Future Generation Computer Systems Vol 9 No. 2, pp. 599-611, Elsevier, 2013.
        [3] R. McClatchey et al., "Traceability and Provenance in Big Data Medical Systems". Proc. of CBMS 2015, Sao Carlos, Brazil.
        [4] A. Branson et al., "CRISTAL: A Practical Study in Designing Systems to Cope with Change". Information Systems 42, pp. 139-152. Elsevier.
        Speaker: Prof. Richard McClatchey (University of the West of England, Bristol UK)
        Slides
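        A minimal sketch (not CRISTAL itself) of step-by-step provenance capture: each processing step records its inputs, parameters, timestamp and an output hash so the analysis can later be traced and reproduced. The step and file names are invented:

          import hashlib
          import json
          import time

          trail = []  # the provenance log

          def run_step(name, inputs, params, func):
              """Run func(inputs, params) and append a provenance record."""
              output = func(inputs, params)
              trail.append({
                  "step": name,
                  "inputs": inputs,
                  "params": params,
                  "time": time.time(),
                  "output_hash": hashlib.sha256(
                      json.dumps(output, sort_keys=True).encode()
                  ).hexdigest(),
              })
              return output

          run_step("smooth", ["scan_001.img"], {"kernel": 3},
                   lambda i, p: {"file": "scan_001_smooth.img"})
          print(json.dumps(trail, indent=2))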
      • 93
        Design and Implementation of Portal System for Subscribed Cloud Services in Identity Federation
        With the growing number of online services, identity federation is spreading rapidly, especially in the academic world, and identity federations have been established in many countries. In Japan, an academic identity federation called "GakuNin" has been in operation since 2010. In an identity federation, an IdP can provide information about a user as attributes, in addition to authentication-related information. An SP can request attributes about a user from an IdP and then use those attributes to authorize users and provide services. In general, an IdP releases attributes derived from authoritative source systems within the organization. Although academic society affiliations or research community memberships across universities are user data that would often be of interest to academic services, a typical campus identity management system is not designed to manage such data, and the IdP is thus unable to provide it to services. Some of the more advanced academic federations are beginning to make such attributes available, especially those related to membership of groups that span multiple organizations, by means of an Attribute Provider (AP). The SP is typically responsible for aggregating additional attributes from APs after the initial exchange with the IdP; a unique user identifier supplied by the IdP to the SP is shared with APs in order to look up additional attributes of that user.

        An IdP usually joins one federation and is connected to some services, such as organization-specific services provided by the organization. If users of the IdP could use all the services in the federation, they could easily recognize usable services through their IdP credentials. In reality this is not the case, for example because the organization does not subscribe to a service, or because the IdP is not configured to send attributes to the service correctly. Users must access each service in the federation to find out whether they can use it, which is not realistic, especially if the federation has many services. Furthermore, adding APs makes this problem more complex: an AP must know user identifiers in advance, usually by means of attributes from IdPs, and some SPs must also receive specific attribute values from an AP for authorization, which is opaque to ordinary users.

        We propose a portal system in which each stakeholder registers the relevant information and which then displays the usable services for any accessing user. For example, IdP operators register the SPs configured on their IdPs, SP operators register their authorization policies, and APs provide membership information about the user (a toy model of this calculation follows this entry). We present an elementary evaluation of our system, which computes the availability of some complex SPs for users under various conditions, and we will demo the system working in our production federation.
        Speaker: Mr Takeshi Nishimura (Project Researcher)
        Slides
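        A toy model of the portal's core calculation: given the attributes an IdP (plus attribute providers) can release for a user, list which SPs' requirements are satisfied. All names are invented:

          user_attrs = {"eduPersonPrincipalName", "mail", "groupMembership"}

          sp_requirements = {
              "e-journal portal": {"eduPersonPrincipalName"},
              "compute service": {"eduPersonPrincipalName", "groupMembership"},
              "survey tool": {"mail", "employeeNumber"},
          }

          for sp, required in sorted(sp_requirements.items()):
              missing = required - user_attrs
              if not missing:
                  print(sp, "-> usable")
              else:
                  print(sp, "-> missing:", ", ".join(sorted(missing)))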
      • 94
        Framework for Developing Cloud enabled Applications for IoT
        In this paper we propose an Application Programming Interface (API) framework that integrates cloud computing functionality with sensors to provide on-demand computational resources, dynamic storage allocation, and a database for developing cloud-enabled applications for the Internet of Things (IoT). The functions of the IoT, i.e., the internetworking of physical devices, vehicles, buildings and other items embedded with electronics, software, sensors, actuators, and network connectivity that enable these objects to collect and exchange data, are taken care of by this API, since these embedded devices have limited CPU, memory and power resources. Cloud computing technology is used in the design and development of this architecture and its APIs. Since IoT and cloud computing are emerging areas, there will undoubtedly be a surge in cloud-centric applications with sensors. The API framework is designed to interface with sensor networks and transfer the required data to centralized cloud storage; the cloud storage then serves as Database-as-a-Service to cloud applications, mobile apps or other web applications (a hedged sketch of such an ingestion endpoint follows this entry). For example, the proposed API framework will interface with the Ubi-Sense sensor board, which carries air pollution and humidity sensors with digital output display, light sensors that approximate the response of the human eye, a smoke sensor, and a buzzer to signal any abnormality in the measured parameters. The well-known drawbacks of sensor data, such as low visibility and poor accessibility, are addressed through this work, and the cloud computing properties of elasticity, scalability, optimal resource usage and minimal maintenance cost become readily available to IoT applications. The advantages of using sensors and their data are reasonable cost, power capacity and small size; applications include climate monitoring, battlefield surveillance, air pollution monitoring, habitat monitoring, tactical surveillance, distributed computing, vehicular networks and spying, all of which can be supported using this framework. The applications of this work include measuring the pollution level on office premises and temperature monitoring in the data centre. The information generated by the sensors is stored in cloud storage, provided as input to web applications or third-party applications, and archived hierarchically for later use, with the API providing Database-as-a-Service.
        Speakers: Mr Arunachalam Bala (C-DAC), Mr Battepati Kalasagar (C-DAC), Ms Mangala N (C-DAC)
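        A hedged sketch of the kind of ingestion endpoint such an API framework could expose; the routes and fields are assumptions, and Flask merely stands in for whatever web layer the authors use:

          from flask import Flask, jsonify, request

          app = Flask(__name__)
          readings = []  # stand-in for the Database-as-a-Service layer

          @app.route("/sensors/<sensor_id>/readings", methods=["POST"])
          def ingest(sensor_id):
              # e.g. body = {"smoke": 0.02, "humidity": 55.1}
              body = request.get_json(force=True)
              readings.append({"sensor": sensor_id, **body})
              return jsonify(status="stored", count=len(readings)), 201

          @app.route("/sensors/<sensor_id>/readings", methods=["GET"])
          def fetch(sensor_id):
              return jsonify([r for r in readings if r["sensor"] == sensor_id])

          if __name__ == "__main__":
              app.run(port=8080)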
    • 6:30 PM
      Welcome Reception (Grand Mayfull Taipei Hotel)
    • Keynote Session II (Conf. Room 2)
      Convener: Mr Ian Collier (STFC-RAL)
      • 95
        Caches all the way down: Infrastructure for Data Science
        The rise of big data science has created new demands on modern computer systems. While floating-point performance has driven computer architecture and system design for the past few decades, there is renewed interest in the speed at which data can be ingested and processed. Early exemplars such as Gordon, the NSF-funded system at the San Diego Supercomputer Center, shifted the focus from pure floating-point performance to memory and IO rates. At the University of Queensland we have continued this trend with the design of FlashLite, a parallel cluster equipped with large amounts of main memory, flash disk, and a distributed shared memory system (ScaleMP's vSMP). This allows applications to place data "close" to the processor, enhancing processing speeds. Further, we have built a geographically distributed, multi-tier hierarchical data fabric called MeDiCI, which provides an abstraction of very large data stores across the metropolitan area. MeDiCI leverages industry solutions such as IBM's Spectrum Scale and SGI's DMF platforms. Caching underpins both FlashLite and MeDiCI. In this talk I will describe the design decisions and illustrate some early application studies that benefit from the approach.
        Speaker: Prof. David Abramson (University of Queensland)
        Slides
    • Cryo-EM Workshop (Conf. Room 1)
      Convener: Dr Wei-Hau Chang (Academia Sinica)
      • 96
        High-resolution Integrative Modelling of Biomolecular Complexes from Fuzzy Data
        Speaker: Dr Alexandre M.J.J. Bonvin (Utrecht University)
    • Poster Session (Conf. Room 2)
      Convener: Dr Ludek Matyska (CESNET)
      • 97
        A Single Rack Cloud Center with Unprecedented Power and Thermal Efficiency
        Speaker: Dr Chih-hsun Lin (Institute of Physics, Academia Sinica)
        Slides
      • 98
        A Switching Mechanism of Visualization Middleware and Application Using Docker
        Speaker: Mr Kazuya Ishida (School Of Engineering, Osaka University)
        Slides
      • 99
        Automated Quality Control for Data from ASTI Automatic Weather Stations
        Speaker: Mr Jays Samuel Combinido (Department of Science and Technology -- Advanced Science and Technology Institute)
        Slides
      • 100
        Design and Implementation of Unified Authentication Management System of IHEP
        Speaker: Ms Li Wang (IHEP)
        Slides
      • 101
        GROMACS in the Clouds: A Portative User-friendly Interface Bridged to Grid-computing Resources
        Speaker: Dr Mikael Trellet (Utrecht University)
        Slides
      • 102
        ICT Enhanced Interactive Remedial Mathematics Program for Science and Engineering Students
        Speaker: Tosh Yamamoto (Kansai University)
        Slides
      • 103
        In Silico Drug Discovery of Potential HCV Helicase Inhibitors
        Speaker: Dr Choon Han Heh (University of Malaya)
        Slides
      • 104
        Metadata as Linked Data for Research Data Repositories
        Speaker: Mr Cheng-Jen Lee (IIS, Academia Sinica)
        Slides
      • 105
        Monitoring Virtual Devices in Mass Storage Environments
        Speaker: Mr Tim Chou (Brookhaven National Laboratory)
        Slides
      • 106
        Security Incident Response Procedure for Inter-Federation
        Speaker: Ms Hannah Short (CERN)
        Slides
      • 107
        Ten Years of Operations at the PIC Tier-1
        Speaker: Dr Josep Flix (PIC / CIEMAT)
        Slides
      • 108
        The Virtual Reality (VR) Training System for Disaster Preparedness
        Speaker: Mr Deep Ayadi (The Thin Page)
      • 109
        Toward Construction of Resilient Software-Defined IT Infrastructure for Supporting Disaster Management Applications
        Speaker: Dr Yasuhiro Watashiba (Nara Institute of Science and Technology)
        Slides
      • 110
        Virtualized Web Portals in EGI Federated Cloud
        Speaker: Dr Aleš Křenek (Masaryk University)
    • Coffee Break & Poster Session
    • Cryo-EM Workshop Conf. Room 1

      Conf. Room 1

      BHSS, Academia Sinica

      No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
      Convener: Dr Wei-Hau Chang (Academia Sinica)
      • 111
        POWERFIT and DISVIS (part 1)
        Speaker: Dr Alexandre M.J.J. Bonvin (Utrecht University)
        Slides
    • GDB Meeting Media Conf. Room

      Media Conf. Room

      BHSS, Academia Sinica

      No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
      Convener: Mr Ian Collier (STFC-RAL)
      • 112
        GDB Introduction
        Speaker: Mr Ian Collier (STFC-RAL)
        Slides
      • 113
        ASGC Report
        Speaker: Mr Felix Lee (ASGC)
        Slides
      • 114
        Asian Tier Forum report
        Speaker: Dr Sang Un Ahn (Korea Institute of Science and Technology Information)
        Slides
      • 115
        Asian Network Status
        Speaker: Dr Hsin-Yen Chen (ASGC)
        Slides
    • e-Science Activities in Asia Pacific II Conf. Room 2

      Conf. Room 2

      BHSS, Academia Sinica

      No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
      Convener: Dr Ludek Matyska (CESNET)
      • 116
        e-Science Activities in Thailand
        The National e-Science Infrastructure Consortium was formed to collaboratively develop computing, data storage and fundamental datasets as a sustainable infrastructure to support research in Thailand. The computing resources are used to support research projects in the areas of Computational Science and Engineering, Computer Science and Engineering, Water Resource, Energy and Environment Management, Climate Change and High Energy Particle Physics. The consortium involves members from a number of universities, research institutions and public-sector organizations, which have committed computing resources to the consortium and participate in relevant activities to foster collaboration and promote e-Science in Thailand.
        Speaker: Dr Chalee Vorakulpipat (NECTEC)
        Slides
      • 117
        e-Science Activities in Indonesia
        Speaker: Dr Basuki SUHARDIMAN (ITB)
        Slides
      • 118
        eScience Activities in Malaysia
        Speaker: Dr Suhaimi NAPIS (UPM)
        Slides
      • 119
        eScience Activities in Vietnam
        Speaker: Dr Nam THOAI (HCMUT)
        Slides
      • 120
        eScience Activities in the Philippines
        Speaker: Dr Peter Banzon (ASTI)
        Slides
      • 121
        Q&A
    • 11:15 AM
      Coffee Break
    • Cryo-EM Workshop Conf. Room 1

      Conf. Room 1

      BHSS, Academia Sinica

      No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
      Convener: Dr Wei-Hau Chang (Academia Sinica)
      • 122
        POWERFIT and DISVIS (part 2)
        Speaker: Dr Alexandre M.J.J. Bonvin (Utrecht University)
    • 12:30 PM
      Lunch
    • 12:30 PM
      PC Meeting Room 901

      Room 901

      BHSS, Academia Sinica

      No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
    • 1:00 PM
      Lunch
    • Cryo-EM Workshop Conf. Room 1

      Conf. Room 1

      BHSS, Academia Sinica

      Convener: Dr Wei-Hau Chang (Academia Sinica)
      • 123
        Scipion (part 1)
        Speaker: Prof. Jose Maria Carazo Garcia (National Center for Biotechnology - CNB - CSIC)
        Slides
    • GDB Meeting Media Conf. Room

      Media Conf. Room

      BHSS, Academia Sinica

      Convener: Mr Ian Collier (STFC-RAL)
      • 124
        Traceability & Isolation Working Group update
        Speaker: Mr Vincent BRILLAULT (CERN/EGI)
        Slides
      • 125
        IPv6 Rollout
        Speaker: Dr David Kelsey (STFC-RAL)
        Slides
      • 126
        Sirtfi - Incident Response for Identity Federations
        Speaker: Ms Hannah Short (CERN)
        Slides
    • Humanities, Arts & Social Sciences I
      Convener: Dr David J. Bodenhamer (Indiana University-Purdue University)
      • 127
        Platform for Humanities Open Data
        Construction of humanities databases is difficult for several reasons. First, the construction process requires expert knowledge and techniques of database systems, which impedes database construction by humanities researchers. The second reason is the diversity of resource media, which enriches humanities research but is an obstacle to metadata standardization and brings about heterogeneous databases. The third reason is this heterogeneity of metadata, which makes sharing data difficult. The Center for Integrated Area Studies, Kyoto University (CIAS) has developed two information tools, named MyDatabase (MyDB) and Resource Sharing System (RSS), to address these difficulties. The main component of MyDB is a database builder, allowing humanities researchers to construct and revise databases without expert knowledge. MyDB stores metadata and accepts any vocabulary of metadata, including nonstandard ones. This enables humanities researchers to use their own metadata vocabulary according to their own purpose. On the other hand, this variety of metadata makes integration difficult. RSS was developed to integrate heterogeneous databases on the Internet and to provide users with a uniform interface to retrieve databases seamlessly in one operation. Thus, MyDB and RSS have contributed to accelerating humanities open data, but there are still two problems to solve, especially for RSS: small coverage of databases and the initial costs of integration. First, for example, Kyoto University releases KULINE (OPAC), KURENAI (repository), KURRA (archive), Open Course Ware and various databases developed by each research institute in the university, but RSS does not integrate these databases. Second, it is time consuming to integrate new databases into RSS and impossible to trace links automatically; that is, for now, RSS is not the appropriate tool to discover hints and/or create new knowledge. To overcome these drawbacks, a new project has been launched to develop an innovative information platform for open humanities data. This platform comprises three sublayers. The first layer is the "Open Data Layer", which accumulates heterogeneous metadata. This layer uses RDF to describe data of different structures. The second layer is the "Data Link Layer". This layer uses ontology techniques such as RDFS and OWL to link ambiguous (uncontrolled) vocabularies and form "humanities big data". The third layer is the "Application Layer". As humanities big data is too huge and complicated to retrieve, categorize and analyze by hand, this layer provides utilities to process big data. The platform will provide APIs to support mashup applications. We expect the platform to reconstruct a knowledge base from heterogeneous databases, which can be used to construct meaningful chunks from scattered data. As a pilot study to determine the validity of the platform, we prepared a dataset of the "Japanese Journal of Southeast Asian Studies" as a core dataset (the first layer) and are trying to link words or documents in the core dataset to external resources such as KURENAI or DBpedia (also the first layer). Then the relationships between heterogeneous internal and external databases will be described in the second layer, so that the whole data is structured for a clean API that exploits the data in an annotated paper viewer (the third layer). A small RDF sketch of the Open Data Layer idea is given below.
        Speaker: Prof. HARA Shoichiro (Center for Integrated Area Studies, Kyoto University)
        Slides
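        The three-layer design lends itself to a short illustration: the sketch below expresses one record of the assumed Open Data Layer as RDF and links it to an external resource, using the rdflib package with invented URIs, titles and properties rather than the project's actual schema.

        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import DCTERMS, RDFS

        EX = Namespace("http://example.org/cias/")   # hypothetical namespace
        g = Graph()
        paper = EX["jjsas-article-001"]              # invented identifier
        g.add((paper, DCTERMS.title, Literal("A study of rice trade in Southeast Asia")))
        # Data Link Layer idea: tie local vocabulary to an external resource
        g.add((paper, DCTERMS.subject, URIRef("http://dbpedia.org/resource/Southeast_Asia")))
        g.add((paper, RDFS.seeAlso, URIRef("https://repository.kulib.kyoto-u.ac.jp/")))
        print(g.serialize(format="turtle"))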
      • 128
        Data Science as a Foundation Toward Open Data and Open Science: The Case of Taiwan Indigenous Peoples open research Data (TIPD)
        The research aims are threefold: (1) to demonstrate the methodology of data science in constructing Taiwan Indigenous Peoples open research Data (TIPD, see http://TIPD.sinica.edu.tw and https://osf.io/e4rvz/) based on Taiwan Household Registration (THR) administrative data; (2) to illustrate automated or semi-automated data processing as a determinant of constructing effective open data; and (3) to demonstrate the appropriate utilization of “old-school” data formats, such as multi-dimensional tables, as an effective means to overcome legal and ethical issues. The research extracts valuable information embedded in THR micro data and enriches the extracted information through processes of cleaning, cleansing, crunching, reorganizing and reshaping the source data. Major outputs of TIPD amount to 7,300 files and around 32 GB in size. TIPD now consists of three categories of open research data: (1) categorical data, (2) household structure and characteristics data, and (3) population dynamics data. Categorical data include two broad dimensions. The data enrichment processes produce a number of data sets that contain no individual information but retain most of the source data information. The enriched data sets can thus be open to the public for open administrative data study. The open data are systematically constructed in an automated or semi-automated way through the integration of a compiled programming language, software, and script languages. A toy cross-tabulation sketch is given below. Keywords: data science, administrative data, open data, open science, TIPD
        Speaker: Dr Ji-Ping Lin (Academia Sinica)
        Slides
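        The role of “old-school” multi-dimensional tables can be illustrated in a few lines of pandas; the columns and values below are invented for the sketch and are not TIPD data, but they show how micro records collapse into a cross-tabulation that keeps aggregate structure while dropping all individual information.

        import pandas as pd

        # Synthetic stand-in for household-registration micro records
        micro = pd.DataFrame({
            "county":    ["Hualien", "Hualien", "Taitung", "Taitung"],
            "ethnicity": ["Amis", "Amis", "Puyuma", "Amis"],
            "sex":       ["F", "M", "F", "F"],
        })
        # Multi-dimensional count table: no cell can identify an individual
        table = pd.crosstab(micro["county"], [micro["ethnicity"], micro["sex"]])
        print(table)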
      • 129
        A Preliminary Study on Reconstructing Faded Color by Spectral Estimation Method for Heritage Object
        The color appearance of a heritage object may reveal not only its unique style but also its cultural characteristics. However, as time goes by, the colors of a heritage object may change; for example, exposure to sunlight may cause the colors on the exterior of a historical building to fade gradually. This study proposes a color estimation method to recover faded colors based on spectral reflectance data gathered from the Bogd Khan Palace Museum in Ulaanbaatar, Mongolia. Several series of colors were measured at different locations of the building, which are under different levels of shading in its structure. Photos of a white ruler were taken at the same time as a reference to indicate the intensity of sunlight exposure at each measurement position. The results indicate a possible application of spectral color techniques to cultural heritage work. This study relies on spectral technology in order to achieve unprecedented color restoration. The data generated by the spectral processing technique are therefore enormous, especially when novel data analytics algorithms are required. A toy least-squares sketch of the estimation idea is given below.
        Speaker: Prof. M. James Shyu (Chinese Culture University)
        Slides
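        A common baseline for this kind of spectral estimation is a linear least-squares mapping from faded to original reflectance spectra learned on paired samples. The sketch below uses synthetic data and a toy wavelength-dependent fading model; it is our illustration of the general technique, not the authors' method.

        import numpy as np

        rng = np.random.default_rng(0)
        n_train, n_bands = 24, 31                  # e.g. 31 bands over 400-700 nm
        original = rng.uniform(0.1, 0.9, (n_train, n_bands))
        fade = np.linspace(0.9, 0.6, n_bands)      # invented per-wavelength fading
        faded = original * fade + rng.normal(0, 0.01, original.shape)

        # Learn a linear map M with faded @ M ~= original, then apply it
        M, *_ = np.linalg.lstsq(faded, original, rcond=None)
        restored = faded @ M
        print("mean abs error:", np.abs(restored - original).mean())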
    • 3:15 PM
      Coffee Break
    • Cryo-EM Workshop Conf. Room 1

      Conf. Room 1

      BHSS, Academia Sinica

      No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
      Convener: Dr Wei-Hau Chang (Academia Sinica)
      • 130
        Scipion (part 2)
        Speaker: Prof. Jose Maria Carazo Garcia (National Center for Biotechnology - CNB - CSIC)
      • 131
        Closing remark/round table discussion
    • 3:31 PM
      Coffee Break & Poster Session
    • GDB Meeting Media Conf. Room

      Media Conf. Room

      BHSS, Academia Sinica

      Convener: Mr Ian Collier (STFC-RAL)
      • 133
        Tier 1 Configuration Evolution & Options
        Speaker: Dr Josep Flix (PIC / CIEMAT)
        Slides
      • 134
        Wrap Up & WLCG Workshop update
        Speaker: Mr Ian Collier (STFC-RAL)
    • Humanities, Arts & Social Sciences II Conf. Room 2

      Conf. Room 2

      BHSS, Academia Sinica

      No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
      Convener: Prof. Richard MARCIANO (University of Maryland)
      • 135
        Using Advanced e-Systems for Community-Engaged Research Conf. Room 2

        Conf. Room 2

        BHSS, Academia Sinica

        No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
        Community Information Systems (CIS) is a generic term that describes a wide range of methods to organize, manage, and disseminate community data for the purpose of increasing the capacity of citizens and organizations to participate effectively in decision-making. CIS is a growing phenomenon in the U.S., beginning in 2005 with a Brookings Institution report, National Infrastructure for Community Statistics. A 2011 U.S. Government Accountability Office study revealed that over 35 American cities have a form of CIS, with many of them part of an emerging consortium, the National Neighborhood Indicators Partnership. The SAVI Community Information System for Central Indiana (SAVI), developed and managed by the Polis Center at Indiana University Purdue University, Indianapolis, is the nation’s largest and most comprehensive CIS. In existence since 1995, SAVI brings together over 35 data providers into a rich spatially enabled, web-based environment that allows citizens and researchers to understand a wide range of social issues at more than a dozen geographic scales, from census blocks and neighborhoods to counties and metropolitan areas. It is now transitioning to a community intelligence system, with information integrated at both individual and geographic units and with enhanced reporting capabilities and embedded predictive analytics. The system has also been selected to serve an innovative new Indiana University initiative to link scientific and social science data for the purpose of helping the state’s citizens respond more effectively to 21st-century changes. This presentation will discuss both the technical infrastructure and research potential of SAVI, especially in the rapidly emerging area of community-engaged research. It will also outline how SAVI is part of an emerging system of systems that uses advanced computing to link a variety of administrative and human service databases for community advancement.
        Speaker: Dr David J. Bodenhamer (Indiana University-Purdue University)
        Slides
      • 136
        A Proposal: ePortfolio for enhancing active learning for the future generation Conf. Room 2

        Conf. Room 2

        BHSS, Academia Sinica

        No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
        It is proposed that, the students’ meta-cognitive reflection being the key to learning, the development of students’ expressive skills in writing literacy will indeed bring about success in life after graduation. The four years of college education must fill up the “fuel tank of knowledge, wisdom, competencies and skills for lifelong learning”, which must not be depleted for over 40 years until retirement. By nurturing students’ writing skills as the artifact or evidence of academic learning through meta-cognitive activities, it is possible to conduct real education, producing graduates of high quality and caliber. The basic assumption behind incorporating the development of writing skills in the college curriculum or program is to make our culture richer and to elevate the value of our heritage for the benefit of a better future in terms of constructive and humanistic communication. ePortfolio has the potential to grow into such a robust ICT-enhanced system for education in a new paradigm fulfilling the need for transdisciplinarity. This proposal puts ePortfolio into a bigger picture in higher education, namely the realm of ePortfolio for academia, in which the process of learning leads to the benefit of career design and lifelong learning. It cannot be denied that proficiency in academic writing will bring students success in their careers as well as in lifelong learning. In other words, the artifact in writing is the mirror of the learning mind. Since the IFTF (Institute for the Future) claims that the 2020 skills include Global Awareness, Rich ICT and Media Literacy, as well as Digital Communication/Presentation skills as the essential future skills, the mirror of the reflective learning mind incorporates not only written information but also rich media. Thus, future education fortified with ePortfolio must also incorporate artifacts of learning in rich media. While in the past paper and pencil were the optimal technologies to reflect the evidence of learning, digital media literacy has been becoming dominant due to the advancement of ICT. It is believed that digital media have been providing us with richer ways of communication and presentation of the learning mind.
        Speakers: Prof. Maki Okunuki (Kansai University) , Mr Masaki Watanabe (iGroup Japan) , Dr Tosh Yamamoto (Kansai University) , Dr Yuri Kite (Kansai University)
        Slides
      • 137
        Occupation recommendation with major programs for adolescents Conf. Room 2

        Conf. Room 2

        BHSS, Academia Sinica

        No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
        Students make initial and critical decisions regarding what to study further and which career path to pursue. Many students enter post-secondary education without a clear idea of their major and future career plans. A mismatch in the choice of major and a lack of information about the professional field of study are among the reasons students switch majors. Such changes are wasteful in time and resources, and they cause emotional and financial stress for students. An estimated twenty to fifty percent of students enter college undecided, and an estimated seventy-five percent of students change their major at least once before graduation. Hence, identifying an adolescent’s major/occupation as early as possible can help them choose the right learning direction. Due to the rapid development of society, adolescents need counselling sessions to enable them to choose a suitable major and occupation. The choice of major and occupation has become increasingly complex due to the existence of multiple human abilities and interests, which means each person has abilities or skills in certain areas that can be applied to multiple occupations. The main difficulties students face in making the major/occupation selection are that they do not know how to make decisions, lack knowledge and information about majors/occupations, and are overloaded with information about common occupations. Therefore, it is essential to build a major/occupation recommendation system for adolescents with the capacity to meet these needs, providing direction and guidance to adolescents in choosing the major/occupation that suits their vocational interests and competencies. In this regard, this study proposes an Occupation Recommendation System (ORS) with major programs of study using Collaborative Filtering (CF) methods: a framework to assist students in the major/occupation selection, to provide information about contemporary occupations, and to suggest suitable occupations with a major program of study based on their interests, skills, favorite courses, high-scoring courses, earned certificates, and learned tools/technologies. To do this, we first normalize the data and then compute similarities. Predictions are calculated for any factors that are missing for the active student. Finally, the system executes hybrid CF algorithms to recommend occupations with major programs to the active student, as sketched below. The study was carried out with 26 Mongolian adolescents using two questionnaires in the spring semester of the 2015/2016 academic year; Holland vocational interest and skill questionnaires were employed during the experiment for collecting data.
        Speaker: Ms Ankhtuya Ochirbat (National Central University)
        Slides
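        A toy user-based collaborative filter illustrates the pipeline steps named above (normalisation, similarity computation, prediction of a missing factor); the matrix and numbers are invented, and the actual hybrid ORS algorithms are more elaborate.

        import numpy as np

        # Rows: students, columns: interest/skill factors; 0 marks a missing entry
        R = np.array([[5., 3., 4., 4.],
                      [4., 2., 5., 5.],
                      [1., 5., 2., 1.],
                      [4., 3., 0., 4.]])
        known = R > 0
        means = (R.sum(axis=1) / known.sum(axis=1))[:, None]
        Z = np.where(known, R - means, 0.0)            # mean-centre known entries

        norms = np.linalg.norm(Z, axis=1, keepdims=True)
        S = (Z @ Z.T) / np.clip(norms * norms.T, 1e-9, None)  # cosine similarity

        u, j = 3, 2                                    # student 3 misses factor 2
        peers = [v for v in range(len(R)) if v != u and known[v, j]]
        w = S[u, peers]
        pred = means[u, 0] + (w @ Z[peers, j]) / np.abs(w).sum()
        print(f"predicted score for student {u}, factor {j}: {pred:.2f}")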
    • 6:30 PM
      PC & GDB Dinner
    • Keynote Session III Conf. Room 2

      Conf. Room 2

      BHSS, Academia Sinica

      No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
      Convener: Dr Alexandre M.J.J. Bonvin (Utrecht University)
      • 138
        Big Data-Driven Drug Discovery
        Speaker: Prof. Jung-Hsin Lin (Academia Sinica)
        Slides
      • 139
        High Performance Computing Environment and Applications in CAS
        Speaker: Dr Xuebin Chi (Chinese Academy of Sciences)
        Slides
    • 10:30 AM
      Coffee Break & Poster Session
    • e-Science Activities in Asia Pacific III Conf. Room 2

      Conf. Room 2

      BHSS, Academia Sinica

      No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
      Convener: Dr Alberto Masoni (INFN National Institute of Nuclear Physics)
      • 140
        eScience Activities in Singapore
        Speaker: Dr John KAN (A-STAR)
        Slides
      • 141
        e-Science Activities in India
        Speaker: Dr Sarat Chandra Babu NELATURU (Centre For Development of Advanced Computing)
        Slides
      • 142
        eScience Activities in Australia
        Speaker: Dr Glenn Moloney (University of Melbourne)
        Slides
      • 143
        eScience Activities in Pakistan
        Speaker: Mr Saqib Haleem (National Centre for Physics, Islamabad, Pakistan)
        Slides
      • 144
        Q&A
    • 12:30 PM
      Joint DMCC/APGI Meeting Conf. Room 901

      Conf. Room 901

      BHSS, Academia Sinica

      No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
    • 12:30 PM
      Lunch
    • Biomedicine & Life Science I Conf. Room 2

      Conf. Room 2

      BHSS, Academia Sinica

      No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
      Convener: Dr Jung-Hsin Lin (Academia Sinica)
      • 145
        The DisVis and PowerFit web servers: Explorative and Integrative Modeling of Biomolecular Complexes harvesting EGI GPGPU resources Conf. Room 2

        Conf. Room 2

        BHSS, Academia Sinica

        No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
        Structure determination of complex molecular machines requires the combination of an increasing number of experimental methods, with highly specialized software geared towards each data source to properly handle the gathered data. Recently we introduced the two software packages PowerFit [1,2] and DisVis [3]. These combine high-resolution structures of atomic subunits with, respectively, density maps from cryo-electron microscopy or distance restraints, typically acquired by chemical cross-linking coupled with mass spectrometry. To facilitate their use by a broad community, they have been implemented as web portals harvesting both local CPU resources and GPGPU-accelerated EGI HTC resources [4], making use of GPGPU-enabled Docker containers developed under the MoBrain competence center of EGI-Engage [5] and the INDIGO-Datacloud EU project [6]. The web portals offer user-friendly interfaces, while minimizing computational requirements, and provide a first interactive view of the results. The portals can be accessed freely after registration via http://milou.science.uu.nl/services/DISVIS and http://milou.science.uu.nl/services/POWERFIT. 1. G.C.P. van Zundert and A.M.J.J. Bonvin. Fast and sensitive rigid-body fitting into cryo-EM density maps with PowerFit. *AIMS Biophysics*. **2**, 73-87 (2015). 2. G.C.P. van Zundert and A.M.J.J. Bonvin. Defining the limits and reliability of rigid-body fitting in cryo-EM maps using multi-scale image pyramids. *J. Struct. Biol.*, **195**, 252-258 (2016). 3. G.C.P. van Zundert and A.M.J.J. Bonvin. DisVis: Quantifying and visualizing accessible interaction space of distance-restrained biomolecular complexes. *Bioinformatics*. **31**, 3222-3224 (2015). 4. http://www.egi.eu 5. https://mobrain.egi.eu 6. http://www.indigo-datacloud.eu
        Speaker: Prof. Alexandre Bonvin (Utrecht University)
        Slides
      • 146
        NMRbox and VCell: Common Flexible Infrastructure Supporting Two Very Different Virtual Research Communities Conf. Room 2

        Conf. Room 2

        BHSS, Academia Sinica

        No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
        NMRbox is a shared computational platform for NMR which aims to simplify and integrate the dissemination, maintenance, support, and application of NMR data processing and analysis software packages. From the NMRbox perspective, the collection of myriad tools has enabled (1) the development of meta-packages that utilize multiple tools to accomplish complex tasks beyond the scope of individual packages and (2) critical comparison of different software packages that accomplish the same or similar tasks (e.g. comparing 11 different non-Fourier methods of spectrum analysis). BioMagResBank (BMRB) is the international repository for biological NMR data, and depositions have already enabled meta-analyses yielding novel insights into biomolecular structure and function based on chemical shifts (nuclear resonance frequencies). However, depositions of other types of bioNMR data remain insufficient for meta-analysis. NMRbox utilizes the data model underlying BMRB as the organizing principle for annotating NMR data processing and analysis workflows, and consequently transparently facilitates and enriches BMRB depositions. VCell is a unique computational environment for modeling and simulation of cell biology that is deployed as a distributed application freely available over the Internet. VCell provides for “one-stop simulation shopping” whereby deterministic (compartmental ODE or reaction-diffusion-advection PDE), stochastic (several SSA solvers), spatial stochastic (reaction-diffusion with Smoldyn), hybrid deterministic/stochastic, and network-free agent-based simulations can be easily created to study the same biological system/hypothesis. Model geometries may be derived from idealized analytical expressions or from experimental 2D or 3D microscope images, and support for membrane flux, lateral membrane diffusion and electrophysiology is included. Inexperienced modelers can enter reactions and pathways in a biology-based interface, and VCell automatically creates the mathematical system of equations according to any of the formulations listed above. Models and simulations can be accessed from anywhere and can be shared among collaborators or made publicly available through the VCell database. NMRbox and VCell are supported by a common HPC infrastructure that exploits modern virtualization, distributed, and cloud computing technologies to provide high availability as well as software persistence, using customized, flexible, and evolving solutions for the respective projects and research communities. Both NMRbox and VCell lower the barrier to entry for non-experts in a number of ways: first, by solving the problem of software discovery and installation; second, by providing GUI “wrappers” that hide the complicated syntax utilized by many packages and solvers; and third, by seamlessly handling data conversions and import/export operations required for interoperability within the respective software platforms or with other external tools.
        Speaker: Dr Ion I. Moraru (UCONN Health)
      • 147
        Application of non-uniform sampling method in NMR spectroscopy Conf. Room 2

        Conf. Room 2

        BHSS, Academia Sinica

        No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
        To harness the resolving power of high-field NMR spectroscopy, non-uniform sampling (NUS) methods and efficient data reconstruction methods are absolutely required to obtain NMR spectra at otherwise unreachable spectral resolution in the indirect dimensions. I will describe the necessity of NUS methods in high-field NMR spectroscopy and my experience using the methods developed by the Wagner laboratory at Harvard Medical School [Hyberts, et al., 2011; Hyberts, et al., 2012]. These important methods include the Poisson-gap sampling method, the Forward Maximum entropy (FM) reconstruction method [Hyberts et al., 2012], and the iterative soft thresholding (IST) method [Hyberts et al., 2011]. These methods will be used to record and reconstruct all demanding 3D or 4D experiments; a toy IST sketch is given below. References: Hyberts S. G., Milbradt A. B., Wagner A. B., Arthanari H., Wagner G. (2011). Application of iterative soft thresholding for fast reconstruction of NMR data non-uniformly sampled with multidimensional Poisson Gap Scheduling. J. Biomol. NMR 52, 315-327. Hyberts S. G., Arthanari H. and Wagner G. (2012). Applications of non-uniform sampling and processing. Top. Curr. Chem. 316, 125-148.
        Speaker: Dr Tsyr-Yan Yu (Inst. of Atomic & Molecular Sciences, Academia Sinica)
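        To give a flavour of the reconstruction step, below is a toy one-dimensional iterative soft-thresholding loop on a synthetic, non-uniformly sampled FID; the random sampling schedule, threshold choice and signal are our own simplifications, not the published Poisson-gap/IST parameters.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 256
        t = np.arange(n)
        # Synthetic two-peak FID with decay
        fid = np.exp(2j*np.pi*0.11*t - t/90) + 0.6*np.exp(2j*np.pi*0.31*t - t/70)

        keep = rng.random(n) < 0.3       # ~30% sampled points (toy NUS schedule)
        x = np.where(keep, fid, 0)

        for _ in range(100):
            spec = np.fft.fft(x)
            thr = 0.2 * np.abs(spec).max()          # soft-threshold the spectrum
            mag = np.maximum(np.abs(spec) - thr, 0)
            spec = mag * np.exp(1j * np.angle(spec))
            x = np.fft.ifft(spec)
            x[keep] = fid[keep]          # enforce consistency on measured points

        print("relative error:", np.linalg.norm(x - fid) / np.linalg.norm(fid))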
      • 148
        Image processing in cryo Electron Microscopy (cryo EM): Analyzing reliability and quality Conf. Room 2

        Conf. Room 2

        BHSS, Academia Sinica

        No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
        Modern electron microscopes equipped with direct electron detectors nowadays provide 2D images (“micrographs”) with quasi-atomic resolution. However, the way from 2D image collection to the calculation of accurate structural maps showing the electrostatic potential of biological macromolecules demands complex image processing operations involving many local optimizers. Naturally, the possibility always exists of getting trapped in local minima, representing wrong structural maps. In this context, I will present several approaches to analyzing reliability and quality in cryo-EM, including ways to analyze precision and accuracy in estimating the relative geometrical orientation among images (a key step in the 3D reconstruction process) and a novel, fully automatic approach to resolution estimation, allowing for local (pixel-based) analysis. All these new approaches are freely available as part of both the XMIPP and Scipion image processing suites.
        Speaker: Prof. Jose Maria Carazo Garcia (National Center for Biotechnology - CNB - CSIC)
    • Massively Distributed Computing and Citizen Sciences Media Conf. Room

      Media Conf. Room

      BHSS, Academia Sinica

      No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
      Convener: Dr Andrea Valassi (CERN)
      • 149
        Explore multi-core virtualization on the ATLAS@home project
        The exploitation of volunteer computing resources has become a popular practice in the HEP (High Energy Physics) computing community because of the huge amount of potential computing power it provides. ATLAS@home, a pioneer project in this practice, uses the BOINC middleware to harness thousands of volunteer computers worldwide and has been harvesting a very considerable amount of computing resources since its official launch in 2013. With more and more volunteer computers of diverse resources participating in this project, it is necessary to explore ways to optimize the usage of volunteer computing resources in terms of memory, storage and CPU, so as to qualify more computers to run ATLAS@home jobs. The ATLAS software Athena already supports multi-core processing and has been running stably and efficiently on Grid sites for over a year. Statistics from Grid sites show that multi-core processing can significantly reduce total memory usage while utilizing the same number of CPU cores. Based on this practice, we explore the use of multi-core virtualization in the ATLAS@home project: one virtual machine is spawned with all the available CPU cores of the volunteer computer, and a multi-core Athena job is then launched inside this virtual machine, using all of its CPU cores. The ATLAS@home multi-core application was officially launched in July 2016, and statistics from a few months of full-load running confirmed the reduction in total memory usage; however, the performance of this practice differs from the Grid sites due to the virtualization. Through historical statistics and testing, we found that factors including the allocation of CPU cores, different versions of hypervisors and the scheduling policies of the BOINC middleware can significantly impact the performance of the multi-core application. In this paper, we will cover the following aspects of this practice: 1) the implementation of the multi-core virtualization; 2) experiences gained through a few months of full-load running; 3) tuning and optimization of its performance. A back-of-the-envelope sketch of the memory effect is given below.
        Speaker: Prof. wenjing wu (IHEP)
        Slides
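        The memory saving from multi-core processing comes from sharing the job's base memory image once per job rather than once per core. A back-of-the-envelope sketch with invented numbers (not ATLAS measurements) makes the effect explicit:

        # Assumed figures, purely illustrative
        base_mb, per_core_mb, cores = 1200, 600, 4

        single_core_jobs = cores * (base_mb + per_core_mb)   # 4 independent jobs
        multi_core_job = base_mb + cores * per_core_mb       # 1 shared base image
        print(single_core_jobs, "MB vs", multi_core_job, "MB")  # 7200 vs 3600 MB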
      • 150
        Using virtualized computing resources with the DIRAC Interware
        Multiple scientific communities are using computationally intensive applications in their research activities. They rely largely on national and international distributed computing infrastructures, which are mostly based on well-known grid technologies. However, with the progressively wider adoption of cloud technologies, more and more computing power is available in the form of groups of dynamically created virtual machines. The machines can be general purpose or specialized virtual appliances suitable for a particular application. This provides a very high level of flexibility in using computing resources but makes it quite difficult for an average user to manage a large amount of cloud computing power. On the other hand, most computing resources are still available through grid infrastructures. Therefore, it is necessary to provide a transparent user access interface to both types of computing infrastructures in order to enlarge the overall available power and to ensure a smooth transition to the use of the new technology. The DIRAC Interware project offers software and multiple ready-to-use components to build distributed computing infrastructures. It provides tools to integrate various types of computing resources, including grid and cloud systems. DIRAC users see cloud resources as logical entities in the same way as grid sites. The DIRAC Workload Management System allows the combination of grid, cloud and other resources within the same complex workflow. In this contribution we will describe the recent progress in the development of the cloud management subsystem of the DIRAC Project, its architecture and main components. We will demonstrate how the combined configuration, usage and monitoring of grid, cloud and other computational resources is performed. We will present how resources provided by cloud federation infrastructures are made available via the DIRAC services. We will give several examples of their usage by large high energy physics experiments and other scientific communities.
        Speaker: Dr Andrei Tsaregorodtsev (CPPM-IN2P3-CNRS)
        Slides
      • 151
        Citizen Earthquake Science in Taiwan
        Taiwan is located in a seismically highly active area that geologists call the convergent plate boundary between the Eurasian plate and the Philippine Sea plate. To bring seismology in a simple way to citizens at school and home, we are incorporating the research-based Quake-Catcher Network (QCN) program into an educational seismic network that is maintained by teachers in dozens of high schools across the island of Taiwan. We established a web-based educational platform so that users are encouraged to interact with the collected seismic waveform data and even to conduct further signal analysis on their own. In addition, to collect field observations of any earthquake-induced ground damage, such as surface fault rupture, landslide, rock fall, liquefaction, and landslide-triggered dams or lakes, we are developing an earthquake damage reporting system for the public, relying particularly on trained volunteers who have taken a series of workshops organized by this project. This Taiwan Earthquake Scientific Report (TSER) system is based on the Ushahidi mapping platform, which has been widely used for crowdsourcing. Online games and materials for learning about earthquakes will be ready in near real time for students and teachers. All these products are now operated at the Taiwan Earthquake Research Center (TEC). With these newly developed platforms and materials, we aim not only to raise earthquake awareness and preparedness, but also to encourage public participation in earthquake science in Taiwan.
        Speaker: Dr Wen-Tzong Liang (Academia Sinica)
        Slides
    • Network, Security, Infrastructure & Operations III Conf. Room 1

      Conf. Room 1

      BHSS, Academia Sinica

      No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
      Convener: Dr Kenny Huang
      • 152
        The SDN application on data transfer and cyber security in IHEP
        Network performance and security are important for HEP experiment data and information exchange. Software Defined Networking (SDN) gives us a new idea and choice for designing and implementing a new network model to guarantee or improve network performance and security. Two SDN application projects have been launched at IHEP: SDN@WAN focuses on the quality of HEP data transfer among different computing sites around China with the currently available network infrastructure, and SDN@SEC focuses on network intrusion detection and protection with an SDN architecture.
        Speaker: Mr Fazhi QI (Institute of High Energy Physics,CAS)
        Slides
      • 153
        IPv6 Deployment and Migration of WLCG Tier-2 Site Resources on a Private Cloud
        The National Centre for Physics (NCP) in Pakistan maintains a large computing infrastructure for the scientific community, including a Tier-2 site of the Worldwide LHC Computing Grid (WLCG) and a local scientific cluster. The need for IP address space has increased due to the expansion of the infrastructure and the adoption of cloud technology for hosting virtual machines. On the other hand, the IPv4 address space is nearly depleted in this region, and hence migration to IPv6 is inevitable. NCP is among the few organizations in the country actively involved in promoting IPv6 and has deployed the next-generation IPv6 protocol in its campus network. The NCP network is configured to provide IPv6 support while ensuring high availability of services, security, and optimized routing. Most of the corporate services are running in dual-stack mode, and the WLCG Tier-2 site is also being tested on IPv6. In order to optimize the utilization of computing resources, an OpenStack-based private cloud has also been deployed, and all computing resources are now managed through that cloud. This paper discusses the details of the IPv6 deployment status and the migration status of the Tier-2 site on the private cloud. A generic dual-stack socket sketch is given below.
        Speaker: Mr Saqib Haleem (National Centre for Physics, Islamabad, Pakistan)
        Slides
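        Dual-stack operation of a single service can be illustrated with a short generic sketch: one IPv6 listening socket that, with IPV6_V6ONLY disabled, also accepts IPv4 clients as mapped addresses. The port is arbitrary and this is not NCP's actual configuration.

        import socket

        s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        # 0 disables IPV6_V6ONLY, so IPv4 clients appear as ::ffff:a.b.c.d
        s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
        s.bind(("::", 8080))
        s.listen(5)
        print("dual-stack listener on [::]:8080")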
      • 154
        Transition from IPv4 to IPv6 in Mongolia: Key Issues and Recommendations
        1. Introduction
          1.1. ICT sector policy and regulations
          1.2. Results of IPv6 survey
        2. IPv6 deployment, challenges and current initiatives
          2.1.1. Policy and regulation
          2.1.2. IPv6 deployment for industry and business
          2.1.3. Security considerations in IPv4 to IPv6 migration
        3. Mongolia industry case studies
        4. IPv6 deployment and infrastructure security training for Mongolia
        5. The way forward: summary of recommendations
        Speaker: Dr TumeUlzii Naranmandakh (Communications Regulatory Commission, Mongolia)
    • 3:30 PM
      Coffee Break
    • Biomedicine & Life Science II Conf. Room 2

      Conf. Room 2

      BHSS, Academia Sinica

      No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
      Convener: Dr Danny HSU (Academia Sinica)
      • 155
        Molecular dynamics of proteins in the cloud
        A key computational technique in Structural Biology is Molecular Dynamics (MD), a computer simulation of the motion of atoms and molecules as a function of time. MD simulations capture the behavior of biological macromolecules in full atomic detail using the laws of statistical thermodynamics. Such simulations serve as a computational microscope, revealing biomolecular mechanisms at various spatial and temporal scales. ---------- In our work, we have implemented and tested a variety of approaches to the use of grid and cloud computational infrastructures for MD simulations. Previously, we implemented a web interface to set up MD simulations that were then executed on the European Grid Infrastructure [1]. We then exploited the technologies developed by the INDIGO-Datacloud project to expand both the types of computational infrastructures that can be used and the portfolio of services. For this we packaged the AmberTools suite [2] as a Docker container. Next, we implemented the use of cloud storage resources to save trajectories and to perform their analysis and comparison with experimental data, such as NMR order parameters. Cloud storage uses the Onedata solution, which can work with large files such as those typically output by MD simulations. Cloud computing can also be exploited for specific applications. The setup of simulations and analyses is still performed via web interfaces, so that the user does not need to know which kind of infrastructure is used. For this we use the FutureGateway, a programmable RESTful API server developed within INDIGO-Datacloud. The availability of these new solutions to support MD simulations allows non-expert users to access standardized protocols for state-of-the-art calculations and analysis, enabling the successful application of MD with a low learning barrier. The solutions are also available through the West-Life Virtual Research Environment for Structural Biology [3]; a schematic container-run sketch is given below. ---------- [1] A Grid-enabled web portal for NMR structure refinement with AMBER. Bertini I, Case DA, Ferella L, Giachetti A, Rosato A. [2] http://ambermd.org/ [3] http://about.west-life.eu/
        Speaker: Prof. Antonio Rosato (University of Florence)
        Slides
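        From a user's point of view, the container-based execution model described above might look roughly like the sketch below; the image name and paths are placeholders, and the actual INDIGO-Datacloud images and FutureGateway plumbing differ.

        import subprocess

        # Run an AmberTools MD engine inside a (hypothetical) Docker image,
        # mounting local inputs/outputs into the container
        cmd = [
            "docker", "run", "--rm",
            "-v", "/data/mdrun:/work",
            "example/ambertools:latest",              # placeholder image name
            "sander", "-O", "-i", "/work/md.in", "-o", "/work/md.out",
        ]
        subprocess.run(cmd, check=True)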
      • 156
        Investigating community detection algorithms and their capacity as markers of brain diseases
        In this paper, we present a workflow for the evaluation of brain functional connectivity with different community detection algorithms, and their strengths in discriminating between health and brain disease. We further analyze the computational complexity of particular pipeline steps, aiming to provide guidelines for both its execution on computing infrastructure and further optimization efforts. The human brain consists of 10^11 neurons interconnected by approximately 10^14 connections, creating a large complex structure. This anatomical connectivity is a substrate for brain function, which we can both measure and model on various scales. The goal of current research is understanding the brain as a system and its behavior as a whole, i.e. identifying the spatiotemporal structure of brain function. This large-scale functional connectivity of neuronal populations is defined as statistical relations between the neural activities of distinct brain cortical regions and is usually measured by fMRI (functional magnetic resonance imaging), EEG or MEG (electro/magneto-encephalography). A very promising approach is to model the system as a network using network science methodology, drawing inspiration from previous applications in research areas such as economics, transportation, communication or immunology. Network analysis models a complex system as a set of discrete units and the interactions between them; it provides ways to measure the importance of nodes and to detect communities, describes topology and whole-system events through diverse metrics (properties), and can be used to model the growth of the system. Primarily, network analysis is a data-driven technique and an excellent tool to study and understand real systems; it requires interdisciplinary and big data processing approaches. The neuroscience community puts great effort into finding potential markers of disease that are noninvasive, easy to establish and stable. In this paper, we evaluate the strengths of community detection methods in differentiating between networks of healthy and diseased subjects, using the example of MCI-AD and AD (Mild Cognitive Impairment and Alzheimer’s Disease) as illnesses known to influence network topology. We apply discriminant analysis, including parameter sweeps of the data preprocessing methods, to quantify the ability of community detection algorithms to distinguish between patients and controls, and thus evaluate each of them as a promising or inconclusive marker of the disease. The pipeline consists of a relatively large number of subsequent steps; the core community detection step is sketched below in toy form. We provide an analysis of the computational complexity which will guide further optimization efforts. Since we are dealing with large amounts of data, we use the computing infrastructure of MetaCentrum, CESNET, Czech Republic, to analyze the data.
        Speaker: Prof. Eva Hladka (Faculty of Informatics, Masaryk University, Brno, Czech Republic)
        Slides
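        A toy version of the core step, for illustration only: synthetic time series stand in for regional brain signals, an arbitrary correlation threshold defines the functional network, and one of many community detection algorithms (greedy modularity, via networkx) extracts the communities.

        import numpy as np
        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities

        rng = np.random.default_rng(2)
        ts = rng.normal(size=(32, 200))            # 32 regions x 200 time points
        ts[16:] += 0.8 * rng.normal(size=200)      # induce one correlated block
        C = np.corrcoef(ts)                        # functional connectivity

        G = nx.Graph((i, j) for i in range(32) for j in range(i + 1, 32)
                     if C[i, j] > 0.3)             # arbitrary threshold
        for k, com in enumerate(greedy_modularity_communities(G)):
            print(f"community {k}: {sorted(com)}")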
      • 157
        2D and 3D Medical Images for Anatomy Education using a cloud computing platform
        Human anatomy is the basic scientific study through which students in medical schools learn the shape, position, size, and various relationships of the organ structures in the human body. In this paper, we make use of the images produced by the Visible Human Project (VHP), which are freely accessible over the internet, to develop a 2D and 3D anatomy learning system that lets students in medical schools learn anatomy more effectively and efficiently. Since the system needs a huge amount of storage for the data and a huge amount of computational power to generate 3D images of various organs interactively, it is installed on a cloud platform. Students can access the system through a mobile device, such as a tablet computer, so that they can easily learn anytime, anywhere with affordable devices.
        Speaker: Prof. Lihshyang Chen (National Cheng Kung University)
        Slides
    • Network, Security, Infrastructure & Operations IV Conf. Room 1

      Conf. Room 1

      BHSS, Academia Sinica

      No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
      Convener: Dr David Kelsey (STFC-RAL)
      • 158
        The EGI CernVM-FS infrastructure - latest developments and evolution towards a global facility Conf. Room 1

        Conf. Room 1

        BHSS, Academia Sinica

        No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
        CernVM-FS is firmly established as a method of software and condition data distribution for the LHC experiments at WLCG sites. Use of CernVM-FS outside WLCG has been growing steadily, and an increasing number of Virtual Organizations (VOs), both within High Energy Physics (HEP) and in other communities (e.g. Space, Natural and Life Sciences), have identified this technology as a more efficient way of maintaining and accessing software across Grid and Cloud computing environments. Following the initial success of a CernVM-FS service offered to small VOs in the UK, the RAL Tier-1 enlarged it, and an EGI CernVM-FS infrastructure has been developed since September 2013. In this paper we describe the work carried out at RAL to expand the infrastructure into a resilient, distributed CernVM-FS service for non-LHC VOs across Europe, replicated around the world. We focus on the current status of its main elements: the Master Repository (Stratum-0), the Replica/Mirror (Stratum-1) and the customised mechanism used by the VO Software Grid Managers to upload and maintain the master repositories. The latest developments to widen and consolidate the CernVM-FS infrastructure as a global facility (with main contributors in Europe, North America and Asia) are reviewed, such as the mechanism implemented to publish external repositories hosted by emerging regional infrastructures (e.g. South Africa Grid). Progress on enabling the ‘squid auto discovery’ mechanism at the CernVM-FS client level (a specific demand from the communities using the EGI Federated Cloud resources) is described, alongside the implementation of protected CernVM-FS repositories, a requirement for academic communities willing to use CernVM-FS technology.
        Speaker: Mr Catalin Condurache (STFC Rutherford Appleton Laboratory)
        Slides
      • 159
        A solution for secure use of Kibana and ElasticSearch in multi-user environment Conf. Room 1

        Conf. Room 1

        BHSS, Academia Sinica

        No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
        In order to check the health, activities, or resource usage of an IT service, monitoring is indispensable. A combination of Kibana and ElasticSearch is used for monitoring in many places, such as KEK, CC-IN2P3, CERN, and also non-HEP communities. Kibana provides a web interface for rich visualization, and ElasticSearch is a scalable distributed search engine. However, these tools do not support authentication and authorization features by default. There is no problem in a single-user environment, but when a single Kibana and ElasticSearch service is shared among many users, any user who can access Kibana can retrieve others' information from ElasticSearch. In a multi-user environment, fine-grained access control is necessary to protect one's own data from others or to share part of the data within a group. The CERN cloud service group provides a cloud utilization dashboard to each user via ElasticSearch and Kibana. The group has deployed a homemade ElasticSearch plugin to restrict data access based on the user authenticated by the CERN Single Sign-On system. It enables each user to have a separate Kibana dashboard for cloud usage without access to others' data. Based on that solution, we propose an alternative which enables user/group-based ElasticSearch access control and separation of Kibana dashboards. It is more flexible and can be applied not only to the cloud service but also to various other situations. We confirmed that our solution works well at CC-IN2P3, and a pre-production platform for CC-IN2P3 is under construction. We will describe our solution for the secure use of Kibana and ElasticSearch, including the integration of Kerberos authentication, the development of a Kibana plugin which allows Kibana dashboards to be separated per user/group, and our contribution to Search Guard, an ElasticSearch plugin enabling user/group-based access control. We will also describe the performance impact of using Search Guard.
        Speaker: Wataru Takase (KEK)
        Slides
      • 160
        dCache, towards Federated Identities and Anonymized Delegation Conf. Room 1

        Conf. Room 1

        BHSS, Academia Sinica

        No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
        For over a decade, X.509 proxy certificates have been used in High Energy Physics (HEP) to authenticate users and guarantee their membership in Virtual Organizations, on which subsequent authorization, e.g. for data access, is based. Although the established infrastructure has worked well and provided sufficient security, the implementation of procedures and the underlying software is often seen as a burden, especially by smaller communities trying to adopt existing HEP software stacks. In addition, it is more efficient to guarantee the identity of a scientist at their home institute, since the necessary identity validation has already been performed there. Scientists also depend on service portals that access and process data on their behalf. As a result, it is imperative for infrastructure providers to support delegation of access to these portals for their end-users without compromising data security and identity privacy. The growing usage of distributed services for similar data sharing and processing has led to the development of novel solutions such as OpenID Connect and SAML. OpenID Connect is a mechanism for establishing the identity of an end-user based on authentication performed by a trusted third-party identity provider, which infrastructures can therefore use to delegate identity verification and establishment to the trusted entity. After a successful authentication, the portal is in possession of an authenticated token, which can be further used to operate on infrastructure services on behalf of the scientist. Furthermore, these authenticated tokens can be exchanged for more flexible authorized credentials, such as macaroons. Macaroons are bearer tokens that services can use to ascertain whether a request originates from an authorized portal. They are cryptographically verifiable entities and can be embedded with caveats to attenuate their scope before delegation. In this presentation, we describe how OpenID Connect is integrated with dCache and how it can be used by a service portal to obtain a token for an end-user, based on authentication performed with a trusted third-party identity provider. We also propose how this token can be exchanged for a macaroon by an end-user, and we show how dCache can be enabled to accept requests bearing delegated macaroons. An illustrative macaroon-attenuation sketch is given below.
        Speaker: Dr Paul Millar (DESY)
        Slides
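        The attenuation property of macaroons can be illustrated with a generic sketch using the pymacaroons package; the caveat strings below are invented, and dCache's actual caveat grammar may differ.

        from pymacaroons import Macaroon, Verifier

        # Token minted by the service for an authenticated portal/user
        root = Macaroon(location="dcache.example.org",
                        identifier="user-42-session",
                        key="server-secret-key")

        # Before delegation, narrow the token to read-only access on one path
        delegated = root.copy()
        delegated.add_first_party_caveat("activity:READ")
        delegated.add_first_party_caveat("path:/data/experiment1")

        # The service verifies the bearer token and its caveats
        v = Verifier()
        v.satisfy_exact("activity:READ")
        v.satisfy_exact("path:/data/experiment1")
        print(v.verify(delegated, "server-secret-key"))   # True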
      • 161
        A Method for Remote Initial Vetting of Identity with PKI Credential Conf. Room 1

        Conf. Room 1

        BHSS, Academia Sinica

        No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
        With the growth of large-scale distributed computing infrastructures, systems have been established that enable researchers -- not only international collaborative research projects but also small research groups -- to use high performance computing resources in such infrastructures. For a computing resource use system that invites researchers around the world to submit research proposals, it is difficult to carry out initial vetting of identity based on a face-to-face meeting at a service desk if the researcher whose proposal is accepted lives in a foreign country. The purpose of this paper is to propose a method that solves the difficulty of initial identity vetting for a remote user. An identity management (IdM) system vets the identity and reality of a user by checking beforehand-registered personal information against identity documents. After the identity vetting, the user can obtain a credential used in the infrastructure. Suppose that IdM system (A) needs to initially vet the identity of a user who already possesses a credential issued by another IdM system (B). The basic idea of this paper is that IdM system (A) uses the credential issued by IdM system (B) for the initial identity vetting if the level of assurance of IdM system (B) is the same as or higher than that of IdM system (A). However, IdM system (A) cannot always check the identity against the attribute information provided by the credential. In a trust federation, the IdM system will be able to complete the identity vetting by making reference to the other IdM system that issued the credential for the necessary and sufficient identity data. As the credential handled in this paper, we focus on Public Key Infrastructure (PKI) credentials, which are often used in large-scale high performance computing environments. We discuss the necessary conditions and procedures for ensuring that remote initial vetting of identity with a PKI credential gives the same assurance as vetting based on a face-to-face meeting. The proposed method can be introduced into an existing PKI without large changes, and its basic idea can also be applied to infrastructures based on other authentication technologies; this applicability is also considered.
        Speaker: Dr Eisaku Sakane (National Institute of Informatics)
        Slides
      • 162
        Q&A Conf. Room 1

        Conf. Room 1

        BHSS, Academia Sinica

        No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
    • Supercomputing, High Throughput, Accelerator Technologies and Integration Media Conf. Room

      Media Conf. Room

      BHSS, Academia Sinica

      No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
      Convener: Dr David Britton (University of Glasgow)
      • 163
        Building CNGrid as an HPC Application Cloud Service Provider
        Public grids (WLCG, OSG, XSEDE2) provide huge computing resources to scientific users all around the world, and these services have been evolving continuously for the last 20 years. Meanwhile, public clouds have recently shown great interest in HPC besides their traditional IaaS/PaaS/SaaS markets: companies like Amazon, Microsoft, Google and Alibaba all have their own HPC cloud solutions. As one of the biggest computing grids in the world, China National Grid (CNGrid) integrates top HPC resources all over China, including Tianhe, Tianhe2 and Shenwei, and has been providing grid computing services to scientific users for more than 15 years. CNGrid has developed its own grid software, SCE, as the kernel middleware that links all the HPCs and schedules job requests. On top of SCE there is a RESTful wrapper called SCEAPI, enabling rapid development of a dozen science gateways, specialized domain communities and easy-to-use mobile apps. A very successful example is that CERN ATLAS simulation jobs are running in CNGrid, by connecting the ARC-CE middleware to SCEAPI.
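        To give a flavour of what job submission through such a RESTful gateway looks like, here is a hypothetical Python sketch in the style of SCEAPI; the endpoint, payload fields and token scheme are illustrative assumptions, not the real SCEAPI interface.

            # Hypothetical job submission through a RESTful gateway in the
            # style of SCEAPI; endpoint, fields and token are illustrative.
            import requests

            BASE = 'https://sceapi.example.cn/v1'
            headers = {'Authorization': 'Bearer <token>'}

            job = {
                'app': 'atlas-sim',   # an application registered at the gateway
                'cores': 64,
                'input': 'https://se.example.cn/data/evgen.root',
            }
            r = requests.post(BASE + '/jobs', json=job, headers=headers)
            r.raise_for_status()
            job_id = r.json()['id']

            # Poll until the HPC backend reports a terminal state.
            state = requests.get(BASE + '/jobs/' + job_id, headers=headers).json()['state']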
        Speaker: Mr Haili XIAO (Supercomputing Center, Chinese Academy of Sciences)
        Slides
      • 164
        EGI federated platforms supporting accelerated computing
        While accelerated computing instances providing access to NVIDIA GPUs have been available for several years in commercial public clouds like Amazon EC2, the EGI Federated Cloud put its first OpenStack-based site providing GPU-equipped instances into production at the end of 2015. However, many EGI sites which provide GPUs or MIC co-processors for high performance processing are not yet directly supported in a federated manner by the EGI HTC and Cloud platforms. In fact, to use the accelerator cards available at resource centre level, users must directly interact with the local provider to learn which types of resources and software libraries are available, and which submission queues must be used for accelerated computing workloads. Since March 2015, the EU-funded project EGI-Engage has worked to implement support for accelerated computing on both the HTC and Cloud platforms, addressing two levels: the information system, based on the OGF GLUE standard, and the middleware. By developing a common extension of the information system structure, it became possible to expose correct information about the accelerated computing technologies available at site level, both software and hardware. Accelerator capabilities can now be published uniformly, so that users can extract all the information directly from the information system without interacting with the sites, and easily use resources provided by multiple sites. On the other hand, HTC and Cloud middleware support for accelerator cards has been extended, where needed, to provide a transparent and uniform way to allocate these resources, together with CPU cores, efficiently to users. In this paper we describe the solution developed for enabling accelerated computing support in the CREAM Computing Element for the most popular batch systems and, concerning the information system, the new objects and attributes proposed for implementation in version 2.1 of the GLUE schema. Concerning the Cloud platform, we describe the solutions implemented to enable GPU virtualization on the KVM hypervisor via PCI passthrough technology on both OpenStack- and OpenNebula-based IaaS cloud sites, which are now part of the EGI Federated Cloud offer, and the latest developments on direct GPU access through LXD container technology as a replacement for the KVM hypervisor. Moreover, we showcase a number of applications and best practices implemented by the structural biology and biodiversity scientific user communities that have already started to use the first accelerated computing resources made available through the EGI HTC and Cloud platforms.
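        For illustration, the following Python sketch (assuming admin credentials and the novaclient/keystoneauth1 libraries) shows the flavor-level step commonly used for PCI passthrough of GPUs in OpenStack; the names and sizes are illustrative, and the 'gpu' alias must match a [pci] alias configured in nova.conf on the compute nodes.

            # Sketch: expose passthrough GPUs via an OpenStack flavor. The
            # 'gpu' alias must match a [pci] alias in nova.conf on the
            # compute nodes; names and sizes are illustrative.
            from keystoneauth1 import identity, session
            from novaclient import client

            auth = identity.Password(
                auth_url='https://keystone.example.org:5000/v3',
                username='admin', password='...',
                project_name='admin',
                user_domain_id='default', project_domain_id='default')
            nova = client.Client('2', session=session.Session(auth=auth))

            # Instances of this flavor are scheduled onto GPU-equipped hosts
            # and receive one passthrough GPU each.
            flavor = nova.flavors.create('gpu.large', ram=16384, vcpus=8, disk=80)
            flavor.set_keys({'pci_passthrough:alias': 'gpu:1'})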
        Speaker: Dr Marco Verlato (Istituto Nazionale di Fisica Nucleare - Sez. di Padova, Italy)
        Slides
      • 165
        A Novel Architecture towards Exascale Computing
        As data volumes grow rapidly in the science domain, the ability to process this data efficiently is becoming increasingly of interest. While in many applications the processing of very large volumes can be accomplished efficiently with map/reduce algorithms (e.g., using frameworks such as Hadoop), this does not cover a large class of problems which are best run in an HPC environment. This class of problem requires a new paradigm commonly referred to as Big Data and Extreme Computing (BDEC). This talk explains why current HPC systems are not fully suited to BDEC class problems, and how some upcoming technologies such as advanced non-volatile memories could help to resolve some of these issues. We introduce a novel architecture that aims to address this problem, currently being developed within the SAGE project (http://www.sagestorage.eu/) as part of the Horizon2020 program. We explain how this architecture has been co-designed between leading industrial partners and research organisations covering a wide spectrum of scientific disciplines, and outline new programming models that are being developed to make use of this new technology, including tools to optimise its use. Finally, we present details of testing of this new architecture and explain where such a system can overcome the limitations of traditional HPC systems.
        Speaker: Mr Shaun de Witt (Culham Centre for Fusion Energy)
    • 6:00 PM
      Gala Dinner Ji Ping Restaurant

      Ji Ping Restaurant

    • Earth, Environmental Science & Biodiversity I Media Conf. Room

      Media Conf. Room

      BHSS, Academia Sinica

      No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
      Convener: Mr Matti Heikkurinen (LMU)
      • 166
        Listening to the ecosystem: the integration of machine learning and a long-term soundscape monitoring network
        Information on the variability of the environment and biodiversity is essential for conservation management. In recent years, soundscape monitoring has been proposed as a new approach to assess the dynamics of biodiversity. A soundscape is the collection of biological sounds, environmental sounds, and anthropogenic noise, which provides essential information regarding the natural environment, the behavior of calling animals, and human activities. Recent developments in recording networks facilitate field surveys in remote forests and deep marine environments. However, the analysis of big acoustic data remains a challenging task due to the lack of sufficient databases for recognizing the various animal vocalizations. Therefore, we have developed three tools for analyzing and visualizing soundscape data: (1) a long-term spectrogram viewer, (2) a biological chorus detector, and (3) a soundscape event classifier. The long-term spectrogram viewer helps users visualize weeks or months of recordings and evaluate the dynamics of the soundscape. The biological chorus detector can automatically recognize a biological chorus without any sound template. Using the soundscape event classifier, we can separate the biological chorus from non-biological noise in a long-term spectrogram and identify various biological events in an unsupervised manner. We have applied these tools to terrestrial and marine recordings collected in Taiwan to investigate the variability of the environment and biodiversity. In the future, we will integrate these tools with the Asian Soundscape monitoring network. Through the open data of soundscapes, we hope to provide ecological researchers and citizens with an interactive platform to study the dynamics of ecosystems and the interactions among the acoustic environment, biodiversity, and human activities.
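        As a minimal sketch of the long-term spectrogram idea, the following Python fragment (assuming numpy and scipy) reduces each recording to a median spectrum and stacks the columns so that weeks of audio can be viewed at once; the sampling rate and random stand-in data are illustrative.

            # Sketch of a long-term spectrogram: reduce each recording to a
            # median spectrum and stack the columns over days or weeks.
            import numpy as np
            from scipy import signal

            def longterm_column(waveform, fs):
                # One recording becomes a single median-spectrum column.
                f, t, sxx = signal.spectrogram(waveform, fs, nperseg=1024)
                return np.median(sxx, axis=1)

            fs = 32000
            recordings = [np.random.randn(fs * 60) for _ in range(3)]  # stand-ins
            lts = np.column_stack([longterm_column(w, fs) for w in recordings])
            # lts has shape (frequency bins, recordings): one column per file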
        Speaker: Dr Yu-Huang Wang
        Slides
      • 167
        Collaboration on monitoring Asian soundscape and the challenges
        Speaker: Dr Yu-Huang Wang (Independent Scholar)
        Slides
      • 168
        Revealing Philippine Climate Type Using Remotely-sensed Rainfall Estimates
        Rainfall variability is a key feature of climate. Understanding rainfall distribution over space and time is of particular interest because it impacts several aspects of human activities. It has been known that rainfall variations in the Philippines are influenced by an interplay of various synoptic systems affecting the country -- the southwest and northeast monsoons, easterlies and tropical cyclone activity. In a 1920 report, Coronas identified and mapped four distinct climate types based on monthly rainfall time series obtained from synoptic stations over the Philippines. Using the same source of data, subsequent studies by various authors demonstrated similar results. A common challenge in utilizing ground observations for climate analysis, especially when dealing with rainfall, is the spatial resolution of the ground station network. Given the sparsity of the stations, the geographical delineation between climate types must have been determined by the archipelago's topographical features, such as mountain ranges blocking the passage of rain-producing systems (rain shadow), and other factors known to the authors. Satellite-based technology overcomes this limitation. In this work, we demonstrate the viability of satellite-based rainfall estimates in capturing rainfall variations in the Philippines. Applying the K-means clustering algorithm to Tropical Rainfall Measuring Mission Multi-satellite Precipitation Analysis (TRMM/TMPA) 3B43 rainfall data revealed four climate types with unique characteristics: (Type 1) two pronounced seasons with peak rainfall occurring during JJA; (Type 2) no dry season, with extreme rainfall during DJF; (Type 3) relatively dry from January to April, with rainfall gradually increasing until December; and (Type 4) evenly distributed rainfall throughout the year. These rainfall patterns can be explained by the southwest monsoon, easterlies, tropical cyclone visits, the position of the inter-tropical convergence zone, and topography. It is also interesting that the spatial extent of each climate type manifested naturally during the classification process and that the effect of topography could be inferred from the results.
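        The classification step can be sketched in a few lines of Python with scikit-learn; the data below are random stand-ins for the TRMM 3B43 monthly climatologies, and normalising each grid cell's annual cycle is one reasonable choice (an assumption, not necessarily the authors' preprocessing) to make the clustering sensitive to the shape of the seasonal cycle rather than the total amount.

            # Sketch of the classification step with scikit-learn; random
            # stand-ins replace the TRMM 3B43 monthly climatologies.
            import numpy as np
            from sklearn.cluster import KMeans

            rng = np.random.default_rng(0)
            cells = rng.random((500, 12))   # one 12-month cycle per grid cell

            # Normalising each cycle makes the clustering sensitive to the
            # shape of the seasonal cycle rather than the total amount.
            cycles = cells / cells.sum(axis=1, keepdims=True)

            labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(cycles)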
        Speaker: Mr Jay Samuel Combinido (Advanced Science and Technology Institute)
    • Infrastructure Clouds and Virtualisation I Conf. Room 1

      Conf. Room 1

      BHSS, Academia Sinica

      No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
      Convener: Dr Ludek Matyska (CESNET)
      • 169
        The 'Cloud Area Padovana': lessons learned after two years of a production OpenStack-based IaaS for the local INFN user community
        The Cloud Area Padovana is an OpenStack-based scientific cloud spread across two sites - the INFN Padova Unit and the INFN Legnaro National Labs - located 10 km apart but connected by a dedicated 10 Gbps optical link. In the last two years its hardware resources have been scaled horizontally: currently it provides about 1100 logical cores and 50 TB of storage. Special in-house developments were also integrated into the OpenStack dashboard, such as a tool for user and project registration with direct support for Single Sign-On via the INFN-AAI Identity Provider as a new option for user authentication. The collaboration with the EU-funded INDIGO-DataCloud project, started one year ago, made it possible to experiment with the integration of Docker-based containers and fair-share scheduling: a resource allocation mechanism analogous to those available in batch system schedulers, which maximizes the usage of shared resources among concurrent users and projects. Both solutions are expected to be in production soon. The entire computing facility now satisfies the computational and storage demands of more than 100 users belonging to about 25 research projects. In this paper we present the architecture of the Cloud infrastructure and the tools and procedures used to operate it, ensuring reliability and fault tolerance. We especially focus on the lessons learned in these two years, describing the challenges identified and the subsequent corrective actions applied. From the perspective of scientific applications, we show some concrete use cases of how this Cloud infrastructure is being used. In particular we focus on two big physics experiments which are intensively exploiting this computing facility: CMS and SPES. CMS deployed on the cloud a complex computational infrastructure, composed of several user interfaces for job submission in the Grid environment/local batch queues or for interactive processes; this is fully integrated with the local Tier-2 facility. To avoid a static allocation of the resources, an elastic cluster based on CernVM has been configured: it automatically creates and deletes virtual machines according to user needs. SPES is using a client-server system called TraceWin to exploit INFN's virtual resources, performing a very large number of simulations on about a thousand elastically managed nodes.
        Speaker: Dr Marco Verlato (INFN-Padova)
        Slides
      • 170
        Synergy, a new approach for optimizing the resource usage in OpenStack
        Managing resource allocation in a Cloud-based data center serving multiple virtual organizations is a challenging issue. While LRMS (Local Resource Management Systems) are able to maximize resource usage by fairly distributing computing resources among different user groups according to specific policies imposed by the data centre administrator, this is not so straightforward in the most common Cloud management frameworks (e.g. OpenStack, OpenNebula). For example, the current OpenStack implementation provides an overly simplistic scheduling model based on an immediate First Come First Served paradigm: a user request is rejected if no resources are immediately available, and it is then up to the user to re-issue the same request later. Moreover, resource provisioning is limited to a static partitioning strategy: each project is assigned an agreed, fixed quota of resources that cannot be exceeded by one group even if there are unused resources allocated to other groups.
The EU-funded INDIGO-DataCloud project is addressing this issue through ‘Synergy’, a new advanced scheduling and resource provisioning service targeted at OpenStack. With Synergy it is possible to maximize resource utilization by allowing OpenStack projects to consume extra shared resources in addition to those statically allocated. Such projects can then access two different kinds of quota: a private one and a shared one. The private quota is the standard OpenStack quota, operated in the usual way. The shared quota is instead handled by Synergy and is composed of resources that are not statically allocated. These shared resources are fairly distributed among users following the fair-share policies defined by the administrator. If a user request cannot be immediately satisfied, it is not rejected but instead inserted into a persistent priority queue and scheduled later. We present the architecture of Synergy, the status of its implementation, some results demonstrating its functionality, and the foreseen evolution of the service.
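        The following toy Python sketch illustrates the queuing idea (not Synergy's actual implementation): unsatisfied requests are queued rather than rejected, and are ordered by how far each project is below its administrator-defined share.

            # Toy sketch of fair-share queuing; shares and usage figures
            # are illustrative.
            import heapq

            fair_share = {'cms': 0.6, 'spes': 0.4}   # configured shares
            usage = {'cms': 120.0, 'spes': 20.0}     # accumulated core-hours

            def priority(project):
                # Negative when the project is below its share; heapq pops
                # the smallest value first, so under-served projects win.
                total = sum(usage.values()) or 1.0
                return usage[project] / total - fair_share[project]

            queue = []
            for seq, (project, cores) in enumerate([('cms', 8), ('spes', 4)]):
                heapq.heappush(queue, (priority(project), seq, project, cores))

            _, _, project, cores = heapq.heappop(queue)   # -> 'spes' runs first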
        Speaker: Dr Lisa Zangrando (INFN - Sez. Padova)
        Slides
      • 171
        Efficiency Improvement on Distributed Cloud System
        Speakers: Dr Eric Yen (ASGC) , Mr Felix Lee
        Slides
    • Physics & Engineering I Conf. Room 2

      Conf. Room 2

      BHSS, Academia Sinica

      No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
      Convener: Prof. Hiroshi Sakamoto (The University of Tokyo)
      • 172
        Examination of dynamic partitioning for multi-core jobs in the Tokyo Tier-2 center
        The Tokyo Tier-2 site, located in the International Center for Elementary Particle Physics (ICEPP) at the University of Tokyo, provides computing resources for the ATLAS experiment in the Worldwide LHC Computing Grid (WLCG). Official site operation in the WLCG started in 2007 after several years of development beginning in 2002, and the site has maintained stable operation since then. In the current system, upgraded in 2016, 6144 CPU cores have been deployed as worker nodes for the WLCG, where each worker node consists of 24 CPU cores. The ATLAS experiment developed a multi-core implementation of its software framework for reconstruction and simulation jobs, which provides efficient memory sharing. In 2014, the experiment started to submit eight-core jobs using this framework to the Grid. The Tokyo Tier-2 site has been processing these multi-core jobs and normal single-core jobs (e.g. user analysis jobs) separately, using dedicated worker nodes and computing elements (static partitioning). However, we have often observed idle CPUs on the worker nodes due to this static partitioning, when either multi-core or single-core jobs are not assigned to the site. Therefore, we started to evaluate dynamic partitioning of the worker nodes using the HTCondor batch scheduler to reduce this idle CPU time, and have deployed a small cluster (1536 CPU cores) in production. For dynamic partitioning, draining of single-core jobs is necessary in order to dispatch a new multi-core job onto a worker node that is filled with single-core jobs. This draining should be performed until the number of running multi-core jobs reaches a target share. To drain efficiently, several parameters must be considered, such as the number of machines draining at the same time, based on the properties of the jobs. In this presentation, the improvement in CPU efficiency achieved by introducing dynamic partitioning and the optimization of the drain parameters at the Tokyo Tier-2 center will be reported.
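        A minimal Python sketch of such a drain decision, using the htcondor bindings, might look as follows; the target share, the eight-core cut and the concurrency limit are illustrative, and a real deployment would hand the decision to HTCondor's own draining machinery rather than print it.

            # Sketch of a drain decision with the htcondor Python bindings;
            # thresholds are illustrative.
            import htcondor

            TARGET_MULTICORE_SLOTS = 32
            MAX_CONCURRENT_DRAINING = 4   # one of the drain parameters to tune

            coll = htcondor.Collector()
            startds = coll.query(
                htcondor.AdTypes.Startd,
                projection=['Machine', 'State', 'Cpus'])

            multicore = sum(
                1 for ad in startds
                if ad.get('State') == 'Claimed' and ad.get('Cpus', 1) >= 8)

            if multicore < TARGET_MULTICORE_SLOTS:
                # Pick machines currently running single-core payloads and
                # drain only a bounded number of them at a time.
                singles = sorted({
                    ad['Machine'] for ad in startds
                    if ad.get('State') == 'Claimed' and ad.get('Cpus', 1) < 8})
                print('would drain:', singles[:MAX_CONCURRENT_DRAINING])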
        Speaker: Dr Tomoe Kishimoto (The University of Tokyo)
        Slides
      • 173
        The High Throughput Strategy of IHEP
        The IHEP computing center serves many high energy physics experiments. We have more than 13,000 CPU cores, hundreds of active users, and tens of thousands of jobs per day. Users are traditionally divided into groups according to the experiment they belong to, and each computing node is privately owned by one group. The peak requirements of different groups generally do not coincide, so without resource sharing between groups much capacity can be wasted. It therefore makes good sense to improve resource utilization and job throughput. We deployed a high throughput system, based on HTCondor, which considers both the sharing of resources and fairness between groups. It was, however, necessary to customize a strategy to manage user groups in a single cluster pool. For fairness, each group keeps a number of its own nodes, which ensures that resources are always available to every group. Meanwhile, a fraction of the resources is shared among all groups, so busy groups can benefit from idle ones; this is important for increasing overall job throughput. We provide real-time statistics and ask idle groups for more shared resources; the sharing ratio of each group can be tuned automatically with the owners' approval. We are developing an accounting system which will provide statistical details of idle groups' contributions and busy groups' occupation. An error recovery mechanism is provided and integrated with the cluster monitoring system: nodes with fatal problems are removed from the pool automatically. We also developed a set of toolkits for users, which add a series of attributes to users' jobs that are necessary in our approach. The entire strategy requires an enhanced central control system for HTCondor. We have implemented the essential components for central control and deployed the customized strategy; the results show a significant improvement in our high throughput computing management.
        Speaker: Dr Jiaheng Zou (IHEP, Chinese Academy of Sciences)
        Slides
      • 174
        The Billing System of IHEP Data Center
        IHEP manages a number of China's major scientific facilities, including BEPC, BES, BSRF, HXMT, ADS, JUNO, CSNS, CEPC, the International Cosmic-Ray Observatory at Yangbajing in Tibet, the Daya Bay Neutrino Experiment, etc. Data generated by these facilities is processed in the IHEP data center, which hosts many kinds of computing resources: cloud computing resources, local cluster resources, distributed computing resources, GPU resources, etc. Utilization is sometimes imbalanced, however, because some resources belong to, and can only be used by, a single experiment. We therefore developed a billing system to promote resource sharing. The billing system rents out various kinds of computing resources of the data center and implements integrated billing for these resources. Users can rent any computing resources through this system and check their bill in real time; the managers of experiment resources can likewise check resource usage and income. The billing system is designed and implemented on a B/S structure and the MVC pattern, using Java. It covers three main function modules: product & service, user center, and system management. (1) The product & service module lists all the computing resources and the accounting rules. Users log in to the billing system with their IHEP unified-authentication account, rent resources according to their needs, and can also apply to use free resources. (2) In the user center, users can view details of the resources they have rented and check their real-time bill. (3) The management platform provides services for managers, who can manage and publish computing resource products, redeploy resource pools, and manage billing information. The billing system currently manages mainly computing resources; in the future we will extend it with modules for private cloud storage, database resources, etc.
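        A toy Python sketch of the integrated billing idea follows; the products, rates and fields are illustrative, not IHEP's actual tariffs.

            # Toy sketch of integrated billing across resource types.
            RATES = {'cpu_core_hour': 0.02, 'gpu_hour': 0.50, 'storage_tb_month': 1.20}

            def bill(usage):
                # usage maps a product name to a metered quantity.
                items = {k: round(q * RATES[k], 2) for k, q in usage.items()}
                return items, round(sum(items.values()), 2)

            items, total = bill({'cpu_core_hour': 5000, 'gpu_hour': 40, 'storage_tb_month': 2})
            # items == {'cpu_core_hour': 100.0, 'gpu_hour': 20.0, 'storage_tb_month': 2.4}
            # total == 122.4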
        Speaker: Ms Hongmei Zhang (Institute of High Energy Physics, CAS, China)
        Slides
    • 10:30 AM
      Coffee Break
    • Earth, Environmental Science & Biodiversity II Media Conf. Room

      Media Conf. Room

      BHSS, Academia Sinica

      No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
      Convener: Dr Horst Schwichtenberg (Fraunhofer Institute for Algorithmen and Scientific Computing SCAI)
      • 175
        NeIC EISCAT_3D support project: Nordic computing challenge Media Conf. Room

        Media Conf. Room

        BHSS, Academia Sinica

        No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
        EISCAT_3D will establish a system of distributed phased array radars enabling comprehensive three-dimensional vector observations of the atmosphere and ionosphere above northern Fenno-Scandinavia. The use of new radar technology, combined with the latest digital signal processing, will achieve ten times higher temporal and spatial resolution than present radars while simultaneously offering, for the first time, continuous measurement capabilities. The flexibility of the EISCAT_3D system will allow the study of atmospheric phenomena at both large and small scales unreachable by the present systems. In its first stage, the EISCAT_3D system will consist of three radar sites: one with both transmitting (TX) and receiving (RX) capabilities and two with RX capabilities only. The sites will be located in remote locations in three different countries (Finland, Norway and Sweden), separated geographically by approximately 130 km. Two additional receive sites, at distances of 200-250 km from the transmit site, are planned for the full EISCAT_3D system. In addition to the radar sites, EISCAT_3D will also have an operations centre and one or more data centres. The NeIC EISCAT_3D support (E3DS) project aids the future EISCAT_3D project in planning and tendering its required e-infrastructure. This includes gathering the EISCAT_3D use cases and transforming them into a set of standard requirements for the various components of the overall EISCAT_3D computing e-infrastructure. The E3DS project interacts with EISCAT_3D and with Grid and Cloud e-infrastructure projects; this interaction is needed to match the expertise in EISCAT_3D with corresponding expertise in the existing e-infrastructure projects in the various fields. In effect, the E3DS project builds collaborations among EISCAT_3D, national e-infrastructure providers and network providers, making it possible to expand and enhance the usage of existing Nordic e-infrastructures. Introducing a new field of research extends the overall capabilities of the existing e-infrastructures, and adding another large field of research can spread the load of resource and cost sharing. In return, the EISCAT_3D project will benefit from already supported and maintained e-infrastructures for computing and storage. Ideally, EISCAT_3D will be able to use the e-infrastructure transparently and focus on atmospheric science.
        Speaker: Dr John White (NeIC)
        Slides
      • 176
        Towards Environmental Computing Compendium Media Conf. Room

        Media Conf. Room

        BHSS, Academia Sinica

        No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
        Environmental computing focuses on producing actionable knowledge through advanced environmental modelling on high performance computing platforms. The environmental computing community shares a tacit understanding of which initiatives, tools and approaches belong to the core scope of this discipline. A website has been used as an interim community resource for collecting links to resources relevant to the discipline. This link collection forms an interesting dataset for assessing different aspects of environmental computing as a discipline. It allows an estimate of the size of the community - both the core group who consider themselves environmental computing specialists and the wider group of individuals and organisations involved in related initiatives. While this measure is very reductionist, it already supports arguments, e.g. regarding the inclusion of environmental computing aspects in a broad range of curricula (ranging from computer science to the different specialties dealing with environmental modelling). In terms of analysing the more fine-grained distribution of the entries in the above-mentioned dataset, the main value of the current data is that it is sufficiently large to test different categorisation approaches. The challenges encountered in this process are especially useful, as they force us to reflect on which (if any) of the categories are exclusive in nature and which are actually attributes (or “tags”) representing specific aspects of the entries. This differentiation can be important when considering ways to identify and benefit from synergies between different environmental computing groups, initiatives and tools. As an example, a project and an organisation both represent a community, and in both cases one can identify a formally “core” team (individuals with a contract of some kind specifying their role in the community) and a surrounding, larger group of stakeholders. However, due to the typically limited lifetime of a project and its different internal structure (a project usually consists of several legal entities), the engagement strategy needs to be adjusted: a project collaboration usually needs to provide relatively short-term tangible benefits to be successful, whereas inter-organisational collaboration arrangements can be more loosely defined. In addition to presenting the observations the current dataset allows us to make, we discuss the limitations of the dataset and present plans to overcome them, in order to increase its representativeness and its ability to capture a more comprehensive picture of the environmental computing landscape.
        Speaker: Mr Matti Heikkurinen (LMU)
        Slides
      • 177
        Future warming scenario and impacts study over Taiwan: Results from ECHAM5/MPIOM-WRF dynamical downscaling Media Conf. Room

        Media Conf. Room

        Speaker: Dr Chuan Yao Lin (Academia Sinica)
        Slides
    • Infrastructure Clouds and Virtualisation II Conf. Room 1

      Conf. Room 1

      BHSS, Academia Sinica

      No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
      Convener: Dr Tomoaki Nakamura (KEK)
      • 178
        VCondor - an implementation of a dynamic virtual computing cluster Conf. Room 1

        Conf. Room 1

        BHSS, Academia Sinica

        No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
        As a new approach to resource management, virtualization technology is increasingly widely applied in the high energy physics field. We have built a virtual computing cluster at IHEP based on OpenStack, with HTCondor as the job management system. In a traditional computing cluster, a fixed number of slots is pre-allocated to the job queues of the different experiments. However, this kind of policy has gradually become unable to satisfy the peak requirements of the different experiments, and it also leads to low CPU utilization. To solve the problem, we designed and implemented VCondor, a dynamic virtual computing cluster system based on HTCondor and OpenStack. This system performs unified management of virtual machines according to the queue status in HTCondor. One or more VMs are created automatically when jobs are waiting to run; a VM is destroyed when its job has finished and no more jobs remain in the HTCondor queue. The queue status is checked periodically, e.g. every 10 minutes, so a VM continues to run if new jobs arrive within that period. VCondor also supports resource provisioning and reservation for different experiments: it has to request and obtain the number of available VMs from a VM resource scheduling system called VMQuota before it creates VMs. VMQuota tells VCondor how many VMs it can create and how long these VMs will be reserved. This talk will present several use cases from the LHAASO and JUNO experiments. The results show that the virtual computing cluster can dynamically expand or shrink as computing requirements change. Additionally, the CPU utilization of the overall computing resources is significantly improved compared with the traditional resource management system. The system also performs well with multiple HTCondor schedulers and multiple job queues, and it is stable and easy to maintain.
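        The control loop described above can be sketched in Python as follows; the VMQuota endpoint and group name are hypothetical, and the OpenStack call that actually boots a worker VM is elided.

            # Sketch of a VCondor-style control loop; the VMQuota endpoint
            # and group are hypothetical.
            import time

            import htcondor
            import requests

            schedd = htcondor.Schedd()
            while True:
                # JobStatus == 1 means idle (queued) jobs in HTCondor.
                idle = len(schedd.query('JobStatus == 1', ['ClusterId']))
                if idle > 0:
                    # Ask VMQuota how many VMs this group may still create.
                    r = requests.get(
                        'https://vmquota.example.org/available',
                        params={'group': 'juno'})
                    for _ in range(min(idle, r.json()['count'])):
                        pass  # boot one worker VM via the OpenStack API here
                time.sleep(600)   # re-check the queue every 10 minutes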
        Speaker: Mr Yaodong CHENG (IHEP, CAS)
        Slides
      • 179
        GUOCCI – The Entryway to Federated Cloud for Small-scale Users Conf. Room 1

        Conf. Room 1

        BHSS, Academia Sinica

        No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
        The Open Cloud Computing Interface (OCCI) – a standard released by the Open Grid Forum – has found wide adoption. First came, quite naturally, server-side components. Later, with a variety of attractive computing resources made available over the standardized protocol, user-side submission tools and science gateways followed. These are usually tailored with a specific use case in mind and, as such, are generally capable of setting up heterogeneous data processing platforms and orchestrating complex workflows specific to their given area of science. They put all this functionality at the user's disposal, often “at a single click”. On the other hand, small-scale users, backed by no software development teams, are usually relegated to the simplest OCCI client on the command line. That client is freely available and amply documented, but it by no means lowers the threshold of entry for potential federated cloud users, who often come from non-technical areas in the long tail of science. Therefore, GUOCCI (GUI for OCCI) has been conceived as a rudimentary graphical user interface to OCCI-compliant cloud services. It is by design kept as simple as possible to address the needs of small-scale or one-off users, who typically require little dynamism in their virtual resources and are perfectly happy to set up those resources by hand, even one by one, especially when offered a comprehensive graphical interface to do so. GUOCCI integrates not only with OCCI-compliant cloud sites but also with the EGI Application Database and with authentication technologies used in academic federated clouds, namely the Virtual Organization Membership Service (VOMS). With that, the considerable resources available, for instance, in the EGI Federated Cloud are opened up to all such small-scale or beginning users. This article introduces in greater depth the reasoning behind developing GUOCCI and details the architecture of the product, making it an example OCCI client implementation.
        Speaker: Mr Radim Janča (CESNET)
        Slides
      • 180
        CloudIPStore: A Cloud SaaS Repository for your Intellectual Properties Conf. Room 1

        Conf. Room 1

        Thousands of organizations and large corporations involved in research and product development generate a huge variety of intellectual properties (IPs). Once these organizations reach a certain level of maturity, it becomes very important to organize and categorize their different IPs. In this paper, we describe CDAC CloudIPStore, a SaaS (Software as a Service) repository for the intellectual properties generated by an organization. The SaaS can be accessed over the Internet or an intranet through a web-based graphical user interface. The current version allows storing and indexing the patents, publications, software, trademarks and copyrights generated in the organization. CloudIPStore supports the storage, tracking, modification, retrieval and searching of IPs. Role-based privileges, which can be enabled or disabled, are supported for different categories of users. Multiple versions of software IPs can coexist in CloudIPStore, as the SaaS offers a versioning mechanism/version control feature. We believe that this software will be of immense use to organizations in building on their abilities and strengths. OpenStack Swift-based cloud storage is provided in the backend. The cloud technologies employed in this SaaS enable scalability of storage, increased availability, fault tolerance, and ubiquitous access from anywhere in the world, all of which are critical to the efficient utilization of this tool to improve organization-level productivity.
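        As an illustration of the Swift backend, the following Python sketch (using python-swiftclient) stores one version of a patent document; the endpoint, container layout and metadata are illustrative assumptions, not CloudIPStore's actual schema.

            # Sketch of storing a versioned IP artifact in OpenStack Swift;
            # endpoint, container layout and metadata are illustrative.
            from swiftclient.client import Connection

            conn = Connection(
                authurl='https://swift.example.org/auth/v1.0',
                user='ipstore', key='secret')

            conn.put_container('patents')
            with open('patent-2017-001.pdf', 'rb') as f:
                conn.put_object(
                    'patents', 'patent-2017-001/v1.pdf', contents=f,
                    content_type='application/pdf',
                    headers={'X-Object-Meta-Version': '1'})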
        Speaker: Mr Battepati Kalasagar (C-DAC)
        Slides
      • 181
        Supporting Open Science with the EGI Federated Cloud - Experiences, success stories, lessons learnt Conf. Room 1

        Conf. Room 1

        BHSS, Academia Sinica

        No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
        The use of cloud computing for scientific research is on the rise: the majority of scientific projects and communities are either already using clouds to store, share and process research data, or are considering or implementing the transition to clouds. The EGI Federated Cloud offers a scalable, flexible and highly customisable platform to cater for researchers' needs. It is a multi-national cloud system that integrates institutional Infrastructure as a Service (IaaS) clouds into a unified computing platform which can power data- and/or compute-intensive applications and services. Since its start in 2014, the EGI Federated Cloud has evolved into a hybrid system composed of public, community and private clouds. The participating clouds offer OpenStack-specific and open standard interfaces to research users, depending on local capabilities and on the users' preferences. The federation is enabled by the EGI operational backbone, based on capabilities such as usage accounting, a service registry, service availability monitoring, and a Virtual Machine Image marketplace. This talk will present the current status of the EGI Federated Cloud, demonstrate examples of scientific communities and use cases that already benefit from the system, discuss some of the lessons learnt while serving research needs, and provide an outlook on the future of the infrastructure services. The talk will highlight opportunities for cloud providers and research communities of the Asia-Pacific region to participate in or use the EGI Federated Cloud.
        Speaker: Yin Chen (EGI)
        Slides
      • 182
        Q&A Conf. Room 1

        Conf. Room 1

        BHSS, Academia Sinica

        No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
    • Physics & Engineering II Conf. Room 2

      Conf. Room 2

      BHSS, Academia Sinica

      No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
      Convener: Dr Andrea Valassi (CERN)
      • 183
        Framework for distributing Radio Astronomy processing across Clusters and Clouds
        The Low Frequency Array (LOFAR) radio telescope, stationed near Exloo, the Netherlands, is an international aperture synthesis radio telescope developed to image the universe in the 20-200 MHz frequency bands. Unlike telescopes using dishes or mirrors to focus light, aperture synthesis requires large amounts of processing between data acquisition and the creation of science-ready images. While the data can be split by frequency and processed in parallel, a wrapper around the processing software is needed to standardize data reduction across multiple locations while keeping track of the overall processing status. This work presents a framework that wraps the radio processing software, a global location to track pipeline steps, and an installation of the CernVM File System (CVMFS) used to standardize the software installation. With a central location to track pipeline progress and a standard software installation, this software suite makes it easy to process LOFAR data on any computer, node or cluster connected to the internet. Distributing the processing makes it possible to tap more resources than are available at any single cluster and to track execution at all locations through a common interface. Installing the LOFAR software on a CVMFS server allows every location to use the same software installation, tracked on the main server. The set of scripts that define the processing pipeline is placed in a sandbox folder along with a shell executable tasked with setting up the processing: it sets up the environment, then downloads, processes and uploads the data. The progress of the job is logged in an Apache CouchDB database inside a 'job token' document which contains all the data required to run the job; the token defines the parameters of the processing job. A user-friendly Python interface was built to read and update fields of the tokens, download configuration files attached to a token, create and delete tokens, and create or delete views in the database. Using this interface, it is easy to create batches of jobs to process large amounts of data on multiple nodes, data centres or cloud services. Additionally, the Python package logs the progress of each job by tracking the current processing step and updating the job token. Further work will include integrating a scheduler which can decide which locations to use based on the current workload. Decision boundaries can also be inserted between execution steps to analyse intermediate solutions and decide on re-processing or termination based on the current data quality.
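        The 'job token' pattern can be sketched with the couchdb Python package as follows; the field names are illustrative assumptions, not the framework's actual schema.

            # Sketch of the 'job token' pattern with the couchdb package;
            # field names are illustrative.
            import couchdb

            server = couchdb.Server('http://couchdb.example.org:5984/')
            db = server['lofar_tokens']

            # One document per job carries everything needed to run it.
            token = {
                '_id': 'job_L654321_SB012',
                'obsid': 'L654321',
                'subband': 12,
                'status': 'queued',
                'current_step': None,
            }
            db.save(token)

            # A worker claims the token and records progress step by step.
            doc = db['job_L654321_SB012']
            doc['status'] = 'running'
            doc['current_step'] = 'calibrate'
            db.save(doc)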
        Speaker: Mr Alexandar Mechev (Sterrewacht Leiden)
        Slides
      • 184
        Exploiting clouds for smart cities applications - The Cagliari 2020 project
        CAGLIARI 2020 is a 25 million euro project funded within the framework of the National Operational Program for Research and Competitiveness of the Italian Ministry of Education, University and Research. The project started at the end of 2016 with a duration of three years. The partnership includes public and private organizations of southern Sardinia, working on ICT technologies aimed at optimising the usage of the “city system” and improving the quality of life of those who work and/or live in the city. The main goal of CAGLIARI 2020 is the development of innovative and environmentally friendly solutions for urban mobility (and possibly metropolitan-area mobility), so as to boost energy and environmental performance. The project idea originates from the ever-increasing need for innovative tools and technological solutions to optimise urban mobility, lower travel times and improve air quality. Cagliari represents the ideal case study for developing and testing the project, mainly because its centralised public transport management system ranks among the most advanced in Europe. CAGLIARI 2020 is based on the study and testing of a sensor network comprising: 1. fixed sensors for tracking vehicles entering/exiting the urban area, allowing real-time and/or historical analysis, especially helpful in gathering the information required to manage traffic light systems and to send routing optimisation information to interested users; 2. mobile sensors for the collection of environmental data, which will feed decision-making models for the reduction of carbon emissions and the consequent improvement of air quality in the urban area; 3. mobile devices for acquiring people's motion habits. The integration of environmental models and smart systems for the management of urban mobility will make it possible to optimise public and private traffic flows and to reduce carbon emissions. The main innovation brought by CAGLIARI 2020 is the application of the “netcentric” paradigm by means of a dynamic and pervasive network (the urban information grid) whose nodes can be both fixed and mobile. This feature allows the sensorial integration of the devices distributed in the urban area and turns public transport buses into “mobile platforms” for monitoring the urban road system through the continuous gathering of traffic, carbon emission and noise pollution data. It is therefore possible to develop models for the analysis of environmental parameters and to provide tools supporting policies aimed at curbing traffic flows, energy consumption and carbon emissions within urban areas. Integrating this information with people's travelling habits (via the anonymous tracking of their mobile phones) allows the creation of mobility maps. Moreover, the project intends to spur and support the growth of new multi-sector ventures operating in the fields of mobility management and energy consumption. Cloud services will play a key role within the project in supporting the applications dedicated to traffic data monitoring and analysis.
        Speaker: Dr Alberto Masoni (INFN National Institute of Nuclear Physics)
        Slides
    • Closing Keynote & Ceremony Conf. Room 2

      Conf. Room 2

      BHSS, Academia Sinica

      No. 128, Sec. 2, Academia Rd., Taipei, Taiwan
      Convener: Simon C. Lin (ASGC)
      • 185
        The Emergence of Computational Archival Science
        The large-scale digitization of analog archives, the emerging diverse forms of born-digital archives, and the new ways in which researchers across disciplines (as well as the public) wish to engage with archival material are resulting in disruptions to traditional archival theories and practices. Increasing quantities of ‘big archival data’ present challenges for the practitioners and researchers who work with archival material, but also offer enhanced possibilities for scholarship through the application of computational methods and tools to the archival problem space and, more fundamentally, through the integration of ‘computational thinking’ with ‘archival thinking’. The talk will discuss these paradigm shifts in the context of e-infrastructures.
        Speaker: Prof. Richard Marciano (University of Maryland)
        Slides
    • 1:15 PM
      Lunch 4F Recreation Hall

      4F Recreation Hall

      BHSS, Academia Sinica

      No. 128, Sec. 2, Academia Rd., Taipei, Taiwan