
1. ...technologies required for IUCAA, and presenting them to the Computer Facilities Committee for consensus.
2. Framing policy documents and finalizing them in consultation with the Computer Facilities Committee members.
3. Drawing up specifications for RFP (Request for Proposal) tender documents for IT equipment to be purchased by IUCAA, and overseeing all purchase-related procedures and follow-up.
4. Maintenance of IT hardware on the campus, including servers, desktops, mobile computing equipment, printers, etc.
5. Providing in-house design, development and maintenance support for the administrative office automation software (iOAS) and the IUCAA website, including web portals with online application modules for various workshops.
6. Maintaining the Zimbra email servers and mirror sites hosted at IUCAA, and their day-to-day administration.
7. Configuration and management of data backups.
8. Design, management and administration of the network topology and firewall rules.
9. Administration of the Ruckus wireless network covering the entire office and residential campus, and providing end-user support for Wi-Fi devices such as laptops and mobile devices.
10. Day-to-day administration of the virtualization infrastructure and of various servers catering to Administration, such as AD.
11. Maintenance of video conferencing equipment and end-user support.
12. Management and tracking of the Computer Centre's inventory of consumable items, assets and furniture.
13. Procurement of SSL certificates and software for all the relevant web servers at IUCAA.
14. End-user service support for administrative staff, academic visitors and Associates.
15. Infrastructure, management and coding support for IT-intensive projects such as LIGO, MALS, SUIT, AstroSat, Big Data, etc.
16. Procurement, installation and periodic upgrading of mathematical software such as MATLAB, IDL and Mathematica, meant for general IUCAA users and cluster users.
17. Procurement of printers (qty. 10), all-in-one desktops (qty. 20), laptops (qty. 3) and macOS devices (qty. 8) for the academic community, visitors and administrative officers.
18. Hardware maintenance and general system administration of the clusters at IUCAA, in coordination with the OEM.
19. Assisting the Library department in maintaining its IT infrastructure.
20. Hosting GitLab for IUCAA users and Associates.
21. Architecting new hardware solutions to address operational needs.

High Performance Computing

IUCAA currently has three major independent HPC clusters dedicated to different applications, namely Pegasus, Sarathi and Vroom.

The Pegasus cluster serves the general computing requirements of the astronomy community associated with IUCAA. It has 80 compute nodes and 4 GPU nodes, each with 32 cores and either 384 GB RAM (older nodes) or 512 GB RAM (newer nodes). It uses InfiniBand EDR (100 Gbps) as the interconnect and the Portable Batch System (PBS) as the job scheduler. For visualisation purposes, there are two dedicated graphics nodes equipped with NVIDIA Tesla P100 GPU cards. The cluster consists of more than 2,600 physical cores and is attached to a 2 PiB Lustre parallel file system capable of delivering 15 Gbps throughput. The theoretical computing speed of the Pegasus cluster is 150 TF.
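Since jobs on Pegasus are scheduled through PBS, a minimal sketch of how a multi-node run might be submitted is given below; it requests two 32-core nodes matching the node size above. The job name, queue name, module name and executable are illustrative assumptions, not actual Pegasus configuration.

#!/bin/bash
#PBS -N mhd_run                          # job name (illustrative)
#PBS -q workq                            # queue name is an assumption
#PBS -l select=2:ncpus=32:mpiprocs=32    # two nodes x 32 cores each
#PBS -l walltime=24:00:00                # requested wall-clock limit
#PBS -j oe                               # merge stdout and stderr

cd $PBS_O_WORKDIR                        # run from the submission directory
module load openmpi                      # MPI environment; module name is hypothetical
mpirun -np 64 ./mhd_solver input.par     # launch the (hypothetical) solver on all 64 cores

Such a script would be submitted with "qsub job.pbs" and monitored with "qstat -u $USER".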
The Pegasus cluster has been utilized by about 70 high-volume users from IUCAA and various Indian universities, running applications for molecular scattering, molecular dynamics, stellar dynamics, gravitational N-body simulations, cosmic microwave background evolution, fluid mechanics, magnetohydrodynamics, plasma physics, and the analysis of diverse astronomical data.

The Sarathi cluster is primarily used for gravitational wave research, mostly by national and international members of the LIGO Scientific Collaboration (LSC), which includes many IUCAA members and Associates. The cluster comprises heterogeneous compute servers and was built in three phases. It consists of more than 8,000 physical cores, and the theoretical peak performance of its compute-node CPUs is nearly 530 TFlops. The cluster has 2 PiB of parallel file system (PFS) storage with 30 Gbps write and read (1:1) throughput.

The Vroom cluster is used solely for the MeerKAT Absorption Line Survey (MALS). This cluster has 21 compute nodes, 2 MDS nodes, 4 GPU nodes and 2 head nodes, delivers 25 TF of computing speed, and has a parallel file system of 3.5 PiB usable capacity attached to it. The cluster is also attached to 2 PiB of archival storage for archiving and serving the processed data to the international community.

Sarathi Cluster Phase III, the Pegasus cluster, and Sarathi Cluster Phase II are ranked 36th, 50th and 53rd, respectively, in the list of top supercomputers in India published on January 31, 2024. The list is maintained and supported by CDAC's Terascale Supercomputing Facility (CTSF), CDAC, Bangalore, and is available at https://topsc.cdacb.in/filterdetailstry?page=60&slug=January2024
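The theoretical peak figures quoted above follow from the usual product of core count, floating-point operations per cycle per core, and clock frequency. The sketch below reproduces the Pegasus and Sarathi totals under assumed, not published, per-core parameters: AVX-512-class cores performing 32 double-precision FLOPs per cycle, at illustrative clock speeds chosen to match the quoted numbers.

# Theoretical peak = cores x FLOPs per cycle per core x clock (Hz).
# The FLOPs-per-cycle and clock values below are illustrative assumptions,
# not published specifications of the Pegasus or Sarathi hardware.

def peak_tflops(cores, flops_per_cycle, clock_ghz):
    return cores * flops_per_cycle * clock_ghz * 1e9 / 1e12

print(peak_tflops(2600, 32, 1.80))  # ~149.8 TF, close to the 150 TF quoted for Pegasus
print(peak_tflops(8000, 32, 2.07))  # ~529.9 TF, close to the 530 TFlops quoted for Sarathi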

RkJQdWJsaXNoZXIy MzM3ODUy