MPI-2: Extensions to the Message-Passing Interface, 1997.
A case for using MPI's derived datatypes to improve I/O performance, Proceedings of SC98: High Performance Networking and Computing, 1998.
A Multiplatform Study of I/O Behavior on Petascale Supercomputers, Proceedings of the 24th International Symposium on High-Performance Parallel and Distributed Computing, HPDC '15, pp.33-44, 2015.
DOI : 10.1109/SC.2008.5222721
Tuning parallel I/O on Blue Waters for writing 10 trillion particles, Cray User Group (CUG) meeting, 2015. Available: https://sdm.lbl.gov/ sbyna
Recent progress in tuning performance of large-scale I/O with parallel HDF5, 2014.
Scalable Parallel I/O on a Blue Gene/Q Supercomputer Using Compression, Topology-Aware Data Aggregation, and Subfiling, 2014 22nd Euromicro International Conference on Parallel, Distributed, and Network-Based Processing, pp.107-111, 2014.
DOI : 10.1109/PDP.2014.60
GPFS: A shared-disk file system for large computing clusters, Proceedings of the 1st USENIX Conference on File and Storage Technologies, USENIX Association, 2002.
Lustre filesystem website.
Performance Evaluation of Collective Write Algorithms in MPI I/O, pp.185-194, 2009.
DOI : 10.1007/978-3-642-01970-8_19
Optimized process placement for collective I/O operations, Proceedings of the 20th European MPI Users' Group Meeting, ser. EuroMPI '13, pp.31-36, 2013.
Collective I/O tuning using analytical and machine-learning models, IEEE Cluster 2015, p.9, 2015.
DOI : 10.1109/cluster.2015.29
Improved parallel I/O via a two-phase run-time access strategy, ACM SIGARCH Computer Architecture News, vol.21, issue.5, pp.31-38, 1993.
DOI : 10.1145/165660.165667
MPICH2: A new start for MPI implementations, Proceedings of the 9th European PVM/MPI Users' Group Meeting on Recent Advances in Parallel Virtual Machine and Message Passing Interface, p.7, 2002.
Data sieving and collective I/O in ROMIO, Proceedings of the 7th Symposium on the Frontiers of Massively Parallel Computation, ser. FRONTIERS '99, p.182, 1999.
Improving collective I/O performance using pipelined two-phase I/O, Proceedings of the 2012 Symposium on High Performance Computing, ser. HPC '12, pp.1-7, 2012.
Multithreaded Two-Phase I/O: Improving Collective MPI-IO Performance on a Lustre File System, 2014 22nd Euromicro International Conference on Parallel, Distributed, and Network-Based Processing, pp.232-235, 2014.
DOI : 10.1109/PDP.2014.46
Automatically Selecting the Number of Aggregators for Collective I/O Operations, 2011 IEEE International Conference on Cluster Computing, pp.428-437, 2011.
DOI : 10.1109/CLUSTER.2011.79
Data locality aware strategy for two-phase collective I/O, High Performance Computing for Computational Science - VECPAR, 8th International Conference, Revised Selected Papers, pp.137-149, 2008.
Improving Data Movement Performance for Sparse Data Patterns on the Blue Gene/Q Supercomputer, 2014 43rd International Conference on Parallel Processing Workshops, pp.302-311, 2014.
DOI : 10.1109/ICPPW.2014.47
Topology-aware data movement and staging for I/O acceleration on Blue Gene/P supercomputing systems, Proceedings of 2011 International Conference for High Performance Computing, Networking, Storage and Analysis, ser. SC '11, pp.1-19, 2011.
DOI : 10.1145/2063384.2063409
Topology-aware data aggregation for intensive I/O on large-scale supercomputers, Proceedings of the First Workshop on Optimization of Communication in HPC, ser. COM-HPC '16, pp.73-81, 2016.
DOI : 10.1109/comhpc.2016.013
IBM system blue gene solution - blue gene/Q application development, IBM Redbooks, 2014.
Lustre: Building a file system for 1,000-node clusters, Proceedings of the Linux Symposium, p.9, 2003.