Parallel and distributed computation in design flood estimation

Publication Type:
Conference Proceeding
The Art and Science of Water - 36th Hydrology and Water Resources Symposium, HWRS 2015, 2015, pp. 1538 - 1544
Issue Date:
2015
© 2015, Engineers Australia. All rights reserved. Reliable and efficient design flood estimation remains a concern for many catchment managers. The search for reliable and efficient approaches to design flood estimation, together with the increased computational capacity available to analysts, has resulted in the development of computationally intensive methods for design flood estimation; for example, the application of a Genetic Algorithm for calibration of a catchment modeling system and the use of a Monte Carlo technique for generation of a POT (peaks-over-threshold) series both require multiple executions of the catchment modeling system. As each simulation is entirely independent of the others, the computation step in the method is embarrassingly parallel. The key step in reducing computational run times, therefore, is to efficiently distribute the simulations, compute them, and gather the results amongst a cluster of computers or processing cores. Within this paper we present two methods of distributing the individuals efficiently so that they are computed in parallel. The example application is the use of a Genetic Algorithm for calibration of SWMM applied to an urban catchment. The first method is based on the recognition that SWMM 4.4 is a single-threaded application: we can therefore place a multi-threaded wrapper in front of the SWMM execution step and compute the population in parallel on a multi-core machine. With this wrapper we achieved linear speed-up up to three cores, peaking at a 5.5-times speed-up on a 12-core machine. The second method builds on the first and is based on the BOINC framework. By implementing a distributed methodology to deploy the multi-core programme across a cluster of shared workstations using the BOINC middleware, we were able to scavenge 20-30% of the computational resources of a 100+ node cluster with over 1000 cores.
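The multi-threaded wrapper idea can be sketched as follows. Because each SWMM run is a single-threaded process evaluating one GA individual, a thin wrapper only needs to launch several such processes at once and collect their results. This is a minimal illustration, not the authors' implementation; the command list here stands in for the real SWMM invocation, which would point at per-individual input files.

```python
# Sketch of a multi-threaded wrapper around a single-threaded executable
# (the role SWMM 4.4 plays in the paper). Each GA individual becomes one
# OS process; a thread pool keeps all cores busy. The demo commands below
# are stand-ins for real SWMM runs.
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor


def evaluate_individual(cmd):
    """Run one single-threaded simulation as a separate OS process."""
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.strip()


def evaluate_population(commands, workers=4):
    """Evaluate independent simulations in parallel.

    Each worker thread merely blocks while its child process runs, so the
    simulations themselves execute concurrently across cores; results come
    back in population order.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(evaluate_individual, commands))


if __name__ == "__main__":
    # Stand-in "individuals": trivial child processes instead of SWMM runs.
    cmds = [[sys.executable, "-c", f"print({i} * {i})"] for i in range(6)]
    print(evaluate_population(cmds))  # one result string per individual
```

Using threads (rather than multiprocessing) is sufficient here because the Python threads spend their time waiting on subprocesses, during which the interpreter lock is released.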
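The reported plateau (linear speed-up to three cores, 5.5 times on twelve) is the behaviour Amdahl's law predicts when a small serial fraction of the workload caps the achievable speed-up. The back-of-the-envelope estimate below is our own illustration, not a figure from the paper.

```python
# Amdahl's law: S(n) = 1 / (f + (1 - f)/n), where f is the serial fraction
# of the workload and n the number of cores. Inverting it for the paper's
# reported 5.5x speed-up on 12 cores gives the implied serial fraction.
def amdahl_speedup(serial_fraction, cores):
    """Speed-up on `cores` cores for a workload with the given serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)


def implied_serial_fraction(speedup, cores):
    """Invert Amdahl's law: f = (n/S - 1) / (n - 1)."""
    return (cores / speedup - 1.0) / (cores - 1.0)


f = implied_serial_fraction(5.5, 12)
print(f"implied serial fraction: {f:.3f}")  # roughly 0.107, i.e. ~11% serial work
```

A serial fraction of around 11% (e.g. GA bookkeeping, result gathering, and file I/O) would be enough to hold a 12-core machine to a 5.5-times speed-up even though the simulations themselves are embarrassingly parallel.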