Data-intensive research needs high-capacity, frictionless networks that can reliably and consistently deliver very large research data transfers without detrimental impact on other uses of the network.
Much has already been done to make this frictionless networking a reality on Australia's National Research and Education Network, AARNet. The national backbone now operates at 100Gbps, multiple 100Gbps services are in place across the Pacific, and Science DMZ architectures have been deployed at various sites to improve data transfer for science while preserving site network security. This combination has created the potential for very large data flows to consume all the available bandwidth both at the instrument generating a massive dataset and at the distributed storage and compute resources it feeds.
However, the game changes again when individual sites are connected directly at 100Gbps to match the national backbone capacity.
Testing to date has demonstrated that a little effort put into data transfer tools and workflows can produce extremely large data flows (dubbed “elephant flows”) between research infrastructure services and instruments. This in turn greatly increases the likelihood that research flows will impact a broader range of users across the national network.
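One common technique behind such tuned transfers is opening several TCP streams in parallel, so that per-flow bottlenecks (window limits, per-flow fair queuing) stop capping the aggregate rate. The sketch below is purely illustrative and runs over localhost; the stream count and transfer sizes are hypothetical, and real tools (e.g. GridFTP-style movers) add far more machinery.

```python
# Illustrative sketch: how parallel TCP streams aggregate into one large
# "elephant flow". Runs entirely over localhost; sizes are hypothetical.
import socket
import threading

CHUNK = 1 << 16          # 64 KiB per send
CHUNKS_PER_STREAM = 64   # 4 MiB per stream
N_STREAMS = 4            # parallel streams, as a transfer tool might open

received = []            # bytes received per stream
lock = threading.Lock()

def serve(conn):
    """Drain one incoming stream and record how many bytes it carried."""
    total = 0
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        total += len(data)
    with lock:
        received.append(total)
    conn.close()

# Listening side: one connection per parallel stream.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(N_STREAMS)
port = server.getsockname()[1]

def accept_loop():
    workers = []
    for _ in range(N_STREAMS):
        conn, _ = server.accept()
        t = threading.Thread(target=serve, args=(conn,))
        t.start()
        workers.append(t)
    for t in workers:
        t.join()

acceptor = threading.Thread(target=accept_loop)
acceptor.start()

def send_stream():
    """One parallel stream of the overall transfer."""
    s = socket.create_connection(("127.0.0.1", port))
    for _ in range(CHUNKS_PER_STREAM):
        s.sendall(b"x" * CHUNK)
    s.close()

senders = [threading.Thread(target=send_stream) for _ in range(N_STREAMS)]
for t in senders:
    t.start()
for t in senders:
    t.join()
acceptor.join()
server.close()

print(f"streams: {len(received)}, aggregate bytes: {sum(received)}")
```

On a real path, each stream contends for bandwidth independently, which is exactly why an aggregated transfer can crowd out other users of a shared link.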
One solution to this dilemma is to provide network capacity dedicated to data-intensive science, enhancing the network so that business-as-usual traffic traverses paths separated from research flows.
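One way this kind of separation can be realised on a Linux router is policy-based routing: research traffic is marked and steered onto its own path while everything else follows the default table. The commands below are a minimal sketch only; the subnets, interface names, marks, and table numbers are hypothetical and do not describe AARNet's actual configuration.

```shell
# Hypothetical sketch of traffic separation via Linux policy-based routing.
# Addresses, marks, and table numbers are illustrative, not AARNet's.

# 1. Mark traffic from a Science DMZ data transfer subnet (hypothetical).
iptables -t mangle -A PREROUTING -s 192.0.2.0/24 -j MARK --set-mark 1

# 2. Give marked (research) traffic its own routing table and next hop.
ip route add default via 198.51.100.1 dev eth1 table 100
ip rule add fwmark 1 table 100

# Business-as-usual traffic keeps following the main routing table, so
# elephant flows ride the dedicated path and cannot starve other users.
```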
AARNet’s new pathfinder network infrastructure, AARNet-X (AX), is designed to address this challenge and to support extreme, unique and evolving customer requirements. It will also enable AARNet to develop expertise with new platforms and technologies.
This talk will identify the science drivers behind the AARNet-X network and its resulting design approach, and show how the research community can use it to move data freely for better science outcomes.