And how it compares with Azure Standard_H16r
Earlier this year we published a benchmark comparing four cloud providers on their performance for distributed-memory calculations. The original text is available here.
Back then we found that network latency could be a problem for scaling distributed-memory calculations on AWS. Earlier this month, however, the next generation of compute-optimized instances (C5) from AWS came into production, promising improvements in the network layer. We tested them and ran standard Message Passing Interface (MPI) latency benchmarks using the same compute environment as the one described in the article above. Results are included below.
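For readers unfamiliar with how such latency numbers are produced: MPI latency benchmarks (e.g. osu_latency from the OSU Micro-Benchmarks suite) bounce a small message back and forth between two ranks and report half the average round-trip time. The sketch below illustrates that ping-pong pattern, but as an assumption-laden stand-in it uses a loopback TCP socket pair instead of MPI, so it only measures the local network stack, not the cross-node figures discussed in this post.

```python
# Illustrative ping-pong latency measurement (loopback TCP, not MPI).
# Real MPI benchmarks run this pattern between two ranks on separate
# nodes; here both ends live in one process for demonstration only.
import socket
import threading
import time

MSG_SIZE = 8        # latency tests use tiny payloads (a few bytes)
ITERATIONS = 1000   # number of round trips to average over

def echo_server(server_sock):
    # Accept one connection and echo every message straight back.
    conn, _ = server_sock.accept()
    with conn:
        for _ in range(ITERATIONS):
            data = conn.recv(MSG_SIZE)
            conn.sendall(data)

def measure_latency_us():
    server = socket.socket()
    server.bind(("127.0.0.1", 0))   # ephemeral port on loopback
    server.listen(1)
    port = server.getsockname()[1]
    t = threading.Thread(target=echo_server, args=(server,))
    t.start()

    client = socket.socket()
    client.connect(("127.0.0.1", port))
    # Disable Nagle's algorithm so small messages are sent immediately,
    # as latency benchmarks require.
    client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    payload = b"x" * MSG_SIZE

    start = time.perf_counter()
    for _ in range(ITERATIONS):
        client.sendall(payload)
        client.recv(MSG_SIZE)
    elapsed = time.perf_counter() - start

    client.close()
    t.join()
    server.close()
    # One-way latency is half the average round-trip time, in microseconds.
    return elapsed / ITERATIONS / 2 * 1e6

if __name__ == "__main__":
    print(f"loopback one-way latency: {measure_latency_us():.1f} us")
```

On real hardware the same pattern, run over MPI between two instances, is what exposes the difference between interconnects such as Azure's InfiniBand and the AWS networking layer.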
In short: AWS is back in the game. Network latency has improved significantly, and the preliminary compute benchmarks we ran internally confirm it.
Here are the results for two c5.18xlarge instances:
Below are the results for two Azure Standard_H16r instances, which performed best in the Linpack benchmark from the original comparison.
The new networking layer (i.e., the Elastic Network Adapter) introduced with C5 does reduce latency. If you had already crossed AWS off your list of high-performance computing vendors to partner with, you might want to take another look.