Do We Face A Supercomputing Crisis?

Virtual design, predictive modeling and simulation are increasingly essential for smarter science, faster innovation and better product development.

In finance, we want deal types with better return characteristics. Risk shall not only be priced but optimized ... More exposure modeling, collateral management and what have you may force us to perform many millions of single valuations to price a single instrument under a certain counterparty regime.
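To get a feel for where the "many millions" come from, here is a minimal back-of-envelope sketch; the scenario, time-bucket and deal counts are illustrative assumptions, not UnRisk figures.

```python
# Illustrative only: counting the single valuations behind one
# counterparty exposure run. All figures are assumptions.
market_scenarios = 5_000       # simulated market states for exposure profiles
time_buckets = 120             # future dates at which exposure is measured
netting_set_deals = 200        # instruments in one netting set

valuations_per_deal = market_scenarios * time_buckets
total_valuations = valuations_per_deal * netting_set_deals
print(f"valuations per deal:        {valuations_per_deal:,}")   # 600,000
print(f"valuations per netting set: {total_valuations:,}")      # 120,000,000
```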


As in other industries, we are told that we need supercomputer performance on our desks or a virtual supercomputer in the cloud. And the new computing muscles will be based on hybrid architectures of multi-core CPUs and massively parallel processor collections (GPUs and the like).

"They" say it's past time to adopt to HPC and CPU based HPC is outdated. So, we think we want to stay on top of the game ... and reinvent UnRisk.

In Taming the Machine Infernal I outlined how we approached the new field: prototyping the most advanced valuation tasks in multi-model environments ... Quite impressive results are described here. In §4 you find the speed-up table for a certain task. A speed-up of 90 is not bad, is it? But it is certainly not enough.
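One reason a speed-up of 90 does not simply scale further is the serial remainder of the workload; Amdahl's law gives the ceiling. The parallel fraction below is an assumption chosen for illustration, not a measurement of our code.

```python
# Amdahl's law: speed-up ceiling when a fraction p of the work parallelizes.
def amdahl(p: float, n: int) -> float:
    """Overall speed-up with parallel fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

p = 0.99  # assumed parallel fraction of a pricing job (illustrative)
for n in (96, 512, 4096):
    print(f"{n:5d} cores -> speed-up {amdahl(p, n):5.1f}")
# Even with 99% parallel work the ceiling is 1 / (1 - p) = 100.
```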

Manage your risk while you are sleeping

If we want to continue enabling our users to manage the risk of large portfolios under the new regulatory regimes overnight, a much higher speed-up is required. So we are reinventing our pricing and calibration engines, making them inherently parallel and platform agnostic. New computing muscles, you are most welcome. The costs involved in the reinvention are significant - does the total cost of ownership rise for customers as well?
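To make "much higher speed-up" concrete, here is a hedged sketch of the overnight-batch arithmetic; the portfolio size, per-valuation time and batch window are made-up assumptions for illustration.

```python
# Illustrative overnight-batch arithmetic (all inputs are assumptions).
instruments = 50_000
valuations_per_instrument = 600_000   # exposure scenarios x time buckets
ms_per_valuation = 1.0                # assumed single-core valuation time
window_hours = 8.0                    # the overnight window

total_hours = instruments * valuations_per_instrument * ms_per_valuation / 1e3 / 3600
print(f"single-core runtime:        {total_hours:,.0f} h")            # ~8,333 h
print(f"speed-up to fit the window: {total_hours / window_hours:,.0f}x")  # ~1,042x
```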

If innovation meets complexity reduction

Reinventing UnRisk, we did lab work again and found faster and better schemes for normal PCs (drawing on the experience from the telescope project, where a speed-up of 1000 was possible without GPU utilization). And in the end we found that the runtime computer needed to manage the risk of moderately complex portfolios need not be so immensely super?
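Continuing the illustrative numbers from the sketch above: algorithmic and hardware speed-ups multiply, so a better scheme of telescope-project magnitude leaves very little for the hardware to do. Again, the figures are assumptions, not benchmarks.

```python
# Illustrative: algorithmic and hardware speed-ups compose multiplicatively.
required_speedup = 1_042      # needed to fit the overnight window (sketch above)
algorithmic_speedup = 1_000   # better scheme, no GPU (assumed order of magnitude)

remaining_hardware_speedup = required_speedup / algorithmic_speedup
print(f"hardware speed-up still needed: {remaining_hardware_speedup:.1f}x")
# ~1x: an ordinary multi-core PC already covers it.
```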

What if regulators find that more complicated models may carry even greater risk?
What if customers find that they do not have access to enough reliable, real-time data to meet the new requirements?
What if we find that many clever and insightful things can be done on (connected) smart devices?

The past has shown that programs never have enough resources, but mainframe makers suffered from the misconception that only a mainframe is a computer.

Algorithm optimization is dead - long live algorithm optimization?