Quantitative Analysis Assures

I really like Diana's post from yesterday. It delves into a field where fantasies, biases and speculations bloom...

I've always known it. It's absurd. It's against enlightenment. It's antiscientific.

In general, if the degree of information is zero, the degree of speculation is infinite. And those stories are often enjoyable, but speculative...

Diana's post assures.

Only a Superhero could do the monster task ;)

Is There a Santa Claus?


  • No known species of reindeer can fly. But there are at least 300 000 species of living organisms yet to be classified, and while most of these are insects and germs, this does not completely rule out flying reindeer which only Santa has ever seen.
  • There are 2 billion children (persons under 18) in the world. But since Santa doesn't appear to handle the Muslim, Hindu, Jewish and Buddhist children, that reduces the workload to 15% of the total - 378 million according to the Population Reference Bureau. At an average (census) rate of 3.5 children per household, that's 108 million homes. One presumes there is at least one good child in each.
  • Santa has 31 hours of Christmas to work with, thanks to the different time zones and the rotation of the earth, assuming he travels east to west (which seems logical). This works out to 967.7 visits per second. This is to say that for each Christian household with good children, Santa has about 1/1000th of a second to park, hop out of the sleigh, jump down the chimney, fill the stockings, distribute the remaining presents under the tree, eat whatever snacks have been left, get back up the chimney, get back into the sleigh and move on to the next house. Assuming that each of these 108 million stops is evenly distributed around the earth (which, of course, we know to be false, but for the purposes of our calculation we will accept), we are now talking about 1.2 miles per household, a total trip of 129 million miles, not counting stops to do what most of us must do at least once every 31 hours, plus feeding and so on. This means that Santa's sleigh is moving at about 1200 miles per second, 6 000 times the speed of sound. For purposes of comparison, the fastest man-made vehicle on earth, the Ulysses space probe, moves at a poky 27.4 miles per second - a conventional reindeer can run, tops, 15 miles per hour.
  • The payload on the sleigh adds another interesting element. Assuming that each child gets nothing more than a medium-sized Lego set (2 pounds), the sleigh is carrying 321 300 tons (assuming not all children are good), not counting Santa, who is invariably described as overweight. On land, conventional reindeer can pull no more than 300 pounds. Even granting that "flying reindeer" could pull ten times the normal amount, we cannot do the job with eight, or even nine. We need 107 100 reindeer. This increases the payload - not even counting the weight of the sleigh - to 329 868 tons. Again, for comparison - this is four times the weight of the Queen Elizabeth.
  • 329 868 tons traveling at 1200 miles per second creates enormous air resistance - this will heat the reindeer up in the same fashion as a spacecraft re-entering the earth's atmosphere. The lead pair of reindeer will absorb 14.3 quintillion joules of energy. Per second. Each. In short, they will burst into flame almost instantaneously, exposing the reindeer behind them, and create deafening sonic booms in their wake. The entire reindeer team will be vaporized within 4.26 thousandths of a second. Santa, meanwhile, will be subjected to centrifugal forces 17 500 times greater than gravity. A 250-pound Santa (which seems ludicrously slim) would be pinned to the back of his sleigh by 4 315 015 pounds of force.

In conclusion - If Santa ever did deliver presents on Christmas Eve, he's dead now!

Oder: "Glaub lieber an das Christkind!" ;-)


(Story can be found for example at PhysLink.com)

A 2014 Retrospect

The year 2014 was, from the computational finance point of view, quite challenging. With interest rates at historically low levels, lognormal models become unusable.

See Black versus Bachelier for how we at UnRisk handle the difficulties of Black 76 models in that case.
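The core of the difficulty can be shown in a few lines: the lognormal Black 76 formula needs a strictly positive forward rate and strike, whereas the normal (Bachelier) formula still returns a price at zero or negative rates. The following is only a toy sketch with made-up parameter values, not the UnRisk implementation:

# Toy sketch: Black 76 (lognormal) vs. Bachelier (normal) call on a forward rate.
# Illustration only - parameter values are made up, this is not the UnRisk code.
from math import exp, log, sqrt
from scipy.stats import norm

def black76_call(F, K, sigma, T, df):
    # requires F > 0 and K > 0 (log of a non-positive number is undefined)
    d1 = (log(F / K) + 0.5 * sigma ** 2 * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return df * (F * norm.cdf(d1) - K * norm.cdf(d2))

def bachelier_call(F, K, sigma_n, T, df):
    # works for zero or negative forwards and strikes as well
    d = (F - K) / (sigma_n * sqrt(T))
    return df * ((F - K) * norm.cdf(d) + sigma_n * sqrt(T) * norm.pdf(d))

df = exp(-0.001 * 1.0)                              # toy discount factor
print(bachelier_call(-0.001, 0.0, 0.005, 1.0, df))  # fine at a negative forward
# black76_call(-0.001, 0.0001, 0.2, 1.0, df)        # would fail: log of a negative number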

Another 2014 story is credit / debt valuation adjustment. When regulators go to the limit (and sometimes beyond) of what is reasonable, when computational requirements get higher and higher, and when millions of scenario values have to be calculated, then the UnRisk option is worth a closer look.

In UnRisk's CVA project, co-funded by the Austrian Research Promotion Agency, we have been working (and work is still ongoing) on bringing the xVA challenges down to earth.

UnRisk is not only about clever math, but also about stable and up-to-date implementations in modern software environments. Being a stone-age Fortran programmer myself, I enjoyed Sascha's post on the goats, wolves and lions problem very much.

There were more targets achieved by the UnRisk team in 2014: the releases of the UnRisk Engine version 8 and of the UnRisk FACTORY versions 5.1 and 5.2, the implementation of an HDF5 file format as a basis for the CVA calculations, and more things to come.

A comparison of MC and QMC simulation for the valuation of interest rate derivatives

For interest rate models in which the evolution of the short rate is given by a stochastic differential equation, i.e.

dr(t) = a(r(t), t) dt + b(r(t), t) dW(t),

the valuation can easily be performed using the Monte Carlo technique.
The k-th sample path of the interest rate process can be simulated using an Euler discretization:

r_k(t_{i+1}) = r_k(t_i) + a(r_k(t_i), t_i) \Delta t_i + b(r_k(t_i), t_i) \sqrt{\Delta t_i} z,

where z is a standard normal random variable (drawn independently for each time step and path).
The valuation of interest rate derivatives using Monte Carlo (MC) Simulation can be performed using the following steps:
  1. Generate a time discretization from 0 to the maturity T of the financial derivative, which includes all relevant cash flow dates.
  2. Generate MxN standard normal random numbers (M=number of paths, N = number of time steps per path)
  3. Starting from r(0), simulate the paths according to the formula above for k=1..M.
  4. Calculate the cash flows CF of the instrument at the corresponding cash flow dates
  5. Using the generated paths, calculate the discount factors DF to the cash flow dates and discount the cash flows to time t0
  6. Calculate the fair value of the interest rate derivative in each path by summing the discounted cash flows from step 5:
      FV_k = \sum_j DF_k(t_j) CF_k(t_j)
  7. Calculate the fair value of the interest rate derivative as the arithmetic mean of the simulated fair values of the single paths, i.e.
      FV = \frac{1}{M} \sum_{k=1}^{M} FV_k
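As a minimal sketch of these steps (using a simple Vasicek-type short rate and a zero-coupon bond as stand-ins; the model, the parameters and the payoff are chosen for illustration only and are not taken from the post):

# Minimal sketch of steps 1-7 for a toy instrument: a zero-coupon bond paying 1
# at maturity T, under an illustrative Vasicek-type short rate
# dr = kappa*(theta - r) dt + sigma dW. All parameters are made up.
import numpy as np

kappa, theta, sigma, r0 = 0.1, 0.02, 0.01, 0.015
T, N, M = 5.0, 5 * 252, 10_000        # maturity, time steps, paths (step 1)
dt = T / N

rng = np.random.default_rng(42)
z = rng.standard_normal((M, N))       # step 2: M x N standard normal numbers

r = np.full(M, r0)
int_r_dt = np.zeros(M)                # integral of r over time, per path
for i in range(N):                    # step 3: Euler discretization of the paths
    int_r_dt += r * dt
    r += kappa * (theta - r) * dt + sigma * np.sqrt(dt) * z[:, i]

cf = 1.0                              # step 4: single cash flow of 1 at maturity
df = np.exp(-int_r_dt)                # step 5: path-wise discount factor to t0
fv_per_path = df * cf                 # step 6: discounted cash flows per path
fair_value = fv_per_path.mean()       # step 7: arithmetic mean over all paths
print(fair_value)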
The only difference in QMC simulation is that deterministic low-discrepancy points are used instead of the random points of step 2 of the Monte Carlo algorithm. These points are chosen to be better equidistributed in a given domain by avoiding large gaps between the points. The advantage of QMC simulation is that it can result in better accuracy and faster convergence than the Monte Carlo simulation technique.
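For example, the pseudo-random normals of step 2 can be replaced by normals derived from a Sobol sequence; scipy's Sobol generator is used here purely as an illustration (the post does not say which low-discrepancy sequence UnRisk uses):

# Sketch: low-discrepancy (Sobol) points instead of pseudo-random normals.
# The dimension of the sequence equals the number of time steps N per path.
from scipy.stats import norm, qmc

N, M = 64, 1024                  # time steps per path, number of paths (a power of 2)
sobol = qmc.Sobol(d=N, scramble=True, seed=1)
u = sobol.random(M)              # M x N points, well spread in [0, 1)^N
z_qmc = norm.ppf(u)              # map the uniforms to standard normals
# z_qmc replaces the pseudo-random matrix z of step 2; the rest stays unchanged.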
The following picture shows the dependence of the MC/QMC valuation result on the number of chosen paths for a vanilla floater which matures in 30 years and pays the Euribor 12M reference rate annually. The time step in the simulation is chosen to be one day. One can see that with QMC, a much lower number of paths is needed to achieve an accurate price.


Extremely Large Telescope to be Built

Image Source: ESO

 
At a recent meeting, ESO's main governing body, the Council, gave the green light for the construction of the European Extremely Large Telescope (E-ELT) in two phases.

For details, see ESO's press release.

Mathematics for Industry network

In Berlin, one of the agenda items was to formulate the next steps for the Mathematics for Industry network, which will be funded by the European Union within the COST framework.

To be more specific, the trans-domain action 1409 (TD1409) is named Mathematics for Industry Network (MI-NET), with the objective of encouraging interaction between
mathematicians and industrialists, in particular through
(1) industry-driven problem-solving workshops, and
(2) academia-driven training and secondment.

I was nominated by the national COST coordinator to become a member of the management committee of this COST action, and I am looking forward to the interactions with my colleagues.

For more information, click here.

SQL Databases - You Know Where The Door Is?

I was part of the team's decisions, but when I read Michael's post on HDF5 yesterday, it got me brooding again. A correct and timely decision, made with a careful view into the most probable future of data management. But what about the wishes of the regulators and the capabilities of technology providers?

No data silos, banks!

The regulatory bodies do not like fluid data - they want it solid…they want evidence of every transaction. And we created the UnRisk FACTORY database that stores every piece of information on every detail of each valuation transaction forever. Every! Clearly, it is strictly SQL compliant, and far beyond that we provide functions in our UnRisk Financial Language (UnRisk-Q) that make it possible to manipulate its objects and data programmatically.

The UnRisk engines are blazingly fast, so, inevitably, database management became a nasty bottleneck.

The data space "explodes" with the valuation space

xVA - and the related regime of centralization - introduces immense complexity to the valuation space.

Sixteen months ago, in xVA - fairer pricing or accounting VOODOO, I wrote:
…selecting momentary technologies blindly may make it impossible to achieve the ambitious goals. Data and valuation management need to be integrated carefully, and an exposure modeling engine needs to work event-driven. In this respect we are in the middle of the VA project. Manage the valuation side first - and do it the UnRisk way: build a sound foundation for a really tall building.
And this is what we did.

The new regime needs trust

Of course, we'll make inputs, results and important (meta)information available. But what was still possible with our VaR Universe…storing every detail…like VaR deltas…in SQL-retrievable form…may be impossible under the new regime.

But, UnRisk Financial Language users will have the required access and much more…functions to aggregate and evaluate risk, margin...data and what have you.

So, ironically, regulatory bodies may have undermined a part of their own transparency requests?

However, IMO, it needs more trust from all parties from the beginning…and the view behind the curtain will become even more important. You can't keep millions of valuations just to get a single price - that much is evident. But we can explain what we do and how our interim data are calculated.

With our pioneer clients we are already going through the programs…and courses and workouts will become part of our know-how packages.

The options of future data management?

The world of data management is changing. Analytical data platforms, NoSQL databases…are hot topics. But what I see at the core: the new computing muscles do not only crunch numbers lightning fast, they also come with very large RAM.

This affects software architectures, functionality and scalability. Those large RAMs may become the basis for NoSQL databases…perhaps ending up with disk-less databases.

There may be many avenues to pursue…but it's no mistake to think of a NoSQL world.

It's unprecedentedly fast again

Many years ago we turned UnRisk into gridUnRisk, performing single valuations on computational kernels in parallel. Then we started making things inherently parallel. Now we are accelerating the data management immensely.

Prepared for any future. Luckily, we chose the right architectures and technologies from the beginning.

New UnRisk Dialect: HDF5

Right now the UnRisk team is working on the development of a powerful xVA Engine (the corresponding new UnRisk products will be released in 2015). In order to be able to handle the huge amounts of generated data
  • positive exposures
  • negative exposures
  • realizations of all underlying risk factors in all considered Monte Carlo paths
we decided that HDF5 is the best "language" to connect the user interface with the underlying numeric engines.
To be honest, HDF5 is not a language, it is a file format.
An HDF5 file has the advantage that it can become very big without any loss of speed when accessing the data within this file. With special programs (e.g. HDFView) one can easily take a look at the contents of such a file.
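As a small illustration of what writing and reading such a file looks like (using the Python h5py package; the group and dataset names below are invented for this example and are not the actual UnRisk HDF dictionary):

# Illustrative sketch with h5py. Group and dataset names are invented for this
# example and do not reflect the actual "UnRisk HDF Dictionary".
import numpy as np
import h5py

paths, dates = 10_000, 120
with h5py.File("portfolio_xva.h5", "w") as f:
    g = f.create_group("netting_set_1/swap_1")
    g.create_dataset("positive_exposure", data=np.random.rand(paths, dates))
    g.create_dataset("negative_exposure", data=-np.random.rand(paths, dates))
    f.create_dataset("risk_factors/eur_short_rate",
                     data=0.01 * np.random.randn(paths, dates))

with h5py.File("portfolio_xva.h5", "r") as f:
    f.visit(print)                                       # list the file contents
    pe = f["netting_set_1/swap_1/positive_exposure"][:]  # fast array access
    print(pe.mean(axis=0))                               # e.g. expected positive exposure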
The UnRisk xVA calculation workflow consists of the following steps:
  1. Read in the User Input containing the usual UnRisk objects
  2. Transform this Input into the HDF5 Dialect and create the HDF5 file
  3. Call the xVA Engine, which
    1. reads in the contents of the file
    2. calls the numeric engine
    3. writes the output of the numeric engine into the HDF5 file
  4. Transform the output of the xVA Engine back into the UnRisk Language
  5. Return the results to the User
Here is a screenshot of the contents of such an HDF5 file (containing the xVA calculations for a portfolio consisting of several netting sets and instruments):



But why did we choose this workflow instead of using a simple function call?
The reasons are the following:
Reason 1: We are able to split our development team into two groups: the first one is responsible for steps 1, 2, 4 and 5, the second one is responsible for step 3. Both groups simply have to use the same "UnRisk HDF Dictionary".
Reason 2: The different steps may be performed asynchronously - meaning that the workflow could look as follows:
  • Create many HDF5 files on machine 1 (i.e. perform steps 1 and 2 from above for a list of user inputs)
  • Call the xVA Engine for each of these files on machine 2 at any time afterwards
  • Extract the calculation output on machine 3 at any time afterwards
Reason 3: Users may only want to use our xVA Engine - i.e. they want to perform steps 1, 2, 4 and 5 themselves. The only thing they have to learn is the UnRisk HDF5 dialect (we may support such customers with our Java functionality to write to and read from HDF5 files).
Reason 4: For debugging purposes it is very convenient that a customer only has to send his / her HDF5 file to us - we can immediately use this file to debug our xVA Engine.
Reason 5: If the user wants to add a new instrument to his / her portfolio, he / she simply has to add the instrument description to the HDF5 file of this portfolio (this may also be done via our user interfaces). By using the already existing results (of the calculations for the portfolio without this instrument), the performance of the calculations may be increased immensely.
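On the file level, Reason 5 amounts to something like the following h5py sketch (again with invented names): the existing file is opened in append mode and only the new instrument is added, while all previously stored results remain untouched and can be reused.

# Sketch of Reason 5 at the file level (invented names): open the existing
# portfolio file in append mode and add the new instrument's description only.
import numpy as np
import h5py

with h5py.File("portfolio_xva.h5", "a") as f:
    g = f.require_group("netting_set_1/new_swap")
    g.create_dataset("fixed_rate", data=0.015)
    g.create_dataset("cashflow_dates", data=np.arange(1.0, 11.0))  # in years
# Previously stored exposure results elsewhere in the file stay as they are.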
We will keep you informed about our xVA developments and let you know as soon as the corresponding products are available. If you have any questions beforehand, feel free to contact us.