Some blog readers may have recognised what the pictures I posted on last Physics Friday were about. I am currently working on the implementation of an xVA simulation, where xVA stands for all kinds of VAs (CVA - Credit Value Adjustment, DVA - Debt Value Adjustment, BCVA - Bilateral Credit Value Adjustment and FVA - Funding Value Adjustment).
But what are these VAs about? Counterparty risk represents a combination of market risk, which defines the exposure, and credit risk, which defines the counterparty credit quality. Part of the global regulation arising in the aftermath of the global financial crisis (e.g., Basel III) seems to view CVA as necessarily being a marked-to-market trading book component, alongside the OTC derivatives position from which it is derived. The risky price of a derivative can be expressed as the risk-free price (the price assuming no counterparty risk) minus a component to correct for the counterparty risk. The latter component is often called CVA.
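In symbols, the standard textbook decomposition reads (this is the generic unilateral CVA approximation, not the specific formula of our implementation):

```latex
\hat{V} = V - \mathrm{CVA}, \qquad
\mathrm{CVA} \approx (1 - R) \sum_{i=1}^{n} D(t_i)\, \mathrm{EE}(t_i)\, \mathrm{PD}(t_{i-1}, t_i)
```

where $\hat{V}$ is the risky and $V$ the risk-free value, $R$ the recovery rate, $D(t_i)$ the discount factor, $\mathrm{EE}(t_i)$ the expected exposure, and $\mathrm{PD}(t_{i-1}, t_i)$ the counterparty's default probability in the interval.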

In next week's Physics Friday I will start with the description of the exposure engine - the heart of each xVA simulation.

is the topic of the Ars Electronica Festival 2014, taking place in Linz from September 4-8. It is an inquiry into the conditions necessary to enable innovation and renewal to emerge and make an impact. C as in creativity, but also in competition and collaboration (which seem to be conflicting …). My favorite C is co-evolution.

The AEF is a yearly festival in Linz, founded in 1979 with the idea of understanding art as a catalyst for technologies. It embraces a symposium where the big names and stars tell the young wilds that they know more … This might be a dilemma with this year's topic. Talking about change should include changing methods, lectures, mentors, … and other popular idols that kill innovation.

Interestingly enough, in recent days I found various contributions about the dilemma of change and elitism - is popularity identical with importance?

Two assorted links related to education have been posted at the Marginal Revolution blog - 24-Aug (on the Goat Discrimination) and 25-Aug (on Excellent Sheep).

In a way this touches the polarity of education, and it inspired me to write this post.

Denunciation of the Ivy League?

Finding a performance metric for a (higher) education system is not easy - you usually need to pick two of three goals.

William Deresiewicz released his book Excellent Sheep this August. And it seems it has provoked a firestorm. Not surprisingly. Pointedly speaking, it suggests: "don't send your kid to the Ivy League".

I am not an educator, not a didactics expert, not a neuroscientist, not a psychologist … I have not read the book yet (only summaries and reviews), so I do not assess it. However, the semantics of Ivy League may suggest that the criticism is driven by the resentment of the excluded. Or that denouncing the Ivy League is Ivy League style ….. but I find these weak arguments in principle.

There is one thing I do understand: the uncompromising focus on one goal in the triangle - highest curriculum standards, not a high pass rate and not giving everyone a chance to enter - inevitably provokes a controversy.

If performance-based funding of educational systems uses such a focused metric, may it lead to an "elitist spiral"?

Popular idols may kill innovation?

Some neuroscientists claim that we cling too much to idols, like lectures, mentoring, even scientific methods, … as well as competition and performance reviews.

The didactics research of the last decades suggests a change from teacher-centered to learner-centered learning arrangements - the underlying philosophy is old and is called constructivism.

What is newer is that there is technology that supports explorative learning. In maths education we know the black-box/white-box principle: for motivation, experiment with a black box and get a rough insight into a new mathematical topic, then acquire inside knowledge by learning the mechanisms behind it (white box), and use it as a black box again at a later stage to intensify knowledge.

A technology supporting this principle is symbolic computation. The principle is recursive because "white" becomes "black" in a later stage and so on (a white-box may use black-boxes of earlier stages as building blocks).

And collaborating with people from other fields will help exploring new things.

I don't see these things as denunciation or criticism … I see them as challenges. So I do not understand the firestorms.

Reverse education

Meeting with the UnRisk Academy heads from time to time, I really enjoy talking about things that are not at the core of our business. Like how nine-year-olds can be motivated to learn much more about mathematics - equation solving for kids.

This leads to new ideas about our learning offerings for quants and risk experts .. and the technologies that serve them.

In the magic-forest-of-learning, a "Goat" may be transmuted into an Excellent Sheep or a Majestic Lion of original thinking ….? It's our choice.

I am currently attending the IPTA 2014 conference in Bristol (IPTA standing for Inverse Problems - From Theory to Applications). Yesterday, I gave my invited minisymposium talk on "Large Scale Adaptive Optics". I had challenged myself to give a presentation to a specialized mathematical community without a single formula in my slides.

I succeeded, and, not surprisingly to me (as I have seen too many talks showing battlefields of mathematical formulas), my talk was very well received by the audience.

A recommendation: If you ever have the chance to see a talk by Samuli Siltanen, grab this chance. He is a brilliant and witty presenter.

This post was inspired by walking through online education portals after the goat discrimination joke.

You can't have everything? In the evaluation of products, projects, systems, ... you often seek a "performance metric", and it seems to be organized by a kind of triangle - you want

High Standards

High Production

Flexibility

In education this may mean

High Curriculum Standards

Low Dropout Rate (high pass rate)

Open Access

From these three goals of education you can only pick two. If the curriculum is difficult, how can everyone pass? You need to select before entry (difficult enough). Similar with the other pairs.

In discrete manufacturing this may mean

High Product Quality

High Throughput

A Broad Product Mix

If you want precision and low time-in-process, you usually choose a flow-oriented, highly automated production line that is optimized for a certain class of parts. If you want a broad part mix, you organize intelligent manufacturing islands that can do many operations for a broad variety of parts in one setup, in parallel. You get much work-in-process, and parts stay longer in the system.

In computing we have more flexibility, because computers are not mere tools but universal machines.

What about quant finance?

At UnRisk we made our best effort to offer benefits in a triangle whose goals are not in conflict

High Quality of Risk-related Information

High Automation (Throughput)

Large Diversified Portfolios (Broad Coverage)

This is enabled by the intelligent combination of the latest own and purchased technologies and the clever utilization of new computer muscles - based on the underlying principles of optimality, multi-strategy approaches, clever maths and open information.

It is not easy, and one would expect it to have its price. But we offer it at low license and operational cost.

Today I want to give a small example of how the UnRisk products may be combined in an elegant way.

On the one hand we have the UnRisk FACTORY, a web application which allows, in a very convenient way, to

Set up Portfolios

Perform automated Stress Tests and VaR Calculations

with all the corresponding data (portfolios, instruments, models, calculation results) being stored in the Database.

On the other hand we have the UnRisk PRICING ENGINE, which enables the user to

Perform Valuations within Excel

BTW: Both products use the same Pricing Library: our Mathematica package UnRisk-Q.

Taking a look at the different Advantages of these products

UnRisk FACTORY: comfortable user interface, automation, permanent storage of data in a database, easy access to all the data

UnRisk PRICING ENGINE: flexibility of Excel

we thought of a possibility to combine these two products.

SOLUTION: The UnRisk Web Service.

It enables the user,

via UnRisk-Q functionality, to bring all data which is stored in the UnRisk FACTORY database to Mathematica.

via our UnRisk Link for Excel, to bring this data to the Excel frontend.

I will describe 1., the UnRisk-Q Web Service functionality, in a later blog – today I am concentrating on 2., the import to Excel:

Now we have reached our starting point: we have set up a workflow (UnRisk FACTORY …. UnRisk Web Service … UnRisk-Q … UnRisk PRICING ENGINE) on top of which we can implement a lot of new functions, like, e.g., the following (and this, finally, is my example ;-) ):

Take the expected cashflows of all underlying instruments of a Portfolio and display the aggregation of these cashflows in a list of given time intervals.

Here are two screenshots:

Screenshot 1: Result of the Portfolio valuation (UnRisk FACTORY)

Screenshot 2: Result of the Cashflow Aggregation in Excel

At the end I summarize the steps needed to get these results to Excel:

Set up the Portfolio in the UnRisk FACTORY (can be easily set up by the FACTORY user)

Set up an automatically executed task (e.g. every day) to perform Stress Tests on this Portfolio (can be easily set up by the FACTORY user)

Use the UnRisk Web Service (installed in the UnRisk FACTORY environment) functionality to extract the valuation results from the UnRisk FACTORY (this functionality comes with UnRisk-Q)

Write some simple Mathematica code to aggregate the Expected Cashflows

Use the UnRisk Link for Excel (comes with the UnRisk PRICING ENGINE) to bring these results to Excel

The needed UnRisk software is delivered to our UnRisk FACTORY customers in one bundle (the UnRisk FACTORY database is the basis of the UnRisk Web Service).

I am very happy that we found a way, the UnRisk Web Service, to enable our customers to combine the UnRisk products so that they get the optimal benefit of the different "worlds".

MIT Professor David Autor has written this brand-new paper about labor market polarization, discussing the paradox of Michael Polanyi: "We can know more than we can tell."

No, I will not contribute to the "end of labor" discussion, because I do not have the knowledge or informative data … and on the other hand, I think the problem is too complex for predictive modeling. I rather recommend to think: provided it happens, how can we get used to it?

But the paradox inspired a thought:

As a car driver, I cannot be copied by a machine?

I am a great car driver, but I have no theory about great car driving. I cannot tell you why I'm able to drive smoothly through curves without fidgeting ... It's implicit knowledge that cannot be replaced by a computer program … but hold on, why do car drivers cause accidents that in the worst cases end lives?

How do I react to a moose suddenly trampling out of the bushes? As usual: make way, and crash into another car …?

But, self-driving cars will save lives

Why? It's less about cars and their controls than about information and communication, connected local intelligence, learning and adaptation.

It's about sensors that provide much more information than a driver could capture, about blazingly fast reactions much faster than those of humans, and imaging technologies that can see much further and deeper and anticipate danger.

And if danger cannot be avoided? Machines do not have a "social brain". Consequently, the self-driving car may decide to clash with the moose in a certain way as the best of all possible solutions.

The polarity of computer use
It's common sense that computers are great in doing routine jobs faster and cheaper. But computers can do things we cannot do properly - solve extremely difficult problems in time.

So it might be wise to flip the human-machine interaction: let computers take over when situations become really difficult and unusual behavior is needed. Build and use computerized systems that can overcome Polanyi's paradox.

IMO, (financial) risk management is a field where this flip should happen. Currently, quite a few market participants are spending tons of money to install systems that guide them through situations where the conditions are well known and dangers are cleared out. But what about the situations where dangers are greater and less known? Situations that keep risk managers awake at night?

Putting computers to good use must include the risky horror?!

"There are three types of mathematicians, those you can count and those who can't". A bad joke. But when I studied algebra, I sometimes felt a "dubious, shady glance" was needed to see the "impossible structure".

Why other cognitive strengths can come with reading difficulties is excellently explained in this Scientific American article.

Maybe it's a kind of frogs-and-birds trade-off. People who have difficulties digging into the details think differently and understand the big picture better.

Last week, I naively applied Dupire's formula for deriving a local volatility surface from a call price surface (in The ingredients of Dupire's formula) to noisy data. The difficulties arose from the second derivative of the call prices with respect to the strike price in the denominator. We obtained, again by naive differentiation, the following second derivative
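For reference, Dupire's formula in the simplest setting (zero interest rates and dividends, matching these synthetic examples; with rates, the term $rK\,\partial C/\partial K$ is added to the numerator):

```latex
\sigma_{\mathrm{loc}}^2(K, T) =
\frac{\partial C / \partial T}{\tfrac{1}{2} K^2 \, \partial^2 C / \partial K^2}
```

The second strike derivative in the denominator is exactly the term that blows up under data noise.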

and, consequently applying Dupire's formula, the following local vol surface

Could we, similar to the process in Implied Black volatility continued, obtain better results by pre-smoothing?
To try it out, we just replace every call value on the surface by the average over the 5-point stencil
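As an illustration, here is a minimal sketch of such a stencil smoothing (our own illustration, not the production code; we assume the 5-point stencil means the value itself plus its four nearest neighbours in the strike and maturity directions, with boundary values left untouched):

```cpp
#include <cstddef>
#include <vector>

// Replace each interior call value C[i][j] by the average over the
// 5-point stencil: the value itself and its four nearest neighbours.
// Boundary values are kept as-is.
std::vector<std::vector<double>> smooth5(const std::vector<std::vector<double>>& C)
{
    std::vector<std::vector<double>> S = C;
    for (std::size_t i = 1; i + 1 < C.size(); ++i)
        for (std::size_t j = 1; j + 1 < C[i].size(); ++j)
            S[i][j] = (C[i][j] + C[i - 1][j] + C[i + 1][j]
                       + C[i][j - 1] + C[i][j + 1]) / 5.0;
    return S;
}
```

A single spike in the data is thus spread over its neighbourhood and damped by a factor of five, which is why the differentiated surfaces below look much tamer.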

This yields

for the second derivative and

for the local volatility. Note that we used the same vertical scale for the plots arising from pre-smoothing as for the original ones.

Hence, pre-smoothing delivers better results. But we can still do much better with regularisation techniques.
In the forthcoming week, I will attend an inverse problems conference in Bristol, UK. Therefore, the next local vol blog will appear in two weeks.

Taking on the ALS Ice Bucket Challenge - I recently read about it in Six Pixels of Separation that I read frequently. One can discuss it, find it honest or dishonest, self-promoting or not ... but it has raised significant money for a great purpose. From a promotion perspective, I think it does a great job.

At the risk of sounding like a broken record, I say it again … collaborate!

First it's just me and my idea. But it only materializes if it is clear what it is for and when it is implemented, and if there are a few who like it … But I (my brain) find (weak) excuses before ideas are executed.

Killing my ideas softly

I think it's not mature or detailed enough

I read too much about similar ideas

It will be too hard to get it to execution

I have other hard work to do

I will be distracted

I am afraid of my solitary responsibility

A classical dilemma: first it is mine, but when I disclose it, it will become other's too.

How to fix

Disclosing it is the way to fix the problem of killing the idea before execution. It's so simple: I recognize that it is all about the idea and not about me. Early collaboration helps to complete it, embrace other ideas, implement it, share work, transform distraction into interaction, build joint responsibility and create awareness.

Actual optionality

And yes, it is about optimizing risk (although, if the idea is new, it is most probably not quantifiable) - do what makes work exciting, but not without thinking about the purpose, adequacy and possible returns.

And I try to build in real options - options relating to the project size, life and timing of operations (apply evolutionary prototyping, early experimentation, use temporary technology resources, …).

They are the underlying principle of agile practices, help to create value through flexibility and maximize the value of an innovative project.

What kind of ideas? Project for product cost - we launch a project, like xVA, and invite a handful of featured clients to join it, with the benefit of early adoption, customization, skills leverage and sharing, … They do not pay a portion of the development cost, but a fee limited by the future license price. For the participants it is a real option.
BTW, the UnRisk FACTORY has been "ideated" that way.

UnRisk Academy - it has been established to extend product-use training with courses giving a full explanation of quantitative theories, mathematical approaches and critical implementation. But before we launched it, we conducted free workshops with featured clients that did exactly this. So finding the right curriculum was easy. The Academy is now an independent organization, not bound to the UnRisk product business.

11 Years of UnRisk summit - we usually do not celebrate (anniversaries), but in Q4 2012 we felt things had tied together so amazingly that we wanted to bring people together to talk about it. And we asked clients to disclose their most innovative applications and present them to those who care. It became a junction, changing the way we position our products and technologies as enablers of advanced quant systems.
…..

Innovation is not so much driven by competition but cooperation.

Pointedly speaking, engineers manipulate models contextually and mathematicians theorize how …

Yes, problem-solving principles are different in different fields. The steps (workflow) of computational problem solving are:

Transform the problem description into a model

Transform the model into a form that makes it well-suited for calculation

Calculate

Interpret the results

In general, step one is the most difficult one. It is the most important work of quants - the work that adds value.

Pure or applied (calculational) mathematics?

In mathematical problem solving, in step one a linguistic description of a problem is transformed into expressions in the language of mathematics (mathematical notation). And now the difference between mathematical and other problem solving comes into play: mathematicians do not want to test their solvers (the operational semantics) in finitely many cases, but prove that they are correct in infinitely many cases. And those researching completeness and consistency do the theoretical work that makes mathematics an unprecedentedly powerful problem-solving universe - they work with mathematical objects. It shows its power in step two - especially when closed-form solutions can be achieved. To achieve closed-form solutions you apply a kind of "quantifier elimination" task - for all n, is there a k so that k=sum(i, i=1, …, n)? Yes, k=n*(n+1)/2 - a simple proof by induction.
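The induction proof mentioned is short enough to spell out:

```latex
\sum_{i=1}^{n} i = \frac{n(n+1)}{2}
```

Base case $n=1$: the sum is $1 = 1 \cdot 2 / 2$. Induction step: assuming the formula holds for $n$,

```latex
\sum_{i=1}^{n+1} i = \frac{n(n+1)}{2} + (n+1) = \frac{(n+1)(n+2)}{2},
```

which is the formula with $n$ replaced by $n+1$.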

But this has its traps. The proof-for-infinitely-many-cases technique may make the problem domain smaller - often too small.

Symbolic computation is the computer technique that supports step two of the mathematical workflow. But it goes far beyond that: it understands symbols not only in mathematical notation … symbols are graphs, geometric elements, … even programs. All can be represented in a unified expression language.

The calculation step can become difficult in itself if the model is complex (covers a wide domain) and cannot be simplified much. This is where numerical schemes are indispensable.

Asymptotic mathematics

It suggests that, striving for exact solutions, one could decompose a domain so that exact solutions are possible in the sub-domains when they are impossible in the domain as a whole. The total solution comes from an asymptotic recomposition of the sub-results.

What if a domain is influenced by dogmatic thinking?

There are complaints that in economics and finance rational debates of ideas are often replaced by dogma - a set of principles that are defined by an "authority". Changing this is a matter of education … but technology can also help.

Critical thinking requires not only research, but doubt and questioning. From a system development point of view it requires a bottom up approach, evolutionary prototyping, constructive learning …

A model is a model is a model

But what if models do not fit real-world conditions? Engineers are really good at manipulating models contextually - they usually rely on computer-aided engineering technologies that have a wide implementation of (mechanical, electrical, micro-electronical …) engineering languages.

It's easy to simulate for insight in such languages. And their algorithmic implementation covers a wide range of engineering objects, function complexes and systems.

This is why we have unleashed UnRisk Financial Language and its wide algorithmic implementation - the UnRisk Engines - to enable quants to manipulate financial objects and models contextually.

P.S. I know that theorem proving is more than a "test avoider" - a constructive proof is an algorithm that solves the problem described in the theorem. A proof might even tell a story of the transformation.

UnRisk-Q uses the expressive power of the Wolfram Language to model the domain specific objects that we deal with in quantitative finance. When it comes to doing computations with these objects, most of the financial algorithms are implemented in C++ because performance is crucial.

The Wolfram Language provides two interfacing technologies for calling C++ programs: MathLink and LibraryLink. As an example, we'll demonstrate how to call the following simple C++ function, which adds a scalar to a vector, from a Wolfram Language program:
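The function itself is not reproduced here, so this is a plausible reconstruction (the signature is inferred from the wrapper call add_scalar_to_vector(v, vLen, s) shown below):

```cpp
// Reconstruction of the example function: add the scalar s to each of
// the len entries of the vector v, in place.
void add_scalar_to_vector(double* v, long long len, double s)
{
    for (long long i = 0; i < len; ++i)
        v[i] += s;
}
```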

MathLink is both a communication protocol and a programming library for C++. With the advent of Mathematica 10, MathLink has been rebranded as the Wolfram Symbolic Transfer Protocol (WSTP), but we'll stick with the name MathLink for now.

The mechanism that connects a C++ program with the Mathematica kernel is shown below:

At runtime the C++ program is connected to the kernel through a bidirectional stream which allows for both reading and writing Wolfram Language expressions. The stream may be implemented by a shared memory or a TCP/IP connection.

The C++ function to be called from the Wolfram Language needs to be compiled into a standalone executable program which links to the MathLink library. One has to provide a MathLink template file which describes the pattern to be defined for calling the function from the Wolfram Language side, and the data type mapping from Wolfram Language expressions to native C++ function argument types like scalars or arrays:

AddScalarToVectorML is the wrapper function which calls our example function add_scalar_to_vector and writes the result back on the MathLink stream represented by the global variable stdlink.
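A sketch of what such a template entry and wrapper could look like (the pattern name and argument types here are our assumptions, not the file shipped with the post; the mprep tool turns the template into C++ glue code):

```
:Begin:
:Function:      AddScalarToVectorML
:Pattern:       AddScalarToVector[vector_List, scalar_Real]
:Arguments:     {vector, scalar}
:ArgumentTypes: {Real64List, Real64}
:ReturnType:    Manual
:End:

/* Wrapper: receives the unpacked arguments, calls the example function
   and writes the modified vector back on the MathLink stream. */
void AddScalarToVectorML(double* vector, int len, double scalar)
{
    add_scalar_to_vector(vector, len, scalar);
    MLPutReal64List(stdlink, vector, len);
}
```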

Once the MathLink template file is compiled to an executable program with the CreateExecutable function, the function can be installed into and called from the Mathematica kernel. This approach has several strengths:

It supports running the Wolfram Language and the MathLink executable on different machines, perhaps running different operating systems.

It allows you to connect a 64-bit Mathematica kernel to a 32-bit MathLink executable which is necessary for connecting legacy 32-bit libraries.

Because the C++ function is run in a separate process, a crash of the MathLink executable will not affect the Mathematica kernel.

It is easy to debug a MathLink executable from within an IDE.

These strengths of MathLink have served the Wolfram Language platform well since MathLink’s inception.

Strength is irrelevant

The advantages of MathLink do not make up for its greatest disadvantage:

Arguments passed to and from a MathLink function cannot share data with the Mathematica kernel. Data has to be copied over the link, resulting in execution time and memory consumption overhead, especially with large amounts of data.

Enter LibraryLink

Wolfram LibraryLink provides a way to connect external C++ code to the Wolfram Language, enabling high-speed and memory-efficient execution. It does this by allowing dynamic libraries to be directly loaded into the Mathematica kernel, so that functions in the libraries become part of the kernel process and can be immediately called from the Wolfram Language:

The C++ function to be called from the Wolfram language needs to be compiled into a dynamic library. The dynamic library must export a wrapper function whose argument list conforms to the calling conventions required by LibraryLink:

EXTERN_C DLLEXPORT int AddScalarToVectorLL(
    WolframLibraryData libData,
    mint Argc, MArgument* Args,
    MArgument Res)
{
    // the first argument holds the vector as a rank-1 real MTensor
    MTensor vectorT = MArgument_getMTensor(Args[0]);
    if (libData->MTensor_getType(vectorT) != MType_Real) return LIBRARY_TYPE_ERROR;
    if (libData->MTensor_getRank(vectorT) != 1) return LIBRARY_RANK_ERROR;
    mint vLen = libData->MTensor_getFlattenedLength(vectorT);
    double* v = libData->MTensor_getRealData(vectorT);
    // the second argument is the scalar to add
    mreal s = MArgument_getReal(Args[1]);
    add_scalar_to_vector(v, vLen, s);
    // hand the modified tensor back as the result
    MArgument_setMTensor(Res, vectorT);
    return LIBRARY_NO_ERROR;
}

Once the library source file is compiled to a dynamic library with the CreateLibrary function, the wrapper function can be loaded into the kernel with LibraryFunctionLoad.

Unlike with MathLink, the required data type mapping from Wolfram Language expressions to C++ function argument types - {{Real, 1}, Real} for the arguments and {Real, 1} for the result - is specified upon loading the function. The result of LibraryFunctionLoad is a pure function which can then be called like any other Wolfram Language function.

LibraryLink’s focus on memory and runtime efficiency comes at a price:

A crash in a LibraryLink loaded function takes down the Mathematica kernel with it.

The platform and architecture of the dynamic library must match the kernel's exactly.

Debugging a LibraryLink function is more difficult, because the IDE must be attached to a running Mathematica kernel process.

So, with MathLink and LibraryLink a developer has the freedom to choose between safety and speed.

Freedom is irrelevant

Ideally, you want both. During development you want the robustness and the easy debuggability of MathLink. For deployment, you want your native functions to enjoy the time and memory efficiency provided by LibraryLink.

This requirement is met with some resistance from developers, who would have to go through the tedious process of writing not one, but two different sets of wrapper functions for each low-level C++ function that should be callable from the Wolfram Language.

Resistance is futile

It turns out there is a way to have the best of both worlds with little effort.

In hotels, the "Please do not disturb" door hanger sign tells everybody to leave you alone in your room (hotels that want to differentiate themselves say "tied up", "the star is in", … or simply "I am busy!").

When I work hard leave me alone?

Yesterday was a bank holiday here. It was a rainy day and I decided to do some I-want-to-be-left-alone work. Only me and my computer. Of course, somebody should do the admin for it, somebody should sell it, and somebody should understand it instantly and pay for it - but not now; I need full concentration to transform my best ideas into something really valuable. Something I want to be responsible for.

I need the leverage from working with other people!

But after some time I recognize (again) that it does not work. I need the leverage that comes from interacting with other people. People about whom I want to say: they do great work and they have made such great contributions that my output is their output too - although it remains my responsibility.

When I work alone long enough I begin to distract myself

At UnRisk we have created a culture that not only motivates for interactions but also responds to agitators who are pointed and maybe even noisy when presenting or criticizing an idea ….

It's not so easy to find the optimum.

Sure, we have our job titles and descriptions. But we also hatch into different roles and look into things through different lenses. Sometimes even a ruckus is required in a complicated situation - when we think we need to leapfrog … or even bend reality a little. And we know, in rare cases it's not going to happen.

Here is what has been sticking with me in summer - when I had more time ...

1. The Montreal Tapes, Charlie Haden - a fine recording that effectively underscores why acoustic jazz is usually more compelling in a live, rather than studio, setting.

2. Box Set, Bill Laswell and Material - this three CD set is of material found on three different albums: One Down, Secret Life, and Into The Outlands.

3. Orphans: Brawlers, Bawlers & Bastards, Tom Waits - a spectacular musical journey through the American song tradition. The diverse 3-disc collection with 56 songs captures the full scope of Tom Waits' power as vocalist, literary lyricist …

4. The Yahoos Trilogy, Elliott Sharp - the American multi-instrumentalist, composer and performer in collaboration with other music experimentalists.

We start with a (synthetic) noisy Black Scholes volatility surface

It starts with a value of 35% at the front left edge (strike 50 percent of spot, maturity 1 month) and drops to a value of 25% at the upper right (strike 150 percent of spot, maturity 5 years). Additional volatility noise (uniformly distributed between 0 and 0.1%) is added. This leads to the following Black Scholes call values

We now apply numerical differentiation (50 grid points in each direction) to the different components of Dupire's formula and obtain:

Noisy dC/dK

Noisy dC/dT

But the second derivative explodes

Noisy d2C/dK2

This is obviously the source of severe problems with naively applying Dupire's formula.
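A tiny numerical sketch (our own illustration, not the code behind the plots) of why the second derivative explodes: the central second difference divides by h², so a perturbation of size ε in a single call value moves the estimate by up to 2ε/h².

```cpp
// Central second difference: d2C/dK2 ~ (C(K-h) - 2 C(K) + C(K+h)) / h^2.
// With grid spacing h = 0.01, a perturbation of 1e-3 in the middle value
// shifts the result by 2 * 1e-3 / 1e-4 = 20 -- amplified by a factor 2/h^2.
double second_difference(double fm, double f0, double fp, double h)
{
    return (fm - 2.0 * f0 + fp) / (h * h);
}
```

For a locally flat function (true second derivative zero), the clean estimate is 0, while a 1e-3 perturbation on a grid with h = 0.01 already yields a value near -20 - which is exactly the kind of explosion visible in the d2C/dK2 plot.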

A growing number of quants want to change the underlying systems that keep risk managers awake at night.

This post has been inspired by this FastCompany article, and I recall the insights we get from quants who want to change their systems radically. We speak to them in feedback workshops, UnRisk Academy courses, workouts, co-creation projects, ...

1. Analyze the system you want to change - but not too much
If you dive into the current system too much and understand every detail, you tend to optimize it instead of changing it. You don't need to be a passive recipient of history.

2. Prototype and experiment - expect to be wrong
You gain insight with trial and error - tinkering is not a bad idea if you want to make something new, but radical experimentation and tinkering are not necessarily twins.

3. Organize a feedback cycle - and learn
Changing is hard work, but if you do not present interim results you are unlikely to be successful. Presenting also helps you to understand what you did.

4. Don't do it alone - cooperate. cooperate. cooperate.
If you want to change, you need connections and alliances. This goes far beyond the agile development and reflection cycles.

5. Make it resilient - to recover quickly from difficulties
You want to do the difficult work and master all complexity. This may let you forget that there are traps, side effects, external influences, unintended consequences …

On the development side there are a few approaches that help:

develop in a bottom-up fashion and use a symbolic declarative language - that drives evolutionary prototyping and explorative, constructive learning

for resilience, organize financial objects, models and methods orthogonally and build engines that implement your languages

with computable documents you can make convincing presentations

provide a framework for financial data and a universal deployment system.

It is one of our strongest business principles to help quants leverage their change processes.

Remark: If your idea is so agreeable that everybody is going to support it immediately, it's not going to change anything ….

Inspired by Stefan's post today, I recall the following paper. It describes studies that have found evidence that wind speed has a strong influence on mood and consequently an effect on stock returns - all extracted from data on wind speed and daily stock market returns across 18 European countries from 1994 to 2004. The authors claim: our findings contradict the rational asset-pricing hypothesis ...

Was this a joke? Otherwise weather forecasters would have become immensely rich - and would have published fake forecasts to avoid sharing.

Of course we all know that "correlation does not imply causation". Still, there's this irking feeling in the back of your head: if two things strongly correlate, they must have something to do with each other, right?

Tyler Vigen has collected some nice data on "spurious correlations" over here. (That's "spurious" in the everyday sense of the word, not in the mathematical sense.)

You know the old saying "an apple a day keeps the doctor away"? Well, if you are lucky enough to own a garden, I suggest you also take note of apple prices while you do your grocery shopping. According to this data, there is a positive correlation between the price of red delicious apples and the number of people getting killed by misusing a lawnmower. If those apples burn a hole in your pocket, you'd better read the manual of your new robomow twice.

While your garden is still a perfect place for recreation and rest, the lawnmower is not the only spot where the reaper lurks there. The data suggests being particularly wary of swimming pools next year: there is a strong positive correlation between the number of people who drowned by falling into a swimming pool and the number of movies Nicolas Cage appeared in that year. According to IMDB, Mr. Cage is currently working on three movies to be released in 2015 - so better watch your step in your garden next summer.

The world outside your garden is a dangerous place, too. In 2014, doom may strike from unexpected directions: Miss America. How could that pretty, innocent-looking face...? Well - while she may look harmless, this year's Miss America is among the Methusalems of Miss Americas (she's 24) - and there's a strong correlation between the age of Miss America and murders by steam, hot vapours and hot objects - the data doesn't lie.

If you enjoy going skiing in winter, you might want to think twice before feeding that industry: the correlation coefficient between the revenue generated by US skiing facilities and the number of people who died becoming entangled in their bedsheets is a whopping 0.9697. Similarly high - at 0.9524 - is the correlation between the number of people drowned after falling out of fishing boats and the marriage rate in Kentucky. I really hope there is no causal relation in the latter.
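A coefficient like the 0.9697 above is just the sample Pearson correlation. A minimal sketch of how it is computed - the two series below are invented for illustration and are not Vigen's actual data:

```python
# Sample Pearson correlation coefficient; the input series are made up.
from math import sqrt

def pearson(xs, ys):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Two unrelated-looking, invented series that nevertheless correlate strongly:
ski_revenue = [1.9, 1.9, 2.0, 2.2, 2.4, 2.6]      # made-up numbers
bedsheet_deaths = [320, 330, 340, 370, 400, 440]  # made-up numbers
print(round(pearson(ski_revenue, bedsheet_deaths), 4))
```

Any two roughly trending series will produce a coefficient like this, which is precisely why these "spurious correlations" are so easy to find.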

With all that said, it remains for me to wish you great and restful holidays (and take care ;).

P.S.: You might not want to take me too seriously, though - incidentally, the number of engineering doctorates awarded per year correlates with the number of people who tripped over their own feet and died (I can see a connection there ;).

The conditional probability that expresses the random evolution of a security is called the pricing kernel. It carries all the information required to price any path-independent option. Consider the random evolution of a stock price with stochastic volatility, a maturity T and a payoff function g. Let p(x, y, T − t; x′, y′) be the (risk-neutral) conditional probability that, given security price x and volatility y at time t, the security will have price x′ and volatility y′ at time T. The famous Feynman-Kac formula provides the price of the derivative:
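With τ = T − t and risk-free rate r, and in the notation just introduced, the formula can be written as:

```latex
C(\tau; x, y) \;=\; e^{-r\tau} \int_{0}^{\infty}\!\!\int_{0}^{\infty}
p(x, y, \tau; x', y')\, g(x', y')\;\mathrm{d}x'\,\mathrm{d}y' ,
\qquad \tau = T - t
```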

Here the expression p(x,y,τ;x′,y′) is called the pricing kernel. What is the idea of the pricing kernel?
It is the kernel of the transformation that evolves the payoff function g(x′, y′) backwards in time to its current value at time t, and yields the price of the option C(τ; x, y). Next week we will discuss an eigenfunction solution of the pricing kernel.

Interest rate swaps and credit default swaps are going to be the first instruments to fall under the clearing and margin obligations.

This article in RBS Insight shows that "who needs to clear what by whom", and other issues, are not necessarily clear.

However,

companies using OTC derivatives face a testing time as new rules designed to improve transparency and reduce risk come into focus this year

IMO, it is not only about understanding the impact of … on cash flows, but about how to manage the data and valuation management requirements. It's not just about buying a big system ….?

Three books I enjoyed reading this summer half-year … (in German).

What Horses Are Those That Make Shadow On The Sea, Antonio Lobo Antunes - this is the story of a landowner's family living in the Alentejo, Portugal. The mother is going to die; her husband has lost the entire fortune they gained from breeding bulls for bullfighting - through betting. The five children have also failed at life … This family suffers a great deal of misfortune. A story that does not delight a reader.

Why read Lobo Antunes, a Portuguese novelist who writes a book almost every year - in a very dense style, although some sentences seem endless and let everything happen all at once …?

I discovered Antunes reading The Inquisitors' Manual, and have really enjoyed reading about 10 of his books. The Horses.. is not available in English yet.

Colorless Tsukuru Tazaki, Haruki Murakami - the story of a young man haunted by a great loss. In school, Tsukuru's life centered on a group of best friends. But when he returned from college, he was rejected by the others without any explanation. Later, a new girlfriend convinces him to seek out his former friends and uncover the truth about the rejection. Where does the color come in? All his friends have colors in their names .. not him (his name means "to make things").

Why do I read all of Haruki Murakami, an internationally acclaimed Japanese writer? - he transports me into other worlds (by fiction or science fiction), and it's as if he just sat next to me, telling me stories..

I discovered Murakami, reading The Wind-up Bird Chronicle. I also enjoyed A Wild Sheep Chase, Norwegian Wood, Kafka on the Shore, 1Q84 ....

The Goldfinch, Donna Tartt - "Theo" recounts the story of his life thus far. At 13 years old, he loses his mother during a visit to the Metropolitan Museum of Art (NY) - she is killed by a terrorist bomb. Theo gets hold of The Goldfinch, a Dutch master painting. The painting is the novel's heart.
First, Theo is taken in by the rich family of a school friend in NY, then by his father to a sinister exurban development near Las Vegas …
The painting continuously draws Theo into the underworld of art. As an adult, he moves more and more in dangerous circles …

Why did I read The Goldfinch, by Donna Tartt, the American writer who takes years to write a book? - because I really enjoyed reading The Secret History. The Goldfinch also presents excellent writing, but I divide the plot and the corresponding character design into three sections: the first (NY) is a gripping read, the second (Las Vegas) also, but falls into some "white trash" stereotypes, reinforced in the third (Amsterdam) - more a wild crime story (with "Russian" mafia …).

I am only a reader, but I dare to analyze: Antunes uses complex language to write about simple human-made misfortune and oppression. Murakami takes a simple narrative style to describe complex behavior. Tartt seems to have virtually created a film first and then written about it (in all details) - no wonder it took ten years to write The Goldfinch.

I find Antunes' texts algebraic (characters "operate" on each other). IMO, the Nobel prize for him is overdue.

In my recent posts on identifying an implied flat volatility, it turned out that numerical differentiation leads to stability problems when noise is comparably high and vega is low. But, to be quite honest, it was a toy example, because typically it should not be too difficult to identify a single volatility number (maybe it is just a matter of reading it from your Bloomberg screen).

In the case of local volatility, the situation gets more interesting. You have implied volatility information depending on the strike K and on the expiry T, and would like to recover a local volatility surface depending on the spot price S and on time t.

Bruno Dupire developed his famous inversion formula in 1994. It reads as
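With r the risk-free rate and q the dividend yield, one common way of writing it is:

```latex
\sigma_{\mathrm{loc}}^{2}(K,T)
  \;=\; \frac{\dfrac{\partial C}{\partial T}
          \,+\, (r-q)\,K\,\dfrac{\partial C}{\partial K} \,+\, q\,C}
         {\tfrac{1}{2}\,K^{2}\,\dfrac{\partial^{2} C}{\partial K^{2}}}
```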

C(K,T) is the call price function obtained from the implied volatilities. The ingredients to obtain the local volatility are:
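Those ingredients are essentially the partial derivatives of C. A toy sketch of estimating them by finite differences - assuming zero rates and dividends, and call prices generated from Black-Scholes with a flat volatility, which the zero-rate form of Dupire's formula should then recover:

```python
# Toy sketch (not production code): zero rates and dividends assumed;
# prices come from Black-Scholes with a flat volatility, so the
# recovered local volatility should be (close to) that flat value.
from math import erf, log, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, sigma):
    """Black-Scholes call price, zero rates and dividends."""
    d1 = (log(S / K) + 0.5 * sigma ** 2 * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * norm_cdf(d2)

def dupire_local_vol(S, K, T, sigma, dK=0.01, dT=1e-4):
    """Zero-rate Dupire: sigma_loc^2 = (dC/dT) / (0.5 K^2 d2C/dK2)."""
    dCdT = (bs_call(S, K, T + dT, sigma) - bs_call(S, K, T - dT, sigma)) / (2 * dT)
    d2CdK2 = (bs_call(S, K + dK, T, sigma) - 2.0 * bs_call(S, K, T, sigma)
              + bs_call(S, K - dK, T, sigma)) / dK ** 2
    return sqrt(dCdT / (0.5 * K ** 2 * d2CdK2))

print(dupire_local_vol(S=100.0, K=105.0, T=1.0, sigma=0.2))  # close to 0.2
```

As soon as the implied volatilities carry noise, the second derivative with respect to K amplifies it - which is exactly the stability problem from the flat-volatility posts, now in two dimensions.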

We often work hard to create value by being efficient at one point …. and forget that this can have a downside.

This is the story of a plant known as Amaranth. Several of its species are commonly referred to as superweeds, and farmers growing mass food like corn, soybean, … spend a lot of money to keep them from growing. This is not so easy, because the plant grows fast, has a high rate of seed production, and its multitude of varieties cross with one another easily. Some are even herbicide resistant …

But ironically, this plant is also highly nutritious, and it is increasingly seen as an important ingredient of "gourmet food".

Unintended consequences when driving local efficiency blindly

Provocatively speaking, the agricultural industry invests a lot in killing off a valuable food source in order to grow "raw material" for fast food more efficiently - with all the known consequences, from unhealthy overweight to bee deaths.

I do not want to stress the moral (culinary, medical, …) aspects of this; I find it is an example of predictably irrational behavior (in Dan Ariely's sense).

Are OTC derivatives a weed?

I am afraid there are not so few who think so - and who demand that regulators keep them off the markets.

As experts, you know that this has unintended consequences. In short, derivatives are there to increase the value of firms - by helping to optimize risk, by putting more quantitative information into the system, …

No doubt, some (artificially complex) derivatives are toxic, but most are financially nutritious - if you know their recipe and exactly how they work when you use them in your financial kitchen lab.

This story is inspired by Mark Buchanan's recent post at BloombergView

Recently one of our customers had the following support question (Support Case #12341):

"We want to price a Knock-In / Knock-Out FX Call Option having the following properties:

Property 1: Strike FX Rate = Knock-Out FX Rate = x

Property 2: Knock-In FX Rate = y (y > x)

Property 3: As soon as Knock-Out Level x is reached, the option is knocked out, no matter if it had been knocked in before or not

How can we set up this instrument in UnRisk?"

With UnRisk barrier options, either single or double barrier options of knock-in or knock-out type may be priced, but there is no function for pricing a mixed barrier option. After some thinking (it was a bit like solving a small riddle) we came to the following conclusion:

The described instrument can be split into the following two instruments:

Instrument 1: Long position in a Single Down & Out FX Call Option, with Strike FX Rate = Knock-Out FX Rate = x

Instrument 2: Short position in a Double Knock-Out FX Call Option, with Strike FX Rate = Lower Knock-Out FX Rate = x and Upper Knock-Out FX Rate = y

At the end of our "thinking process" we checked whether this replication behaves in the same way as the original instrument - so we had to compare the following cases:

Case 1: The FX Rate never reaches Knock-In FX Rate y

The value of the given instrument is 0. Instrument 1 and Instrument 2 of the replication have the same pay-off (either 0 or (final FX Rate - x)), thus the value of the replication is also 0.

Case 2: The FX Rate reaches Knock-Out FX Rate x and Knock-In FX Rate y

The value of the given instrument is 0 due to Property 3. Both Instrument 1 and Instrument 2 have a value of 0, thus the value of the replication is also 0.

Case 3: The FX Rate reaches Knock-In FX Rate y but does not reach Knock-Out FX Rate x

The pay-off of the considered instrument is given by (final FX Rate - x). The pay-off of Instrument 1 of the replication is also (final FX Rate - x), while the pay-off of Instrument 2 is 0 (the Upper Knock-Out FX Rate y has been reached), thus the pay-off of the replication is (final FX Rate - x).

So in all possible cases the replication delivers the same result as the original instrument - Support Case #12341 closed.
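The case analysis can also be cross-checked path by path. A small simulation sketch - not UnRisk code, just plain discretely monitored geometric Brownian motion with made-up parameters, comparing payoffs only (no discounting):

```python
# Sketch (not UnRisk code): check the replication payoff-by-payoff on
# simulated paths. The GBM dynamics and all parameters below are made up
# for illustration; barriers are monitored at the simulation dates only.
import random
from math import exp, sqrt

def gbm_path(s0, r, sigma, T, steps, rng):
    """One discretely monitored geometric Brownian motion path."""
    dt = T / steps
    path = [s0]
    for _ in range(steps):
        z = rng.gauss(0.0, 1.0)
        path.append(path[-1] * exp((r - 0.5 * sigma ** 2) * dt
                                   + sigma * sqrt(dt) * z))
    return path

def payoff_original(path, x, y):
    """Knock-in at y, knock-out at x; knock-out dominates (Property 3)."""
    if min(path) <= x:            # knocked out, no matter what happened before
        return 0.0
    if max(path) >= y:            # knocked in and never knocked out
        return max(path[-1] - x, 0.0)
    return 0.0                    # never knocked in

def payoff_replication(path, x, y):
    """Long single down & out call (barrier x) minus double-KO call (x, y)."""
    down_out = max(path[-1] - x, 0.0) if min(path) > x else 0.0
    double_ko = (max(path[-1] - x, 0.0)
                 if min(path) > x and max(path) < y else 0.0)
    return down_out - double_ko

rng = random.Random(42)
for _ in range(10_000):
    p = gbm_path(s0=1.10, r=0.01, sigma=0.15, T=1.0, steps=50, rng=rng)
    assert payoff_original(p, x=1.00, y=1.25) == payoff_replication(p, x=1.00, y=1.25)
print("replication matches the original payoff on all simulated paths")
```

The three branches of payoff_original correspond exactly to Cases 2, 3 and 1 of the support answer.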

This week we will extend our quantum finance framework to stochastic volatility. The price of an option on an equity with stochastic volatility is governed by the Merton-Garman Hamiltonian

Since S and V are positive-valued random variables, we can again define the variables x = ln S and y = ln V.

Using these definitions, we obtain

and again this equation can be written in the form of a Schrödinger equation

with the Merton-Garman Hamiltonian

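One common form of the Hamiltonian - following Baaquie's Quantum Finance, and assuming variance dynamics dV = (λ + μV) dt + ξ V^α dW with S = e^x and V = e^y - is:

```latex
H_{MG} \;=\; -\frac{e^{y}}{2}\frac{\partial^{2}}{\partial x^{2}}
  \;-\;\Bigl(r-\frac{e^{y}}{2}\Bigr)\frac{\partial}{\partial x}
  \;-\;\Bigl(\lambda e^{-y}+\mu-\frac{\xi^{2}}{2}e^{2y(\alpha-1)}\Bigr)\frac{\partial}{\partial y}
  \;-\;\rho\,\xi\,e^{y(\alpha-1/2)}\frac{\partial^{2}}{\partial x\,\partial y}
  \;-\;\frac{\xi^{2}}{2}e^{2y(\alpha-1)}\frac{\partial^{2}}{\partial y^{2}}
  \;+\; r
```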

We now have a system with two degrees of freedom. Except for the cases α = 1/2 and α = 1, the system can only be solved numerically.