The idle ramblings of a Jack of some trades, Master of none

Just got back from three days in sunny Aix-en-Provence, where I attended the Forecasting Financial Markets Conference 2007. There were some interesting papers by various academics, some singularly pointless drivel offered by wannabe PhDs, and peculiar presentations by the odd(ly) nervous market practitioner (e.g., myself).

The conference was held between May 30 and June 1 at La Baume Les Aix, a privately owned establishment run as a spiritual and cultural centre by the Jesuits. The expansive buildings include accommodation, a dining room, various conference halls, sheep and goats, extensive lands, and co-ed toilets.

The conference has been running annually for fourteen years now, and has usually had an even mix of academics and market participants. This year, though, there were almost no attendees from the financial sector. I found, therefore, that many of the presentations were at too high a level for me to follow. When people toss around terms such as ergodic, I tend to gape a bit, my mathematical upbringing notwithstanding. Still, it was a good experience, especially as I met several interesting theoreticians, made a good contact or two, and came away with useful ideas for work.

There was, sadly, a clear lack of organisation. Several unethical people had submitted articles but didn't show up to present them (or even inform the organisers), much to the chagrin of interested attendees. The projectors garbled graphs and distorted presentations owing to mismatches with the new Vista laptops that were being used. On the first day, lunch was delayed, and on the second, the conference dinner was held at what turned out to be a bit of a tourist trap.

Below is a summary of the presentations I attended.
  1. Forecasting Under Asymmetric Loss Functions, V. Arekelian and E. Tzavalis. The idea is that symmetric loss functions, such as those usually used in portfolio theory (e.g. quadratic utility, meaning an investor is as averse to a given loss as to the same amount in profit), should be replaced by utilities that are more appropriate to the investor. Didn't get a lot out of this, except for a reference to something we might find useful, namely VaR under asymmetric loss functions.
  2. Forecasting Stock Price Volatility, A. Rahou et al. This was a classic case of pointless drivel. The presenter showed that he could forecast the direction and level of the next day's opening stock price by feeding a neural network thirteen inputs: the opening prices on the five previous days, moving averages at 1 week, 2 weeks, 1 month and 2 months, the first difference of prices, the RSI trading signal, the stochastic trading signal, and 'turning points'. His testing, if I understood it correctly, was a strange mixture of in- and out-of-sample, conducted in somewhat ad-hoc fashion; the paper is representative of the sort of overfitted and badly reasoned trading systems proposed by sell-side quants; the presence of the neural network ensures that nobody can truly understand why and how the system works.
  3. Volatility Forecasting and Value-at-Risk Estimation in the Emerging Markets: Case of a Single-Asset Portfolio in South Africa, L. Bonga-Bonga. Basic thesis is that emerging markets suffer not only from internal shocks, but also from external influences, which therefore mandate the use of asymmetric GARCH models to fit and forecast portfolio volatility. I'm not sure that there's anything new in this; I remember reading similar results during my Cass days.
  4. Estimating Risk-Neutral Density Functions from EUR/HUF Currency Options and Forecasting Ability, C. Csávás. The presenter, an economist with the Hungarian Central Bank, was the first of the overhead transmitters that I listened to. As far as I can tell, he used the implied volatility surface to back out the risk-neutral density functions, from which he attempted to forecast one-step-ahead volatility. He proceeded to show that there is not much forecastability. Who would have thunk it?
  5. Utility-based Pricing of Weather Derivatives, H. Hamisultane. This was one enthusiastic woman with a cheerful demeanour and mannerisms. She explained at rapid clip what weather derivatives were, described how to estimate a constant risk-aversion coefficient using liquid option prices, and then used Lucas's consumption-based asset-pricing model for the pricing. She admitted happily that it probably doesn't work too well in practice, and when someone asked how to hedge these products, she candidly said that she had no idea. There was an interesting post-presentation discussion, which I contributed to. A delegate, Marianna Brunetti, suggested that since the underlying (the weather) for the derivative cannot be priced, the Wang transform might be used to obtain a risk-neutral measure for it; to which I rejoined that a paper by Pelsser showed that the Wang transform was not correctly applicable in financial risk modelling. This was pretty much the beginning and the end of meaningful contribution by yours truly to any discussion during the conference.
  6. Investigating the Corporate Spread: a Non-Parametric Approach, C. Peroni. She considers a multi-factor non-parametric (indeed, a nonlinear) model for the term-structure of corporate debt, in which she incorporates inflation as a driver. An attendee pointed out that expected inflation can be hedged; it's the unexpected inflation that should really drive the model. I pointed out that we use the term-spread of the yield curve as a proxy for inflation, which she also incorporated in the model, and asked why then have the inflation term. She said that that could probably explain why she found the term-spread insignificant at various points, perhaps implying that the inflation term was picking up the relevant effect.
  7. Back-testing Value-at-Risk Based on Tail Losses, W. K. Wong. The main contention in this paper is that the use of VaR is erroneous as it does not incorporate information from the loss tail, and that the expected shortfall (ES) is a better measure. Wong used a saddle-point approximation to the tail losses; with Monte-Carlo simulation, he finds that the technique is accurate even for small samples.
  8. The Role of Private Information in Return Volatility and Bid-Ask Spreads in the FX Market, F. J. McGroarty. Only caught the tail of the presentation. McGroarty used the EBS as representative of FX trading volumes and the bid-ask spread, but as Pierre pointed out, the average size of an EBS trade is about 5 million dollars, which hasn't changed over time, so the finding that the bid-ask spread is not volatile is unsurprising. There are OTC trades of much larger sizes, which incur larger transaction costs, and these were not captured by McGroarty.
  9. Bootstrapping the Volatility of Real Exchange Rates, L. Copeland and S. Heravi. Another massive overhead-transmission. Copeland talked about STAR models (smooth transition autoregressive) and their estimation, application and so on, and I faded rapidly.
  10. Time Reversal Invariance in Finance, G. Zumbach. Gilles' contention in this paper is that if a financial model is assumed to be an accurate representation of reality, it should be able to imply such facts about the data as invariance (or the lack of it) under time reversal. He shows that several families of stochastic volatility fail to demonstrate these properties; multi-scale ARCH models do better.
  11. The Distribution and Non-Arbitrage Price of the CPPI Portfolio, A. Cipollini. The only mathematical (stochastic calculus) paper presented at the conference, as opposed to the myriad statistical and econometric items. Cipollini discussed the pricing of a constant-proportion portfolio insurance structure, and showed how under both a bullish and bearish market, an investor could hold on to the gains made hitherto. Can't say I understood much of it, especially because I arrived late, but was hoping to study the actual paper, which - it now appears - I won't be able to do, as it is not in the conference CD-ROM. Drat.
  12. Changes in the Statistical Features of G10 Currencies and Their Implications for International Investors, P. Lequeux and M. Menon. As Pierre returned to London the day before the presentation, I gave it. By subtle advertising and word-of-mouth (mainly my own), I had created a buzz about this, so it was rather well attended. Unfortunately, as it was the first paper after lunch on the last day, most of the attendees were half asleep, and after my whirlwind tour-de-force, everybody was clubbed into a sensation of having walked into a fog. Seriously, though, the gist of this descriptive paper is as follows: over the past 10 years, there have been two broad trends in the USD (an initial appreciation of about 32% against the G10 currencies, followed by a depreciation of about 39%), after which it has been more or less range-bound; most currency managers follow variants of momentum-based strategies, which performed well during the trending periods and have done badly in the recent (range-bound) past; the statistical tests we tried (ADF, variance-ratio, Taylor price-trend tests) failed to detect the trending regimes (why?); currency volatilities are at historic lows, but the correlation between currencies is at an all-time high, so explicit risk has been replaced by severe contamination (diversification) risk; we suggest that the diversification of central bank reserves has contributed to this and provide a graphical demonstration of the fact. I received two interesting suggestions for further investigation: try a Granger-causality test to determine whether the lack of currency diversification prompted the selling of the USD by the central banks, or vice-versa; and Chiara recommended using tests by Robinson to see if there was local non-stationarity in our time-series, which might have made the detection of trends difficult.
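Wong's point in item 7 above, that VaR is blind to the shape of the loss tail while expected shortfall is not, is easy to see with a toy experiment (my own sketch, nothing to do with his saddle-point approximation): two loss samples can have nearly identical 99% VaR while their expected shortfalls differ substantially.

```python
import random
import statistics

def var_and_es(losses, alpha=0.99):
    """Empirical value-at-risk and expected shortfall at level alpha.

    VaR is the alpha-quantile of the losses; ES is the average of the
    losses at or beyond that quantile, so it reacts to how heavy the
    tail is, while VaR does not.
    """
    ordered = sorted(losses)
    idx = int(alpha * len(ordered))
    var = ordered[idx]
    es = statistics.fmean(ordered[idx:])
    return var, es

random.seed(42)
# A thin-tailed loss sample, and one with rare large jumps (0.5% of the
# time, an exponential shock with mean 5 is added). The jumps sit inside
# the 1% tail, so the 99% VaR barely moves, but the ES does.
thin = [random.gauss(0, 1) for _ in range(100_000)]
heavy = [random.gauss(0, 1)
         + (random.random() < 0.005) * random.expovariate(0.2)
         for _ in range(100_000)]

var_t, es_t = var_and_es(thin)
var_h, es_h = var_and_es(heavy)
print(f"thin tail:  VaR={var_t:.2f}  ES={es_t:.2f}")
print(f"heavy tail: VaR={var_h:.2f}  ES={es_h:.2f}")
```

The heavy-tailed sample reports a markedly larger ES for a similar VaR, which is precisely the information VaR throws away.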
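On the puzzle in item 12 of why the variance-ratio test failed to flag the trending regimes: the statistic itself is simple enough to sketch. Below is a minimal homoscedastic Lo-MacKinlay variance ratio (my own illustration, not the exact specification we used in the paper); under a random walk VR(q) should sit near 1, while positively autocorrelated (trending) returns push it above 1.

```python
import math
import random

def variance_ratio(prices, q):
    """Lo-MacKinlay variance ratio VR(q) on log prices.

    Under a random walk, the variance of q-period returns is q times the
    variance of 1-period returns, so VR(q) is close to 1; persistent
    (trending) series give VR(q) > 1, mean-reverting ones VR(q) < 1.
    """
    logp = [math.log(p) for p in prices]
    n = len(logp) - 1
    mu = (logp[-1] - logp[0]) / n
    # variance of 1-period log returns
    var1 = sum((logp[t] - logp[t - 1] - mu) ** 2 for t in range(1, n + 1)) / n
    # variance of (overlapping) q-period log returns, scaled by q
    varq = sum((logp[t] - logp[t - q] - q * mu) ** 2
               for t in range(q, n + 1)) / (n * q)
    return varq / var1

random.seed(0)
# A random walk versus a 'trending' series whose log returns follow an
# AR(1) with coefficient 0.5 (positive autocorrelation).
rw, trend, shock = [100.0], [100.0], 0.0
for _ in range(5000):
    rw.append(rw[-1] * math.exp(random.gauss(0, 0.01)))
    shock = 0.5 * shock + random.gauss(0, 0.01)
    trend.append(trend[-1] * math.exp(shock))

print(f"VR(10), random walk: {variance_ratio(rw, 10):.2f}")
print(f"VR(10), trending:    {variance_ratio(trend, 10):.2f}")
```

On clean simulated data like this the test separates the two series easily; our difficulty in the paper presumably comes from regime changes and local non-stationarity in the real series, which is exactly what Chiara's suggestion of Robinson's tests is meant to probe.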

And so, back to London.
