Research highlights

How best to create structure in more than 1,500 stocks

Why does a child play with a toy car, while most grown-ups fancy a real car instead? Obvious as the answer may seem, most academics play around with financial models involving a handful of asset prices (or, more recently, "large" dimensions of up to 100), whereas real asset managers have to manage portfolios of more than 1,000 assets. Are academics like children, and is it perhaps time they grew up and studied financial phenomena at the scale that is empirically more relevant?

Just as driving a real car is much more challenging than driving a toy car, so is coping with realistically large numbers of assets. In the paper Estimation Risk and Shrinkage in Vast-Dimensional Fundamental Factor Models, the authors study one of these challenges: how to cope with the large number of parameters in models with a large number of assets. In particular, they study a number of recently suggested techniques (called linear and non-linear shrinkage) and their adequacy for solving this challenge. They do so not in the standard academic toy setting, but in a setting of models that are typically used in the financial industry, so-called fundamental factor models. They find that such fundamental factor models are already a strong benchmark: academic shrinkage techniques add little to their performance in the settings studied. In addition, they find that in this vast-dimensional setting the shrinkage techniques need to be adapted: some of the tuning constants needed to make the techniques work can no longer be set at their theoretically optimal plug-in values, but need to be readjusted to the new context in order to generate (modest) gains in performance.
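To give a flavour of the kind of technique studied, the sketch below applies generic linear shrinkage to a sample covariance matrix, pulling it towards a diagonal target; the intensity `alpha` plays the role of one of the tuning constants mentioned above. This is a minimal, illustrative example under simple assumptions, not the authors' fundamental factor model or their specific shrinkage estimator.

```python
import numpy as np

def linear_shrinkage(returns, alpha):
    """Shrink the sample covariance of asset returns towards a diagonal target.

    alpha = 0 gives the raw sample covariance, alpha = 1 the diagonal target.
    In vast dimensions the sample covariance is poorly conditioned, which is
    the problem shrinkage tries to repair.
    """
    sample_cov = np.cov(returns, rowvar=False)
    target = np.diag(np.diag(sample_cov))            # keep variances, drop covariances
    return (1.0 - alpha) * sample_cov + alpha * target

# Toy example: 1,500 assets but only 250 daily observations (far fewer observations than assets).
rng = np.random.default_rng(0)
returns = rng.standard_normal((250, 1500)) * 0.01
cov_shrunk = linear_shrinkage(returns, alpha=0.3)    # alpha is an (illustrative) tuning constant
print(cov_shrunk.shape)                              # (1500, 1500)
```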

Andries C. van Vlodrop and André Lucas (2018). Estimation Risk and Shrinkage in Vast-Dimensional Fundamental Factor Models.



Can the strong persistence in (realized) dependence between stock prices help with forecasting at longer horizons?

Stock prices typically move together. For instance, two stock prices can be affected similarly by common news about the industry they are in, or about the local or global economy. The strength of this co-movement changes over time, and different methods have been proposed to describe this time variation. A recent method that appears to work well in many settings exploits stock price variations within the day: for instance, every minute, or even more frequently. At such high frequencies, the dependence between stock price movements is typically very persistent: the dependence structure found on one day has a long-lasting relation to the dependence structure on future days.

In this paper the authors address the question of whether this strong persistence can actually be exploited for longer-term forecasting. The short answer is: yes. Even though the information underlying the measurements of dependence is recorded at a minute-by-minute frequency, the resulting measurements are helpful at horizons of up to one month. The tools needed for this have to account for many features of stock prices and their dependence, such as erratic big price movements (fat tails), changing market nervousness and uncertainty (time-varying volatility), and the strong persistence due to the high-frequency underlying measurements. All of these features are needed, but in different periods: whereas accounting for changing market nervousness is always important, correctly accounting for erratic big price movements is particularly relevant during crises, and accounting for strong persistence matters most during calmer episodes. Previous methods did not account for all of these features simultaneously in a coherent way.
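As a rough, hedged sketch of the measurement step only: realized (co)variances are computed from intraday returns, and their persistence shows up as slowly decaying autocorrelations of the resulting daily series. The simulated data, volatility process, and correlation below are illustrative assumptions; the sketch does not implement the authors' fractionally integrated, fat-tailed model.

```python
import numpy as np

def realized_covariance(intraday_returns):
    """Realized covariance for one day: sum of outer products of intraday returns.

    intraday_returns has shape (n_intervals, n_assets), e.g. one-minute returns.
    """
    return intraday_returns.T @ intraday_returns

def autocorrelation(x, lag):
    """Sample autocorrelation of a series at a given lag."""
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

# Simulated example: 500 trading days, 390 one-minute returns per day, 2 correlated assets.
rng = np.random.default_rng(1)
days, intervals = 500, 390
vol = 0.01 + 0.001 * np.abs(np.cumsum(rng.standard_normal(days)))  # persistent daily volatility
rc_12 = np.empty(days)
for t in range(days):
    z1 = rng.standard_normal(intervals)
    z2 = rng.standard_normal(intervals)
    r1 = vol[t] * z1
    r2 = vol[t] * (0.5 * z1 + np.sqrt(0.75) * z2)    # correlation 0.5 with asset 1 (assumed)
    rc_12[t] = realized_covariance(np.column_stack([r1, r2]))[0, 1]

# Slowly decaying autocorrelations are the persistence exploited for longer-horizon forecasts.
for lag in (1, 5, 22):
    print(lag, round(autocorrelation(rc_12, lag), 2))
```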

Anne Opschoor and André Lucas (2018). Fractional integration and fat tails for realized covariance kernels. Journal of Financial Econometrics, accepted for publication.



Forecasting crash numbers on German roads using meteorological variables

At the end of each year, the German Federal Highway Research Institute (BASt) publishes the road safety balance of the closing year, describing the development of accident and casualty numbers disaggregated by road user type, age group, type of road, and the consequences of the accidents. However, at the time of publishing, these series are only available for the first eight or nine months of the year. To complete the balance for the whole year, the last three or four months are forecast. In this study the accuracy of these forecasts is improved by applying structural time series models that include the effects of meteorological conditions.

One of the issues facing the authors was that the road safety variables are monthly data measured at the national level, while the weather variables are daily data measured at eight different German regional weather stations. The results show that incorporating meteorological data in the analysis clearly improves the forecasts. The authors conclude that their approach provides a valid alternative for producing input to policy makers in Germany.
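A hedged sketch of the approach: the code below fits a structural (unobserved components) time series model with a trend, a monthly seasonal, and exogenous weather regressors using statsmodels. The variable names, the simulated data, and the idea of averaging daily station data up to the monthly national level are illustrative assumptions, not the authors' exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative data: monthly national crash counts plus weather regressors that, in practice,
# would be obtained by aggregating daily station-level series to the monthly national level.
rng = np.random.default_rng(2)
months = pd.period_range("2000-01", "2015-12", freq="M")
crashes = pd.Series(2000 + 200 * np.sin(2 * np.pi * (np.arange(len(months)) % 12) / 12)
                    + rng.normal(0, 50, len(months)), index=months)
frost_days = pd.Series(rng.poisson(5, len(months)), index=months)       # assumed regressor
precipitation = pd.Series(rng.gamma(2, 20, len(months)), index=months)  # assumed regressor
exog = pd.concat({"frost_days": frost_days, "precipitation": precipitation}, axis=1)

# Structural time series model: local linear trend + monthly seasonal + weather effects.
model = sm.tsa.UnobservedComponents(crashes, level="local linear trend",
                                    seasonal=12, exog=exog)
result = model.fit(disp=False)

# Forecast the last months of the year; recent weather values stand in for future conditions.
print(result.forecast(steps=4, exog=exog.iloc[-4:].to_numpy()))
```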

Kevin Diependaele, Heike Martensen, Markus Lerner, Andreas Schepers, Frits Bijleveld, and Jacques J.F. Commandeur (2018). Forecasting German crash numbers: The effect of meteorological variables. Accident Analysis and Prevention, accepted for publication.



What are the rankings of over 500 ATP tennis match players on hard court, clay and grass surfaces?

It is widely acknowledged in the literature that time variation in the strength of tennis players is one of the key ingredients for properly describing the outcomes of tennis matches. A player's strength typically increases from a young age, reaches a peak when the player is in his or her twenties, and then declines until the end of the career. However, no study so far has modeled this time variation explicitly by means of a fully specified probability measure for the outcome of a tennis match at a given point in time. Given that the outcome of a match relies mainly on the abilities of the two players, it is necessary to model the strength of each player explicitly. Furthermore, since a player's strength can vary considerably with the court surface, the model also needs to identify strength levels for different surfaces. In this paper 17 years of ATP (Association of Tennis Professionals) tennis matches for a panel of over 500 players are analyzed, and it is found that time-varying, player-specific abilities for different court surfaces are of key importance for analyzing the matches.

Additionally, both home ground advantage and the age of a player are found to have a significant effect on match results: home ground advantage works in favour of players playing in their country of origin, and the performance of players is generally highest at the age of 25. The paper provides evidence that the proposed model significantly outperforms existing models in forecasting tennis match results and yields separate rankings of all players for matches played on hard court, clay and grass surfaces.
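To convey the flavour of such a model, here is a deliberately simplified, hedged sketch of a dynamic paired-comparison scheme: each player carries a surface-specific strength that is nudged after every match, and the win probability depends on the strength difference plus a home advantage. The update rule, parameter values, and player names are illustrative assumptions and are much simpler than the score-driven model estimated in the paper.

```python
import math
from collections import defaultdict

strengths = defaultdict(float)      # keys: (player, surface), values: time-varying strength
HOME_ADVANTAGE = 0.1                # assumed positive effect for players on home ground
LEARNING_RATE = 0.05                # assumed speed of the time variation

def win_probability(player_i, player_j, surface, i_at_home=False):
    """Probability that player_i beats player_j on the given surface."""
    diff = strengths[(player_i, surface)] - strengths[(player_j, surface)]
    if i_at_home:
        diff += HOME_ADVANTAGE
    return 1.0 / (1.0 + math.exp(-diff))

def update(winner, loser, surface, winner_at_home=False):
    """Move both players' surface-specific strengths towards the observed outcome."""
    p = win_probability(winner, loser, surface, winner_at_home)
    strengths[(winner, surface)] += LEARNING_RATE * (1.0 - p)
    strengths[(loser, surface)] -= LEARNING_RATE * (1.0 - p)

# Hypothetical clay-court results, followed by a ranking on clay.
for match in [("PlayerA", "PlayerB"), ("PlayerA", "PlayerC"), ("PlayerC", "PlayerB")]:
    update(*match, surface="clay")
clay_ranking = sorted({p for p, s in strengths if s == "clay"},
                      key=lambda p: strengths[(p, "clay")], reverse=True)
print(clay_ranking)
```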

Paolo Gorgi, Siem Jan Koopman, and Rutger Lit (2018). The analysis and forecasting of ATP tennis matches using a high-dimensional dynamic model. Tinbergen Institute Discussion Paper TI 2018-009/III.



The illusive wisdom of crowds: Why the many are not smarter than the few

What is an optimal communication architecture for finding out the ‘truth’? Consider, for instance, people trying to estimate the reliability of a product, or whether some news items are fake. Everyone has some individual information that forms an individual belief, but nobody knows for sure what the truth is. Now assume people talk to each other, so that over time these beliefs get updated through communication in the social network. What kinds of social networks generate consensus, i.e. all people ending up with the same belief? And if there is consensus, is it the truth? In other words, for which social networks can we observe the wisdom of crowds?
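One standard way to formalize such belief updating is naive (DeGroot-style) learning, sketched below under simple assumptions: each agent repeatedly replaces his or her belief by a weighted average of the beliefs of the people he or she listens to. The weight matrix and initial signals are illustrative, and the sketch leaves out the random communication that is the paper's focus.

```python
import numpy as np

# Row-stochastic listening matrix: entry (i, j) is the weight agent i puts on agent j's belief.
# The matrix is symmetric on purpose, so no agent is privileged in terms of influence.
W = np.array([
    [0.50, 0.25, 0.25],
    [0.25, 0.50, 0.25],
    [0.25, 0.25, 0.50],
])

truth = 1.0
rng = np.random.default_rng(3)
beliefs = truth + rng.normal(0.0, 1.0, size=3)   # noisy individual signals about the truth

# Naive learning: repeatedly average the neighbours' beliefs.
for _ in range(100):
    beliefs = W @ beliefs

print("consensus belief:", beliefs.round(3))     # all agents end up agreeing ...
print("truth:           ", truth)                # ... but not necessarily on the truth
```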

In this paper, the authors conclude that even if the social network does not privilege any agent in terms of influence, a large society almost always fails to converge to the truth. They conclude that the wisdom of crowds is an illusive concept and bears the danger of mistaking consensus for truth. Moreover, they find that classic network influence measures fail to acknowledge that the consensus level is highly sensitive to early communication.

B. Heidergott, J. Huang, and I. Lindner (2018). Naïve learning in social networks with random communication. Tinbergen Institute Discussion Paper TI 2018-018/II. Under second-round review at Social Networks.



Speedy dimensionality reduction without tossing coins


Often we are faced with data with a large number of attributes ("dimensions"); think, for example, of stock prices, where for each stock we have the price at many different times. While it may seem that more data is always better, high dimensionality brings with it many algorithmic challenges, colloquially referred to as the "curse of dimensionality". An important tool for dealing with these difficulties in many applications is to reduce the dimensionality of the problem while preserving the key information in the data as well as possible. Improving our algorithms for such "dimensionality reduction" is an important area of study.

In many cases, we are interested less in the precise details of each individual data point than in the relationships between different data points. For example, we may wish to cluster "similar" points together. The Johnson-Lindenstrauss Lemma is a famous mathematical result showing that the dimension of the input data can be reduced dramatically while approximately maintaining the distances between data points.

Because of its prime importance, there are many extensions and variations of the Johnson-Lindenstrauss Lemma. This paper concerns one of these variants, optimized to handle the very sparse input data that is typical in many applications. Previous results relied heavily on randomization, meaning that the result of the procedure depends not just on the input data, but also on additional input that needs to be random for the procedure to succeed reliably. In computer science, high-quality randomness is often thought of as a resource, just like processing time or memory. This work shows how randomization can be avoided completely, while maintaining the other good properties of the algorithm, in particular keeping it fast.
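For intuition, here is a hedged sketch of the classic randomized Johnson-Lindenstrauss projection: multiply the data by a scaled random Gaussian matrix and check that pairwise distances are approximately preserved. The dimensions and data are illustrative, and the paper's contribution, a fast deterministic construction tailored to sparse inputs, is precisely what this sketch does not implement.

```python
import numpy as np

rng = np.random.default_rng(4)
n_points, original_dim, reduced_dim = 200, 10_000, 500

# Illustrative high-dimensional data points (dense here, for simplicity).
X = rng.standard_normal((n_points, original_dim))

# Classic randomized JL map: a scaled random Gaussian matrix.
projection = rng.standard_normal((original_dim, reduced_dim)) / np.sqrt(reduced_dim)
Y = X @ projection

def pairwise_distances(points):
    """All pairwise Euclidean distances, via the squared-norm expansion."""
    sq = (points ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * points @ points.T
    return np.sqrt(np.maximum(d2, 0.0))

orig = pairwise_distances(X)
proj = pairwise_distances(Y)
mask = orig > 0
ratios = proj[mask] / orig[mask]
# All ratios should be close to 1: distances survive the 20-fold dimension reduction.
print("distance ratios between", round(ratios.min(), 2), "and", round(ratios.max(), 2))
```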

Daniel Dadush, Cristóbal Guzmán and Neil Olver (2018). Fast, deterministic and sparse dimensionality reduction. Proceedings of the ACM-SIAM Symposium on Discrete Algorithms.



How to improve the long-lead predictability of El Niño

El Niño (EN) is a dominant feature of climate variability, driving changes in the climate throughout the globe and having widespread natural and socioeconomic consequences. Forecasting it is therefore an important task, and predictions are issued on a regular basis by a wide array of prediction schemes and climate centers around the world. This study explores a novel and improved method for EN forecasting.

First, unobserved components time series modelling, despite its advantages, has so far not been applied in this field of research. Also, customary statistical models for EN prediction essentially use only sea surface temperature and wind stress in the equatorial Pacific. Because earlier research indicates that subsurface processes and heat accumulation are also fundamental for the genesis of EN, in this paper subsurface ocean temperature variables in the western and central equatorial Pacific are additionally introduced into the model. A third important feature of the model is that different sea water temperature and wind stress variables are used at different lead months, thus capturing the dynamical evolution of the system and yielding more efficient forecasts. The new model has been tested on the prediction of all warm events that occurred in the period 1996–2015. Retrospective forecasts of these events were made for long lead times of at least two and a half years. The present study therefore shows that the limit of EN predictability should be sought at much longer lead times than the commonly accepted “Spring Barrier”, which until now made correct predictions for the months of March, April and May impossible. The high similarity between the forecasts and the observations indicates that the proposed model outperforms all current operational statistical models, and behaves comparably to the best dynamical models used for EN prediction.
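A hedged sketch of the third feature only: for each forecast lead, a different set of predictors, lagged by that lead, is used. The series names, the simulated data, and the plain regressions below are illustrative assumptions; the paper embeds lead-dependent predictors in a full unobserved components model rather than in simple regressions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
months = pd.period_range("1980-01", "2015-12", freq="M")
n = len(months)

# Illustrative predictor and target series (names are assumptions, not the paper's variables).
data = pd.DataFrame({
    "sst": rng.normal(size=n).cumsum() * 0.1,               # sea surface temperature anomaly
    "wind_stress": rng.normal(size=n),                      # equatorial wind stress
    "subsurface_temp": rng.normal(size=n).cumsum() * 0.1,   # subsurface heat in the west/central Pacific
    "en_index": rng.normal(size=n).cumsum() * 0.1,          # target EN index
}, index=months)

# Different predictors for different lead times, mimicking the lead-dependent model feature.
predictors_per_lead = {
    3: ["sst", "wind_stress"],
    12: ["wind_stress", "subsurface_temp"],
    24: ["subsurface_temp"],
}

for lead, cols in predictors_per_lead.items():
    X = data[cols].shift(lead).dropna()          # predictors observed `lead` months earlier
    y = data["en_index"].loc[X.index]
    X = np.column_stack([np.ones(len(X)), X.to_numpy()])
    beta, *_ = np.linalg.lstsq(X, y.to_numpy(), rcond=None)
    print(f"lead {lead:2d} months: coefficients {np.round(beta, 2)}")
```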

Desislava Petrova, Siem Jan Koopman, Joan Ballester, and Xavier Rodó (2017). Improving the long‑lead predictability of El Niño using a novel forecasting scheme based on a dynamic components model. Climate Dynamics, 48, 1249–1276.



Making good peer groups for banks when the world moves fast: a marriage of financial econometrics and machine learning

How do you identify the apple, the orange, and the banana if the juggler tosses everything around at high speed? Put differently, how do you identify which groups of banks and other financial institutions are alike in a situation where new regulation kicks in fast, fintech companies are rapidly changing the scene, and central banks are implementing non-standard policies and keeping interest rates uncommonly low for uncannily long? In this paper the authors investigate how to construct groups of peer banks in such a volatile environment. Identifying peer groups is extremely important for regulators to create a level playing field: similar banks should be charged similar capital buffers, to safeguard the financial system and retain fair competition at the same time.

The authors devise a new technique inspired by the machine learning literature (clustering) combined with ideas from the financial econometrics literature (score-driven models). Think of it as allowing the moving centers of the juggler's fruits (the apple, orange and banana) to be described by one statistical model, and the sizes of the fruits to be determined by another. The model is put to work on a sample of 208 European banks observed over the period 2008-2015, thus including the aftermath of the financial crisis and all of the European sovereign debt crisis. Six peer groups of banks are identified. These groups partly overlap with, and partly differ from, classifications by ECB experts. The findings can thus be used by policy makers as a complement when comparing bank profitability, riskiness, and buffer requirements. For example, leverage (as a measure of riskiness) and the share of net interest rate income (as a measure of profitability) evolve quite differently for some groups of banks than for others, particularly during the low interest rate environment. A follow-up of this paper is “Do negative interest rates make banks less safe?”.
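A deliberately simplified, hedged caricature of the juggler intuition is sketched below: banks are assigned each period to the nearest cluster center, and the centers are only allowed to move partially towards each period's new cluster means, an exponentially weighted step standing in for the score-driven dynamics. The data, the step size, and the clustering rule are illustrative assumptions, not the paper's finite mixture model.

```python
import numpy as np

rng = np.random.default_rng(6)
n_banks, n_features, n_periods, n_clusters = 208, 4, 32, 6   # roughly the paper's dimensions

# Illustrative panel of bank characteristics (e.g. leverage, income shares) per quarter.
data = rng.standard_normal((n_periods, n_banks, n_features))

# Initialise the moving cluster centers from a few banks in the first period.
centers = data[0, rng.choice(n_banks, n_clusters, replace=False), :]
step = 0.2   # how fast the centers may move over time (illustrative tuning constant)

for t in range(n_periods):
    # Assign each bank to the nearest current center (the "which fruit is it" step).
    dists = np.linalg.norm(data[t][:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # Move each center partially towards this period's mean of its members
    # (the "moving center of the fruit" step).
    for k in range(n_clusters):
        members = data[t][labels == k]
        if len(members) > 0:
            centers[k] += step * (members.mean(axis=0) - centers[k])

# Peer-group sizes in the final period.
print(np.bincount(labels, minlength=n_clusters))
```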

André Lucas, Julia Schaumburg, and Bernd Schwaab (2017). Bank business models at zero interest rates. Journal of Business and Economic Statistics.



Do negative interest rates make banks less safe?

The European Central Bank has implemented a number of unconventional monetary policies since the financial crisis and the subsequent sovereign debt crisis. One of the policies involves setting the official rate at which banks can park money at the ECB close to zero, or even below zero.

Was this a wise decision?

The idea of setting the official rate close to or below zero is that banks then have more of an incentive not to park money at the central bank. Rather, it would pay for banks to lend the money out, thus providing more financing to people and businesses and helping the economy grow again.

Then again, others argue that low interest rates squeeze the profit opportunities for banks, making them more vulnerable to new economic shocks and risking a new crisis in the financial sector.

In this paper, the authors investigate how markets perceived the effect of the ECB's decision to impose negative interest rates on the riskiness of banks. In particular, they are interested in whether some bank business models are more prone to the potential negative effects of the ECB's policy than others. They measure the riskiness of banks with a well-established methodology: the expected amount of capital that has to be injected into a troubled bank in case of an extreme market-wide shock. It is important to consider a situation of extreme market stress, as that is when injecting more capital into a troubled bank is most problematic and hurts most.
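A stylised sketch of such a capital-shortfall measure is given below: simulate bank equity returns jointly with the market, condition on an extreme market-wide drop, and compute how much extra capital would be needed to restore a prudential equity ratio. The prudential ratio, the stress threshold, the balance sheet numbers, and the return model are all illustrative assumptions, not necessarily the exact measure or calibration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative bank balance sheet: book debt and current market equity (in billions).
debt, equity = 400.0, 40.0
prudential_ratio = 0.08          # assumed required equity / (debt + equity)

# Simulate joint market and bank equity returns over the risk horizon.
n_sims = 100_000
market = rng.normal(0.0, 0.20, n_sims)
bank = 1.2 * market + rng.normal(0.0, 0.15, n_sims)      # assumed bank beta plus idiosyncratic noise

# Condition on an extreme market-wide shock (market falling by more than 40%).
crisis = market < -0.40
stressed_equity = np.maximum(equity * (1.0 + bank[crisis]), 0.0)

# Expected capital injection needed to restore the prudential ratio during the crisis.
shortfall = np.maximum(prudential_ratio * (debt + stressed_equity) - stressed_equity, 0.0)
print("expected capital shortfall in a crisis:", round(shortfall.mean(), 1), "billion")
```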

The authors find that policy rate cuts below zero trigger different systemic risk responses than an equally sized cut to zero. There is only weak evidence that large universal banks are affected differently than other banks, in the sense that the riskiness of the large banks decreases somewhat more for rate cuts into negative territory.

Frederico Nucera, André Lucas, Julia Schaumburg, and Bernd Schwaab (2017). Do negative interest rates make banks less safe? Economics Letters, 159, 112-115.



Understanding the time-varying dynamics of traffic equilibria


Modelling traffic congestion and understanding its behaviour is a challenging, multidisciplinary endeavour. One approach is to look at simplified, abstracted models of traffic, and to use mathematical tools from game theory and optimization to obtain insights that still hold in the more complicated reality. For example, Braess's paradox says that in some situations, closing a road can improve the traffic situation! This counter-intuitive result was subsequently investigated and confirmed in reality.
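Braess's paradox can be seen in the classic four-node textbook example, checked arithmetically below: with one unit of traffic and two symmetric routes, an extra free shortcut lures every driver onto the same congested edges and raises everyone's travel time from 1.5 to 2, so closing that shortcut improves the situation. The sketch only verifies this standard example; it is not the model analysed in the paper.

```python
# Classic Braess network: 1 unit of traffic from s to t.
# Edge travel times: s->a takes x (the flow on that edge), a->t takes 1,
#                    s->b takes 1,                          b->t takes x.
demand = 1.0

# Without the shortcut a->b, the equilibrium splits traffic evenly over the two routes.
flow_per_route = demand / 2
cost_without = flow_per_route + 1            # s->a (x = 0.5) + a->t (1) = 1.5
print("equilibrium travel time without shortcut:", cost_without)

# With a zero-cost shortcut a->b, the route s->a->b->t costs x + 0 + x and is always at least
# as attractive as either original route, so in equilibrium all traffic takes it.
cost_with = demand + 0 + demand              # s->a (x = 1) + a->b (0) + b->t (x = 1) = 2
print("equilibrium travel time with shortcut:   ", cost_with)
```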

Recently, a lot of attention has been paid to a model of traffic that is simplified, but at the same time captures a key component of real traffic: its dynamic nature. The model treats traffic as a time-varying flow of cars, each strategically looking for the quickest route to its destination. It is closely related to models used in detailed urban traffic simulators. The time-varying nature of this model makes it extremely challenging from a mathematical perspective, and many natural questions are not yet understood.

This paper concerns one such natural question: if the amount of traffic trying to use the road network stays the same, does the traffic pattern eventually settle down to a simple steady state? Or can oscillations pervade the system forever?

Roberto Cominetti, Jose Correa and Neil Olver (2017). Long term behavior of dynamic equilibria in fluid queuing networks. Proceedings of the 19th Conference on Integer Programming and Combinatorial Optimization.