Value At Risk

by hilzoy

One of the things I love about blogs is that they allow people who really know what they’re talking about to respond, publicly, to what they read, and to do so almost instantaneously, so that the rest of us can benefit. There’s a wonderful example today. It starts with a long NYT article by Joe Nocera on a risk management tool called ‘Value at Risk’, or VaR.

“Built around statistical ideas and probability theories that have been around for centuries, VaR was developed and popularized in the early 1990s by a handful of scientists and mathematicians — “quants,” they’re called in the business — who went to work for JPMorgan. VaR’s great appeal, and its great selling point to people who do not happen to be quants, is that it expresses risk as a single number, a dollar figure, no less.”

If you want to understand the risk management part of the financial meltdown, it’s worth reading the article in its entirety, in order to see how what started out as a tool for measuring certain types of risk ended up as a tool used by regulators and in reports, and then as a measure that people started to game, and that other people placed altogether too much confidence in:

“There were the investors who saw the VaR numbers in the annual reports but didn’t pay them the least bit of attention. There were the regulators who slept soundly in the knowledge that, thanks to VaR, they had the whole risk thing under control. There were the boards who heard a VaR number once or twice a year and thought it sounded good. There were chief executives like O’Neal and Prince. There was everyone, really, who, over time, forgot that the VaR number was only meant to describe what happened 99 percent of the time. That $50 million wasn’t just the most you could lose 99 percent of the time. It was the least you could lose 1 percent of the time.”

However, you should then read Yves Smith’s takedown of the article here. She argues, basically, that VaR is much more deeply flawed than Nocera lets on, and that it systematically underestimates risk in well-known ways. (Technically: it assumes a normal distribution, and the distribution of asset prices is known not to be normal, and to depart from normality in ways that make assets riskier.) As far as I can tell, the reason it’s used anyway is, in part, that the mistaken assumptions make the math more tractable. But that’s a classic and obvious mistake: like a drunk who dropped his keys getting out of his car but looks for them under the streetlight across the street, because that’s the only place where he can see what’s on the ground.
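
To make Smith’s point concrete, here is a minimal sketch (mine, not from any of the linked pieces) of how a normality assumption understates tail risk. The “returns” are simulated from a fat-tailed Student-t distribution rather than taken from market data:

```python
# Hypothetical illustration (not from the article): why assuming normality
# understates tail risk. Draw "returns" from a fat-tailed Student-t
# distribution, fit a normal to them, and compare the 99% VaR each implies.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
returns = stats.t(df=3).rvs(size=10_000, random_state=rng)  # fat tails

# Parametric-normal VaR: fit mean/std, take the 1st percentile of the fit.
mu, sigma = returns.mean(), returns.std()
var_normal = -stats.norm(mu, sigma).ppf(0.01)

# Empirical (historical) VaR: 1st percentile of the observed returns.
var_hist = -np.percentile(returns, 1)

print(f"99% VaR assuming normality:   {var_normal:.2f}")
print(f"99% VaR from the data itself: {var_hist:.2f}")

# And per the post: the VaR number says nothing about how bad the other 1% gets.
tail = returns[returns <= -var_hist]
print(f"Average loss on the worst 1% of days: {-tail.mean():.2f}")
```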

James Kwak then chimes in with a different fundamental problem with VaR: the fact that it assumes that the world (or at least the world of asset prices) does not change in fundamental ways. As Kwak puts it, are asset prices like coin tosses, which you can safely assume will continue to show the probability distribution they’ve showed in the past? Or are they like games between two basketball teams, where the probability of one winning changes dramatically the day it drafts Michael Jordan? VaR assumes the answer is: like coin flips.
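
Kwak’s point can be put in code too. The sketch below, again mine and purely illustrative, calibrates a 99% VaR on a calm stretch of simulated history, then counts how often that threshold is breached after the “game” changes:

```python
# Hypothetical sketch of Kwak's point: VaR calibrated on the past assumes
# the process generating returns stays the same. Here the "game" changes
# (volatility doubles), and yesterday's 99% VaR is breached far more often
# than the promised 1% of days.
import numpy as np

rng = np.random.default_rng(1)
calm = rng.normal(0.0, 1.0, size=2_000)    # history used to calibrate VaR
var_99 = -np.percentile(calm, 1)           # the "coin toss" assumption

stormy = rng.normal(0.0, 2.0, size=2_000)  # the team just drafted Jordan
breaches = (stormy < -var_99).mean()
print(f"Promised breach rate: 1.0%; observed after the shift: {breaches:.1%}")
```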

As best I can tell, Smith’s and Kwak’s critiques do not imply that VaR, and how it was used, are not screwed up in the ways Nocera claims; just that there are additional, more fundamental problems with it. But as a primer on financial risk management over the past decade or so, the combination of the three is hard to beat.

23 thoughts on “Value At Risk”

  1. As someone who teaches data analysis, I am flabbergasted that these models rely on normal distributions – I spend a lot of time telling my students *never* to assume that data will be normal. And even I, no follower of the economics literature, knew that financial prices were a case in point.
    I think the real problem is the one identified in both posts (and your own), namely the difficulty of getting people to plan for the rare downside when there is more money to be made (short-term) by not doing so. This is, I’d note, hardly a problem confined to the financial markets, and it seems to be soluble only by enforcing rules that demand that people be more responsible than they would be if left to their own devices. In my own area, this takes the form of building codes that require you to build to withstand rare events (earthquakes) that may not occur in your lifetime, or in that of the building; there is a clear parallel to capital requirements for bank-like entities.

  2. 1. VaR does not assume a normal distribution. It’s an approach that measures risk by looking at a percentile figure (95th, 99th) of a probability distribution. It works with any distribution. (A minimal sketch of the distribution-free, historical version follows this comment.)
    2. Quantitative measures of risk provide a systematic but flawed approach to assessing risk. The advantages of a systematic approach include a common language for communication and a reasonable assessment of relative risk (if the VaR for a portfolio doubles from one month to the next, this is a valuable thing to know even if the actual absolute figures don’t have a very solid grounding in reality). The major disadvantage is that something like VaR can be “reified” by practitioners and understood as a real quantity rather than a limited tool.
    Why would reification occur? One theory is that people were simply blinded by complex math; another theory is that this emerged as a result of the incentives faced by traders and regulators. My vote is on the latter option.
    3. True, it’s reasonable to attempt to predict the future when the future is roughly the same as the past; it’s basically impossible when fundamental change happens. My humble opinion is that this is a remarkably obvious truth, and if some people lost sight of it it’s because they had incentives to do so.
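
Barbar’s point 1, in code: a minimal, hypothetical sketch of historical-simulation VaR, which is just a percentile of observed P&L and assumes no particular distribution. The function name and the figures are invented for illustration:

```python
# Minimal, hypothetical sketch: historical-simulation VaR is just a
# percentile of observed P&L; no distributional assumption is needed.
import numpy as np

def historical_var(pnl: np.ndarray, confidence: float = 0.99) -> float:
    """Loss at the (1 - confidence) percentile of observed P&L."""
    return -np.percentile(pnl, 100 * (1 - confidence))

rng = np.random.default_rng(2)
last_month = rng.normal(0, 1e6, size=21)  # 21 trading days of P&L
this_month = rng.normal(0, 2e6, size=21)  # same desk, rougher month

print(f"{historical_var(last_month):,.0f}  {historical_var(this_month):,.0f}")
# Per the comment: even if neither absolute figure is solidly grounded,
# the roughly-doubled ratio between them is a valuable thing to know.
```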

  3. One of the things I love about blogs is that they allow people who really know what they’re talking about to respond, publicly, to what they read, and to do so almost instantaneously, so that the rest of us can benefit.
    Consider this an instantaneous public response from someone who at least feigns fluency.
    😉

  4. “It starts with a long NYT article”
    Argh at my repetition of this point! This is a NY Times Magazine piece. The newspaper and its news reporting are editorially separate from all of the following: the editorial department, the Magazine, the Sunday Book Review, and outside Op-Eds.
    Each is run and written by separate people (with occasional overlap between contributors to the Magazine and Book Review and the others, but no overlap in who runs the respective publications or departments), and each is editorially separate. They also have different policies about what they publish. (This is also why people are often puzzled that books are reviewed both by the newspaper and by the Sunday Book Review: they are different publications.)
    This may not seem important, but it very much is if you work for any of these folks, or know anyone who does, or wrongly assign responsibility for one publication’s work to another.
    Misattribution gives the wrong people responsibility for something they did not publish.
    A piece in the Magazine or Book Review is not a piece in the newspaper. Or vice versa.
    Note the credit in this specific case:

    Joe Nocera is a business columnist for The Times and a staff writer for the magazine.

    They carefully distinguish that these are two different jobs, for two different employers.
    It’s not a “New York Times article”; it’s a New York Times Magazine article, which is as different as an article in Time magazine is from an article at CNNMoney.com, which equally isn’t an article in Sports Illustrated, no matter that they are all owned by Time, Inc. It’s as wrong to mush these all together as it would be to attribute a piece in the old Life Magazine to “Time”. It’s the same deal: owned by the same people, ultimately, but run by different people, and separate publications.

  5. Nocera’s article is nicely done but it’s not perfect. However, both of the criticisms mentioned are a bit off the mark. On the first, anyone numerically literate in finance understands that return distributions are fat-tailed. Thus, the standard normal distribution is inappropriate – hence Nocera’s mention of kurtosis. Skewness takes us in a somewhat different direction, one Nocera discusses extensively without ever naming it. (That is, long periods of small positive returns with short bouts of crushing negative returns: a distribution with a “tail” to the left. A small numerical illustration of skew and kurtosis follows this comment.)
    VaR typically looks at, say, a 95% confidence interval. The construction of such an interval is independent of what the underlying distribution is. To the extent that the tails of the assumed distribution are “fat,” that 95% confidence interval will be wider, and there is no problem. The issue, as Nocera’s article suggests time and again, is whether the construction of the model is appropriate – e.g. whether it uses non-normal underlying distributions – and whether those using the model understood how to use it.
    The second criticism is also a bit off the mark, and it too is addressed in Nocera’s article. Goldman running the model every day and seeing a period of strange results in mortgages is a good example. The issue isn’t whether asset prices are like coin tosses or basketball games. Both analogies miss the point, unless you can construct a Darwinian coin! Financial markets evolve every single day. Most of the time we don’t see anything dramatic occur, although there are small but subtle changes going on. We see the accumulation as history.
    Take a simple example. Until 2003 there was no securitization of CRA (Community Reinvestment Act) loans. Then some bright soul had the idea of securitizing them. That caught on, amazingly fast in retrospect, and within a few months lots of CRA loans were being securitized. That securitization in turn changed incentives for mortgage originators to create more of those loans. Those incentives led to more fraud. And those incentives led to a movement out of Fannie and Freddie securities, which in turn helped prod Fannie and Freddie to become more aggressive in the subprime market. Everything happened gradually and the problems developed gradually. The recognition that there was a serious problem, however, came virtually overnight.
    Realistically, you have evolution here with minor changes taking place all the time. Very very rarely do you have a game-changing event like the drafting of Michael Jordan. The major events typically don’t get identified until after the fact and then you have a game-changing recognition of a game-changing event.
    Basically, you have a market that continuously evolves and the calculation of VaR – WHEN DONE CORRECTLY – will evolve as well. There is an issue that VaR fundamentally is backward-looking and thus must be interpreted cautiously.
    Nevertheless, my point is that both critiques of Nocera, while containing a kernel of truth, do miss their mark.
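
Here is a small numerical illustration of the skew and kurtosis Rich mentions. The data is simulated, of my own construction, not anything from Nocera’s article:

```python
# Hedged illustration of the moments mentioned above: negative skew
# (a long left tail) and excess kurtosis (fat tails). Simulated returns:
# mostly small gains, with occasional crushing losses.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
small_gains = rng.normal(0.05, 0.5, size=4_900)   # the long calm stretches
rare_crashes = rng.normal(-5.0, 1.0, size=100)    # the short brutal bouts
returns = np.concatenate([small_gains, rare_crashes])

print(f"skewness:        {stats.skew(returns):+.2f}")      # negative: left tail
print(f"excess kurtosis: {stats.kurtosis(returns):+.2f}")  # > 0: fatter than normal
# A 95% interval built from these data will be wider on the loss side than
# a normal fit would suggest, which is the point about interval construction.
```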

  6. On the first, anyone numerically literate in finance understands that return distributions are fat-tailed.
    Yes, and most of the people who are numerically literate in finance understand things wrong. Returns can’t be modeled by any single distribution. They are not simply fat-tailed, asymmetric distributions, as you imply.
    Using a distribution to model something implies that each of the draws from that distribution is independent. In finance, that independence doesn’t exist. You even touch on this, but miss its importance. You can’t do VaR correctly, in the sense of giving a value that would be lost at a 95th percentile move. Attempting to do so means that you are making assumptions that do not hold.
    Finance needs to come to grips with the problem that it can’t produce the kind of assurances that it has been claiming for two decades. There are things that can’t be quantified, and finance is awash in them.

  7. I would consider the VaR story a primer on why using one number to make decisions about complex issues is a bad idea. As a quant myself, there have been multiple occasions where my company has asked me to “create one number” that will allow them to understand a complex situation. They just want one number: when it goes up, they want to be happy, and when it goes down, they will fire people.
    This, as I may have implied when they asked me to do it, is stupid.
    When you try to simplify a complex problem like risk, you will inevitably miss key points. Any single number you use will come with serious caveats, and if you don’t keep those caveats in mind you will end up making stupid decisions. (Or you will make it easy for other people to deceive you. Same difference, really.)
    The VaR is, in my opinion, just a symptom of people’s desire to oversimplify complex problems.

  8. Gary — In all sincerity, your post above brought a smile to my face. I hope you are feeling a little better than recently.

  9. VaR also embedded untested assumptions about what the actual distribution and deviations are, and it relied on the rest of the players in the market maintaining the same risk profile that the VaR assumptions attributed to them. If you don’t know the actual risk profile of the current financial environment, you don’t actually know what percentile you have dealt with in analyzing your own risk or the market’s.
    Add to that the pressure from executives with bonus programs that encourage imprudent risks, marketing incentives that reward imprudent sales, shareholders who like simplistic comparisons with XYZ Corp. and reward those who understate apparent risk while increasing actual risk, the interaction of business and segment risk with market risk, and other internal and external incentives to mislead folks about the risks undertaken (can you still buy an AAA rating from S&P for a security that the S&P analyst cannot understand?), and you end up with totally useless risk analysis.

  10. Shinobi –
    Risk management for investments has always been an area that is tolerated at best. Companies appoint a chief risk officer because someone told them they have to and then do their best to ignore them. Given that actuaries and other risk analysts are not generally known for their interest in corporate politics and infighting, the only time they are seriously listened to is after a disaster like this.
    The credit crunch we are seeing now shows that bankers still do not understand their risk managers. Just as they blindly ignored the risks they were taking a few years ago, now they are blindly fearful of every tiny risk that comes along.
    Banks that lend wisely today are going to be the biggest beneficiaries of this economic cycle since they will be able to get a larger risk premium than reasonable only because the panic in the market has caused another overreaction.

  11. “Using a distribution to model something implies that each of the draws from that distribution is independent. In finance, that independence doesn’t exist. You even touch on this, but miss its importance. You can’t do VaR correctly, in the sense of giving a value that would be lost at a 95th percentile move. Attempting to do so means that you are making assumptions that do not hold.”
    Michael, a couple of points here. (1) The values need not be independent, aren’t likely to be independent, and that is recognized. That’s why there’s an emphasis on autocorrelation and on adjusting for the lack of independence. Whether that adjustment is sufficient is open to debate. (There’s also adjustment for heteroskedasticity, that is, for a changing variance over time; a sketch of one such adjustment follows this comment.)
    (2) One can debate whether a single distribution is appropriate or whether a combination of distributions is more appropriate, but the bottom line is that there is some distribution which may well be a combination of distributions and may well not be “neat” in the sense of normal or even a simple skew.
    (3) Most fundamental, anyone in finance who has given assurances like “VaR will tell you all you need to know about risk” is guilty of malpractice. There need to be some serious caveats about the limits of any calculation.
    When you walk out the door in the morning, you assume that you won’t be hit by a meteorite. When you cross the street you assume you won’t be hit by a bus. When you brush your teeth in the evening you assume that no one has poisoned your water supply. You can’t avoid making assumptions and you likely can’t even quantify all the things that you’re making assumptions about. However, you do need to make decisions and take actions, and those actions are going to be based on your underlying assumptions.
    To the extent that quantitative analysis and VaR allow you to make better decisions, then it is of value. To the extent that quantitative analysis and VaR hides some assumptions or makes incorrect assumptions or to the extent that you do not understand the limits of VaR, then it is not of value. For many, the value has been overstated. For some, the value is very very great.
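
As a sketch of the “changing variance” adjustment mentioned in point (1): an exponentially weighted moving variance in the RiskMetrics style lets the VaR estimate track volatility over time. The code is my own illustration, the 0.94 decay is the classic RiskMetrics daily choice, and the last step still leans on a normal quantile, so treat it as a sketch rather than a fix:

```python
# Illustrative RiskMetrics-style EWMA volatility: the variance estimate
# updates each day, so the VaR figure evolves as the market does.
import numpy as np

def ewma_vol(returns: np.ndarray, lam: float = 0.94) -> np.ndarray:
    """Exponentially weighted moving volatility (RiskMetrics-style)."""
    var = np.empty(len(returns))
    var[0] = returns[0] ** 2
    for t in range(1, len(returns)):
        var[t] = lam * var[t - 1] + (1 - lam) * returns[t] ** 2
    return np.sqrt(var)

rng = np.random.default_rng(4)
r = np.concatenate([rng.normal(0, 1.0, 250),   # a calm year
                    rng.normal(0, 3.0, 50)])   # then a stressed stretch

var_99 = 2.326 * ewma_vol(r)  # normal 99% quantile times current volatility
print(f"VaR on the last calm day: {var_99[249]:.2f}")
print(f"VaR after the stress:     {var_99[-1]:.2f}")  # adapts, with a lag
```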

  12. When you walk out the door in the morning, you assume that you won’t be hit by a meteorite. When you cross the street you assume you won’t be hit by a bus. When you brush your teeth in the evening you assume that no one has poisoned your water supply. You can’t avoid making assumptions and you likely can’t even quantify all the things that you’re making assumptions about. However, you do need to make decisions and take actions, and those actions are going to be based on your underlying assumptions.
    Absolutely. However, just as it would be stupid of me to allow uncertainty to prevent me from acting, it would also be stupid of me to think that I can quantify the risks of something that can’t be quantified. My problem is that something like VaR doesn’t allow anyone to make better decisions about risk. We may want it to. We may even need it to, but that doesn’t mean that it can. Pretending that it does isn’t helpful.
    Yes, you do need the draws to be independent in order to be able to construct a probability distribution. Autocorrelation, heteroskedasticity, and all of the other tools of time series analysis don’t change that. They simply change the nature of the model you are using. They do allow for the model to update itself. What they do not allow for is the self-referential nature of the modeling.
    If you distribute a tool like VaR, and everyone starts using it, that affects the outcomes. The behavior of the markets is dependent upon the tools that are used to measure it. That inherently makes the tools unreliable, because they can’t capture their own effects.
    One of the things that finance must come to grips with is the fact that it can’t quantify risks in a rigorous way. It just can’t, no matter how badly everyone wants to be able to do so. The models will always break down, and they will always do so at the worst possible moment, because those are the moments when the effects of the models on the system are greatest.

  13. Rich S. and J. Michael Neal are just messing with us. No way is “heteroskedasticity” a real word. 😉

  14. “The behavior of the markets is dependent upon the tools that are used to measure it. That inherently makes the tools unreliable, because they can’t capture their own effects.”
    Haha, yeah, this is the point that there is really no way to address. I also don’t think that autocorrelation is sufficient, because it assumes linear relationships. In fact any statistical measure is basically impossible, because the correlation strength (from a linear perspective) changes drastically over the course of a coupled nonlinear system’s evolution.
    My job is to try to analyze biological time series, and it is insanely tough. We’re making progress by using a variety of tools that approach from different angles (a “probability distribution”, even of a non-independent source, gives a lot of information, and so does the variance, even though neither “describes” the system, because they aren’t the right type of object), but at least the systems don’t change. If the mere characterization of them caused a change, the job would be impossible.
    And don’t even get me started on coupled nonlinear systems. Even a very basic two-system case is nearly impossible to analyze accurately at this point, and the linear relationships are in constant flux.
    That said, I don’t think things are hopeless for finance. We just need a rules-of-thumb framework, and it’s a constant process. What I don’t get is that nearly everyone warned of collapse – arguing that we were spending too much as a country, that the world was over-utilizing natural resources, that there was too much income disparity, etc. – and now we are surprised that things broke down. There are things fundamentally wrong with the models, but the greater issue was things wrong with the fundamentals.

  15. Michael, to say that the draws must be independent in the way that I think you’re using the term is to adopt a nihilist view of statistics in general. Even flipping a coin would not be a random act, e.g. do you start with heads up or down and how “aggressively” do you flip the coin? But typically you don’t “construct” a probability distribution; you assume that the data is characterized by a distribution.
    You’re correct to note that when everyone starts using VaR you change the very outcome. But that does not mean the approach is inappropriate and in particular it does not imply that “VaR doesn’t allow anyone to make better decisions about risk.” It means only that you do not find the results useful to you and that you believe the results have been oversold and even that VaR models need to change over time. I’ve no argument with any of those statements.
    What I do strongly disagree with is the general statement that “finance must come to grips with … that it can’t quantify risks in a rigorous way.” One needs to distinguish between risk and uncertainty. To the extent that we’re dealing with risk – where we know the outcomes and can assign probabilities, even subjectively – then tools like VaR are very useful for some of us. When we’re dealing with uncertainty – in this case where we don’t know all the outcomes – then the typical tools of finance are much less useful.
    Perhaps the point should be that financial models aren’t equipped to deal with uncertainty and we keep trying to make the world look like one dominated by risk rather than uncertainty. I have no problem with using risk measures because they’re useful to me and because I know the limitations. And that’s why I keep stumbling on the notion that “VaR doesn’t allow anyone to make better decisions about risk.” I know what the model is saying – and what it cannot state. And I’m better off knowing what it says than not knowing, even if some are given a false sense of certainty by a single number.

  16. Rich S: That was a very good comment. It’s something I’ve run into a lot in my own experience, and it really is a paradigm shift for science, and for the meaning of rationality in general – not just finance. I could go on and on about how the mechanistic, probabilistic-risk worldview you describe is what has separated modern viewpoints from some very “obvious” ones. Really, I am very uncertainty-oriented in my own understanding, but I believe that the best measures we have for trying to grasp what we do know are still statistical.
    The problem, of course, is that economics and finance have a huge influence on real life, and the overreliance on models has created a situation where all the overlapping risks start generating uncertainty.

  17. But, Rich, surely uncertainty is a risk like any other, and insurers have been quantifying risk for over a century. Tools exist to evaluate whether you are being rewarded appropriately for the risks you have chosen to incur, but they are only good if you use them and do not lie to yourself about whether you have actually taken on any risks and what the uncertainty of the outcome is.
    Credit default swaps are a perfect example of poor evaluation of risk. They were designed to be an analogue of credit insurance, so it would have been reasonable to treat them as such, but the sellers of the ‘insurance’ portion, even companies that owned insurers, didn’t appear to take the risk seriously and appear to have assumed that they were only taking enterprise-level risks with no market risk involved, an assumption that their actuaries could quickly have pointed out was wrong.
    Real insurance companies lay off their excess risk, the risk to the far right in the tail, to reinsurers, but folks who were estimating their assets at risk in these investments didn’t lay off the risk, they appear to have ignored it.

  18. (3) Most fundamental, anyone in finance who has given assurances like “VaR will tell you all you need to know about risk” is guilty of malpractice. There need to be some serious caveats about the limits of any calculation.
    The real problem is that this approach doesn’t work. If you make a number that looks as though it’s putting a dollar value on risk, that’s the way that people will wind up interpreting it. It doesn’t matter how many caveats you put onto it, or how carefully you try to describe the method’s limitations. People will eventually start paying attention to the bottom line and ignoring the caveats; that’s just the way people think.

  19. Rut-roh. Bayesians vs frequentists argument rages on. Pretty soon pirates and ninjas will pick sides, and then we’ll all be in trouble. 🙂

    I have no problem with using risk measures because they’re useful to me and because I know the limitations.

    There are a couple of reasons I’m with J. Michael on this. You have perfectly legitimate reasons to believe that you know the limitations, but models like VaR are inherently problematic. Quantifying risk is inherently problematic, and relying on it for decisions about what to do in the system being modeled creates a distinct, non-modelable risk all its own. See John Boyd or the evolution of the brain for details.
    The problem everybody talks about nowadays is that as soon as a very low probability event occurs, your mathematical model of conditional outcomes turns out, ipso facto, to have been horribly wrong. See Taleb for details, though he wasn’t the first by a long shot.
    But the danger starts well before that. The more sensitive your model is to the observed distribution of high probability events, the more useful it is in the absence of any very low probability events. Useful models of complex nonlinear processes have to be hypersensitive to their inputs, because the process being modeled is hypersensitive to its inputs. Less sensitive models with bigger margins of error are more robust, but — duh — correspondingly less predictive. See Lorenz for details, and he really was the first as far as we know.
    Once you decide to feed the (now amplified) output of your high sensitivity model back into the system you lose your ability to predict whether you’re about to trigger some sort of cascade and destabilize the system you’re modeling. So yeah, basically you’re up against a law of nature — the law that the only way to “quantify” a risk is by obscuring other risks. Even if you switch to agent based models, where the nature of the problem changes somewhat, it doesn’t go away. The only “solution” is to use a wide variety of structurally divergent models in parallel, which is very expensive, and more of a procedural improvement than a solution.
    This isn’t an argument against any use of models like VaR whatsoever. It’s just an argument that “knowing the limitations” should mean “expecting the model to fail unexpectedly, and possibly take the system down with it.”

  20. The problem here is assuming that VaR is an honest attempt to measure risk rather than part of a sales pitch. It allows the financial firms to say to investors – we’ve measured the risk and it’s not that bad so buy buy buy! Trust us the math says so.


  21. Why would reification occur? One theory is that people were simply blinded by complex math; another theory is that this emerged as a result of the incentives faced by traders and regulators. My vote is on the latter option.

    Barbar brings up a key point which tends to get lost in the epistemological dispute over risk modeling – even given perfect models with perfect inputs, our financial system was set up to fail because of an asymmetry between the risks undertaken by the banks and hedge funds on the other hand, and the consequences of those risks, to which they were exposed.
    I made this point at greater length over on Yves Smith’s blog – while the banks et al. claimed they were engaged in “risk management”, what they were really engaged in was “consequence management”, because consequences larger than the failure of the firm were not particularly relevant or of concern to them.
    Our system of limited liability creates a terminal point out on the tail of bad consequences (i.e. Ch 7 liquidation at the level of a firm, loss of career at the level of an individual trader) beyond which no additional meaningful pain is imposed on that decision maker (other people feel the pain instead). Beyond that point on the distribution of possible outcomes, we had actors taking risks without any additional consequence for them personally. The risk distribution may have been fat-tailed, but the distribution of consequences was not fat-tailed beyond the point of maximum penalty. (A toy numerical version of this asymmetry follows this comment.)
    This skewed set of incentives (essentially playing with other people’s money) is likely to produce high stakes gambling behavior with socialization of the losses, and that is exactly what we got.
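
Here is a toy numerical version of that asymmetry, with entirely hypothetical numbers: truncate the downside of a fat-tailed bet at the point where limited liability kicks in, and a bet with negative expected value overall becomes positive expected value for the decision maker:

```python
# Hypothetical sketch of the limited-liability point: capping the downside
# makes a socially bad bet privately attractive.
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
# A bet that usually pays a little and occasionally loses catastrophically.
pnl = np.where(rng.random(n) < 0.99,
               rng.normal(1.0, 0.5, n),      # 99% of the time: small gains
               rng.normal(-200.0, 20.0, n))  # 1% of the time: crushing losses

floor = -10.0                            # worst the agent can personally lose
agent_view = np.maximum(pnl, floor)      # liquidation / fired: pain stops here

print(f"Expected P&L overall:       {pnl.mean():+.2f}")        # negative
print(f"Expected P&L to the agent:  {agent_view.mean():+.2f}") # positive
# Beyond the floor the losses land on someone else, so high-stakes gambling
# is rational for the agent: exactly the asymmetry described above.
```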

  22. Editing/proofreading fail – bad writer, no doughnut.
    Strike the phrase “on the other hand” from my 1st paragraph above.

  23. The real issue surrounding VaR is summed up by this quote:
    “The story that I have to tell is marked all the way through by a persistent tension between those who assert that the best decisions are based on quantification and numbers, determined by the patterns of the past, and those who base their decisions on more subjective degrees of belief about the uncertain future. This is a controversy that has never been resolved.”
    — FROM THE INTRODUCTION TO ‘‘AGAINST THE GODS: THE REMARKABLE STORY OF RISK,’’ BY PETER L. BERNSTEIN
    In other words, there are those who believe that uncertainty can be quantified and those who don’t. It is either part of your belief system or it’s not.
    The ability to quantify the level of risk should be an aid to good decision making, not a substitute for it. We are kidding ourselves if we think VaR or any other sophisticated model can measure uncertainty – if it could, then, well, it would cease to be uncertain, wouldn’t it!
    The “art” (and I use that word deliberately) of risk management is to be able to manage uncertainty and while we should use all of the tools and models at our disposal we need to understand the limitations of those tools and models.
    Prior to the Tacoma Narrows bridge in Washington literally shaking itself apart through harmonic resonance, engineers did not factor that failure mode into their bridge designs. Now it is done as a matter of course.
    We need to try to understand what the next harmonic resonance will be, and put mitigation strategies in place to make our organisations more resilient against new and adverse events.
