
Tuesday, February 18, 2020

18/2/20: Irish Statistics: Fake News and Housing Markets


My latest column for The Currency covers the less-public stats behind the Irish housing markets: https://www.thecurrency.news/articles/9754/fake-news-you-cant-fool-all-of-the-people-all-of-the-time-on-property-statistics.

Key takeaways:
"Irish voters cast a protest vote against the parties that led the government over the last eight years – a vote that just might be divorced from ideological preferences for overarching policy philosophy."

"The drivers of this protest vote have been predominantly based on voters’ understanding of the socio-economic reality that is totally at odds with the official statistics. In a way, Irish voters have chosen not to trust the so-called fake data coming out of the mainstream, pro-government analysis and media. The fact that this has happened during the time when the Irish economy is commonly presented as being in rude health, with low unemployment, rapid headline growth figures and healthy demographics is not the bug, but a central feature of Ireland’s political system."

Stay tuned for subsequent analysis of other economic statistics for Ireland in the next article.

Monday, October 9, 2017

9/10/17: Nature of our reaction to tail events: ‘odds’ framing


Here is an interesting article from Quartz on the Pentagon’s efforts to fund satellite surveillance of North Korea’s missile capabilities via Silicon Valley tech companies: https://qz.com/1042673/the-us-is-funding-silicon-valleys-space-industry-to-spot-north-korean-missiles-before-they-fly/. However, the most interesting bit of the article (from my perspective) relates neither to North Korea nor to the Pentagon, and not even to Silicon Valley’s role in the U.S. efforts to stop nuclear proliferation. Instead, it relates to this passage from the article:



The key here is an example of the link between our human (behavioral) propensity to take action and the dynamic nature of tail risks or, put more precisely, deeper uncertainty (as I put it in my paper on the de-democratization trend https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2993535, deeper uncertainty as contrasted with Knightian uncertainty).

Deeper uncertainty involves a dynamic view of the uncertain environment in which potential tail events evolve before becoming quantifiable and forecastable risks. This environment differs from classical Knightian uncertainty in so far as the evolution of these events is not predictable and can be set against perceptions or expectations that these events can be prevented, while at the same time providing no historical or empirical basis for assessing the actual underlying probabilities of such events.

In this setting, as opposed to the Knightian setup with partially predictable and forecastable uncertainty, behavioral biases (e.g. confirmation bias, overconfidence, herding, framing, base rate neglect, etc.) apply. These biases alter our perception of the evolutionary dynamics of uncertain events and thus create a reference point of ‘odds’ of an event taking place. The ‘odds’ view evolves over time as new information arrives, but the ‘odds’ do not become probabilistically defined until very late in the game.

Deeper uncertainty, therefore, is not forecastable, and our empirical observations of its evolution are ex ante biased to downplay one, two, or all of the dimensions of its dynamics:
- Impact - the potential magnitude of uncertainty when it materializes into risk;
- Proximity - the distance between now and the potential materialization of risk;
- Speed - the speed with which both impact and proximity evolve; and
- Similarity - the extent to which our behavioral biases distort our assessment of the dynamics.

Knightian uncertainty is a simple, one-shot, non-dynamic tail risk. As such, it is similar both in terms of perceived degree of uncertainty (‘odds’) and the actual underlying uncertainty.
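The gap between static Knightian ‘odds’ and dynamically evolving deeper uncertainty can be sketched in a toy simulation. Everything below is invented for illustration: the probability path, the perception lag, and the action threshold are my own assumptions, not parameters from any model discussed here.

```python
# Toy sketch of the 'odds' framing under deeper uncertainty.
# Perceived 'odds' lag the true, evolving tail probability, so a
# threshold-based decision rule triggers action only late in the game.
# All numbers, the lag, and the threshold are invented for illustration.

true_prob = [0.01, 0.02, 0.05, 0.12, 0.30, 0.60, 0.85, 0.95]  # true tail probability over time
lag = 2           # behavioral biases: observers perceive the probability with a lag
threshold = 0.25  # decision-makers act only once perceived 'odds' clear this bar

# Biased perception: at time t, observers see the probability from time t - lag.
perceived = [true_prob[max(0, t - lag)] for t in range(len(true_prob))]

true_alarm = next(t for t, p in enumerate(true_prob) if p >= threshold)
action_time = next(t for t, p in enumerate(perceived) if p >= threshold)

print(f"True probability crosses the threshold at t={true_alarm}")
print(f"Action is taken at t={action_time}, by which time the true probability is {true_prob[action_time]:.2f}")
```

In this toy setup, the true probability crosses the action threshold at t=4, but biased perception delays action until t=6, by which point the underlying risk has grown far past the level the decision rule was meant to catch.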

Now, materially, the upshot of these dimensions of deeper uncertainty is that in a centralized decision-making setting, e.g. in the Pentagon or in the broader setting of Government agencies, we only take action ex post, after the transition from uncertainty into risk. The bureaucracy’s reliance on ‘expert opinions’ to assess the uncertain environment only reinforces some of the biases listed above. Experts generally do not deal with uncertainty; they are, instead, conditioned to deal with risks. Experts give zero weight to uncertainty until the moment the uncertain events become visible on the horizon, or when ‘the odds of an event change’, just as the story told by Andrew Hunter in the Quartz article linked above illustrates. In other words, until risk assessment of the uncertainty becomes feasible.

The problem with this is that by that time, reacting to the risk can be infeasible or even irrelevant, because the speed and proximity of the shock have been growing, along with its impact, during the deeper uncertainty stage. And, more fundamentally, because the nature of the underlying uncertainty has changed as well.

Take North Korea: the current state of uncertainty in North Korea’s evolving path toward fully developed nuclear and thermonuclear capabilities is about the extent to which North Korea will be willing to use its nukes. Yet the risk assessment framework - including across a range of expert viewpoints - is about the evolution of the nuclear capabilities themselves. The train of uncertainty has left the station, but the ticket holders to policy formation are still standing on the platform, debating how North Korea can be stopped from expanding its nuclear arsenal. Yes, the risks of a fully armed North Korea are now fully visible. They are no longer in the realm of uncertainty, as the ‘odds’ of a nuclear arsenal have become fully exposed. But dealing with these risks is no longer material to the future, which is shaped by a new level of visible ‘odds’ concerning how far North Korea will be willing to go in using its arsenal for geopolitical positioning. Worse, beyond this there is a deeper uncertainty that is not yet in the domain of visible ‘odds’: the uncertainty as to the future of the Korean Peninsula and the broader region, which involves much more significant players - China and Russia vs Japan and the U.S.

The lesson here is that a centralized system of analysis and decision-making, e.g. the Deep State, to which we have devolved the power to create ‘true’ models of geopolitical realities, is failing. Not because it is populated with non-experts or is under-resourced, but because it is Knightian in nature: dominated by experts and centralized. A decentralized system of risk management is more likely to provide broader coverage of deeper uncertainty not because it can ‘see deeper’, but because, competing for targets or objectives, it can ‘see wider’, covering more risk and uncertainty sources before the ‘odds’ become significant enough to allow for actual risk modelling.

Take the story told by Andrew Hunter, which relates to the Pentagon’s procurement of the Joint Light Tactical Vehicle (JLTV) as a replacement for the faulty Humvee, exposed as inadequate by the events in Iraq and Afghanistan. The monopoly contracting nature of Pentagon procurement meant that until the Pentagon was publicly shown to be incapable of adequately protecting U.S. troops, no one in the market was monitoring the uncertainties surrounding the Humvee’s performance and adequacy in light of rapidly evolving threats. Had Pentagon procurement been more distributed and less centralized, alternative vehicles could have been designed and produced - and shown to be superior to the Humvee - under other supply contracts much earlier, in fact before the expert-procured Humvees cost thousands of American lives.

There is a basic, fundamental failure in our centralized public decision-making bodies - a failure that combines the inability to think beyond the confines of quantifiable risks with the inability to actively embrace the world of VUCA (volatility, uncertainty, complexity and ambiguity), a world that requires actively engaging contrarians not only in risk assessment, but in decision making. That this failure is being exposed in the cases of North Korea, geopolitics and Pentagon procurement is only the tip of the iceberg. The real bulk of the challenges relating to this modus operandi of our decision-making bodies lies in much more prevalent and better-distributed threats, e.g. cybersecurity and terrorism.

Tuesday, September 20, 2016

19/9/16: Survivorship Bias Primers


Not formally a case study, but worth flagging early on, especially in the context of our Business Statistics MBAG 8541A course discussion of weighted averages and stock market indices.

A major problem with historical data is the presence of survivorship bias, which distorts historical averages unless we weight companies entering the index by volumes. Even then, some issues arise.
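As a minimal numerical sketch of the distortion (all returns below are made up for demonstration): an average computed only over the companies that survive to the end of the sample can look healthy even when the full cohort, including delisted firms, lost money on average.

```python
# Toy illustration of survivorship bias in index-style averages.
# All figures are invented for demonstration purposes.

# Annual returns for five hypothetical companies; two fail mid-sample
# (None marks periods after delisting).
returns = {
    "A": [0.08, 0.10, 0.07],
    "B": [0.05, 0.06, 0.09],
    "C": [0.02, -0.60, None],   # delisted after year 2
    "D": [-0.10, -0.50, None],  # delisted after year 2
    "E": [0.04, 0.05, 0.06],
}

def mean(xs):
    return sum(xs) / len(xs)

# Survivor-only average: uses only companies still listed at the end.
survivors = [name for name, r in returns.items() if r[-1] is not None]
survivor_avg = mean([mean(returns[n]) for n in survivors])

# Full-sample average: includes the failed companies' realized returns.
all_avg = mean([mean([x for x in r if x is not None]) for r in returns.values()])

print(f"Survivor-only average return: {survivor_avg:.3f}")   # positive
print(f"Full-sample average return:   {all_avg:.3f}")        # negative
```

With these made-up numbers, looking only at survivors suggests a gain of roughly 6.7% per year, while the full cohort averages a loss of roughly 7.8% - the same data, with the failures quietly dropped.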

Here are a few links providing a primer on survivorship bias (we will discuss this in more depth in class, of course):



Monday, September 19, 2016

19/9/16: US Median Income Statistics: Losing One's Head in Cheerful Releases


In our Business Statistics MBAG 8541A course, we have been discussing one of the key aspects of descriptive statistics reporting encountered in media, business and official releases: the role that multiple statistics reported on the same subject can play in driving false or misleading interpretations of the underlying environment.

While publishing various metrics for similar / adjoining data is generally fine (in fact, it is better practice for statistical releases), it falls to the media and analysts to choose which numbers are best suited to describe specific phenomena.

In a recent case, the reporting of a range of metrics for U.S. median incomes for 2015 produced quite a bit of confusion.
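A minimal illustration of how two equally valid statistics on the same data can tell different stories (the income figures below are invented, not actual U.S. data):

```python
# Toy example: mean vs median income for the same hypothetical sample.
# All income figures are invented for demonstration purposes.
import statistics

incomes = [22_000, 28_000, 35_000, 41_000, 47_000,
           55_000, 68_000, 90_000, 140_000, 450_000]

mean_income = statistics.mean(incomes)      # pulled up by the top earner
median_income = statistics.median(incomes)  # the 'typical' household

print(f"Mean income:   {mean_income:,.0f}")    # 97,600
print(f"Median income: {median_income:,.0f}")  # 51,000
```

Here a single high earner nearly doubles the mean relative to the median, so a release quoting "average income" and one quoting "median income" would describe the same households very differently.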

Here are some links that explain:


So, as noted on many occasions in our class: if you torture data long enough, it will confess... but as with all forced confessions, the answer you get will bear no robust connection to reality...