Apr 8, 2008

Let's play a game.

I often make mental notes when reading: "Ah, THAT's an interesting connection! Great for the blog." Sometimes--often just before falling asleep--I actually write the posts in my head. But, alas, when I wake up the grain of interestingness or relevance has quickly vanished.

I'm struggling with this right now.

Sitting in my To Do box are an article by James Surowiecki on the bond wackiness that became widely known earlier this year and a printout of an interview with Nassim Nicholas Taleb, author of The Black Swan.

Since I can't remember exactly what struck me about these two pieces, I'm going to give you the two sections I carefully underlined back in February. The idea is that you'll tell me what you think they mean--and probably make more interesting connections than I can.

In the more probable case that no one responds, I'll cobble something together in a week or so.

Here you go. Have fun.

Surowiecki
In that sense, the potential collapse of monoline insurers looks like a classic example of what the sociologist Charles Perrow called a “normal accident.” In examining disasters like the Challenger explosion and the near-meltdown at Three Mile Island, Perrow argued that while the events were unforeseeable they were also, in some sense, inevitable, because of the complexity and the interconnectedness of the systems involved. When you have systems with lots of moving parts, he said, some of them are bound to fail. And if they are tightly linked to one another—as in our current financial system—then the failure of just a few parts cascades through the system. In essence, the more complicated and intertwined the system is, the smaller the margin of safety.
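
Perrow's argument lends itself to a toy simulation--mine, not his, and the numbers are arbitrary. Hold the number of parts and their individual reliability fixed, and just turn up the coupling:

```python
import random

def cascade_failure_rate(n_parts, p_fail, coupling, trials=100_000):
    """Toy Monte Carlo of Perrow-style tight coupling.

    Each of n_parts fails independently with probability p_fail.
    A failed part then knocks out each neighbor with probability
    'coupling' (a crude stand-in for tight linkage). Call it a
    system failure when three or more parts end up down.
    """
    system_failures = 0
    for _ in range(trials):
        down = [random.random() < p_fail for _ in range(n_parts)]
        spread = down[:]
        for i, failed in enumerate(down):
            if failed:
                for j in (i - 1, i + 1):  # neighbors in a chain
                    if 0 <= j < n_parts and random.random() < coupling:
                        spread[j] = True
        if sum(spread) >= 3:
            system_failures += 1
    return system_failures / trials

# Same parts, same part-level reliability; only the coupling changes.
print(cascade_failure_rate(20, 0.01, coupling=0.0))  # on the order of 0.1%
print(cascade_failure_rate(20, 0.01, coupling=0.9))  # roughly 100x worse
```

Nothing about the individual parts got less reliable; the margin of safety shrank purely because the failures stopped being independent.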

Taleb
Take the Google phenomenon or the Microsoft effect — "all-or-nothing" dynamics. The equivalent of Google, where someone just takes over everything, would have been impossible to witness in the Pleistocene. These are more and more prevalent in a world where the bulk of the random variables are socio-informational with low physical limitations. That type of randomness is close to impossible to model since a single observation of large impact, what I called a Black Swan, can destroy the entire inference.

This is not just a logical statement: it happens routinely. In spite of all the mathematical sophistication, we're not getting anything tangible except the knowledge that we do not know much about these "tail" properties. And since our world is more and more dominated by these large deviations that are not possible to model properly, we understand less and less of what's going on.
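
To make Taleb's point concrete, here's a rough sketch--the distributions and parameters are my own, picked purely for illustration. Draw two big samples from a thin-tailed world and they agree on the average; draw them from a fat-tailed world and a single monster observation can swing the whole estimate:

```python
import random

random.seed(1)

def pareto_sample(alpha, n):
    """Draws from a Pareto(alpha) distribution; low alpha = fat tails."""
    return [random.paretovariate(alpha) for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

# Thin tails: two independent samples agree closely on the mean.
g1 = [random.gauss(10, 2) for _ in range(10_000)]
g2 = [random.gauss(10, 2) for _ in range(10_000)]
print(f"Gaussian means: {mean(g1):.2f} vs {mean(g2):.2f}")

# Fat tails (alpha near 1): a handful of huge draws dominate,
# so the sample mean barely converges -- the 'Black Swan' effect.
p1 = pareto_sample(1.1, 10_000)
p2 = pareto_sample(1.1, 10_000)
print(f"Pareto means:   {mean(p1):.2f} vs {mean(p2):.2f}")
print(f"largest single draw: {max(p1 + p2):,.0f}")
```

Change the seed and the Pareto means bounce around while the Gaussian ones barely move--which is exactly what "a single observation of large impact can destroy the entire inference" looks like in practice.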
