Not too long ago, I questioned Lord Keynes (LK) on the issue of whether governments reduce uncertainty, in his post reflecting on Dave Prychitko’s article on the Financial Instability Hypothesis (FIH). For the most part, I agree with LK’s criticism of the article. While I liked that Austrians were willing to engage with the FIH, they seemed to engage with it only in a restricted sense, since Prychitko appeals to the ABCT throughout the article. But Dave did hit on one point which I think is a spot-on criticism: he states that the FIH talks about uncertainty (and expectations) but implies that government can reduce that uncertainty. This is exactly the same criticism I make in my two posts on uncertainty (here, and here), except that instead of talking specifically about the FIH, I talk broadly about ontological uncertainty and Paul Davidson’s work on it.
LK replies by asking 1) whether I recognize that there are things not subject to uncertainty, but rather to risk, and 2) by stating that intervention implies different outcomes for a system than it would have without intervention.
1) Yes, I agree there are systems that are not subject to uncertainty and may instead be subject to risk. But I do not see how this makes a difference to my overall criticism. Obviously, I do not see markets (and reality in general) as a matter of risk but as one of uncertainty.
2) I do not really disagree with LK’s statement that a system without intervention has different outcomes than one with intervention, though I would say that both systems may share some of the same outcomes too. But, again, this says nothing about reducing ontological uncertainty. Unless we know all the outcomes, such that we can apply risk, we are still living in a world where “each step is a step into the void”; moreover, this completely ignores the element of surprise. This is why Shackle takes the quite radical position that uncertainty “goes beyond the reach of legislation or improvement of organization and technology.” Jack Wiseman, in his article ‘Costs and Decisions’, talks about essentially the same thing. Current economic theories and models are concerned only with the outcomes they can imagine, and thus confine their analysis to these given outcomes. They thereby ignore the element of surprise, in which outcomes not contained in the model actually happen.
Unlearningecon replies to my second post on uncertainty by saying that while I disavow a state-vs-market mentality, I still use one. But I do not see how I use it. I understand that the market is one that arises under spontaneous and political institutions, as Menger would say, but this is the same market which I say is ontologically uncertain. In fact, I would claim that it is the post Keynesians who fall into the state-vs-market mentality. Instead of viewing markets as they currently exist in reality as ontologically uncertain, they view only a market with insufficient intervention, or an unhampered market, as an ontologically uncertain one. As I stated in a comment reply to Unlearningecon:
[Post Keynesians], on one hand, talk perfectly about the dangers of using historical data to predict the future, and thus reject a predictive state, but fail to see how that applies to policy as well. They speak perfectly about the flaw of assuming that increased knowledge means reduced uncertainty, because data is changing and we don’t know the ‘amount’ of data needed to even make such a statement, but fail to see how this applies to government policies in the market. It’s almost as if the [post Keynesians] are implying that reality is not predetermined, so we should be skeptical of data, since historical data does not imply that we know anything about future states; yet once there is a sufficient amount of intervention in the market, there can be such a predictive state, and thus it might be reliable to use historical data as a good source for determining future states.
I don’t see how this follows. The problem of uncertainty is still with us. This seems close to special pleading, in which uncertainty is applied to markets but not to the state.
Unlearningecon also suggests that I am treating governments and uncertainty superficially, but I am not. I stated:
I understand government doesn’t just wave a wand and say, “we fixed uncertainty” — if only that were the case. But the question to ask is, “How does government limit uncertainty? What tools and data do they use to interpret the results?” If we hold to the three realities of uncertainty (ontological uncertainty, non-ergodicity, transmutable reality), shouldn’t we still question those tools and data? Why isn’t historical data looked at with the same skepticism we applied when talking about markets and reality? Why is it now alright to say that governments reduce uncertainty, when data is still constantly changing, when the future remains to be written by the human actions we take in the present, and when each step is a step into the void? Shackle’s work on uncertainty, on which much of Davidson’s work is based, was meant to describe reality as a whole, not merely markets.
A couple of concluding remarks. First, I am not condemning intervention; I am questioning the idea that governments reduce uncertainty. For example, I do not agree with treating uncertainty as a matter of complexity — that is to say, I do not agree with the statement, “the increase of knowledge reduces uncertainty.” I am not condemning increased knowledge; I am questioning whether it reduces uncertainty. Second, I am not saying there aren’t economic regularities. What I am saying is that the existence of regularities does not mean they aren’t subject to surprise (ontological uncertainty). To use Shackle’s analogy, we live in a world similar to a kaleidoscope, in which a ‘twist of the hand’ — a change of news or data — can shatter one picture and make it into another.
It’s also worth noting that there are mainstream economists who understand this uncertainty problem. Wiseman comments on Frank Hahn, a Hicks-influenced economist:
These observations lead naturally to a consideration of the role of equilibrium. The concept has been central to the development of economists’ understanding of their universe. Its appeal is easy to appreciate: it generates harmonious outcomes, and facilitates the manipulation of problems by mathematical techniques. But, in the present state of our technical (mathematical) competence, it also restricts our field of enquiry to problems with a known (i.e. predetermined, pre-programmed) set of possible outcomes, and encourages the elevation of compatibility with consistency conditions over relevance to the real world in the specification of problems for study. This view does not go unrecognized, in quotes such as the one given earlier, and (at least in my own case) in discussion with colleagues. It leads most commonly to the view put forward in Hahn’s well-argued defence [‘On the Notion of Equilibrium in Economics’]. This acknowledges that we have no theory of learning in its entrepreneurial sense of identifying and acting upon new opportunities: much less a formal analysis embodying the unexpected. In these circumstances, Hahn argues, formal general equilibrium models embodying only routine learning behavior are the best we can hope to do – and of course such models are rich in technical interest and potential diversity. This seems to me to value formal elegance more than relevance. It will do if we see ourselves as ‘schoolmen manqué’, debating the niceties of pinpoint dancing. If we want better to explain the world we actually live in, it leaves too many problems unexplored.
I differ from Wiseman’s conclusion: while I understand his view that we may forever have to deal with theories that are ignorant of the future, I like to view economics as an evolutionary subject, so I remain optimistic that one day we will develop a theory with a satisfactory treatment of ontological uncertainty.