Theories may be informative to agents. When agents' actions are determined by the predictions of a theory, those predictions endogenize the data against which the theory is tested. In this case, it is unclear in what sense such theories can be regarded as scientific. I characterize this problem in a general framework and state an idealized criterion for theory selection. I then informally suggest applications of this framework to the analysis of various settings.
The paucity of numerical data need not force scholars to rely on ad hoc empirical methods in their study of ancient economies. Using the method of partial identification, I give an example of how formal econometric analysis may be applied to the study of ancient economies, and show how this method allows scholars to demonstrate explicitly how their beliefs about the representativeness of archeological evidence affect their analyses.
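To illustrate the flavor of partial identification, here is a minimal sketch of worst-case (Manski-style) bounds on a population mean when a share of the evidence is missing. All numbers, including the assumed missing share and the logical bounds on the outcome, are hypothetical placeholders, not actual archeological data:

```python
# Minimal sketch of Manski-style worst-case bounds on a population mean
# when a fraction of observations was never recovered. All values below
# are hypothetical illustrations, not actual archeological data.
import numpy as np

observed = np.array([2.0, 3.5, 1.8, 4.2, 2.9])  # hypothetical recovered values
p_missing = 0.4           # assumed share of evidence never recovered
y_min, y_max = 0.0, 10.0  # assumed logical bounds on the outcome

mean_obs = observed.mean()
# The unrecovered evidence could lie anywhere in [y_min, y_max], so the
# population mean is only identified up to an interval.
lower = (1 - p_missing) * mean_obs + p_missing * y_min
upper = (1 - p_missing) * mean_obs + p_missing * y_max
print(f"identified interval for the mean: [{lower:.2f}, {upper:.2f}]")
```

Varying `p_missing` then makes explicit how beliefs about the representativeness of the surviving evidence widen or narrow the conclusions one can draw.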
The basic theory of oligopolistic competition implies that a firm's decision to maintain tacit collusion is a function of its discount rate, which governs its sensitivity to changes in future payoffs (Tirole, 1988). In basic models, the discount rate is determined by the risk-free rate of interest, which central banks can influence through monetary policy. I am interested in analyzing the relationship between monetary policy and the incidence of tacit collusion across markets. Rotemberg and Saloner (1986) analyze changes in firms' ability to maintain collusion in the presence of "booms." Dal Bo (2007) analyzes tacit collusion under randomly fluctuating discount rates. My plan would be to synthesize these frameworks: to analyze the effect on tacit collusion of fluctuations in discount rates as influenced by changes in demand. Empirically, I am interested in characterizing "bounds" on the linkage between monetary policy and competition in partial equilibrium.
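The mechanism can be made concrete with the textbook incentive constraint for grim-trigger collusion in a repeated Bertrand game with n symmetric firms: collusion is sustainable iff the discount factor satisfies delta >= 1 - 1/n. The sketch below checks this condition along a hypothetical path of discount factors, meant only to illustrate how policy-driven movements in delta could switch collusion on and off; it is not a model of any particular market:

```python
# Textbook incentive constraint for tacit collusion in a repeated Bertrand
# game with n symmetric firms under grim-trigger strategies. Each firm earns
# pi_m / n per period while colluding; deviating yields pi_m once, then zero
# forever. Sustainability requires (pi_m / n) / (1 - delta) >= pi_m,
# i.e. delta >= 1 - 1/n.
def collusion_sustainable(delta: float, n: int) -> bool:
    return delta >= 1 - 1 / n

n = 3  # hypothetical number of firms
for delta in [0.5, 0.6, 0.7, 0.8]:  # hypothetical path of discount factors
    print(f"delta = {delta}: collusion sustainable = {collusion_sustainable(delta, n)}")
```

With n = 3 the critical discount factor is 2/3, so the first two periods above cannot sustain collusion while the last two can; a proposed synthesis of the two frameworks would let this threshold itself move with demand conditions.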
In the context of merger review, evaluating a regulator's stringency equates to retrospectively determining
how well its behavior aligned with what would have been an optimal enforcement criterion. As Carlton (2009) notes,
determining the regulator's degree of stringency, e.g., whether it was too lax or too strict, requires comparing the realized effects
of proposed mergers with the regulator's beliefs about what those effects would be at the time of the proposal's review.
Recent retrospective evaluations of merger policy have adopted this perspective.
Bhattacharya et al. (2024) argue that a modest increase in stringency would have reduced prices in US consumer packaged
goods. Similarly, Brot-Goldberg et al. (2024) find that antitrust enforcement is relatively lax in the US hospital
sector given what regulators could have known about the realized effects of mergers. In the US specifically,
budgetary constraints or the biases of the judiciary may prevent regulators from committing to a certain degree of
stringency across merger proposals (Baer et al. 2020). Under these constraints, and recognizing that enforcement
has likely been too lax in the US within the last two decades, how can a regulator maintain an efficient level of stringency?
The merger review process in the United States enables regulators to extract information from firms pre-merger about their conduct
and the markets in which they operate. Given that post-merger enforcement can be costly and pre-merger challenges may fail, regulators
may be able to use merger review to reduce the cost of ex-post enforcement by inducing firms to partially reveal their type through the information
that regulators extract. Through this revelation, the regulator can credibly signal its willingness to pursue ex-post enforcement and thus
deter prospective merging firms from engaging in anti-competitive behavior.
The direct effect of minimum wage policy on poverty and unemployment is controversial both theoretically and empirically. The indirect effects may be even harder to pin down robustly, as they may be especially influenced by the idiosyncrasies of particular labor markets. For example, if a worker is receiving a wage below a prospective floor, that worker will likely receive some form of increased compensation once the floor is imposed, or be dismissed. One interesting question is how workers receiving a wage above (but close to) the new minimum are affected. One might reason that these workers have skills similar to those of workers at the minimum, but benefited from frictions in the labor market (or from capitalizing on networks, etc.). How is the mobility of these workers affected? Does the friction shift with the newly imposed floor, or do firms attempt to compress wages toward the new minimum? Are these workers less able to leverage an outside option in their wage negotiations as the distance of their wage from the minimum decreases? Empirically, can one robustly identify a minimum threshold for wage increases that predicts a spillover effect on mobility?
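One simple way the last question could be approached is a binned comparison: group workers by the distance of their pre-policy wage to the new floor and compare job-to-job transition rates before and after the policy. The sketch below runs this comparison on simulated data in which, by construction, only workers within one dollar of the floor lose mobility; every number is a synthetic placeholder, not an empirical estimate:

```python
# Sketch (on synthetic data) of a binned comparison for detecting a mobility
# spillover threshold near a new wage floor. All data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
wage_gap = rng.uniform(0, 5, n)  # pre-policy wage minus new minimum, dollars

# Hypothetical truth built into the simulation: workers within $1 of the
# floor lose 5pp of job-to-job mobility after the policy; others do not.
p_move_pre = 0.20
p_move_post = np.where(wage_gap < 1.0, 0.15, 0.20)
moved_pre = rng.random(n) < p_move_pre
moved_post = rng.random(n) < p_move_post

bins = np.digitize(wage_gap, [1.0, 2.0, 3.0, 4.0])  # $1-wide distance bins
for b in range(5):
    mask = bins == b
    diff = moved_post[mask].mean() - moved_pre[mask].mean()
    print(f"gap bin {b}: change in mobility = {diff:+.3f}")
```

In this simulation only bin 0 shows a clear decline, which is the pattern a spillover threshold at $1 would produce; the empirical question is whether such a discontinuity appears in real matched worker data.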
Many models of risk management rely on distributional assumptions about the risk associated with portfolios. In particular, FHVaR relies on assumptions about the stability of the distribution of returns over time. When these assumptions are relaxed, the VaR and the requisite level of margin to protect against risk become uncertain. I would like to study whether determining margin using rules from the theory of decision under uncertainty may actually benefit trading firms, and the extent to which market risk is sensitive to these benefits. Intuitively, the distribution of required margin over a time series may differ across assumed distributions of market risk. What gains can be made by relaxing these assumptions on market risk, and what cost does this entail for risk managers?
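One natural decision rule to study is an ambiguity-averse ("maxmin") margin rule: rather than fixing a single return distribution, compute the 99% VaR under each of a set of candidate distributions and set margin to the worst case. The sketch below illustrates this with two hypothetical candidates with equal volatility but different tails; the distributions, parameters, and confidence level are illustrative assumptions, not a calibrated risk model:

```python
# Sketch of an ambiguity-averse ("maxmin") margin rule: compute 99% VaR
# under each candidate return distribution and require margin equal to the
# worst case. Candidates and parameters are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
sigma = 0.02  # assumed daily return volatility

# Two candidate models of daily returns with the same variance but
# different tail behavior (Student-t with df=4 rescaled to volatility sigma).
candidates = {
    "normal": rng.normal(0.0, sigma, n),
    "student_t_df4": rng.standard_t(4, n) * sigma / np.sqrt(4 / (4 - 2)),
}

# 99% VaR as the (positive) loss at the 1st percentile of simulated returns.
var99 = {name: -np.quantile(r, 0.01) for name, r in candidates.items()}
margin = max(var99.values())  # maxmin rule: guard against the worst candidate

for name, v in var99.items():
    print(f"{name}: 99% VaR = {v:.4f}")
print(f"required margin (worst case): {margin:.4f}")
```

Even with identical volatilities, the fat-tailed candidate produces a larger 99% VaR, so the maxmin margin exceeds what a single-distribution normal model would require; comparing that extra margin to the losses it averts is the kind of cost-benefit trade-off the proposal would quantify.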