Experimental testing of old and new hypotheses in Economics

Abstract: Chapter summaries

Evaluating Replicability of Laboratory Experiments in Economics

The reproducibility of scientific findings has been called into question. To contribute data about reproducibility in economics, we replicate 18 studies published in the American Economic Review and the Quarterly Journal of Economics in 2011-2014. All replications follow predefined analysis plans publicly posted prior to the replications, and have a statistical power of at least 90% to detect the original effect size at the 5% significance level. We find a significant effect in the same direction as the original study for 11 replications (61%); on average the replicated effect size is 66% of the original. The reproducibility rate varies between 67% and 78% for four additional reproducibility indicators, including a prediction market measure of peer beliefs.

Trading performance in prediction markets with different structures

This paper presents preliminary evidence on how researchers in the field of psychology judge the replicability of the 28 effects replicated in the Many Labs 2 project. We use individual surveys in combination with prediction markets to elicit beliefs about two replication success metrics: whether the estimated effect in the replication study is statistically significant, and what the ratio between the original and replicated effect size is. We find that survey answers and final market prices are very highly correlated for the binary measure, suggesting that the prediction markets provide little additional value, but that the correlation is lower for the effect size measure.

The impact of decision rules on the predictive accuracy of decision markets

An appealing prospect of prediction markets is that their estimates of how likely future events are to occur can be used as inputs when making a decision. Prediction markets used in this way to guide decisions are called decision markets. These have stricter requirements on how scoring rules (payment schemes) must be specified to guarantee that traders are incentivized to trade according to their beliefs. They also require that decision rules (the link between market outcomes and what decision is taken) are specified in certain ways. We let participants trade on hypothetical markets using three different combinations of these rules to explore how the predictive accuracy of the markets is affected. Our main finding is that the decision markets perform worse than traditional prediction markets, likely due to their increased complexity, but that there is little impact of the specific rules used.

Gamelab: An online game-theory laboratory

The Gamelab platform offers a novel and easy way to perform experiments in game theory. Its options are flexible enough to allow for a wide range of experiments. It is particularly well designed for play against anonymous and randomly drawn opponents. Thanks to its responsive design it can be used on almost any device with internet access. Here we report the implementation of experiments in two different settings. In both settings, the subjects were given data about past aggregate play of the same game, thus enabling social learning about how to play. The platform thus provides a tool to test non-cooperative solution concepts.
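The 90% power requirement in the replication chapter above implies a concrete sample-size calculation for each study. A minimal sketch of such a calculation, assuming, purely for illustration, a two-sample design with a standardized original effect size of d = 0.5 (the actual replications each used the original study's own design and effect size):

```python
# Minimal power-analysis sketch: the per-group sample size needed to
# detect a given original effect size with 90% power at the 5% level.
# The effect size d = 0.5 is a hypothetical stand-in; the replicated
# studies each had their own designs and effect sizes.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,   # standardized (Cohen's d) original effect size
    alpha=0.05,        # 5% significance level
    power=0.90,        # at least 90% power, as in the replication plans
    alternative="two-sided",
)
print(f"Required sample size per group: {n_per_group:.0f}")
```

With d = 0.5 this comes out to roughly 85 subjects per group; smaller original effects drive the required sample size up quickly, which is why the replications were powered against each original effect size individually.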
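The chapter summaries do not name the market mechanism used in the prediction-market chapters; a common choice for markets of this kind is Hanson's logarithmic market scoring rule (LMSR), sketched below as an assumption rather than a description of the actual implementation. Under LMSR the price of a binary contract can be read directly as the market's current probability that, say, a replication succeeds:

```python
import math

def lmsr_price(q_yes: float, q_no: float, b: float = 100.0) -> float:
    """Instantaneous LMSR price of the YES contract, interpretable as
    the market's current probability that the event (e.g. a successful
    replication) occurs. b controls market liquidity."""
    e_yes = math.exp(q_yes / b)
    e_no = math.exp(q_no / b)
    return e_yes / (e_yes + e_no)

def lmsr_cost(q_yes: float, q_no: float, b: float = 100.0) -> float:
    """LMSR cost function; a trader buying shares pays
    cost(after trade) - cost(before trade)."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

# Example: traders have bought 50 YES and 20 NO shares in total.
p = lmsr_price(50, 20)
print(f"Market probability of replication: {p:.2f}")  # about 0.57

# Cost to a trader of buying 10 more YES shares at this state:
pay = lmsr_cost(60, 20) - lmsr_cost(50, 20)
print(f"Price paid for 10 YES shares: {pay:.2f}")
```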
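As a hedged illustration of why decision markets constrain the decision rule: if only the chosen action's market is ever scored, traders can profit from distorting prices, and one known remedy from the decision-market literature is a stochastic decision rule that gives every action a strictly positive chance of being chosen (combined with appropriately rescaled scoring). The sketch below is illustrative and is not a description of the specific rule combinations the chapter tested:

```python
import random

def full_support_decision(market_probs: list[float], epsilon: float = 0.05) -> int:
    """Hedged sketch of a full-support stochastic decision rule for a
    decision market: the action the market rates highest is chosen with
    high probability, but every action keeps at least an epsilon chance.
    Rules of this kind, with suitably rescaled scoring, are one known way
    to keep traders incentivized to report their true beliefs when only
    the chosen action's market is ever scored."""
    n = len(market_probs)
    best = max(range(n), key=lambda i: market_probs[i])
    weights = [epsilon] * n
    weights[best] = 1.0 - epsilon * (n - 1)
    return random.choices(range(n), weights=weights, k=1)[0]

# Example: three candidate actions with market-implied success probabilities.
probs = [0.55, 0.30, 0.15]
choice = full_support_decision(probs)
print(f"Action chosen: {choice}")  # action 0 with probability 0.90
```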
Demand effects of consumers' stated and revealed preferences

Knowledge of how consumers react to different signals is fundamental to understanding how markets work. The modern electronic marketplace has revolutionized the possibilities for consumers to gather detailed information about products and services before purchase. Specifically, a consumer can easily, through a host of online forums and evaluation sites, estimate a product's quality based on either i) what other users say about the product (stated preferences) or ii) how many other users have bought the product (revealed preferences). In this paper we compare the causal effects on demand of these two signals, based on data from the biggest marketplace for Android apps, Google Play. The data consists of daily information, for 42 consecutive days, on more than 500,000 apps from the US version of Google Play. Our main result is that consumers are much more responsive to other consumers' revealed preferences than to others' stated preferences. A 10 percentile increase in displayed average rating increases downloads by only about 3 percent, while a 10 percentile increase in displayed number of downloads increases downloads by about 25 percent.
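The headline numbers above correspond to per-percentile semi-elasticities in a log-linear demand model. A hedged sketch with simulated data and hypothetical column names (rating_pct, downloads_pct), calibrated so that the quoted 10-percentile effects are roughly recovered; the paper's actual identification strategy is not detailed in this summary:

```python
# Hedged sketch of the comparison described above: regress log downloads
# on the percentile ranks of the two displayed signals. The column names
# and the data itself are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "rating_pct": rng.uniform(0, 100, n),     # displayed-rating percentile
    "downloads_pct": rng.uniform(0, 100, n),  # displayed-downloads percentile
})
# Simulate downloads with roughly the effects quoted in the summary:
# +3% per 10 rating percentiles (log(1.03)/10 ~ 0.0030 per percentile),
# +25% per 10 download percentiles (log(1.25)/10 ~ 0.0223 per percentile).
df["log_downloads"] = (
    0.0030 * df["rating_pct"]
    + 0.0223 * df["downloads_pct"]
    + rng.normal(0, 0.5, n)
)

model = smf.ols("log_downloads ~ rating_pct + downloads_pct", data=df).fit()
# Coefficients are per-percentile semi-elasticities; multiplying by 10
# recovers approximately the 3% and 25% effects quoted in the summary.
print(model.params[["rating_pct", "downloads_pct"]] * 10)
```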
