Professor Borjas, whose work I recently cited, has a new blog. It promises to be stimulating and insightful, and should offer previews of his forthcoming book.
In his post introducing his recent research on the Mariel boatlift, he has a great graph, right off the bat. Pictures can be deceiving, of course, but I think that Card, Peri et al would have a hard time disputing the implications of this graph.
[…] scientist and patriotic immigration activist Dr. Norm Matloff writes on his blog that Harvard Labor economist George Borjas has a new blog of his own (LaborEcon at […]
> Pictures can be deceiving, of course, but I think that Card, Peri et al would have a hard time disputing the implications of this graph.
Agreed. Regarding the Card study of the Mariel boatlift, I recently heard an interesting Planet Money podcast titled “The Experiment Experiment”. A link to the podcast and a transcript of it can be found at http://www.npr.org/sections/money/2016/01/15/463237871/episode-677-the-experiment-experiment . It describes an effort to replicate the results of 100 experiments that had been published in three of the top psychology journals; only 39 of the 100 could be replicated. The podcast describes two factors that likely contributed to that result. The first is called the file drawer effect and is described in the podcast as follows:
GOLDSTEIN: Because it’s such a boring (laughter) finding, right? Like, you get this finding. Nobody’s going to publish it. And if you’re the researcher, you think – I’m not even going to send it off. I’m just going to stick my results here in this file cabinet with all the other failed experiments.
KESTENBAUM: Nosek says there’s even a name for this.
NOSEK: It’s called the file drawer effect. Journals are much more likely to publish a positive result than a negative result. In fact, 97 percent of results in psychology that are published are positive results.
Following is how the podcast describes the second factor:
KESTENBAUM: Nosek thinks there is another big issue going on here. And this one is a little touchier because it points to this human sort of weakness we have, our ability to trick ourselves, to subconsciously kind of skew the results.
A little further on, the podcast quotes Nosek on this issue:
NOSEK: When I do research in the laboratory, I have choices I make about how to analyze the data and about what of the data that I get to report. And so I might be more likely to find a way of analyzing the data that looks good for me – right? It confirms my hypothesis. It provides a result that’s exciting, that’s very publishable. I might decide that must be the right way to analyze the data, and I might do that while thinking and trying to be genuine and accurate. But – and the fact that I have a conflict of interest in this, where the results have implications for me and my career advancement, means that I might construct stories to myself that lead me to finding results and reporting results in literature that just are exaggerations of reality that just aren’t true.
Further on, the podcast reports on one idea to deal with these problems:
GOLDSTEIN: Nosek says there is this one thing that would go a long way toward fixing these problems – the file drawer effect and researchers tricking themselves – and this thing is pretty simple.
NOSEK: Before you do the study, you write down how you’re going to do it, how you’re going to analyze your data and what you’re going to try to learn.
KESTENBAUM: You write all that down and then you submit it to an online registry. That makes it impossible to change the rules as you go along. And then when you finish your experiment, you put your results in the registry, too, even if you do not find anything. Those results that would normally go in the file drawer – those get made public so that everybody knows what you found.
GOLDSTEIN: And this has, in fact, happened in other fields. In drug research a while back, they made this mandatory, and these have had a huge effect. According to this one analysis, before the registry was created, more than half of the published studies of heart disease showed positive results. After the registry was created, only 8 percent had positive results – from more than 50 to 8.
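To make those two mechanisms concrete, here is a rough back-of-the-envelope simulation of my own (not from the podcast or from Nosek’s project); the base rate of true effects, the sample size, and the number of analysis “looks” are all made-up numbers for illustration only:

```python
# Rough sketch (my own, not from the podcast): how the file drawer effect plus
# flexible analysis can fill a literature with positive results, and how a
# registry-style regime changes that. All parameters are made-up assumptions.
import random
import statistics

random.seed(1)

def one_study(true_effect, n=30, analysis_tries=1):
    """Return True if any of the analyst's 'tries' yields a 'significant'
    result (crude z-test on a sample mean; fresh draws per try are a crude
    stand-in for flexible analysis choices)."""
    for _ in range(analysis_tries):
        sample = [random.gauss(true_effect, 1.0) for _ in range(n)]
        mean = statistics.mean(sample)
        se = statistics.stdev(sample) / n ** 0.5
        if mean / se > 1.96:
            return True
    return False

def positive_share(n_studies=2000, base_rate=0.1, analysis_tries=1,
                   publish_nulls=False):
    """Fraction of *published* studies that report a positive result."""
    published = positive = 0
    for _ in range(n_studies):
        effect = 0.5 if random.random() < base_rate else 0.0
        found = one_study(effect, analysis_tries=analysis_tries)
        if found or publish_nulls:
            published += 1
            positive += found
    return positive / published

# File-drawer world: null results stay in the drawer and analysts try several
# specifications -- the published literature is essentially all "positive".
print(positive_share(analysis_tries=5, publish_nulls=False))

# Registry world: one pre-specified analysis and every result gets reported --
# the positive share falls toward base_rate * power + (1 - base_rate) * alpha.
print(positive_share(analysis_tries=1, publish_nulls=True))
```

By construction the first number is 1.0, since only positive results get published, which is the point: the literature can look almost uniformly positive (cf. the 97 percent figure above) even when true effects are rare. The second number falls toward the base rate of true effects, which is the direction of the change the heart-disease registry example describes.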
I ran across a Scientific American story on this project at http://www.scientificamerican.com/article/massive-international-project-raises-questions-about-the-validity-of-psychology-research/ . It gives some additional good information about the project, including the following two lines:
In addition, the simpler the design of the original experiment, the more reliable its results. The researchers also found that “surprising” effects were less reproducible.
Regarding the first point, I have thought that about some of the economic research that I’ve seen, especially that which involves complex multivariate regressions. Regarding the second point, I’ve often thought that about “surprising” effects like tax cuts increasing revenue, an increase in STEM workers creating jobs for native workers, open borders creating “trillion-dollar bills on the sidewalk” (see http://gborjas.org/2016/01/28/germany-and-open-borders-2/ ), or the large influx of low-skilled workers in the Mariel boatlift having no effect on the wages of existing workers. Such surprising effects may be possible in limited cases, but claims of such effects call for careful peer review and replication. I applaud Borjas for having done this on the Mariel boatlift study.
Regarding that NPR show: experiments failing to replicate has been a big topic in the last few years. Much of it stems from the bias of reporting only “favorable” results, but it turns out there are other, more subtle factors, such as one lab differing from another in ways that affect the results. This is why I’ve always been skeptical of regional comparisons, like Card’s, Zavodny’s and so on.