A new report argues the link (in New York State, at least) is not there.
See also prior posts.
I’m a little slow sometimes, and I tend to read articles quickly, a holdover from my old habit of trying to get through as many news stories as possible every morning. But I didn’t see the corruption in this article. Let’s go through it slowly for both our benefits (meaning mine and the spambots’).
Count 1: the article opens with a “bald expression of transactional expectations” revealed by internal communications from a “politically connected organization.” People were complaining that they had given a lot to candidates but had not gotten enough grant money in return.
My reaction: one-sided expectations do not a contract make. Also, doesn’t the fact that they are complaining about not getting their money’s worth tell us that things aren’t as bad as they might be? Bear in mind that the article describes this as “among the most striking examples of possible misconduct” revealed by the state commission investigating corruption.
OK, reading, reading, blah blah blah, money in politics is bad, blah blah, ethics reforms needed. Finally, another instance of alleged corruption:
In one case, they were able to find a company that lobbied vigorously for passage of legislation, and then, after its passage, gave both major political parties large contributions directed through what the report described as “shadowy corporate affiliates with generic names that do not readily appear to have anything to do with the company.”
My reaction: Sounds potentially bad, but unfortunately we can’t infer anything from this. We have no idea whether, as the article seems to imply, the lobbying consisted of making a deal to pass the legislation in exchange for contributions. Also as the article notes, this is all perfectly legal.
The commission also expressed interest in how money influenced a tax abatement program for real estate developers, a wage law exemption for a large retailer and other “custom-tailored laws” that a particularly powerful lobbyist won for a wide-ranging group of high-paying clients.
My reaction: More innuendo, but there’s no information here. The commission is “interested” in these things, but the article has no evidence that money actually influenced these outcomes.
The commission found repeated evidence that money influenced governmental action.
In one investigation, a lobbyist negotiating with a prospective client provided the client with a “fair projection of expenses” that included not only the lobbyist’s fees, but also expensive “political contributions” that the client would have to make to politicians, including the chairmen of the legislative committees that had jurisdiction over a certain bill before the Legislature.
My reaction: OK, this may be crossing the line. But again, an expression of transactional expectations doesn’t convince me that corruption is happening.
Does this mean I don’t think there is any corruption in American politics? No, but I just think this article misses the mark. Incidentally, Rick Hasen posted another article that I think is relatively clear-cut, involving a corrupt county in Kentucky.
Guido Imbens recently reviewed a new book by Charles Manski in the Economic Journal, and Manski then responded. The journal issue containing both pieces may be found here. I read the exchange a few days ago, but was recently reminded of this particular back and forth on internal and external validity.
One issue Manski raises is the relative importance of internal versus external validity.
He credits Campbell (Campbell and Stanley, 1963; Campbell, 1984) with the claim ‘that studies should be judged primarily by their internal validity and only secondarily by their external validity’ (Manski, p. 36). Manski takes issue with that by claiming that ‘from the perspective of policy choice, it makes no sense to value one type of validity above the other’ (Manski, p. 37). Deaton (2010) makes a similar point in the context of his criticism of the use of randomised experiments in development economics.
I strongly disagree with the Manski and Deaton view. Without strong internal validity studies have little to contribute to policy debates, whereas studies with very limited external validity often are, and in my view should be, taken seriously in such debates.
Manski, in his reply, writes:
Imbens and I diverge sharply on the subject of internal and external validity. To characterise my perspective, he quotes the opening sentence of a paragraph in my new book. I think it is important to quote the paragraph in full (Manski, 2013, p. 37): ‘From the perspective of policy choice, it makes no sense to value one type of validity above the other. What matters is the informativeness of a study for policy making, which depends jointly on internal and external validity. Hence, research should strive to measure both types of validity’. The second and third sentences explain the rationale for the first.
Manski is one of my favorite economists, so it’s no surprise that I agree with his view more than with Imbens’s. “What matters is the informativeness of a study for policy making…” I might change “policy making” to “decision-making,” given that not all social science is (or ought to be) aimed at guiding elites. But otherwise I think this is a great way to evaluate research.
I don’t remember where I found this, but here it is: a letter to the editor of the NY Times from economist Eric Maskin on whether economics is a science:
I disagree with Alex Rosenberg and Tyler Curtain’s characterization of science in general and economics in particular. They claim that a scientific discipline is to be judged primarily on its predictions, and on that basis, they suggest, economics doesn’t qualify as a science.
Prediction is certainly a valuable goal in science, but not the only one. Explanation is also important, and there are plenty of sciences that do a lot of explaining and not much predicting. Seismology, for example, has taught us why earthquakes occur, but doesn’t tell Californians when they’ll be hit by “the big one.”
And through meteorology we know essentially how hurricanes form, even though we can’t say where the next storm will arise.
I’ve commented before that I don’t “get” the difference between explanation and prediction. Maskin, a quite reputable economist, seems to make a hard distinction. Not knowing anything about seismology or meteorology, I can only speculate: is the idea that “explaining” earthquakes and hurricanes is a precursor to an ultimate goal of predicting them, someday?
Indeed, another letter-writer, David Berman, makes a similar point at the above link. Berman’s letter begins:
Mr. Maskin’s distinction between prediction and explanation as different but equally valuable goals of science is a false one — the two are inextricably linked. The ability to predict is both what makes our explanations useful, and what confirms that our explanations are correct.
Mr. Maskin’s choice of meteorology as example is a good one. It’s true that in the long run “we can’t say where the next storm will arise,” but we are now very good at forecasting the short-term futures of tropical storms, so good that we can predict landfalls within miles and within hours, and give populations ample time to protect themselves (if people would only listen).
And associate professor of philosophy Samuel Ruhmkorff makes a similar point at the above link: “In fact, economists make predictions, and their status as scientists should be judged by their willingness to revise their theories when these predictions do not bear out.” I like that: judged by the willingness to revise theory, not by whether the predictions turn out to be “right.” And reader John Douard also makes this point.
Maskin also has a response to these letter-writers at the end of the page at the above link, but he seems to avoid the point about whether the explanation-prediction distinction is a valid one.
In a post aptly titled “A Cacophony of Opinions, Left, Right and Center,” David Brunori from the Tax Analysts Blog decries the abuse of empirical studies claiming to show a positive or negative relationship between state income taxes and economic growth. But he himself also takes a rather nihilistic tone. As he writes in the comments regarding one of the authors he criticizes, “given the state of the scholarly research, his guesses are as good as any.” But this comment from one “emsig beobachter” takes the cake:
Let us agree that there is little consensus on whether income taxes reduce; or, for that matter, increase the economic rate of growth. We should just go with our guts on this issue and not bother to back up our prejudices with all these confusing studies.
I don’t know this literature well, and my sense is that Brunori is a leading expert in this field. But surely we can do better than dismissing all empirical evidence and/or “going with our guts”!
One problem with dismissing all the evidence is that this amounts to simply averaging over a set of conflicting studies. But all studies are not created equal, and some estimates are more credible than others. Jeff Milyo made this point at an APSA roundtable yesterday on campaign finance, but it seems like a very general issue. Unfortunately, the public (and probably politicians too) are not trained in these nuances.
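The point about credibility can be made concrete with a toy calculation. The sketch below uses entirely made-up effect estimates and standard errors (no connection to the actual tax-and-growth literature); it contrasts a naive average, which treats every study as equally credible, with the inverse-variance weighting standard in meta-analysis, which lets the most precise study dominate.

```python
# Hypothetical estimates of some policy effect from three invented studies.
estimates = [0.8, -0.2, 0.1]   # point estimates (illustrative only)
std_errors = [0.9, 0.1, 0.4]   # smaller standard error = more credible design

# Naive average: every study counts the same, so the noisy outlier (0.8)
# drags the answer upward.
naive = sum(estimates) / len(estimates)

# Inverse-variance weighting: each study is weighted by 1/SE^2, so the
# precise second study (SE = 0.1) dominates the combined estimate.
weights = [1 / se**2 for se in std_errors]
weighted = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

print(round(naive, 3))     # 0.233 -- pulled up by the noisy study
print(round(weighted, 3))  # -0.171 -- close to the precise study's -0.2
```

With these invented numbers the two procedures even disagree on the sign of the effect, which is exactly why “the guesses are as good as any” nihilism is too quick: how you weigh the conflicting studies matters.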
Neuroskeptic recently wrote about an interesting study involving training subjects in basic causal concepts. In the study authors’ words:
Our cognitive system has evolved to sensitively detect causal relationships in the environment, as this ability is fundamental to predict future events and adjust our behavior accordingly.
However, under certain conditions, the very same cognitive architecture that encourages us to search for causal patterns may lead us to erroneously perceive causal links that do not actually exist.
These false perceptions of causality may be the mechanism underlying the emergence and maintenance of many types of irrational beliefs, such as superstitions and belief in pseudoscience.
We present an intervention in which a sample of adolescents was introduced to the concept of experimental control, focusing on the need to consider the base rate of the outcome variable in order to determine if a causal relationship exists.
How did the training work exactly? In Neuroskeptic’s words:
The intervention was quite elaborate. Stage 1 involved giving volunteers a little rock, and telling them that it was a high-tech neuroenhancing crystal that would make them smarter via magnetic fields. They then got to use this to ‘boost’ their own performance on some tests.
In Stage 2, it was revealed that it was just a little rock after all, and that they’d suffered from a causal illusion, which was then explained to them. They were shown the importance of controls, and base rates etc.
Finally, to test whether it worked, the trained adolescents were compared to a control group who didn’t get the intervention. They were given a task in which they had to judge whether or not a fictional drug worked, which required them to take into account base rates etc. The trained ones did better.
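The base-rate comparison the training teaches can be sketched with made-up counts from a fictional drug-judgment task of the kind described above (all numbers invented, not taken from the study). The key move is to compare the outcome rate with the putative cause against the base rate without it.

```python
# Counts from a fictional drug task: did patients recover with / without the drug?
recovered_with_drug, not_recovered_with_drug = 27, 9
recovered_no_drug, not_recovered_no_drug = 15, 5

p_with = recovered_with_drug / (recovered_with_drug + not_recovered_with_drug)
p_base = recovered_no_drug / (recovered_no_drug + not_recovered_no_drug)

# The causal illusion: 27 recoveries "with the drug" looks impressive on its
# own, but recovery is exactly as likely without it.
delta_p = p_with - p_base
print(p_with, p_base, delta_p)  # 0.75 0.75 0.0 -> no evidence of an effect
```

Judging only by the 27 recoveries in the drug condition (ignoring the base rate) is the error the untrained group presumably makes; the trained group learns to compute something like the difference above.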
I share Neuroskeptic’s concern that the design is a little odd, but I really like the motivation and the effort. In particular, I like the claim that “sensitively detect[ing] causal relationships … is fundamental to predict future events and adjust our behavior accordingly.” Whether this is a result of evolution or simply a key part of our “cognitive system,” it seems to me to be the whole point of science, including political science. Thus claims that a renewed emphasis on causality is atheoretical don’t make much sense to me, because scientific theories are by definition causal. Likewise, the line between predictive and causal inference seems thin: as the quote from the paper suggests, identifying causal relationships is fundamental to predicting future events.
My impression is that others have an impression (I know, already we’re not doing great here) that members of Congress spend too much time raising money, and that this is why “Washington is broken” (which is in itself an interesting descriptive question).
A few weeks ago I looked around to see whether anyone had actually collected data on this. It seemed not; instead, one or two estimates were floating around. One popular estimate allegedly came from a PowerPoint presentation given to new House members by the Democratic Congressional Campaign Committee, whose slide advised members to spend four hours a day on fundraising (out of an 8-10 hour day).
Now Ezra Klein (who I think was one of the people blogging about the previous statistic) has come up with some newer data. As he writes,
Remember, that’s just a sample schedule. That’s what the DCCC — the campaign arm of the House Democrats — wishes all its recruits were doing. But that doesn’t mean they’re actually doing it. And it definitely doesn’t mean entrenched incumbents are doing it.
This came up during my conversation with Bradford Fitch Monday morning (for more on that talk, see “In defense of Congress’s summer vacation”). His organization, the Congressional Management Foundation, has done yeoman’s work surveying members of Congress and their staffs about how they really spend their time. “I’ve asked this question of a lot of chiefs of staff,” he says, “and they’re lucky if they can get four or five hours a week raising money.”
In the survey, fundraising falls under the broad category of “political/campaign work”. And both when members are in Washington and when they’re at home, they report that all political/campaign work takes up less than a fifth of their time — and only part of that is fundraising. That’s a far cry from the 40-50 percent fundraising time that the DCCC’s model schedule envisions.
Klein does not seem to be worried that the new statistic is based primarily on a survey of 25 members of Congress.
Is this the best data we have on this question? Maybe we need to start having members file time sheets?
Randomly opened the NYT app the other day and came upon the latest “trend” piece:
It is by now pretty well understood that traditional dating in college has mostly gone the way of the landline, replaced by “hooking up” — an ambiguous term that can signify anything from making out to oral sex to intercourse — without the emotional entanglement of a relationship.
Several pages later:
For all the focus on hookups, campuses are not sexual free-for-alls, at Penn or elsewhere. At colleges nationally, by senior year, 4 in 10 students are either virgins or have had intercourse with only one person, according to the Online College Social Life Survey. Nearly 3 in 10 said that they had never had a hookup in college. Meanwhile, 20 percent of women and a quarter of men said they had hooked up with 10 or more people.