Thinking as a Hobby


Slinging Horseshit, or Assessing the Signal to Noise Ratio in Language

Something I've been thinking about recently, now having read a pretty good heap of scientific journal papers, is the problem of how to assess their quality.

I know a good paper when I read one. The justification is clearly detailed. The work is no more complex than it needs to be, and it's well-described. The results are presented in a way that makes it clear how they bear upon the question at hand. And the end of the paper ties it all together.

Problem is, that describes maybe 2 out of every 100 papers I read. I'm sure it's a problem in other fields of science too, and throughout all walks of business, industry, politics, etc.

However, when reading a paper related to cognition, it's especially difficult for me to try to assess whether or not the author is using a necessary level of complexity. That is, is the topic at hand really that complex, or are they making it (intentionally or unintentionally) harder than it really is?

Two reasonably good clues are sentence structure and vocabulary. Are there too many dependent clauses? Do sentences double back on themselves and then triple back? Convoluted prose usually pings my radar that the author is more interested in seeming eloquent than in helping me understand what the hell they're going on about.

Vocabulary is another good indicator. In science it's important to be as precise as possible, but not to go overboard. Especially in an interdisciplinary field like Cognitive Science, one of the things we should be striving for is using as little jargon as we can get away with. If you really do need to use a term like long-term potentiation, that's cool, as long as you give a reasonably clear and succinct explanation.

Also, the use of non-technical vocabulary can indicate problems. If the author is throwing around $2 words gratuitously, that should set off alarms. Yes, it's nice that you have a large vocabulary, but you're writing for an audience that contains many non-native speakers of English and guess what...you don't get extra points for style.

I think some researchers don't really have much of a core idea to present, so they wrap it up in fluff and nonsense and hope to stun reviewers into submission with their inscrutability. Back in the '90s, physicist Alan Sokal busted the journal Social Text with what is now called "The Sokal Hoax". Basically, he sent them an article that was just a bunch of jargon and silliness, and to his delight, they published it. He then proceeded to mock them mercilessly.

Good for him, but the problem isn't just with journals on the fringe of humanities and sciences. There's plenty of rubbish published in journals that are supposed to be much more scientific.

It would be nice to have an algorithm for determining the signal-to-noise ratio in a message. Every message that you speak or write or hear or read has a certain amount of redundancy (which is sometimes good, but too much is not) and noise (which is the part that has nothing to do with the message and may interfere with actually getting the message).
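Just to make the idea concrete: here's a toy sketch of what a crude first stab at such an algorithm might look like. This is emphatically not a real measure of prose quality; it just estimates redundancy from the character-level Shannon entropy of a text, where `redundancy` is a made-up helper name and the character-level alphabet is a stand-in for the real units of meaning we'd actually care about.

```python
from collections import Counter
from math import log2

def redundancy(text: str) -> float:
    """Estimate redundancy as 1 - H / H_max, where H is the
    character-level Shannon entropy of the text and H_max is the
    entropy of a uniform distribution over the symbols actually used.
    Returns a value in [0, 1]: 0 means every symbol is equally
    surprising; values near 1 mean the text is highly repetitive."""
    counts = Counter(text)
    total = len(text)
    h = -sum((c / total) * log2(c / total) for c in counts.values())
    h_max = log2(len(counts))  # max entropy over the observed alphabet
    return 1 - h / h_max if h_max > 0 else 0.0
```

Of course, this only captures repetitiveness at the surface level; the harder part, the "noise" that sounds impressive but says nothing, lives at the level of meaning, which is exactly why no simple formula does the job.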

Unfortunately, there's no easy way to do this. It takes many long hours of wading through piles of garbage, occasionally stumbling across a gem, but learning, slowly, along the way.

