
Crackpot Scandal In Mathematics

kdawson posted more than 5 years ago | from the kooks-we-have-always-with-us dept.

Math 219

ocean_soul writes "It is well known among scientists that the impact factor of a scientific journal is not always a good indicator of the quality of the papers in the journal. An extreme example of this was recently uncovered in mathematics. The scandal is about one El Naschie, editor in chief of the 'scientific' journal Chaos, Solitons and Fractals, published by Elsevier. This is one of the highest-impact-factor journals in mathematics, but the quality of the papers in it is extremely poor. The journal has also published 322 papers with El Naschie as (co-)author, five of them in the latest issue. Like many crackpots, El Naschie has a kind of cult around him, with another journal devoted to praising his greatness. There was also a discussion about the Wikipedia entry for El Naschie, which was supposedly written by one of his followers. When it was deleted by Wikipedia, his camp even threatened legal action (which never materialized)."


219 comments

Caught (-1, Troll)

Anonymous Coward | more than 5 years ago | (#26216307)

A while ago, while browsing around the library [goatse.fr]...

I don't get it (2, Insightful)

tsstahl (812393) | more than 5 years ago | (#26216319)

In the immortal words of Tom Hanks, I don't get it.

If the guy is a well known crackpot, what harm is happening? Obviously, I am not a citizen of this sub-world and could use the enlightenment.

Re:I don't get it (5, Interesting)

Daniel Dvorkin (106857) | more than 5 years ago | (#26216377)

The harm, I think, is that he's not a well-enough-known crackpot; a respectable publisher (Elsevier) has given him a journal as his own private playground. This makes it more difficult for non-crackpots trying to enter the field (e.g. grad students) to sort the wheat from the chaff. It also allows other crackpots to come off as more credible by citing crackpot articles which have a veneer of respectability. Imagine if a computer science "journal" based on Hollywood's portrayal of how computers work were being published by the ACM, and you have some idea of how big a problem this is.

Re:I don't get it (5, Insightful)

ocean_soul (1019086) | more than 5 years ago | (#26216461)

In fact, the crackpottery of El Naschie's papers is obvious even to most grad students (you should read some; they are in fact rather funny). The bigger problem is that, by repeatedly citing his own articles, his journal gets a high impact factor. People who have absolutely no clue about math, like the ones who decide on funding, conclude from the high impact factor that the papers in this journal must be of high quality.
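The mechanics are simple enough to sketch. As a toy illustration (all numbers invented), the two-year impact factor is just citations received this year to the journal's previous two years of papers, divided by the count of those papers, so a journal that cites itself heavily can inflate the numerator on its own:

```python
def impact_factor(citations, journal, citable_items, exclude_self=False):
    """Two-year impact factor: citations this year to the journal's papers
    from the previous two years, divided by how many such papers there are.
    `citations` is a list of (citing_journal, cited_journal) pairs."""
    hits = [
        (src, dst)
        for src, dst in citations
        if dst == journal and not (exclude_self and src == journal)
    ]
    return len(hits) / citable_items

# Hypothetical journal "CSF": 100 citable items, 250 inbound citations,
# 200 of which come from its own pages.
cites = [("CSF", "CSF")] * 200 + [("Other", "CSF")] * 50
print(impact_factor(cites, "CSF", 100))                     # 2.5
print(impact_factor(cites, "CSF", 100, exclude_self=True))  # 0.5
```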

Re:I don't get it (5, Insightful)

dujenwook (834549) | more than 5 years ago | (#26216833)

Yea, I read through a bit of the cited paper and got a few good laughs out of it. Maybe he's being published more for the humorous aspect of it all than for the actual information.

Re:I don't get it (5, Funny)

canajin56 (660655) | more than 5 years ago | (#26216867)

Perhaps it's an experiment: He's a mathematician. Now he's just demonstrating how the Impact Factor is a poor metric, and will soon present a superior measure that correctly ranks the journal poorly. ;)

Re:I don't get it (1)

msouth (10321) | more than 5 years ago | (#26217503)

Perhaps it's an experiment: He's a mathematician. Now he's just demonstrating how the Impact Factor is a poor metric, and will soon present a superior measure that correctly ranks the journal poorly. ;)

And another article on the problem of where to publish the article describing that measure.

Re:I don't get it (1)

nategoose (1004564) | more than 5 years ago | (#26217129)

The low quality of his papers was obvious to me, and I only have a minor in math. Even though I had never read any until just now, and so was primed to look for crackpottery, I actually wondered if El Naschie may have incurred a brain injury or something. The presentation is pretty poor by academic standards.

Re:I don't get it (1)

eh2o (471262) | more than 5 years ago | (#26217195)

I thought self-citations didn't count toward impact-factor ratings... if they do, then excluding them seems like a fairly obvious solution to the problem.

Re:I don't get it (1)

timeOday (582209) | more than 5 years ago | (#26217647)

The bigger problem is that, by repeatedly citing his own articles, his journal gets a high impact factor.

Since google similarly uses links to pages to compute their pagerank, they combat this problem constantly. People do all kinds of stuff, from buying or swiping the registration of a reputable domain name, to posting spam on forums hosted at .gov domains, to setting up complicated interwoven sets of cross-linking domains to fake "grass-roots" popularity.

It would be great if google could reveal more of their techniques and they could be applied to boost the validity of scientific publications' impact factor. Or maybe we should just rank scientists by the google pagerank of their papers :)
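The analogy can be made concrete. Here is a minimal PageRank-style iteration over a citation graph (journal names and links invented for illustration; real implementations also handle dangling nodes and check for convergence):

```python
def pagerank(links, damping=0.85, iters=50):
    """Rank nodes of a directed graph given as (source, target) edges.
    Assumes every node has at least one outgoing edge."""
    nodes = {n for edge in links for n in edge}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    out_degree = {n: sum(1 for s, _ in links if s == n) for n in nodes}
    for _ in range(iters):
        # Each node shares its rank equally among everything it cites.
        rank = {
            n: (1 - damping) / len(nodes)
            + damping * sum(rank[s] / out_degree[s] for s, d in links if d == n)
            for n in nodes
        }
    return rank

# A cites B and C, B cites C, C cites A; B earns the least inbound weight.
ranks = pagerank([("A", "B"), ("A", "C"), ("B", "C"), ("C", "A")])
```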

Re:I don't get it (1)

Prof.Phreak (584152) | more than 5 years ago | (#26216519)

Imagine if a computer science "journal" based on Hollywood's portrayal of how computers work were being published by the ACM, and you have some idea of how big a problem this is.

Hmm... so you're saying the majority of the CS papers out there -don't- do this?

Re:I don't get it (5, Insightful)

Daniel Dvorkin (106857) | more than 5 years ago | (#26216669)

Oh, snap!

Seriously? There's a lot of high-quality CS research out there in the journals and conference papers; of course there's also a lot of crap. But I'd say most of the crap comes from wishful thinking rather than pure crackpottery. If nothing else, if you try to implement something that doesn't work, you'll know immediately -- thus CS at least potentially has a built-in reality check that pure math lacks. I rather suspect that whether or not a CS journal demands working code from its authors is a strong predictor for the quality of the articles which appear in that journal.

Re:I don't get it (4, Insightful)

BitZtream (692029) | more than 5 years ago | (#26216719)

Much like anyone with a working knowledge of CS probably has the ability to verify CS research, math is a rather logical science that is often pretty easy to verify. Sure, there are things that are hard to confirm because of the amount of calculation required, and irrational numbers and all that (infinity is a bitch to test), but those things exist in CS as well.

It's silly to somehow imply they are vastly different from each other; they are in fact almost identical.

Re:I don't get it (3, Insightful)

Daniel Dvorkin (106857) | more than 5 years ago | (#26216787)

Researchers in just about every field build on layers of other researchers' work. There simply isn't time to go back and verify every result in the reference tree of every article you cite -- if you did that, you'd never get any original work done! Creating code that compiles and executes properly doesn't guarantee that everything you've based that code on is correct, of course, but it's a good sign. I'm not aware of any equivalent reality check in pure math. Now, I know relatively little about the field (applied CS and statistics is my game, specifically bioinformatics), so I'll happily accept a correction on this point.

Re:I don't get it (1)

slawekk (919270) | more than 5 years ago | (#26217177)

CS at least potentially has a built-in reality check that pure math lacks.

There is such a check: writing proofs in a formal proof language so that they can be verified by a machine. This is a barrier high enough that no crackpot can jump over it. Too bad math departments don't teach formal proof languages.
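For a flavor of what that barrier looks like, here is roughly what a machine-checked proof reads like in Lean 4 syntax (one proof assistant among several; Coq, Isabelle, and Mizar play the same role). The checker accepts only proofs that actually follow from the axioms; hand-waving simply fails to compile:

```lean
-- A machine-checked proof that addition on the naturals is commutative.
theorem my_add_comm (m n : Nat) : m + n = n + m := by
  induction n with
  | zero => simp                                       -- base: m + 0 = 0 + m
  | succ n ih => rw [Nat.add_succ, ih, Nat.succ_add]   -- step via the IH
```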

Re:I don't get it (0)

Anonymous Coward | more than 5 years ago | (#26217663)

I rather suspect that whether or not a CS journal demands working code from its authors is a strong predictor for the quality of the articles which appear in that journal.

Not really. Paperware is quite common.

Re:I don't get it (1, Offtopic)

tsstahl (812393) | more than 5 years ago | (#26216523)

Thank you. That really does help and without a car analogy, to boot.

Acadamia never ceases to amaze me. Note, spelled wrong on purpose BECAUSE OF ALL THE NUTS!

Re:I don't get it (4, Interesting)

Daniel Dvorkin (106857) | more than 5 years ago | (#26216709)

Glad to be of service.

You realize, of course, that the only reason I was able to use a computer analogy is that we're talking about pure math. If we had been talking about CS, I'd have had to go with a car analogy right off the bat.

Re:I don't get it (0)

Anonymous Coward | more than 5 years ago | (#26216765)

Azebo

Ulbous bouffant.

Re:I don't get it (1)

Goaway (82658) | more than 5 years ago | (#26217203)

Acadamia never ceases to amaze me. Note, spelled wrong on purpose BECAUSE OF ALL THE NUTS!

That could be a funny joke, but you really need to work on your delivery.

Re:I don't get it (1)

jcarkeys (925469) | more than 5 years ago | (#26216601)

Ok, so it destroys the credibility of the journal, as well as the credibility of any papers coauthored by this individual, and destroys the credibility of anyone who decided that getting published (by allowing El Naschie to get his name on the paper) was more important than academic rigor. I don't see the long term, lasting harm.

Re:I don't get it (0, Offtopic)

sdpuppy (898535) | more than 5 years ago | (#26217363)

OK, so perhaps you need a Slashdot analogy, and I'll keep it short and sweet.

On Slashdot, if you feed the trolls, it only encourages them and it encourages them to reproduce (you get copycats).

Wow, my first analogy on Slashdot without a car involved...

Re:I don't get it (0)

Anonymous Coward | more than 5 years ago | (#26216657)

But isn't the purpose of the ACM to make computers work more like Hollywood says they do?

Re:I don't get it (5, Interesting)

timholman (71886) | more than 5 years ago | (#26216717)

The harm, I think, is that he's not a well-enough-known crackpot; a respectable publisher (Elsevier) has given him a journal as his own private playground. This makes it more difficult for non-crackpots trying to enter the field (e.g. grad students) to sort the wheat from the chaff. It also allows other crackpots to come off as more credible by citing crackpot articles which have a veneer of respectability. Imagine if a computer science "journal" based on Hollywood's portrayal of how computers work were being published by the ACM, and you have some idea of how big a problem this is.

And it gets worse when money becomes involved. Pseudoscientists and crackpots often try to find "investors" for their schemes, and even a layman who performs due diligence can be fooled when publishers like Elsevier become enablers for pseudoscience. When the paper shows up in an INSPEC or Web of Science search, how is the person being scammed supposed to know that the paper isn't really legitimate?

Many "free energy" scam artists already have patents for their nonsensical inventions, thanks to the laxity of the USPTO. It'll get worse unless these "pseudo-journals" are exposed and publicized to the greater science and engineering community, as well as the public at large. I had never heard of El Naschie before today, because I'm not a mathematician; thanks to this article, more people like me will now keep an eye out for his future "work".

Re:I don't get it (1)

Beardo the Bearded (321478) | more than 5 years ago | (#26216779)

Ah, so his math is wrong, and because the point of a peer-reviewed journal is being missed, the bad papers continue to be considered correct.

That's pretty crapulous.

Re:I don't get it (1)

nextekcarl (1402899) | more than 5 years ago | (#26217441)

I read an article earlier today about Ponzi: after a major paper pointed out his lies, he actually made more money that day! Some people just don't deserve to have money, I guess. I mean, what can you do? You try to teach them why someone's wrong (and can't possibly be right) and they just refuse to listen. http://www.cnn.com/2008/LIVING/wayoflife/12/23/mf.ponzi.scheme/index.html?iref=mpstoryview [cnn.com]

Re:I don't get it (0, Offtopic)

Gena5m (1004304) | more than 5 years ago | (#26216811)

IOW it works like a Middle East Peace Process or Global Warming religion, the difference being the number of crackpots pushing it. Here it is just one, plus followers. Stopping this early minimizes the crowd effect with all its collateral damage.

Re:I don't get it (0, Flamebait)

msouth (10321) | more than 5 years ago | (#26217579)

The harm, I think, is that it's not a well-enough-known crackpot theory; a respectable publisher (NY Times et al) has given him a plethora of journalists as his own private fan club. This makes it more difficult for the electorate trying to intelligently guide policy (e.g. moveon.org) to sort the wheat from the chaff. It also allows other crackpots to come off as more credible by citing crackpot articles which have a veneer of respectability. Imagine if a "documentary" based on Hollywood's belief about global warming were being played off as if it were real science by the media, and you have some idea of how big a problem this is.

There, fixed that for ya.

Re:I don't get it (1)

king-hobo (1303923) | more than 5 years ago | (#26217833)

Imagine if a computer science "journal" based on Hollywood's portrayal of how computers work were being published by the ACM, and you have some idea of how big a problem this is.

oh dear god, it reminds me of the time a friend of mine thought he could "hack" because he saw the movie "The Net", which as a side note has Sandra Bullock as the hacker

Re:I don't get it (0)

Anonymous Coward | more than 5 years ago | (#26216741)

I don't know. I don't understand the summary. I mean... I am a CS student, and it isn't rare for me to see summaries and articles about some areas of physics that I don't have much knowledge of. But then I know, "Well, this is news about new discoveries about the nature of light, useful in proving that the earth is flat. I don't need to know much more than that."

But this summary? Seems like the ramblings of a madman to me. Apparently someone has felt it important to state that some publisher of math-related papers isn't very good at what he does? Why is this important? Who says so? Is this relevant to anyone but math students? Or even them? I could check TFA, but alas, there is no TFA.

I'm at a loss.

Elsevier seems particularly prone to being "gamed" (2, Interesting)

swschrad (312009) | more than 5 years ago | (#26216843)

They were heavily taken in by "cold fusion researchers," a canard in three dimensions if ever I heard one, 20 years back. Perhaps they occupy the same place in scientific literature as S&P and Moody's do in careful review of bonds and finance? Down Illinois way, they call it "pay to play."

Mathematicians should use more car analogies (2, Funny)

PolygamousRanchKid (1290638) | more than 5 years ago | (#26217189)

Mathematicians use all sorts of funky ancient Greek symbols to express their thoughts. It's like trying to read an APL program.

If mathematicians could represent their concepts in car analogies, maybe ordinary folks would be able to understand what all the fuss is about.

At least, here, on Slashdot, where the car analogy is the lingua franca.

And the mathematicians might have some fun with it. How would you express the concept of isomorphic, infinite-dimensional, separable Hilbert spaces with a car analogy?

Re:Mathematicians should use more car analogies (2, Funny)

sdpuppy (898535) | more than 5 years ago | (#26217445)

And the mathematicians might have some fun with it. How would you express the concept of isomorphic, infinite-dimensional, separable Hilbert spaces with a car analogy?

Oh, that one is too easy -- all you have to do is imagine driving on the Cross-Bronx Expressway during rush hour and you have the concept down pat!

The infinite dimensions correspond to the amount of time it takes to get to your destination; the separable Hilbert spaces are where you are and the space just ahead of the car in the other lane, moving faster than you, which you can never seem to reach unless you go under the separable space under the truck; and isomorphism is what you think of the other testosterone-poisoned drivers who don't know how to drive.

Re:Mathematicians should use more car analogies (3, Funny)

mvdwege (243851) | more than 5 years ago | (#26217583)

How would you express the concept of isomorphic, infinite-dimensional, separable Hilbert spaces with a car analogy?

First, assume a perfectly spherical car of uniform density...

Mart

Re:I don't get it (0)

Anonymous Coward | more than 5 years ago | (#26217407)

Someone makes a metric (IF). Someone else games the system (Elsevier, and El Naschie).

Let those who buy these journals stop buying them.
If they even care, really.

It sounds to me like a number of scientists are dismayed that Elsevier publishes this "crap". But they also publish a "homeopathy journal".

Maybe they also publish a "paranormal research" journal: "Ectoplasm: Recent Findings".

WP/toronto

Re:I don't get it (0)

Anonymous Coward | more than 5 years ago | (#26217519)

Well the character of El Naschie gets a lot more fleshed out in the sequels: Differential, and Once upon a time in 4-Manifold. It also helps if you lather the publication in cheese and guacamole.

I get all my science from timecube.com (5, Funny)

Anonymous Coward | more than 5 years ago | (#26216345)

Yeah, you really have to be careful out there... that's why I get all my astronomy and mathematical insight (as well as web design hints) from http://www.timecube.com/ [timecube.com]
And if it ain't there, then I just look it up on wikipedia

Re:I get all my science from timecube.com (1)

abigor (540274) | more than 5 years ago | (#26216629)

For everything else - history, geography, astronomy, you name it - just pop on over to http://www.truthism.com/ [truthism.com].

Re:I get all my science from timecube.com (1)

hachete (473378) | more than 5 years ago | (#26217093)

"First and foremost, this website is not a hoax or joke"

That's the warning there in BIG RED LIGHTS

Re:I get all my science from timecube.com (0)

Anonymous Coward | more than 5 years ago | (#26217253)

I like timecube better because truthism appears to have been written facetiously, whereas timecube was actually written by a raving loon.

Re:I get all my science from timecube.com (0)

Anonymous Coward | more than 5 years ago | (#26217313)

HAHAHAHAHA

"You can see the Reptilians via meditating, using hallucinogenic drugs, and sleep paralysis. However, these are the fourth-dimensional Reptilians, not the third-dimensional ones--but more on all of this later."

Good one. Lovely!

"The dinosaurs were created by the Reptilians thousands of years ago, before one of the Reptilians' latest re-starts of civilization. The dinosaurs were used in order to scare off other alien races from coming to Earth, and also to kick out the aliens that were already living on Earth. Once the dinosaurs had served their purpose and the Reptilians had Earth under control again, the dinosaurs were disposed of (i.e., made extinct)."

This guy is not crazy, he's a A-grade comedian.

Re:I get all my science from timecube.com (0)

Anonymous Coward | more than 5 years ago | (#26217743)

"Women are ultra-impressionable beings, and therefore are easily controlled by aliens and the elite. The main purpose of women on Earth (that is, what aliens have programmed them to do) is to enslave men via relationships." ... "It's bad enough that aliens and the elite already control us. Now, throw women into the equation, and you have absolute misery."

Phew! Thank God that we have the gathering of the great Slashdot crowd. We at least are not enslaved by these beings, these women! Thank you, truthism.com, for opening my eyes!

whoa. (1)

flipmack (886723) | more than 5 years ago | (#26216653)

the time cube. wow. I haven't seen that in a long time. I didn't think it still existed. thanks for the flashback.

Re:I get all my science from timecube.com (0)

Anonymous Coward | more than 5 years ago | (#26216985)

Wait, it's called timecube because there are 4 corners of time? I thought cubes have 8 corners?

Re:I get all my science from timecube.com (1)

dangitman (862676) | more than 5 years ago | (#26217155)

You've made the fatal mistake of applying logic to the timecube. Now I'll just leave you to rock in the corner while your brain melts. By the way, you were educated stupid.

Re:I get all my science from timecube.com (1)

Svippy (876087) | more than 5 years ago | (#26217067)

Maybe it is just me, but I cannot comprehend what the hell the timecube is. Is he suggesting that the Earth is a cube?

And looking at his HTML doesn't make it any better. I guess there is no sane escape!

Err... (4, Funny)

brian0918 (638904) | more than 5 years ago | (#26216363)

Where's the article?

Ohhh! Right right! This is the article. Slashdot is now a primary source!

Re:Err... (1)

astrodoom (1396409) | more than 5 years ago | (#26216403)

It's in the 25th dimension. Oh wait, I mean 29th. Sorry, I was 4 off. Which just happens to be the number of dimensions in spacetime. Eureka!

Re:Err... (0)

Anonymous Coward | more than 5 years ago | (#26216651)

ahhh the standard space-time deviation

Re:Err... (3, Informative)

Artefacto (1207766) | more than 5 years ago | (#26216633)

Nature reported this back in November: http://www.nature.com/news/2008/081126/full/456432a.html [nature.com] The news is Mohamed El Naschie is going to retire. There are some interesting statistics:

Of the 31 papers not written by El Naschie in the most recent issue of Chaos, Solitons and Fractals, at least 11 are related to his theories and include 58 citations of his work in the journal.

And it's actually a theoretical-physics journal, with a relatively high impact factor of 3.025 for 2007.

Re:Err... (2, Interesting)

nodrogluap (165820) | more than 5 years ago | (#26217611)

On a related note, some fields have a greater tendency to cite than others. I would consider an IF of 3 relatively low in biology, for example, but it's decent in bioinformatics. The IF is for granting agencies that would rather judge your work by the journal it's in than actually read the article or look up its citation count in Google Scholar (if it's been around a while).

Incidentally, I've noticed that good open access journals in biology/bioinformatics are getting better IFs these days, so that model seems to be working. Our university in fact has started paying for OA processing charges, so we're sticking with OA journals with good IFs. Gotta keep those agencies happy :-)

I want to tag this story "sowhat". (0)

Anonymous Coward | more than 5 years ago | (#26216445)

How do I do that?

Re:I want to tag this story "sowhat". (0)

Drakonik (1193977) | more than 5 years ago | (#26216583)

It is with great sorrow that I admit I read that as "sow hat" and was very confused. Today is not my day.

Prejudice (1, Interesting)

Anonymous Coward | more than 5 years ago | (#26216465)

The funny thing about this article is that it completely fails to mention the discussion about this on backreaction.blogspot.com -- even the bloggers there weren't so ignorant as to claim every paper to be rubbish.

This is the kind of blanket statement that completely self-defeats any argument. Any scientist or mathematician would know that, so what are you doing writing about science and math?

Re:Prejudice (2, Insightful)

Bill, Shooter of Bul (629286) | more than 5 years ago | (#26216597)

Science is built on reputation. If you have a good reputation, then people take your work seriously. If one of your publications is a hoax or fraud, your career is over. This smells like yesterday's fish's sweaty socks.

Hell, expect Blagojevich to issue a statement tomorrow disavowing any association with this guy or his publications. It smells that bad.

Stupid entry (0)

Anonymous Coward | more than 5 years ago | (#26216471)

Where is the source of the info?

Impact Factor (2, Interesting)

JimFive (1064958) | more than 5 years ago | (#26216553)

Wouldn't it be straightforward to adjust the impact factor to count only references from a different journal? That is, a reference to an article in the same journal wouldn't count.
--
JimFive

Re:Impact Factor (3, Interesting)

Ambitwistor (1041236) | more than 5 years ago | (#26217035)

Excluding references to the same journal is too harsh a criterion, since a lot of high quality papers get published in high quality journals. What should be perhaps excluded, though, is self-citation (whether to your own articles in the same or a different journal). Also, papers published in a journal by a journal editor shouldn't count.
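That filtering is easy to express. Here is a sketch of the two rules proposed above, with invented record fields: drop author self-citations, and drop citations to papers written by the journal's own editors, before anything gets counted:

```python
def countable_citations(citations, editors):
    """Keep only citations that pass both filters. Each citation is a dict
    with 'citing_authors' and 'cited_authors' lists of author names."""
    kept = []
    for c in citations:
        if set(c["citing_authors"]) & set(c["cited_authors"]):
            continue  # self-citation: a shared author on both sides
        if set(c["cited_authors"]) & set(editors):
            continue  # cited paper written by one of the journal's editors
        kept.append(c)
    return kept

records = [
    {"citing_authors": ["El Naschie"], "cited_authors": ["El Naschie"]},
    {"citing_authors": ["Smith"], "cited_authors": ["El Naschie"]},
    {"citing_authors": ["Smith"], "cited_authors": ["Jones"]},
]
# With El Naschie as editor, only the Smith -> Jones citation survives.
print(len(countable_citations(records, ["El Naschie"])))  # 1
```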

How did he get the high impact factor? (4, Informative)

saforrest (184929) | more than 5 years ago | (#26216555)

How did El Naschie game the system?

According to Elsevier, his impact factor is 3.025 [elsevier.com], which does seem high compared to Elsevier titles like Advances in Applied Mathematics (founded by Gian-Carlo Rota, who was a respectable mathematician).

It's clear from the samples that El Naschie's articles are complete garbage, and I'm sure no respectable mathematician would want to publish in what's effectively a crackpot's vanity press. This is obviously the scientific journal version of Googlebombing.

So how did he pull this off? Is he citing himself, and if so, where?

Re:How did he get the high impact factor? (4, Informative)

Anonymous Coward | more than 5 years ago | (#26216707)

Pick any of his recent papers and chances are good that most of the citations are to his own past papers. So, yes, that's how he's pulling it off: he cites himself ten times or so in each of his papers, and because he writes half the papers in each issue, that inflates the impact factor.

a perennial problem in bibliometrics (5, Interesting)

Trepidity (597) | more than 5 years ago | (#26216565)

If you want to automatically determine what constitutes a good journal purely from data, the definition is something like: is frequently cited by other good journals. Obviously, there's a circularity there. Various techniques attempt to mitigate it, but none are perfect, and indeed most are rather simplistic and easy to game. It's basically hard to distinguish, purely from citation data, a vibrant community of legitimate research from a vibrant community of crackpots.

In real life, most academics get around the circularity problem by starting with a set of "known good" journals that are determined by consensus in the field rather than algorithms (though this may sometimes be controversial). That lets them take into account more subjective things such as status of a research community (crackpots or not?). For example, as the linked article points out, the Annals of Mathematics is generally accepted as a top-quality venue for mathematics.

If you wanted, you could then construct an Annals-centric view of mathematical impact automatically by seeing how frequently other journals are cited by papers in Annals. This is what happens informally as journals gain and lose reputation: a promising new venue often first comes to a community's attention because its articles begin to be cited in "known good" journals.

But just taking all journals with no starting point, and attempting to extract from the citation graph which ones are "good" purely from the links, is doomed to failure, because there just isn't enough information in there to make the distinctions people want to make.

Re:a perennial problem in bibliometrics (0)

Anonymous Coward | more than 5 years ago | (#26216721)

So use PageRank (which is gamed to some extent by SEO lads, but much harder to game) or something like Advogato's trust metric (needs a "known good" supersource, but otherwise very robust).

Re:a perennial problem in bibliometrics (1)

TheNarrator (200498) | more than 5 years ago | (#26216759)

The one thing that separates crackpots from "Real Scientists" is who gets grant money. Look at String Theory or Post-modernism. Prestigious journals in both endeavors were hoaxed: Post-modernism by the infamous Sokal affair (http://en.wikipedia.org/wiki/Sokal_affair) and String Theory by the Bogdanov affair (http://en.wikipedia.org/wiki/Bogdanov_Affair). There are also a lot of dubious things going on in the softer sciences that are heavily politicized. Meanwhile, a lot of good fundamental physics and comp-sci research goes unfunded because it doesn't qualify for a DARPA grant.

Imagine a poor inventor working on the kinds of things Tesla Motors, SpaceX, or EEStor are doing. They'd probably get rejected for grants because those things are too far out of the mainstream.

Re:a perennial problem in bibliometrics (1, Insightful)

Anonymous Coward | more than 5 years ago | (#26216935)

The one thing that separates crackpots from "Real Scientists" is who gets grant money.

No, the one thing that separates crackpots from real scientists is whether their predictions stand up to experimental verification.

Re:a perennial problem in bibliometrics (2, Interesting)

TheNarrator (200498) | more than 5 years ago | (#26216975)

So I guess the string theorists are all crackpots? I might even agree with you :).

Re:a perennial problem in bibliometrics (1)

dword (735428) | more than 5 years ago | (#26216793)

You do realize you've used the word 'Annal' three times in one post, right?

Re:a perennial problem in bibliometrics (1)

Daniel Dvorkin (106857) | more than 5 years ago | (#26216825)

So maybe we need a Bayesian Impact Factor (BIF)? Start with some distribution for journal reputation (say, the results of a survey of university faculty and other researchers working in the area) as the prior, and then calculate a posterior based on observed citation data.
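One toy way to realize that (my own sketch, not a standard metric): treat a journal's per-article citation rate as Poisson, put a Gamma prior on it whose mean comes from the survey, and let conjugacy do the update. A journal the field rates poorly then needs a lot of independent evidence to climb:

```python
def bif_posterior_mean(prior_rate, prior_strength, citations, articles):
    """Gamma(a, b) prior with mean prior_rate (= a/b) and pseudo-article
    count b = prior_strength; a Poisson likelihood over `articles` articles
    with `citations` total citations gives posterior Gamma(a + c, b + n)."""
    a = prior_rate * prior_strength
    b = prior_strength
    return (a + citations) / (b + articles)

# The survey says the journal is weak (prior mean 0.5, worth 50
# pseudo-articles); it then racks up 300 citations over 100 articles
# (raw rate 3.0), but the posterior mean only climbs partway.
print(round(bif_posterior_mean(0.5, 50, 300, 100), 2))  # 2.17
```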

Re:a perennial problem in bibliometrics (4, Insightful)

__roo (86767) | more than 5 years ago | (#26216899)

This turns out to be a problem space with some really interesting conclusions. I spent some time over the last few years working with researchers from MIT, UCSD and NBER to come up with ways to analyze this sort of problem. They were focused specifically on medical publications and researchers in the roster of the Association of American Medical Colleges. They identified a set of well-known "superstar" researchers, and traced the network of their colleagues, as well as the second-degree social network of their colleagues' colleagues among other "superstars".

I built a bunch of software to help them analyze this data, which we released as GPL'd open source projects (Publication Harvester [stellman-greene.com] and SC/Gen and SocialNetworking [stellman-greene.com]). I've gotten e-mail from a few other bibliometric researchers who have also used it. Basically, the software automatically downloads publication citations from PubMed for a set of "superstar" researchers, looks for their colleagues, and then downloads their colleagues' publication citations, generating reports that can be fed into a statistical package.

They ended up coming up with some interesting results. Here's a Google Scholar search [google.com] that shows some of the papers that came out of the study. They did end up weighting their results using journal impact factors, but the actual network of colleague publications served as an important way to remove the extraneous data.

Re:a perennial problem in bibliometrics (2, Insightful)

rm999 (775449) | more than 5 years ago | (#26217071)

I hate to say this because I realize how naive it is, but who cares about the quality of journals? Perhaps it's because I'm interested in a more applied field, but I judge papers by their results, generality, accuracy, clarity, and sometimes author - not by what journal happened to publish them.

IMO most journals have been killing themselves off in the recent past. Running themselves as businesses may have worked when they served a useful purpose, but all they do nowadays is impede openness and transparency. Want to read Professor A's conclusions? You'd better pay 100 dollars to some publisher owned by a huge conglomerate, because they own that paper (which was often written under a grant funded by taxpayer money). This is unacceptable in the internet age.

IMO, all self-respecting researchers should avoid submitting to journals that do not freely provide all content online.

mixture of people (2, Insightful)

Trepidity (597) | more than 5 years ago | (#26217109)

The people who most directly care about especially quick-to-skim summaries of quality (like impact factor) are people judging the output of professors. If you're not familiar with a sub-field, how do you separate the professor who's published 20 lame papers in questionable venues from the professor who's published 20 high-quality papers in the top journals of his field? You look at some sort of rating for the venues he's published in.

For reading papers, I agree it's not quite as relevant. I still do do a first pass of filtering by using my subjective views of publication quality, though. I'm more likely to give some surprising-sounding claim a thorough evaluation if it was published in a reputable journal than if it was published as a PDF on the internet, or in some obscure conference. You can't read everything, and the well-known conferences and journals in my area provide one level of vetting that I can rely on.

Why assume a single definition of good? (1)

Willbur (196916) | more than 5 years ago | (#26217301)

It seems to me that part of the issue here is that you're trying to form a single ranking of all the papers/journals, and there might not be one. Netflix doesn't try to form a single ranking of all movies, they try to find the ones that a particular individual will like - a personalised definition of good.

This allows the crackpots to have their own definition of 'good', and there is nothing wrong with that.

For individual researchers this approach would probably work very well. Funding bodies would need to specify more constraints than just that they want "good research" to get a useful answer. Figuring out what those extra constraints 'should' be is an interesting question.

yeah (1)

Trepidity (597) | more than 5 years ago | (#26217395)

That always struck me as somewhat funny about the term "impact factor". In normal speech, having an impact means having an impact on something. These factors seem to dodge the question of what the impact is on by choosing something really broad, like "impact on the advance of science". But it shouldn't be a surprise that that's more or less unmeasurable.

Impact Factors are a Joke (2, Informative)

Anonymous Coward | more than 5 years ago | (#26216567)

The problem with impact factors is that they don't measure the quality of the papers, they just measure the number of times they're referenced. The thought is that the number of times a paper is referenced is proportional to its quality. Sort of like the concept behind Google PageRank - more inward-pointing links means that the site is "better". ... Except that relying solely on incoming links doesn't work too well if people start to game the system. Google, which made its name with the power of PageRank, has since demoted it to "one of the factors" in determining result positioning, recognizing that simply counting incoming links leaves them wide open to manipulation. They're also ruthless about plonking anyone who is found trying to game the system. Impact factors don't have this defense - it's a straight sum-and-divide operation, with little to no adjustment or oversight.

As I understand it, this sort of "gaming" is why crappy fringe journals sometimes get huge impact factors. What happens is, deliberately or not, the authors in those journals self-reference like crazy, jacking up the references per article count. It's like a set of websites which all link to each other extensively, but have very few incoming links from outside their clique. IIRC, Google compensates for this now, while impact factors do not.

I've noticed a disturbing trend towards reliance on impact factors in judging the importance of work (say in tenure evaluations, etc.). The more importance people attach to such a flimsy system, the more frequently you'll hear such cases of gaming the system.
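To make the "straight sum-and-divide" concrete, here is a toy version of the standard two-year calculation, plus a variant that drops the self-citations described above. The journal names and citation counts are invented:

```python
# The standard 2-year impact factor is sum-and-divide: citations received
# this year to items published in the previous two years, divided by the
# number of citable items from those two years. Data below is invented.

def impact_factor(cites_to_recent, citable_items):
    return cites_to_recent / citable_items

def impact_factor_no_self(citations, journal, citable_items):
    """Same calculation, but ignore citations coming from the journal itself.

    citations -- list of (citing_journal, cited_journal) pairs
    """
    external = sum(1 for src, dst in citations
                   if dst == journal and src != journal)
    return external / citable_items

# A journal whose citations are 90% self-references:
cites = [("CSF", "CSF")] * 90 + [("Nonlinearity", "CSF")] * 10
print(impact_factor(len(cites), 50))            # raw IF: 2.0
print(impact_factor_no_self(cites, "CSF", 50))  # self-cites stripped: 0.2
```

The gap between the two numbers is exactly the gaming margin: the raw figure rewards the self-citing clique, while the filtered one does not.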

Is it really a high impact factor journal? (3, Insightful)

Pinckney (1098477) | more than 5 years ago | (#26216603)

The summary claims that Chaos, Solitons and Fractals has a high impact factor. The blog linked to, however, does not assert this, and I see no source for it. He does also co-edit the International Journal of Nonlinear Sciences and Numerical Simulation, which the blog says "flaunts its high 'impact factor'." The link to the IJNSNS praising him is broken, so I can't confirm that.

It looks to me like some crackpot got a journal. However, it doesn't seem particularly devastating. Nobody has based work on his articles purely on the basis of the "Impact Factor." I don't think anyone else is taking him seriously. At worst, libraries have paid to subscribe.

Re:Is it really a high impact factor journal? (1)

canajin56 (660655) | more than 5 years ago | (#26216923)

Some institutions may base funding on your publications weighted by the impact of the journal they are published in. I dunno if any DO, but it's possible. It's certainly not uncommon to determine funding based on number of publications, and I'd hope those numbers are weighted by the MERIT in some way or other, or else you just get people spamming "publication mills" with randomly generated BS, and getting more funding to pay for the exorbitantly high application fees ;) I recall a number of years back, it was a front page article here on /., about a well-known (and allegedly peer-reviewed) conference in CS, where somebody successfully got a randomly generated gibberish paper accepted. They charged something like $500 to attend, so it really was a matter of money-for-publication.

Not exactly newest news... (2, Interesting)

Digana (1018720) | more than 5 years ago | (#26216647)

Slashdot is a bit late in reporting this news... I tried to submit it earlier [slashdot.org] when the story was fresher.

The problem at heart is that one of the biggest and evillest academic publishers, Elsevier, has been supporting a crackpot.

This shows that Elsevier isn't doing enough to ensure the quality of research, and worse, libraries are paying huge fees with tax money for worthless journals. The problem here is bundling: university libraries have to buy journals in bundles, and one of those bundled journals may be full of crackpot ideas, as this one was.

Boycott Elsevier! Let's have open access already.

EL Naschie Affair (4, Interesting)

MarcusMoonus (652677) | more than 5 years ago | (#26216681)

This has been a fascinating case of crackpottery. Read the blog and the subsequent replies. El Naschie seems to make it (quantum-mechanical babble-speak) up as he goes along, but unless you are an expert in this area, as Dr. John Baez is, it would be difficult for the casual reader to discern this. This is similar to the Bogdanov affair, another well-known scientific scam ( http://en.wikipedia.org/wiki/Bogdanov_Affair [wikipedia.org] ). I'm a little surprised it took this long for Slashdot to discover this one. One other thing: one of Baez's beefs, among others, is that this bogus El Naschie journal is bundled with more respectable journals, and Elsevier profits from the bogus science.

*shakes head sadly* (4, Funny)

Fractal Dice (696349) | more than 5 years ago | (#26216751)

Alas, something I discovered to my sorrow over the years is that sufficiently specialized math is indistinguishable from gobbledygook (and vice versa).

It's Elsevier... (1, Interesting)

Anonymous Coward | more than 5 years ago | (#26217009)

Half their journals are top-of-their-class, the other half are low-quality or almost useless garbage (like the example in the article) that still get cited more than they should because they show up automatically in searches in any of Elsevier's other journals or search databases. Oh, and of course this part:

"The fact that this journal costs $4520 per year would be hilarious, except that libraries are actually buying it - at a reduced rate, bundled in with other Elsevier journals, but still!"

Ah, bundling. It looks like a good deal, until you realize much of what you get in the bundle amounts to the journal equivalent of crapware and simply clutters up the library. Some of those journals I wouldn't pay $100 for, but the library has them wasting space on the shelves.

Job security through obscurity in mathematics (5, Interesting)

grandpa-geek (981017) | more than 5 years ago | (#26217017)

People used to say about a mathematician or physicist that "what he is doing is so important that only a few people in the world can understand what he is talking about."

In a few cases it was actually true.

Also, there were mathematicians who believed that the highest form of mathematics was work that had no practical application. There was a story that the inventor of matrix theory expressed pride that he had invented a form of mathematics with absolutely no practical use. Little did he know how extensively his work could be used. He would have been appalled.

There still seems to be a feeling that the less people are able to understand a paper in a math journal, the more important the paper is likely to be.

At one time I was a subscriber to the Annals of Mathematical Statistics. Papers in math journals usually assume that you know every paper previously written by the author and the others in the field. There is often very little introductory material and no tutorial material in these papers. Even if you have a general understanding of the topic, you can't follow the papers because they are written very concisely, and assume that nothing needs to be explained if it was ever published anywhere else. You may have to backtrack for years of someone's papers and still not be able to understand the paper you are trying to read.

This is probably a combined consequence of "publish or perish" in academia and page limits in journals. It is often hard to tell if a given paper makes any sense or is useful.

I guess you could call it job security through obscurity.

Re:Job security through obscurity in mathematics (0)

Anonymous Coward | more than 5 years ago | (#26217685)

Gee... I wonder if any of this guy's math is being used in current Climate "Change" (formerly Global Warming) Models...

How insulting! (2, Funny)

uberjack (1311219) | more than 5 years ago | (#26217025)

This is an example of the sort of abuse we get all the time from ignorant people. I inherited this science from my father, an ex-used-car salesman and part-time window-box, and I am very proud to be in charge of the first science with free gifts. You get this luxury tea-trolley with every new enrolment. In addition to this you can win a three-piece lounge suite, this luxury caravan, a weekend for two with Peter Bonetti and tonight's star prize, the entire Norwich City Council.

I may be thinking about this as a CS geek .... (0)

Anonymous Coward | more than 5 years ago | (#26217147)

Why not just have a second impact factor that doesn't count same-author or same-journal cites? Maybe one of each; I'm not sure which would be better, but it seems like the impact factor is being gamed too easily and needs updating.

Also, automatic tools for checking similarity to previous work IN THE SAME JOURNAL seem like a no-brainer. Not that an editor couldn't get around this, but it would be more obvious.
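Even the similarity check could be dead simple. A toy version, using word-shingle Jaccard overlap between a submitted abstract and the journal's own back catalog (the threshold and texts are made up):

```python
# Hypothetical sketch of a "similarity to previous work in the same
# journal" check: flag a new abstract whose word-shingle overlap with an
# already-published abstract is suspiciously high. Threshold is made up.

def shingles(text, k=3):
    """Set of k-word shingles (consecutive word tuples) from a text."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity of the shingle sets of two texts."""
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

def flag_similar(new_abstract, published, threshold=0.5):
    """Return previously published texts too similar to the new one."""
    return [old for old in published
            if jaccard(new_abstract, old) >= threshold]
```

An editor determined to self-plagiarize could paraphrase around it, but as the parent says, the rewriting effort itself would make the pattern more obvious.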

peer review (0)

Anonymous Coward | more than 5 years ago | (#26217171)

Originally, peer review meant that peers in the field would actually review the work. Researchers knew each other, and the endorsement of a well-known researcher actually meant something. As research communities have grown, journals have established themselves as the authority about what is good research, but this system was bound to deteriorate because there are conflicts of interest. It's not really always in their best interest to give good reviews, and often they are not really capable of doing the job properly.

There's an easy solution. Peers just need to start reviewing each others' works again and put things back the way they were. We read each others' papers anyway, we might as well give a review. Perhaps we could use digital signatures to make the reviews verifiable. Here's a tool that does that [google.com]. If researchers started reviewing each other again, it would naturally create a decentralized social network of linked papers and reviews that we could analyze. Essentially, that network would say everything about who is central in research communities, and who is only connected with a few lucky reviews.

Buzzword Ponzi Scheme (2, Insightful)

sharkette66 (256044) | more than 5 years ago | (#26217207)

When the first questionable but exciting buzzwords come to life, just explain away the doubters with more buzzwords that sound even better!!

Would the wikipedia deletion help (0)

Anonymous Coward | more than 5 years ago | (#26217221)

It seems that eliminating the Wikipedia entry would make it easier for him to continue his crackpottery - it removes one of the most popular places people could look to find a critical opinion of his work.

Deleting from the library would help (1)

DaveInAustin (549058) | more than 5 years ago | (#26217651)

It might help if everyone asked their school library [utexas.edu] to stop subscribing to this "journal" and perhaps review other journals by this same publisher to see if they are worth keeping. At a time when worthwhile journals are being cut, it's a shame that schools are still paying for this one.