
Study Attempts To Predict Scientists' Career Success

samzenpus posted more than 2 years ago | from the what's-in-the-cards dept.

Businesses 64

First time accepted submitter nerdyalien writes "In the academic world, it's publish or perish; getting papers accepted by the right journals can make or break a researcher's career. But beyond a cushy tenured position, it's difficult to measure success. In 2005, physicist Jorge Hurst suggested the h-index, a quantitative way to measure the success of scientists via their publication record. This score takes into account both the number and the quality of papers a researcher has published, with quality measured as the number of times each paper has been cited in peer-reviewed journals. H-indices are commonly considered in tenure decisions, making this measure an important one, especially for scientists early in their careers. However, this index only measures the success a researcher has achieved so far; it doesn't predict their future career trajectory. Some scientists stall out after a few big papers; others become breakthrough stars after a slow start. So how do we estimate what a scientist's career will look like several years down the road? A recent article in Nature suggests that we can predict scientific success, but that we need to take into account several attributes of the researcher (such as the breadth of their research)."


Teaching? (4, Insightful)

theNAM666 (179776) | more than 2 years ago | (#41365251)

Ah, nah, what was I thinking. Whether someone produces future scientists or students who know science, doesn't matter one bit. Let's continue to fetishize publication, and the system of duchies it rests on!

Re:Teaching? (1)

Anonymous Coward | more than 2 years ago | (#41365319)

In my country, lecture attendance is not obligatory. Even when lectures are offered, often the majority of students never come in until the day of the exam. Making a talented scholar spend his precious little time in front of a classroom, and making the students come in every morning when they could otherwise have set their own schedule to study, is a waste of everyone's time in so many fields. There's really little that an academic can add to the excellent textbooks already published, but there's a lot of work to be done in discovering new things. Yes, teaching can fairly take a back seat to research.

Re:Teaching? (5, Insightful)

Anonymous Coward | more than 2 years ago | (#41365429)

There's really little that an academic can add to the excellent textbooks already published...

You have obviously never:

a) Had to learn from papers, rather than textbooks
b) Had a GOOD lecture

If you need an up-to-date view of a field, a "lecture-ised" literature review from someone who knows what they are talking about can save you literal weeks of sifting through papers. The best lecture courses don't teach from textbooks but go through the primary literature, and that takes a good academic to do.

Re:Teaching? (4, Interesting)

rmstar (114746) | more than 2 years ago | (#41365657)

b) Had a GOOD lecture

Good lectures make lazy students.

A student must feel truly abandoned, left to his own designs in an unjust game. Only students like that go to libraries, ask around, and make an effort to actually acquire their skills by themselves. Only those students have a long term chance of achieving anything.

The products of "good lectures"? Like fat and complacent castrated cats that never learned to fend for themselves. Useless.

I do teach at a university. I do good lectures, and produce lots of useless, well fed and lazy fat cats. They rate me as a good prof and everybody is happy. I do what I get paid for - but I know better.

Re:Teaching? (4, Insightful)

The Dancing Panda (1321121) | more than 2 years ago | (#41366571)

I'm just going to put this out there: you're wrong. A good lecture should produce students who want to learn more about the subjects on their own. Not because they have to in order to pass your class, but because you made the subject interesting. If you're not doing that, but rather just telling the kids all they need to know to pass your test, it's not a good lecture, it's a good study session.

Re:Teaching? (0)

Anonymous Coward | more than 2 years ago | (#41366867)

I tend to agree with rmstar "Good lectures make lazy students".

Some of the best teachers force their graduate students to read the papers. In Science, papers seem to be 5 to 10 years ahead of the books, so, if you are not attending classes, the only way to keep up is by reading papers.

Generally speaking, MS and BS students are too busy to read papers unless the reading is required for the course. If they don't read the papers while in college, they tend to avoid reading the papers after college.

NO! (1)

Dogbertius (1333565) | more than 2 years ago | (#41369239)

No, sir/mme, you are dead WRONG!

If the student takes a sincere interest in the subject matter, and comes from a background where he/she knows that working hard will help avoid:
1) A low paying career
2) A meaningless job
3) A lifetime of misery

Then that student either forces himself or herself to show a committed interest in the field, or finds one where he or she can naturally develop such an interest.

You seem to be of the impression that lectures should instill a DESIRE in students. My experience is that excellent teachers challenge students, and expect them to know the practical aspects of a career instead of just the theory and the simple facts needed to pass a test. Real profs produce students that know more than the rest. Sadly, in North America, "no child left behind" means no child gets ahead.

Background info: I've been pulling in $100k+ per year since 24, but went into my undergrad on a basic scholarship from poverty. Now working with my biomed/computer B.Eng. and M.Eng. systems engineering degrees, and making more than my profs since I was IN my undergrad. So no marketing/handout/BS comments are applicable here.

Re:Teaching? (0)

Anonymous Coward | more than 2 years ago | (#41366633)

Your job as a lecturer is to produce lots of good lectures that pique your students' interest. Your students' job is to be proactive and learn more than what you've been teaching them. Granted, most students, not just yours, will be "lazy fat cats" anyway, but you should never compromise the quality of your lectures. Doing so will only turn away the few true learners among your students who seek true enlightenment on their own.

Re:Teaching? (0)

Anonymous Coward | more than 2 years ago | (#41366709)

Holy jesus, BAD lectures do that. Good lectures stimulate and guide. Two points

1.) If you are new to a field, how do you get started? What books are good? If you know nothing of a field, you ask around for help. Then there is a "teacher" teaching you. You lazy bastard!

2.) Written word is relatively recent in human history. Many/most people learn best by watching others do things (think mirror neurons?). Depending on the subject, lectures may be appropriate. Not everyone learns well by reading. In other words, I'M AN AUDITORY LEARNER, YOU INSENSITIVE CLOD.

As stated before, a GOOD lecturer can help bootstrap the process and guide students so that they are not wasting their time. It's up to the student to take it from there. If your students can take the tests without learning some stuff on their own, then you are failing as a lecturer (although, admittedly, it's probably not by your own choice).

Re:Teaching? (1)

sjames (1099) | more than 2 years ago | (#41368575)

What good then is the University? A good public library is much cheaper.

Re:Teaching? (0)

Anonymous Coward | more than 2 years ago | (#41372099)

Exactly. That's what keeps the university business^H^H^H^H^H^H^H^Heducators awake at night.

Re:Teaching? (0)

Anonymous Coward | more than 2 years ago | (#41372505)

b) Had a GOOD lecture

Good lectures make lazy students.

A student must feel truly abandoned....

I do teach at a university. I do good lectures, and produce lots of useless, well fed and lazy fat cats. They rate me as a good prof and everybody is happy. I do what I get paid for - but I know better.

You aren't a good lecturer. You only think you are. What you actually are is a douche.

Re:Teaching? (1)

excelsior_gr (969383) | more than 2 years ago | (#41367581)

Yet a LOT of textbooks just go over the same things over and over and over again. Sadly, a large number of books does not necessarily mean an equally large diversity.

It is very hard nowadays to write a good textbook. Mostly because the classical subjects have been more or less covered by excellent books (that survived the all-judging lapse of time), and because the new subjects are really, really hard. For my dissertation I had to go through hundreds of papers, which I tried to group and summarize in the early chapters. However, turning that work into a proper textbook would be a Sisyphean task: not one single paper is complete in its presentation, which means that one would either have to dig further into the literature for yet more information or have to figure out the details oneself, which is also a mountain of work. Furthermore, new papers come out all the time, which would make the book outdated upon release if they are not reviewed and cited properly. Malus points if a paradigm-shifting paper comes out, which would make the whole thing obsolete, thus sending thousands of man-hours into the dustbin.

So, do we need another book on classical thermodynamics? I don't think so. On non-equilibrium thermodynamics? Hell yeah! Do we need a book on fluid dynamics? Boooring! On multi-phase fluid dynamics? Bring it on! (and these are not even brand new subjects)

Re:Teaching? (1)

interkin3tic (1469267) | more than 2 years ago | (#41365507)

Why not? Clearly no one involved (students, teachers, universities) values good teaching skills as highly as research grants and name-brand recognition. If you want good teaching, look at Khan Academy or a smaller school.

Or better yet, teach yourself. Science classes are for memorizing facts, which is not exactly science. I'm skeptical that one can really teach a roomful of people how to think scientifically in 3 hours a week for a semester.

Re:Teaching? (1, Insightful)

CanHasDIY (1672858) | more than 2 years ago | (#41365977)

Or better yet, teach yourself. Science classes are for memorizing facts, which is not exactly science. I'm skeptical that one can really teach a roomful of people how to think scientifically in 3 hours a week for a semester.

Actual knowledge is not the reason to go to school - that piece of paper you get at the end, which confirms you possess said knowledge (or rather, confirms you paid for said piece of paper), which is essential to getting a job in about 90% of the market, is.

There are a lot of us "self-taught" scientists, engineers, etc., who get stuck with entry-level wages because we lack the aforementioned written credentials, actual skills and know-how notwithstanding.

Re:Teaching? (1)

theNAM666 (179776) | more than 2 years ago | (#41371965)

Well... my first-semester Physics experience, which was quite fortunate, was 10 students for 6 hours of class a week, plus 6-10 hours of lab, in Morley's old basement facility, taught by a theoretician and an experimentalist, with a dedicated TA and an undergrad assistant.

Only 3 of us became professional physicists; one of us works in theatre. But each of us derived E=mc^2 from experiment on our own-- that's great teaching, which you can't do on your own. We learned how to think; we learned that we could solve difficult problems on our own-- but of course, with a lot of help and guidance over our shoulders in that case.

For me, the ultimate lesson was a sort of arrogance-- the arrogance to take on any problem apace, to beat my head against the brick wall again and again, until a solution was found. It was about what we could do if we applied ourselves-- what we could get done and achieve.

In fact that course often took 30 and sometimes 40 hours in a week. But the lesson-- persistence, and that hours are necessary and you get results if you put them in, with a good dash of competence and knowledge-- has shaped much of my life since.

Even if that has meant sleeping on a cot in the office for weeks on end, to get the job done, "whatever it takes," as John Walker at AutoDesk succinctly put it.

OK-- back to work! Just wanted to point out, that great teaching, teaching which teaches you to teach yourself, has a place, and furthers scientific endeavor beyond what is measured by just "publication."

Re:Teaching? (1)

Black Parrot (19622) | more than 2 years ago | (#41365765)

Ah, nah, what was I thinking. Whether someone produces future scientists or students who know science, doesn't matter one bit. Let's continue to fetishize publication, and the system of duchies it rests on!

Skill at teaching and skill at researching may not be correlated. And if not, someone who is very good at research should spend their time on discovery, and let someone else do the teaching.

Re:Teaching? (0)

Anonymous Coward | more than 2 years ago | (#41365829)

The purpose of a professor isn't to spoon-feed information to students. You're an adult now, it's your job to acquire the information and absorb it. The professor is provided as a resource, not a teacher. He will give lectures, answer questions, provide information in other ways, and it's your job to learn it.

And if you think it's hard learning from a professor who gives 'boring' lectures, wait until you get to the real world.

Re:Teaching? (1)

theNAM666 (179776) | more than 2 years ago | (#41378253)

Agreed on all of the above, though I've known a few people who were great at balancing both. My point was that "success" should not be measured solely in terms of the research itself, but also in terms of activities that broadly contribute to the advancement of science/knowledge.

Hirsch (0)

Anonymous Coward | more than 2 years ago | (#41365275)

Not Hurst.

predicting success is hard (4, Interesting)

RichMan (8097) | more than 2 years ago | (#41365279)

I am interested in how anyone would predict the successful contributions of people who have been hiding in the patent office for several years, being denied promotions for their lack of credentials.

Exceptions are exceptionally hard to predict.

Re:predicting success is hard (2)

Sir_Sri (199544) | more than 2 years ago | (#41365669)

Well his problem was lack of credentials as a patent officer. I'm getting a PhD in computer science, in a specific branch of computer science (AI/Games), if I needed money and got a job working at IBM on computer languages I'd be years behind my colleagues who are going straight into it from PhD's in languages, I'd even be behind some undergrads because I've done fuck all with the theory of languages in 5 years.

The other thing to keep in mind with this is that 'success' sometimes means 'can recognize projects that are worth doing, and get the money to pay for them'. Being a professor is a lot of management: I don't think my Supervisor has written a line of code for research in 3 years, not for lack of wanting or capability; he just spends most of his time teaching and managing his grad students and making sure we're getting shit done (and there's coding in teaching). In fact I don't think he's done any research that is entirely his own for 3 years; it's all been supervisions and supporting his minions. Finding people early in their career who have a viable balance between personal talent, management skill, and a diverse enough - but not too diverse - set of research interests is tricky.

I do actually think that, if you look at the research he published, it was a good indication of his capabilities generally; that he had a fill-in job at a patent office is immaterial to the fact that he published several papers in 1905 (at age 26), which would be about consistent with a PhD student in science today publishing several papers as they approach graduation.

Re:predicting success is hard (1)

radtea (464814) | more than 2 years ago | (#41366497)

I am interested in how anyone would predict the successfull contributions of people who have been hiding in the patent office for several years being denied promotions for their lack of credentials.

I believe the general statement is "Prediction is hard, especially with regard to the future".

As to that famous patent clerk, he couldn't get a job not because of lack of credentials (although he didn't complete his doctoral degree until 1905) but at least in part because he belonged to some religious or ethnic classification that was moderately unpopular at the time.

Finally, based on this metric, I must be enormously successful in academia, because I've published (in good journals) on everything from pure physics to applied physics to genomics, with side-lines into image processing and the psychology of perception. All of that jumping around actually hurt my career a lot, while people who focused exclusively on one thing and beat the living hell out of it were far more successful than I have been. (Part of the reason I left academia, except for the odd adjunct appointment, was that I didn't think a successful academic career was compatible with my broad range of interests.)

Successful Predictions Feedback Loop Overload (4, Interesting)

Githaron (2462596) | more than 2 years ago | (#41365295)

Even if they start successfully predicting individuals' careers, wouldn't the system eventually break down, since professors would probably change their behaviour based on the results of the prediction?

Re:Successful Predictions Feedback Loop Overload (1)

interkin3tic (1469267) | more than 2 years ago | (#41365539)

From TFA:

If promotion, hiring or funding were largely based on indices (h-index, the model used here or any other measure), then some scientists would adapt their behaviour to maximize their chances of success. Models such as ours that take into account several dimensions of scientific careers should be more difficult for researchers to game than those that focus on a single measure.

Re:Successful Predictions Feedback Loop Overload (4, Interesting)

drooling-dog (189103) | more than 2 years ago | (#41365659)

It's worse than that. If such an index were used widely in hiring decisions, then its success would be a self-fulfilling prophecy. It would be guaranteed to work amazingly well, because only scientists scoring highly on it would be allowed to succeed. And if you don't secure a high rating for yourself by the age of 28 or so, then you can just forget it and move on to something else.

Of course, the world pretty much works that way already, without reducing hiring criteria to a single number. The evil is that HR people will use it to minimize risk and simplify decision making, and so every employer will in effect be using the same hiring criteria. There might as well be a hiring monopoly to ensure that no "square pegs" get through all of the identical round holes.

Re:Successful Predictions Feedback Loop Overload (0)

Anonymous Coward | more than 2 years ago | (#41369439)

I don't know how other countries handle the hiring of tenured professors, but at least at my university (Germany), HR has no say whatsoever in hiring. A commission with members from all relevant groups is formed, including currently hired professors, research staff and students. All applications are distributed to be reviewed by all (!) members of the commission and applicants are eliminated based on criteria which are agreed upon before applications even arrive. After first elimination follows a possible second round and the remaining candidates are invited to hold a scientific talk with an open Q&A after and an interview with commission members only afterwards. Any of those steps can be repeated as often as required. In the end a short list is handed up the chain to the faculty and the university afterwards.

That being said, those applications are usually about 60-120 pages long, with outliers almost always going above and not below, and include, among everything else somebody in research is judged by (i.e. his complete publication history), a detailed cover letter and a CV, which sometimes contains said indices. Those indices are one, maybe two lines out of those 90 pages, which cover topics and offer insight into aspects vastly more interesting and relevant than that one number.

Different Fields Different Standards (1)

Roger W Moore (538166) | more than 2 years ago | (#41365705)

Even if they start successfully predicting individuals careers

That's a very big 'if'. For example, in experimental particle physics we publish in large collaborations. My h-index score is better than Dirac's, but Dirac was a far greater scientist than I! (And that is just the theory/experiment difference in the same field.) Similarly, I have papers in a large variety of journals, but so have the majority of us in the field. In fact, to accurately assess experimental particle physicists you need to rely more on what they do inside their collaborations than on their publishing record, so it is hard to see how any system that relies on publication record will work.

Brilliant (1)

Generators Houston (2732535) | more than 2 years ago | (#41365309)

"Publish or Perish" is a great way to put it. In the current world one has to go above and beyond to reach people through various social media and internet channels to get their information out. People now find the information that they want to read instead of just trusting the media to find it for them. Well said :)

Excluding Patent Clerks (3, Insightful)

strangeattraction (1058568) | more than 2 years ago | (#41365337)

Yes, if you had the top thesis advisor, went to the best schools, and worked in a lab with good funding, you do well. What a surprise! This would probably ignore patent clerks that discover Relativity, however. I recall one paper that claimed to be able to predict your whereabouts from some kind of cell phone info. I can predict it without any data: 90% of the population spends 90% of their time within 1/4 mile of their place of residence or employment/school etc. Wow, that was hard. Can I get a grant for that?

Re:Excluding Patent Clerks (0)

Anonymous Coward | more than 2 years ago | (#41365431)

If you use bigger words, you just might.

Re:Excluding Patent Clerks (1)

drooling-dog (189103) | more than 2 years ago | (#41365711)

It is certainly not beyond the realm of the possible, provided that you employ language that is at once more erudite and less accessible.

Re:Excluding Patent Clerks (2)

ThatsMyNick (2004126) | more than 2 years ago | (#41365595)

Sure you can. If you read the paper (and any other papers relevant to this) and tell us why you believe you can do better, I am sure you can get a grant for it. Believe it or not, this is how the scientific community works. If the author of the paper you are quoting made sure nobody else has tried this before, and he publishes his results, he has pretty much earned his grant. His results (presented at a conference or somewhere) will inspire other researchers to do better. Someone else like you will do better and publish it. This cycle goes on and on, until very little improvement is possible.

Re:Excluding Patent Clerks (0)

Anonymous Coward | more than 2 years ago | (#41365599)

This would probably ignore patent clerks that discover Relativity however.

The patent clerk did not discover relativity.

Speech given by Henri Poincaré in 1904 in Saint-Louis. [wikisource.org]

Inaccurate (2)

RGuns (904742) | more than 2 years ago | (#41365495)

The summary is pretty inaccurate. The h-index was proposed by Jorge Hirsch, not Jorge Hurst. Rather than give a vague description, why not simply provide an exact definition? The h-index of a scientist is the largest number h, such that he/she has at least h papers each of which have received h or more papers. This is easier to understand if you look at the picture in the Wikipedia entry for h-index [wikipedia.org].
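The definition above is also short enough to write down as code. A minimal Python sketch (the citation counts are made up for illustration):

```python
def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    # Walk down the sorted counts; position i is satisfied while counts[i] >= i.
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([700, 700, 700]))  # three heavily cited papers -> 3
print(h_index([10] * 10))        # ten papers with ten cites each -> 10
```

Note how insensitive the measure is to very large citation counts: the first researcher's 2100 total citations still yield an h-index of only 3.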

Re:Inaccurate (1)

Black Parrot (19622) | more than 2 years ago | (#41365715)

The summary is pretty inaccurate. The h-index was proposed by Jorge Hirsch, not Jorge Hurst. Rather than give a vague description, why not simply provide an exact definition? The h-index of a scientist is the largest number h, such that he/she has at least h papers each of which have received h or more [citations]. This is easier to understand if you look at the picture in the Wikipedia entry for h-index [wikipedia.org] .

Made a minor clarification to your definition [in brackets].

IMO the definition should be modified to exclude self-citations. Scientists like to cite their earlier work (and should, if it is on the same topic), but the h-index as currently defined tempts spamming your papers with self-cites just to drive your index up.

If anyone is interested, if you can find an author's profile on Google Scholar it will show their h-index, plus a modified h-index that only counts citations in the past five years. It will also list all their papers, the number of cites for each, and if you click, you can see the actual papers that have the cites.

For example, a certain Jorge G. Hirsch, presumably the proposer of the index, can be seen at http://scholar.google.com/citations?user=R5VYyU8AAAAJ&hl=en [google.com]

Not every author sets up a profile, in which case all the information isn't automagically gathered up for you.

Re:Inaccurate (1)

Black Parrot (19622) | more than 2 years ago | (#41365757)

IMO the definition should be modified to exclude self-citations.

Also it doesn't consider the quality of the venue you publish in. There are journals out there that will publish *anything*.

Re:Inaccurate (1)

RGuns (904742) | more than 2 years ago | (#41365763)

The h-index of a scientist is the largest number h, such that he/she has at least h papers each of which have received h or more [citations].

Made a minor clarification to your definition [in brackets].

Oops, thanks for the correction!

IMO the definition should be modified to exclude self-citations. Scientists like to cite their earlier work (and should, if it is on the same topic), but the h-index as currently defined tempts spamming your papers with self-cites just to drive your index up.

Good point. IMO excluding self-citations is good practice for pretty much all citation-based indicators.

Re:Inaccurate (1)

RadioElectric (1060098) | more than 2 years ago | (#41374985)

IMO the definition should be modified to exclude self-citations. Scientists like to cite their earlier work (and should, if it is on the same topic), but the h-index as currently defined temps spamming your papers with self-cites just to drive your index up.

That wouldn't work. Where do you draw the line? Do you not count citations from papers with the same first-author? If you do that then savvy scientists will rotate authorship on papers from their lab. Do you make it so that no citations count when there are any common authors between the citer and citee paper? That's even more unworkable considering how much scientists move around and collaborate across institutions. The only smart thing for a scientist to do then would be to strategically omit authors off a paper so that they can then cite it in the future. Even if you implemented this harshest rule, scientists would still pressure their friends to cite their papers when even vaguely related to the friends' research.
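For illustration only, here is what the "harshest rule" above (discard any citation whose paper shares an author with the cited paper) looks like mechanically; all author names and data are hypothetical:

```python
def non_self_citations(paper_authors, citing_papers):
    """Count citations from papers sharing no author with the cited paper."""
    cited = set(paper_authors)
    return sum(1 for authors in citing_papers if cited.isdisjoint(authors))

# Hypothetical paper by alice and bob, cited four times:
cites = [{"alice", "carol"}, {"dave"}, {"bob"}, {"erin", "frank"}]
print(non_self_citations({"alice", "bob"}, cites))  # only 2 citations survive
```

The rule is trivial to state and to implement, which is part of its appeal; the objection above is that it is trivially gamed by leaving an author off the cited paper, not that it is hard to compute.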

the results are not surprising (1)

Geoffrey.landis (926948) | more than 2 years ago | (#41366515)

The h-index of a scientist is the largest number h, such that he/she has at least h papers each of which have received h or more [citations]. [nb. excluding self-citations].

And that definition shows that the results of this paper are in fact so trivial as to be meaningless.

h-index is cumulative. Their results were "Five factors contributed most to a researcher’s future h-index: their total number of published articles to date, the number of years since their first article was published, the total number of distinct journals in which they had published, the number of articles they had published in “prestigious” journals (such as Science and Nature), and their current h-index."

Are any of these factors even slightly surprising? h-index is about citations. They discover that, wow, the more articles scientists have in more journals, and more widely read journals, the greater the number of times they get cited. That's not news.

They continue "Not surprisingly, the best indicator of a scientist's h-index one year in the future was their current h-index."

"Not surprisingly" is rather an understatement, since the h-index one year in the future is their current h-index, plus a small change for the citations they get this year. It's like saying the best predictor of your bank account next year is your bank account this year (except bank accounts can go down).

It would have been slightly less trivial to predict the change in h-index. But even there, it's pretty obviously the same factors.
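For what it's worth, the kind of multi-factor prediction the paper describes can be sketched as an ordinary least-squares fit. Everything below is synthetic; the feature names merely mirror the five factors listed above, and the "future h-index" is generated so that current h dominates, as the article reports:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # synthetic researchers

# The five predictors named in the article.
X = np.column_stack([
    rng.integers(1, 100, n),  # total articles to date
    rng.integers(1, 30, n),   # years since first article
    rng.integers(1, 40, n),   # distinct journals published in
    rng.integers(0, 10, n),   # articles in "prestigious" journals
    rng.integers(1, 50, n),   # current h-index
])
# Fake "h-index one year out": current h plus a small article effect and noise.
y = X[:, 4] + 0.05 * X[:, 0] + rng.normal(0, 1, n)

# Ordinary least squares via lstsq, with an intercept column.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef.round(2))  # the weight on current h-index dominates
```

Which is exactly the comment's point: when the target one year out is essentially the current value plus a small increment, recovering "current h is the best predictor" is unremarkable.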

Re:Inaccurate (1)

Have Brain Will Rent (1031664) | more than 2 years ago | (#41366535)

That seems like a pretty useless measure.

Researcher 1: 3 papers each cited 700 times, H=3

Researcher 2: 10 papers each cited 10 times, H=10

I'd still be picking researcher number 1.

sounds like a great idea for a scientific paper... (0)

Anonymous Coward | more than 2 years ago | (#41365677)

....wait a minute............ I see what the author is trying to do.......... lol

Young Geniuses Versus Old Masters (2, Interesting)

Anonymous Coward | more than 2 years ago | (#41365693)

This reminds me of some research about artists which found that you could divide the most 'successful' artists into two rough categories: those who made a big splash right away and those whose classic work did not emerge until much later.

http://www.nber.org/papers/w8368

SMBC - gaming the science publishing system (1)

Anonymous Coward | more than 2 years ago | (#41365719)

This reminds me of a relevant comic - where once this system is worked out, scientists will be trying to game the system:

http://www.smbc-comics.com/index.php?db=comics&id=1624#comic

Why bother? (2)

docmordin (2654319) | more than 2 years ago | (#41365743)

It should be noted that the usefulness of h-indices varies from field to field. For example, in various branches of pure mathematics, a heavily-referenced paper is one that, maybe, garners 25 to 100 citations. In applied mathematics and certain subsets of statistics, the threshold would be an order of magnitude larger.

Also, as a preference, I tend to ignore metrics like h-indices when evaluating a researcher, as they provide very little evidence of his or her capabilities, let alone the quality of the work.

To elaborate, at least from my own experiences, in certain portions of applied mathematics that bleed over into computer vision, machine learning, and pattern recognition, I've seen papers that are relatively mathematically prosaic, but possibly easy to understand or where the code is made available, be heralded and heavily cited for a period. In contrast, I've come across papers that provide a much more sound, but complicated, framework along with better results, possibly after the topic is no longer in vogue, and go unnoticed or scarcely acknowledged.

In a different vein, there are times when a problem is so niche, but nevertheless important to someone or some funding agency, that there would be little reason for anyone else to cite it.

Touching on an almost completely opposite factor, there are times when the generality of the papers, coupled with the subject area, artificially inflates certain scores. For instance, if a researcher spends his or her career developing general tools, e.g., in the context of computer vision, things like optical flow estimators, object recognition schemes, etc., those papers will likely end up more heavily cited, as they can be applied in many ways, than those dealing with a specific, non-niche problem, e.g., car detection in traffic videos. Furthermore, the propensity for certain disciplines to favor almost-complete bibliographies can skew things too.

Finally, albeit rare, are those papers that introduce and find the "best" way to solve a problem, such that no further discussion is warranted.

Re:Why bother? (1)

hweimer (709734) | more than 2 years ago | (#41372323)

Also, as a preference, I tend to ignore metrics like h-indices when evaluating a researcher, as they provide very little evidence of his or her capabilities, let alone the quality of the work.

Suppose you have a position to fill and get several hundred applications. What do you do?

Re:Why bother? (1)

docmordin (2654319) | more than 2 years ago | (#41373293)

In the (incomplete) hypothetical situation you proposed, there are myriad paths I'd take depending on various circumstances.

To elaborate on just one, if I was reviewing candidates for tenure-track junior faculty and research positions in an area that I was familiar with, I'd sit down and read through all of their publications. (Since I am a prolific reader, I can easily go through 300 full-length journal papers, in my spare time, every 2-3 weeks; considering prospective faculty reviews take months, I'd have plenty of time.) Once I had a handle on what each person had done, along with asking about their contribution to each paper, assuming multiple co-authors, I'd then start to weed through applicants based upon factors like venue prestige, publication count, publication rate, topic relevancy (some universities currently have general hiring freezes, due to budget cuts, but will open up positions for those focused on a particular subject area), etc. (To me, the prestige of a journal, or, in some disciplines, a conference, is important, as it shows that a person is willing to put in more effort to succeed. At the same time, however, I would not be hesitant to favor someone who was an industrious scholar and produced a great deal of papers in a mixture of mid- and high-tier venues.)

Why is publishing useful? (3, Interesting)

Okian Warrior (537106) | more than 2 years ago | (#41365753)

Unless you need publishing cred for your job, I can't see why anyone would bother going that route.

It's only really useful for tenure in a teaching position, and *slightly* useful for other job prospects. If you're not pursuing either of those, why bother?

1) Your information is owned by the publisher; you can't reprint or send copies to friends.
2) You make no money from having done the work.
3) The work gets restricted to a small audience - the ones who can afford the access fees
4) It's rife with politics and petty, spiteful people
5) The standard format is cripplingly small, confining, and constrained.
6) The standard format requires jargonized cant [wikipedia.org] to promote exclusion.

A website or blog serves much better as a means to disseminate the information. It allows the author to bypass all of the disadvantages, and uses the world as a referee.

Alternately, you could write a book (cf: Quantum Electrodynamics [amazon.com] by Feynman). There's no better way to tell if your ideas are good than by writing a book and submitting it to the world for review.

Alternately, you could just not bother. For the vast majority of people, even if they discover a new process or idea, publishing it makes no sense. There's perhaps some value in patenting, but otherwise there's no real value in making it public.

Today's scientific publishing is just a made-up barrier with made-up benefits. In the modern world it's been supplanted by better technology.

Re:Why is publishing useful? (1)

Anonymous Coward | more than 2 years ago | (#41365989)

I'm not familiar with other fields, but certainly in the realm of computer science research, there is great value in publication, and your assertion that information is locked away just isn't true.

The primary value in publishing, from the world's perspective, is that there is a permanent, cataloged copy of your work that others can find and read. Web pages and blog posts come and go over time, but published research papers are placed in organized repositories that are available for decades (hopefully indefinitely, but hey, the future's uncertain). In computer security research, compare academic conferences to "hacker" conferences like Black Hat. Academic conferences require a written paper and publish their proceedings; you can find years' worth of material presented at CCS at the ACM Digital Library, and as long as ACM exists, so will the papers. I've seen white papers that go along with a Black Hat presentation disappear within 5 years because the company for which the authors worked disappeared, and Black Hat doesn't have published papers.

As for some of your assertions about control over work, they aren't so bad in computer science. ACM lets you put a free copy of your papers on your web page even though they own the copyright; I think IEEE does something similar but am not sure. Usenix lets you retain copyright. Only Elsevier seems to be overly restrictive in what you can do when you publish through them. If all you have is a network connection and a PDF reader, you can get pretty much any ACM or Usenix paper without additional cost.

Re:Why is publishing useful? (1)

Eponymous Hero (2090636) | more than 2 years ago | (#41366041)

and soon they'll determine your potential success metric in the womb based on how many papers your parents published and just abort you if you (they) don't measure up. "papers! where are your papers?"

Re:Why is publishing useful? (1)

RGuns (904742) | more than 2 years ago | (#41366091)

1) Your information is owned by the publisher, you can't reprint or send copies to friends.

This is a sweeping generalization at best and wrong in most cases. It is perfectly possible to publish in a 'gold' open access journal like the ones owned by PLoS, in which case your information is published under, for instance, a CC license. Even most 'traditional' journals nowadays allow self-archiving of preprints and/or postprints in an institutional repository like Harvard's [harvard.edu] or in arXiv [arxiv.org] ('green' open access).

3) The work gets restricted to a small audience - the ones who can afford the access fees

Not necessarily true, for the reasons outlined above.

4) It's rife with politics and petty, spiteful people

The same goes for Slashdot, Wikipedia, the local philately club and most communities I can think of. More seriously, it also happens that a paper is vastly improved thanks to constructive and insightful comments by genuinely concerned reviewers.

5) The standard format is cripplingly small, confining, and constrained.

I can sort of see what you mean by this, although most journals allow authors to post supplementary information (or just add a link to one's web page). Is this really such a problem in practice?

6) The standard format requires jargonized cant [wikipedia.org] to promote exclusion.

There is a reason why jargon exists: it helps specialists communicate. The reason is not to exclude non-specialists, which may be an unfortunate side-effect from time to time. There are other media for that purpose (communicating the results of important research to a wider audience).

Re:Why is publishing useful? (0)

Anonymous Coward | more than 2 years ago | (#41366585)

Alternately, you could write a book

Actually, writing books is often a waste of time. In many fields they are seen as good for publishers, not so good for authors. You make little money, the work is out of date by the time it is published, it takes a lot longer to write... hey, and your example book? -- it contains reprints of... papers!

Often cited? (1)

PPH (736903) | more than 2 years ago | (#41365821)

What if this person and their articles are cited as an example of a moron?

Yeah, I know; peer reviewed articles tend not to drag colleagues through the muck, so to speak. Citations are made to build your own case, not so much to cut others down.

Talk to Google (1)

Solandri (704621) | more than 2 years ago | (#41365875)

In 2005, physicist Jorge Hirsch suggested the h-index, a quantitative way to measure the success of scientists via their publication record. This score takes into account both the number and the quality of papers a researcher has published, with quality measured as the number of times each paper has been cited in peer-reviewed journals.

That sounds like exactly what Google does with its PageRank search algorithm. Though I suspect Google is much, much further along in thwarting people's attempts to game the system.

So let me see if I get this right... (2)

EmagGeek (574360) | more than 2 years ago | (#41366019)

... quality is now going to be measured by popularity?

I see the little narcissists are growing up and setting policy now...

Re:So let me see if I get this right... (1)

Have Brain Will Rent (1031664) | more than 2 years ago | (#41366725)

Somebody mod parent up!

Divorce Research from Undergraduate Education (1)

Hangtime (19526) | more than 2 years ago | (#41366577)

This is going to be slightly off-topic, but it's something I have been mulling over for a while.

Rating the research of people who are also supposed to be teaching puts them firmly in publish-or-perish mode, and that's not good for students. Universities are both research institutions and teaching facilities; really, they should choose one. There was a time when you needed both together because of cost and learning efficiency; however, that time has come and gone, as the number of students entering these institutions has increased dramatically over the last 100 years. Many researchers who happen to be professors don't like to teach undergraduates, and many undergraduates would rather have a professor who knows them and is open to them than learn in a lecture hall from a TA. There is no need to pit these two missions against each other anymore. Lifting the grunt teaching will free our researchers to explore and continue to push their craft; in addition, institutions focused on undergraduate learning will deliver a better, more hands-on education. Face reality: very little research makes it into undergraduate education, and the longer we hold up this charade, the worse the process will get.

Re:Divorce Research from Undergraduate Education (0)

Anonymous Coward | more than 2 years ago | (#41368283)

Research and introductory/survey undergraduate classes are far removed, but that is often not the case for upper level course work, and juniors/seniors can be useful contributors to research in some areas (even if it is just being a lab monkey, they learn what a true research environment is like and free the grad students for harder work). As for the TAs, this is basically the apprenticeship model - low pay, learning how to be a professor under the guidance of a full or associate professor (the master craftsman to the grad student's apprentice). There may be merit in having formal 'research' or 'graduate' faculty with a lighter teaching load and more time spent on the research side, but at many universities, there are teaching load reductions when faculty have grants or more graduate students. Similarly there is merit in having 'teaching' faculty whose mission is more undergraduate teaching centered, and this is often reflected in instructor/lecturer positions that teach introductory/survey/service courses for other departments. There is, however, a continuum rather than a sharp divide in many fields, and the divide is not always firm. Both future research and teaching faculty come from the same pool of graduate students, and if they only study under 'research faculty' they will lack the skills to teach future generations, while if they only study under 'teaching faculty' they will lack the background to modify curricula to support the current direction of the field.

At some schools, there is/was an effort to expose early undergrads to prominent researchers, as some complained that it was unfair for undergrads to 'pay for the big names' without gaining from them directly. This has had mixed results - the Feynman Lectures at Caltech are famous, although his actual students were not as enthused. I've seen senior faculty do great jobs motivating freshmen and others who view teaching intro classes as a ball and chain to drag along. The solution should be to have departments decide how best to utilize the faculty they have and encourage them to hire appropriate faculty to produce strong research and great teaching as a department, be that through faculty great at both or a division of labor that augments the weak teaching of strong researchers with those of great instructional skill.

east anglia (1)

micahraleigh (2600457) | more than 2 years ago | (#41366907)

The East Anglia incident confirmed that peer-reviewed studies are skewed by political views.

Article Summary (0)

Anonymous Coward | more than 2 years ago | (#41367361)

In "Future impact: Predicting scientific success [nature.com]", Acuna, Allesina, and Kording predict the future h-index [wikipedia.org] of scientists using their current h-index, the square root of the number of articles published, years since first publication, number of distinct journals, and number of articles in top journals. They vary the coefficients of a linear regression with the number of years in the forecast and note that, in the short term, the largest coefficient is (not surprisingly) the scientist's current h-index, but for the 10-year forecast, the number of articles in top journals and the number of distinct journals become more important. They achieve an $R^2$ value of 0.67 for neuroscientists, which is significantly larger than the $R^2$ using the h-index alone (near 0.4).

Additionally, they provide an on-line tool [northwestern.edu] you can use to make your own predictions. (Click here [artent.net] to see this comment rendered in Tex.)
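For anyone curious, the h-index itself is trivial to compute from a list of per-paper citation counts. Here's a minimal sketch (my own illustration, not code from the paper or the on-line tool):

```python
def h_index(citations):
    """Largest h such that the author has h papers with >= h citations each."""
    cites = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, count in enumerate(cites, start=1):
        if count >= rank:
            h = rank  # paper at this rank still has enough citations
        else:
            break  # citations fall below rank; h can't grow further
    return h

# Five papers cited 10, 8, 5, 4, and 3 times: four papers have >= 4 citations,
# but not five papers with >= 5, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

The Acuna et al. model then feeds this number, along with the other publication features listed above, into a per-year linear regression; the coefficients themselves are only in the paper.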

Friggin' paywalls... (1)

DaneM (810927) | more than 2 years ago | (#41367645)

There should be some rule on /. about posting articles that you can only see if you pay someone money...grrr.

depressing (1)

Khashishi (775369) | more than 2 years ago | (#41368015)

It's too damned hard to succeed as a scientist. These men and women are already the best of the best, and they've chosen a field where you need to be the best of the best of the best to succeed. They've already gone through trials to get where they are, but it's still not enough to guarantee a permanent position or a decent wage. There's so much pressure and the competition for tenure is so tough. How is one supposed to distinguish oneself when everyone is a genius and a workaholic? It's true that competition can sometimes bring out the best in people, but at some point, people are just going to say, "I'm fed up with this game and I'm not playing anymore," and switch to a more lucrative job in the finance industry or a simpler cozy job which gives them time to spend with their family.

Re:depressing (1)

_8553454222834292266 (2576047) | more than 2 years ago | (#41370383)

Yeah. This probably cuts down on our national scientific output quite a bit.

I for one... (1)

nerdyalien (1182659) | more than 2 years ago | (#41372005)

...am cynical about a career as a scientist/academic researcher.

IMHO, there is absolutely no legitimate way of quantifying the "success of a scientist". It comes down to: 1) how a particular study stands the test of time; and 2) extended studies that confirm the accuracy of the original results, which make the original investigating scientist a true success. The best examples I can give are Prof. Higgs... even Prof. Einstein.

All this 'publish-or-perish' claptrap will do is dilute the quality of academic research, discourage collaboration, encourage the proliferation of academic malpractice/dishonesty, and perhaps drive all the truly passionate scientists and researchers away from active research into obscurity.

I finished my PhD last year in EE/CS. Personally, I did enjoy the pain/pleasure of doing research and campus life at large. However, about halfway through graduate school, I increasingly felt hopeless about being a researcher in academia. I went in with the good intention of becoming a down-to-earth, true-blue scientist/researcher. But the environment I worked in was too toxic to keep to my humble wishes. I just couldn't stay there and keep doing research with a clear conscience, knowing the academic dishonesty going on around me, with wrong-doers getting ahead in the "academic rat race" while I was constantly scrutinised for not being as productive as them. So I did the bare minimum to defend my thesis, and got out on time with a sane mind to start a career in industry as a software developer.

I have regrets about my decision in many ways. But I am happy that I do not have to sell my soul to cling on to my current position. Plus, I foresee a much better career path now compared to academia (promotions, the ability to move to different institutions/career paths); and finally, I get decent pay-cheques to enjoy life like I never did before.
