Chris Leonard and Jeff Erickson have recently posted on the "h-index", an idea proposed by the physicist Jorge Hirsch for ranking researchers in the same research area: your h-index is at least \( h \) iff at least \( h \) of the papers you've written each have at least \( h \) citations. But replacing "papers" and "citations" in this definition makes the idea a lot more general.
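
For concreteness, here's a minimal sketch of the computation (in Python; the function name and the sort-and-count approach are mine, not anything from Hirsch's paper):

```python
def h_index(citations):
    """Largest h such that at least h of the given counts are >= h."""
    # After sorting in descending order, the 1-based positions i where
    # the i-th largest count is still >= i form a prefix; its length is h.
    ranked = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(ranked, start=1) if c >= i)

assert h_index([10, 8, 5, 4, 3]) == 4  # four papers with >= 4 citations each
```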

So: say that a research area has meta-h-index at least \( h \) iff at least \( h \) of the researchers in the area each have h-index at least \( h \). This would give some indication of the size of a community, how tight-knit it is, and how heavily people in that community tend to cite each other's work.
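
The appeal of this definition is that it's literally the same computation one level up: feed each researcher's h-index into the same function. A toy example, with invented numbers:

```python
# Reuses h_index from the sketch above.
# A hypothetical area: each inner list is one researcher's citation counts.
area = [
    [10, 8, 5, 4, 3],  # h-index 4
    [6, 6, 6],         # h-index 3
    [20, 1],           # h-index 1
]
meta_h = h_index(h_index(papers) for papers in area)
assert meta_h == 2  # two researchers have h-index >= 2
```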

I wonder what this would tell us, e.g. about different subcommunities of the algorithms and theory research areas?

I'd be wary of broader comparisons using this measure, though, e.g. between CS and physics, because different citation patterns between the fields could overwhelm any other differences.

ETA: Suresh suggests normalizing the citation counts to reduce the differences between fields, and points out that there are big differences in citation patterns even within areas of computer science close enough to theory for him and others to work in both.
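
I don't know exactly what normalization Suresh has in mind; one naive possibility (purely my guess, not his proposal) would be to rescale every citation count by the ratio of some reference field's average citations per paper to the author's field's average, and then apply the usual rule:

```python
# Reuses h_index from the sketch above; a rough heuristic, not a standard method.
def normalized_h_index(citations, field_avg, reference_avg):
    """h-index computed on counts rescaled toward a reference field's rates."""
    scale = reference_avg / field_avg
    return h_index(round(c * scale) for c in citations)
```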

Comments:

ephermata:
2005-08-30T19:38:24Z

It would be interesting to see how this applies to the cryptography community. I get the feeling sometimes that there are really a few different sub-communities that all attend the same conference and superficially talk about similar things, but maybe don't say a lot to each other. I wonder if the meta-h index bears this out.

11011110:
2005-08-30T20:23:13Z

I imagine the effect of these internal subdivisions would be to make the meta-h index lower than for similarly sized but more cohesive areas?

The ones I was interested in looking at were computational geometry and graph drawing. I have the impression that graph drawing is smaller but more inclusive and close-knit, while computational geometry has a few people who are cited a lot and a larger number of people off doing their own thing, closer to a star topology. But this is a vague impression, and it would be interesting to be able to quantify it.

erniepan: meta-meta-h
2005-08-30T22:19:40Z

Why not rank departments this way? A department has meta-h-index at least h iff at least h of its researchers have h-index at least h. Then a university has meta-meta-h-index at least h iff at least h of its departments have meta-h-index at least h, and a state has meta-meta-meta-h index h iff at least h of its universities have meta-meta-h-index at least h.

What would California's meta-meta-meta-h-index be?
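
Since every level here is the same rule applied to the scores of the level below, the whole tower can be written as a single recursion (a sketch; the nested-list representation is my own):

```python
# Reuses h_index from the sketch above.
def nested_h(node):
    """An int is a citation count; a list scores as the h-index of its
    members' scores. Depth 1 = researcher, 2 = department (meta-h),
    3 = university (meta-meta-h), 4 = state (meta-meta-meta-h)."""
    if isinstance(node, int):
        return node
    return h_index(nested_h(child) for child in node)
```

So a state's meta-meta-meta-h-index is just `nested_h` of a four-deep list of citation counts.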

11011110: Re: meta-meta-h
2005-08-30T22:50:13Z

I'm not convinced you meant this seriously, but there's some merit to this: we can't really compare h numbers across disciplines, because of different citing patterns, but that's not as much of a problem for comparing similar departments in different universities. On the other hand, the h-numbers of different departments within the same university are still not comparable, so the meta-meta-h level is where it starts to become too meaningless for my taste.

11011110: Re: meta-meta-h
2005-08-31T02:04:44Z

On second thought, I think even the departmental meta-h is broken. The problem is that (unlike large-enough research areas) departments typically have too few people in them, relative to the typical size of a department member's h-number. So, at least for smaller departments, you'd just end up counting the department size instead of anything more informative. (For instance, a 15-person department whose members all have h-index 30 gets meta-h exactly 15: the measure saturates at the head count.)

None: Here is a try for CS
2007-05-06T18:07:33Z

http://www.cs.utah.edu/~shirley/hindex

11011110: Re: Here is a try for CS
2007-05-06T18:19:03Z

Thanks for the link!

Here it is again for those too lazy to copy and paste: http://www.cs.utah.edu/~shirley/hindex. The meta-h rankings look pretty reasonable, with a few surprises; how did USC get so much higher than in their other rankings?