Dissociating the good literature from the bad literature is an endeavor we all do individually (but not for long: ChemRank (http://www.chemrank.com)). If only there were some website where we could tell whether that synthetic prep is accurate or that physical model is valid. The current methods for judging the validity of the literature are: perform the same experiment yourself, try to infer quality from how many people cite the article, ask around the department for someone who did something similar, or try to gauge the quality of the paper from the author's h-index (http://en.wikipedia.org/wiki/H-index).
The basic idea is very cool. However: with the current system it is just too much fun to click on the red/green flags of articles without any further action. No wonder people are just messing with this. As of now I would consider the whole thing to be untrustworthy. And since the voting is done via a form, anyone mean enough could just write a little bot that POSTs random votes. Fair enough, you check IP addresses (cookies? I haven't checked) so that no one votes twice. But this way you will not really get any number of comments, and this is basically the place where comments are essential. In other words, the rating system right now does not work as intended, right?

Trust in what sense? Perhaps the goal wasn't stated too clearly. What I wanted to see, eventually, was recent good chemical literature rise up on ChemRank so more people could see it, and then after a week on the front page it would go into the archive. It really was just meant as a Digg.com clone, not necessarily a definitive algorithm for telling everyone what the good papers are. In a way, if a paper creeps up to the top in a given week, that probably means it is of some quality, but that is about as much information as you could get from it.
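The double-voting concern above can be made concrete. A minimal sketch of per-IP duplicate filtering for a voting endpoint might look like the following; all names here are hypothetical, and ChemRank's actual implementation is unknown:

```python
# Hypothetical sketch of per-IP duplicate filtering for a voting endpoint.

def record_vote(votes, seen, article_id, voter_ip, direction):
    """Record an up/down vote unless this IP already voted on this article.

    votes: dict mapping article_id -> net score
    seen:  set of (article_id, voter_ip) pairs already counted
    Returns True if the vote was counted, False if it was a duplicate.
    """
    key = (article_id, voter_ip)
    if key in seen:
        return False  # same IP voting twice on the same article
    seen.add(key)
    votes[article_id] = votes.get(article_id, 0) + (1 if direction == "up" else -1)
    return True

votes, seen = {}, set()
record_vote(votes, seen, "doi:10.1021/example", "10.0.0.1", "up")
record_vote(votes, seen, "doi:10.1021/example", "10.0.0.1", "up")  # duplicate, ignored
```

Of course, a check like this does nothing against a bot that rotates IP addresses, which is exactly the critic's point.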
What options do you have? Force people to add a comment on every vote? That will not increase votes, but it will keep scores from getting messed up by "vandals".

A comments section was added so there would be a record of why a person voted a paper up or down, if they elected to give a reason. I don't believe in forcing people to do this. Most won't, and that is just the nature of things.
I strongly believe that in the end we will have to settle for a different system: anyone willing to comment on a paper will do so in her/his blog, utilizing microformats (see structured blogging), which would then be aggregated by a site that collects all those reviews and summarizes/averages the scores. In this way one could even provide a nice template for those reviews, incorporating scores for creativity, reproducibility, good writing, etc.

I've tried to understand microformats, but they make no sense to me. If I can't get it, most bloggers probably won't adopt it anytime soon either.
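For what it's worth, a microformat is just conventional class names on ordinary HTML: a review marked up in the hReview style carries its score as, e.g., `<span class="rating">4</span>`, and an aggregator only needs to pull those values out. A rough sketch using only the Python standard library (the `hreview`/`rating` class names follow the public hReview convention; the extractor itself is hypothetical):

```python
# Sketch: extracting hReview-style ratings from blog HTML with the stdlib.
# "hreview" and "rating" are conventional hReview microformat class names;
# the extractor and sample markup are hypothetical.
from html.parser import HTMLParser

class RatingExtractor(HTMLParser):
    """Collect the text of every element carrying class="rating"."""
    def __init__(self):
        super().__init__()
        self.in_rating = False
        self.ratings = []

    def handle_starttag(self, tag, attrs):
        if dict(attrs).get("class") == "rating":
            self.in_rating = True

    def handle_data(self, data):
        if self.in_rating:
            self.ratings.append(float(data.strip()))
            self.in_rating = False

blog_post = """
<div class="hreview">
  <span class="item">Some JACS paper</span>
  scored <span class="rating">4</span> out of 5
</div>
<div class="hreview">
  <span class="item">Some JACS paper</span>
  scored <span class="rating">2</span> out of 5
</div>
"""

parser = RatingExtractor()
parser.feed(blog_post)
average = sum(parser.ratings) / len(parser.ratings)
```

The blogger only writes the HTML; the parsing and averaging happen on the aggregator's side.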
Decentralized voting, centralized reporting.

Sounds fine. ChemRank can be made to aggregate just as well as any other site.
Whattaya think?
Well, Digg and the like can work because of economies of scale. If you only get 5 votes, what is that going to tell you? There aren't that many scientists out there rating papers, much less chemists, and much less organic chemists or any other even more specific subgroup. And thus, as you can already see, random voting vandalism becomes even more of a problem than in those cases: the range of topics to vote on is much more diverse, and potential voters are rare. At least on Digg you need to be registered ;) to vote. (Oh, by the way, see Florida.)
Yes, you are right, that is the nature of such things. But you cannot count on getting rid of vandals through massive voter numbers, and what really matters is not thumbs up or down but real criticism, comments, and rants on a paper. At least that's what I would want; otherwise I wouldn't even look at a system like that but would rather ignore the article. My approach is much more: if there are people saying good things about the article, I might read it if it is of general interest or someone points out a specific highlight in the way the article is written.

The comments section is there for you to see what comments people have posted about the article, so we are both in agreement?
You gotta admit: if the article somehow relates to your area of interest you read it anyhow, no matter what the head count is going to be.

Yes, and now you have a place to put your views about it without the need for a blog. ;)
The blogger per se doesn't need to understand any of this from the technical viewpoint. Just look at Chemical Blogspace and how it pulls out links to papers, molecules, and conferences. THAT I would love to see for comments on papers: pull out the corresponding data that gets put into the comments, all posted on their individual blogs. Make it as easy as HTML markup, and provide templates for WordPress, Typepad (you name it; see structured blogging again). Then on ChemRank you could just display a paper, with links to all the comments on blogs and a summary (maybe in some numerical way, or via tags, or whatever). But the actual evaluation is done in a place where the author has some known credibility, personality, reputation -- identity. THEN I can start trusting an aggregated/summarized ranking of papers. THERE is valuable data. But, sorry, just a thumbs up or down as ChemRank has it at the moment has little value for me ;(
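The decentralized-voting, centralized-reporting scheme described above reduces to a simple grouping step: each blog contributes (paper, score, source) triples, and the central site groups them by paper while keeping the link back to each reviewer's blog, i.e. their identity. A hypothetical sketch with made-up data:

```python
# Sketch of "decentralized voting, centralized reporting": reviews are
# harvested from individual blogs; the central site only groups and
# summarizes them. All data and field names here are hypothetical.
from collections import defaultdict

# Triples as they might be harvested from individual blogs:
# (paper identifier, numeric score, reviewer's blog -- their identity).
harvested = [
    ("doi:10.1021/paperA", 5, "blog.alice.example"),
    ("doi:10.1021/paperA", 4, "chem.bob.example"),
    ("doi:10.1021/paperB", 2, "blog.alice.example"),
]

def summarize(reviews):
    """Group reviews by paper; report mean score, vote count, and sources."""
    by_paper = defaultdict(list)
    for paper, score, source in reviews:
        by_paper[paper].append((score, source))
    return {
        paper: {
            "mean": sum(s for s, _ in entries) / len(entries),
            "votes": len(entries),
            "sources": [src for _, src in entries],
        }
        for paper, entries in by_paper.items()
    }

report = summarize(harvested)
```

Keeping the source blog alongside each score is what preserves the reviewer's identity and reputation in the central report, which is the critic's main requirement for trusting the ranking.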
Mitch, I am sorry to sound a bit too critical and pessimistic about this thing as it is. Maybe my usage scenario is different from yours; maybe I am projecting much more into it than is intended?

It's okay. I like pessimistic people; they keep all of us optimists grounded in reality. Here is my personal take on the matter. When I made Chemical Forums, there already existed many, many other chemistry forums on the internet. But I knew I could do it better or I wouldn't have made it. People always said, "yeah, Chemical Forums is nice, but why use it when I frequent these other more populated chemistry forums?" The reason Chemical Forums became the premier chemistry forum is the community of people that helped answer questions and helped make this place a great site. The eventual success of ChemRank will rest on building a community. I agree it is difficult, but you can't argue that I haven't had some success in doing just that in the past...