The open access movement has one major bottleneck: missing peer review. In the classical academic publishing system a manuscript is sent to a reviewer, the reviewer provides peer review, then the manuscript is sent to the journal and the journal publishes the paper. Other publications reference the peer-reviewed paper, and this citation network produces the overall Gutenberg galaxy. It is an unsolved question how to replicate this workflow without a paid review service like Enago and without a quality journal that accepts only peer-reviewed papers.
The answer to the problem can be explained in a single sentence: a higher instance is needed to evaluate the content, and it can be realized with a Reddit clone that works on the wiki principle. But let us sort the information a bit and describe the workflow step by step.
First, the academic author needs his own blog. On the blog he posts a short paper consisting of a single page. It can be formatted in the normal WordPress syntax, in Markdown, or attached as a PDF document. All authors do the same: if we have 100 authors, each of them has his own blog and posts papers regularly.
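To make this first step concrete, here is a minimal sketch of how an author could push a one-page paper to his blog through the standard WordPress REST API. The blog URL and the credentials are placeholders; any other blog engine with a publishing API would work just as well.

```python
# Minimal sketch: publish a one-page paper on the author's own
# WordPress blog via the standard REST API (wp-json/wp/v2/posts).
# "example-author.org" and the credentials are placeholders.
import requests

paper = {
    "title": "A One-Page Paper on Distributed Peer Review",
    "content": "Abstract ... method ... conclusion (one page of text).",
    "status": "publish",
}

response = requests.post(
    "https://example-author.org/wp-json/wp/v2/posts",
    json=paper,
    auth=("author", "application-password"),  # WordPress application password
    timeout=10,
)
response.raise_for_status()
print("published at:", response.json()["link"])
```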
The second step is to evaluate the content with a higher instance. A common misconception is that a Google-like search engine can do this task. If the content of the 100 blogs were indexed, the public would be able to search and retrieve it. The problem is that somebody who does not understand a paper cannot use a fulltext search engine the right way. The better alternative is to use a Reddit-like blog aggregator for content evaluation. The famous Wikinews project demonstrated this a while ago for evaluating newspaper articles, and the same principle can be adapted to any other topic. The best tool for the job, however, is not Reddit; a normal wiki is great for the task. The idea is that the users of the wiki scan the 100 blogs for papers and put the URLs onto a wiki page.
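How this scanning could work in practice is shown in the following sketch: a small Python script that walks over the RSS feeds of the 100 blogs and prints candidate URLs for the wiki page. It assumes each blog exposes a standard RSS/Atom feed and uses the feedparser library; the feed URLs are placeholders.

```python
# Sketch of the scanning step: collect candidate paper URLs from
# the authors' blog feeds so wiki users can pick the good ones.
# Assumes each blog exposes a standard RSS/Atom feed (placeholder URLs).
import feedparser

BLOG_FEEDS = [
    "https://author1.example.org/feed/",
    "https://author2.example.org/feed/",
    # ... up to 100 blogs
]

candidates = []
for feed_url in BLOG_FEEDS:
    feed = feedparser.parse(feed_url)
    for entry in feed.entries:
        # Only the URL and the title are collected here; the fulltext
        # stays on the author's blog.
        candidates.append((entry.link, entry.title))

for link, title in candidates:
    print(f"* [{link} {title}]")  # MediaWiki-style external link line
```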
This collaborative content filtering results in edit wars, because user1 will add a URL to a paper which a second user does not want. The conflict is resolved within the wiki: on the talk page, through edit wars, and with the help of the admins. Once the wiki group has made a decision, the wiki acts as a content aggregator. It does not contain the fulltext of the papers, only the URLs to the external blogs of the authors.
The most important aspect is that content creation is separated from the evaluation process. That means the authors can create new content and flood their individual blogs no matter what the users in the wiki are doing. The wiki users, in turn, do not need to create content of their own; their job is to identify high-quality URLs to existing content and write a short teaser description for each. A second task of the wiki users is to fight out edit wars over whether a certain URL fits into the wiki or not.
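This separation can be made explicit in a small data model. The sketch below is only an illustration, not an existing system: the paper lives on the author's blog, while the wiki stores nothing but a URL, a teaser, and a review status that the wiki users fight over.

```python
# Illustration of the separation between content creation and
# evaluation. Hypothetical data model: the fulltext stays with
# the author, the wiki holds only metadata.
from dataclasses import dataclass

@dataclass
class BlogPaper:              # lives on the author's blog
    url: str
    fulltext: str             # never copied into the wiki

@dataclass
class WikiEntry:              # lives on the wiki page
    url: str                  # pointer to the external blog post
    teaser: str               # short description written by wiki users
    status: str = "proposed"  # set to "accepted"/"rejected" after the edit war

def evaluate(entry: WikiEntry, accepted: bool) -> None:
    """The wiki users' decision touches only the wiki entry,
    never the paper on the author's blog."""
    entry.status = "accepted" if accepted else "rejected"

entry = WikiEntry(url="https://author1.example.org/2019/05/paper.html",
                  teaser="Proposes a wiki-based peer review layer.")
evaluate(entry, accepted=True)
print(entry)
```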
The overall structure of 100 WordPress blogs plus one wiki provides a great hierarchy which is open to new contributors and establishes a high quality standard at the same time. Some aspects of this idea are realized in today's internet: the Wikinews project is well known, and blog aggregators exist here and there (the famous example is planet.gnome.org). But using this technology for Open Access, to realize a distributed peer review system, has not been done yet.
The key concept is to understand what Reddit and Wikinews are doing in the current internet. They are not providing new content; they are peer reviewing existing content. In the case of Reddit this is done by millions of people, while Wikinews uses a high-end wiki system which is superior to Reddit's proprietary website. Unfortunately, the number of users on Wikinews is small and the teaser texts before each URL are too long.