Is Webmedcentral a reputable scientific publisher?


I am a student, and I have written a paper (in drug discovery) which I want to publish. While searching for possible journals I came across Webmedcentral.

Is Webmedcentral a reputable scientific publisher? What is the impact factor for this journal?

I looked up this site and its journals based on its ISSN 2046-1690, but found nothing. They are fast at handling submissions and publishing, but speed is not by itself a mark of quality. It may be better to find somewhere more reputable to publish your article(s).

Here we include publishers that were not originally on the Beall’s list, but may be predatory.

  • Prime Publications (child of Academic Publications Ltd. and Dynamic Publishers, both on the Beall’s list)
  • Incessant Nature Science Publishers (INSPublishers)
  • Leon Publications (publisher of the Chemistry Research Journal, the Pharmaceutical & Chemical Journal, and the Journal of Scientific and Engineering Research)
  • Modern Business Press (see also OMICS)

Last updated March 7, 2021

Nova Science Publishers

Nova Science Publishers is an academic publisher of books, encyclopedias, handbooks, e-books and journals, based in Hauppauge, New York. [1] It was founded in 1985 in New York by Frank Columbus, former senior editor of Plenum Publishing, [2] whose wife, Nadya Columbus, took over upon his death in 2010. [3] While the firm publishes works in several fields of academia, most of its publications cover the fields of science, social science, and medicine. As of February 2018, it listed 100 currently published journals. [4] In 2018, Nova was ranked 13th among the main global publishers of political science over the preceding five years. [5]

Nova Science Publishers
Founder: Frank H. Columbus
Country of origin: United States
Headquarters location: Hauppauge, New York
Key people: Nadya Gotsiridze-Columbus (President), Donna Dennis (Vice-President)
Publication types: Academic journals, books, encyclopedias, handbooks
Nonfiction topics: Science and Technology, Medicine and Biology, Social Sciences
Fiction genres: Academic STM
Imprints: NOVA, NOVA Biomedical, Novinka
No. of employees: 55 in-house employees
Official website: novapublishers.com

Nova Science Publishers is included in the Book Citation Index. [6] In terms of number of books published from 2005 to 2012, Nova ranked 4th, and it ranked in the top three in 8 of 14 scientific fields (engineering, clinical medicine, human biology, animal and plant biology, geosciences, social science medicine, health, chemistry, physics, and astronomy). [7] It ranked as the 5th most prolific book publisher from 2009 to 2013, placing 3rd in Engineering and Technology and 2nd in Science by number of books published, [8] and was ranked as the 6th most productive publisher according to a University of Granada study. [9] However, Nova had the lowest citation impact among the five most prolific publishers in both fields. [8]

Nova has been criticized for not always evaluating authors through the academic peer review process [10] and for republishing old public domain book chapters and freely-accessible government reports at high prices. [10] [11] These criticisms prompted librarian Jeffrey Beall to write that in his opinion Nova Science Publishers was in the "bottom-tier" of publishers. [12]

A recent survey of national and international databases of scholarly book publishers (including the Book Citation Index, Scopus, CRIStin, JUFO, VIRTA and SPI) identified Nova as one of a "core of publishers that are indexed in all five" of the information systems surveyed. This "core" contained 46 out of the 3,765 publishers identified. [13]

In a 2011 report on twenty-one international social-science book publishers that assessed penetration of international markets and mentions of books in international science index systems such as Cambridge Scientific Abstracts, Nova Science Publishers ranked 17th out of 21. [14] In a 2017 ranking study of book publishers, Nova Science Publishers ranked high on number of books published, but near the bottom on citations per book. [15] A 2018 report by the same author indicates that Nova ranks in the top half of 51 publishers on factors including overall global standing of the company, impact on the global political and economic debate, success in distributing best-sellers, impact on the scholarly community, distribution to more than 50 global WorldCat libraries, output during the last 5 years, and outstanding academic quality. [16] As of 2018, however, Nova has been removed from the Norwegian Register for Scientific Journals, Series and Publishers, the Norwegian list of approved scientific publishing channels. [17]

The way forward

Get started early. While it’s often an afterthought, consider where to submit your manuscript early on, says Andy Pleffer of Macquarie University in Sydney, Australia. “Think about it up front so you’ve got a longer lead-in time and you can create a longer list of where you might publish. Especially if you’ve got a particular journal on your radar, they might have a special issue coming up that ties in quite neatly with your particular expertise.”

Scan the TOC. Are there any familiar names in the journal’s table of contents? Do you recognize any members of the journal’s editorial advisory board? If the answers to both are no, it’s probably worth looking into alternative titles, says Chad Cook of Duke University.

Read the journal’s policies. Familiarize yourself with the publication’s peer-review process, author fees, and policies pertaining to copyright, access, and conflicts of interest. All should be clearly outlined on the journal’s website.

Beware of “Contact us.” While not always a sign of a suspect publication, a journal that does not list editorial staff phone or email contact information, and instead offers only a “contact us” form, is “usually a red flag,” says Pleffer.

Check DOAJ. Look to see if the publication is listed in the Directory of Open Access Journals and other scholarly databases, and is indexed on PubMed or by the Institute for Scientific Information. If it’s not, proceed with caution.
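One quick sanity check before searching DOAJ or other indexes is whether a journal’s stated ISSN is even structurally valid: per ISO 3297, the eighth character is a check digit chosen so that a weighted sum of the digits is divisible by 11. A minimal sketch in Python (the `is_valid_issn` helper is illustrative, not part of any tool mentioned here), using WebmedCentral’s stated ISSN as input:

```python
def is_valid_issn(issn: str) -> bool:
    """Validate an ISSN's check digit per ISO 3297.

    The first seven digits are weighted 8 down to 2; the check digit
    (which may be 'X', standing for 10) must make the total divisible by 11.
    """
    digits = issn.replace("-", "").upper()
    if len(digits) != 8:
        return False
    if not digits[:7].isdigit() or digits[7] not in "0123456789X":
        return False
    total = sum(int(d) * w for d, w in zip(digits[:7], range(8, 1, -1)))
    check = 10 if digits[7] == "X" else int(digits[7])
    return (total + check) % 11 == 0

print(is_valid_issn("2046-1690"))  # True: the number is well-formed
```

A passing check only means the number is well-formed; whether it is actually registered and assigned to that journal still has to be confirmed against an index such as the ISSN Portal or DOAJ.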

Have you published in the journal? If yes, how was the overall experience? If no, have any of your colleagues or your collaborators’ colleagues?

Email overload. “If you get an invitation through email, be extremely suspicious,” says Jeffrey Beall, a librarian at the University of Colorado Denver. “Most high-quality journals don’t go looking for editorial boards through email. It’s usually the other way around: people want to serve on a particular journal’s editorial board, and they will send an email to the journal.”

Standing members. Examine the journal’s existing board. Do you recognize any names? Are any of the board members senior scientists? “What I noticed from the beginning was that there were really no well-known people [on the board]. A lot of the people were junior people, like myself,” the University of Kentucky’s Björn Bauer says of his experience with Pharmacologia. Additionally, do the board members list their participation with the journal on their CVs or biosketches? “If they back that up on their profile, that’s generally a good sign,” says Pleffer.

At Macquarie University in Sydney, Australia, administrator Virginia Barbour, chair of the nonprofit Committee on Publication Ethics (COPE) until the end of May 2017, says the organization has not extensively examined predatory publishing practices, but has partnered with the “Think. Check. Submit.” initiative, which aims to help researchers identify legitimate journals. “Our approach has been much more towards building a positive culture rather than attempting to catalog the behavior of the suspect journals,” Barbour told The Scientist in an email in April.

To that end, several groups have replaced what many considered a so-called blacklist—Beall’s—with “whitelists” highlighting reputable publishers. In addition, COPE, the Directory of Open Access Journals, the Open Access Scholarly Publishers Association, and the World Association of Medical Editors have outlined “principles of transparency and best practice in scholarly publishing.” Among these standards are that the journal clearly state its peer-review policies, APC fees, and editorial board membership.

See “On Blacklists and Whitelists”

These guidelines can help researchers as they decide where to publish their work or whether to join a journal’s editorial board, Pleffer says, but it’s ultimately up to the scientist to judge for herself whether associating with a particular publication is likely to be beneficial. Any individual journal-evaluation system, he says, may be subject to potential inaccuracies, and could quickly become outdated.

“We encourage people to look beyond a single identifier; there is no single indicator of quality research out there,” Pleffer says. “More than anything,” he adds, “it’s important for individual researchers to be actively engaged in this space so that they have a good level of understanding and control about where they are publishing their research.”

Still, predatory publications have found myriad ways to mimic legitimate journals. The best defense, says Bauer, is open communication. “Whenever I make mistakes,” he says, “I always tell people around me, [so they] don’t make the same mistakes.”

He and his wife Anika Hartz, also an associate professor at the University of Kentucky and co–principal investigator of their blood-brain barrier research lab, have used Bauer’s personal experience of being associated with a predatory publisher as a teaching tool for students and trainees. “I would assume you find this with more-junior people, who are eager to demonstrate acceptance into the field [and] have something to put on their CV in the early stages of their academic career,” says Bauer. “That’s exactly the situation I was in.”

Correction (July 18): This story has been updated from its original version to indicate that Virginia Barbour’s term as chair of the nonprofit Committee on Publication Ethics (COPE) ended in May. The Scientist regrets the error.


Publishing platforms

Author-driven dissemination in the life sciences already exists on publishing platforms. The defining feature of publishing platforms is that they empower authors to make publishing decisions. Preprint servers share author-posted articles before they undergo peer review at a journal, with no delay to dissemination. The arXiv preprint server has been used for decades in the physics community. Preprints enjoy increasing popularity in the life sciences, with bioRxiv as the biggest server [17]. Publishing platforms powered by F1000Research infrastructure go further: authors publish preprints, orchestrate the peer review process for that preprint, and publish the revised article [18].

Open-access publishing platforms are positioned to increasingly complement—and perhaps eventually replace—journals as major publication venues for primary research articles. High-volume publishing on these platforms allows primary research to be published faster, because authors decide when to publish original and revised articles. The platforms are typically cheaper than journals because they don’t include the expensive editorial curation associated with high rejection rates. Overall, the combined cost of publication on platforms and post-publication curation can be significantly lower than the current cost of journals because many primary research articles may not need post-publication curation; in fact, some fraction of specialized articles may not even need formal peer review, because the few scientists who access these articles on preprint servers can quickly evaluate the rigor and quality of the data themselves.

We envision a platform infrastructure that enables different providers to offer diverse services—publication of versioned articles from preprints to the final version of record, quality controls before publication, peer review, copy editing, post-publication curation, etc. One business model would enable service providers to charge a fee for service. Competition among service providers could create an environment of experimentation on publishing platforms that would, over time, identify the most valuable and cost-effective services. There is an argument for research funders to financially support publishing platforms and the services that run on them, at least until publishing volume has increased to levels that can sustain the platforms through service fees. Current leaders in this effort include the Wellcome Trust and the Bill and Melinda Gates Foundation, which support their own open research platforms [19,20], and the Chan Zuckerberg Initiative, which supports bioRxiv [21].

Some journals could become publishing platforms over time, shedding editorial gatekeeper roles. A publishing trial at eLife is exploring the impact of forgoing editorial rejection after peer review [22]. The editorial gatekeeper role before peer review could be replaced when it becomes feasible and culturally acceptable to use community approaches or algorithms to allocate peer reviewer resources wisely. The F1000Research platform uses authors instead of editors to select reviewers [18]. biOverlay uses academic editors to select preprints for peer review. Alternatively, scientists could self-select to review preprints [23]. Further experimentation with different peer review models could improve the efficiency and quality of peer review on publishing platforms.

Curation journals

As high-volume publishing platforms continue to grow, we’ll need curation services that select articles of interest for specific target audiences. Today’s selective journals and scientific societies could be well positioned to provide such services. Future curation journals could retain many of their current features, including subscription income, independent editors and editorial boards, and nontransparent evaluations. They could exploit the above-mentioned advantages over the curation at traditional “publishing” journals in the following ways.

Post-publication curation could be multidimensional, with articles selected based on different criteria.

Some selection criteria could be similar to those journals use today, including “of broad interest,” “of unusual significance,” “potentially groundbreaking but controversial,” or “rigorous and elegant.” Alternatively, curators could flag articles that are personal favorites, as F1000Prime does. A particularly valuable curation service would be to identify significant claims in published articles that are questionable or could not be validated in the community. Today, such judgments are largely restricted to the privacy of expert circles. If discoverable through reputable curation services, these judgments could motivate authors to publish rigorous research. The multidimensional nature of curation after publication means that it could capture nuances and complexities much better than traditional journals, in which both “positive” and “negative” curation typically boil down to simple decisions—whether to publish or whether to retract.

Post-publication curation should take full advantage of the internet and community input.

To illustrate what this input could look like, consider the following process: Every few months, an advisory board of experts nominates articles for a given set of categories. The selection that follows the nomination process can be informed by a mix of crowdsourcing, i.e., tallying the votes of board members, and editorial judgment, i.e., weighing comments from board members. The combination of community input and independent editorial oversight ensures that the selection process is not a simple popularity contest. Selection of an article is signaled by tagging the article with a badge (see below) and can be justified with a short review.
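As a toy illustration of the mix of crowdsourcing and editorial judgment described above (the function and data names here are hypothetical, not a proposal for a specific system), board votes could be tallied and then adjusted by editorial weights:

```python
from collections import Counter

def select_articles(nominations, editor_boost, top_n=3):
    """Toy selection: tally board-member votes, then apply editorial weight.

    nominations: list of (board_member, article_id) votes
    editor_boost: dict mapping article_id -> extra weight from editors
    Returns the top_n article ids by combined score.
    """
    scores = Counter(article for _, article in nominations)
    for article, boost in editor_boost.items():
        scores[article] += boost
    return [article for article, _ in scores.most_common(top_n)]

votes = [("alice", "A1"), ("bob", "A1"), ("carol", "A2"), ("dave", "A3")]
picks = select_articles(votes, editor_boost={"A3": 1.5}, top_n=2)
print(picks)  # ['A3', 'A1']: editorial weight lifts A3 past the vote leader
```

The point of the editorial term is exactly what the text argues: raw vote counts alone would make selection a popularity contest, while an oversight weight lets considered judgment override the tally.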

Post-publication curation journals confront at least two significant challenges.

Peer review burden.

Established scientists are already overburdened with peer review requests from journals. On the upside, scientists wouldn’t be asked to conduct a full technical peer review but to select articles they (and their trainees) have already dissected in detail. And starting post-publication curation with early-career group leaders and trainees may further mitigate this challenge: early-career scientists are not yet overburdened with peer review duties, have the most to gain from building an alternative evaluation and reward system in academia, and are still directly involved in hands-on research. Trainees could be engaged in curation through mentored journal clubs, similar to their engagement in the review of preprints through preprint journal clubs [24]. Ultimately, if publishing on platforms becomes mainstream and post-publication curation is recognized as a critical service for science, the burden on scientists and editors will shift from deciding what to publish to curating what is already published.

Business model.

Even if curation after publication is deemed highly valuable, it remains an untested business model. Research funders could support curation services initially, but academic libraries would ultimately have to subscribe to them to make them sustainable. It may be difficult to monetize the selection outcome itself, but scholarly reviews that justify the selections could be subscription worthy. There are opportunities to develop and experiment with new models of curation to explore what is most valuable for the scientific enterprise and sustainable for providers.

Alternatives to the JIF

One way to dissuade the use of journal-level metrics like the JIF in the evaluation of scientists is to develop better proxies that reflect quality features of articles. Post-publication curation offers an opportunity to develop such proxies in the form of “badges.” Badges would capture the selection of articles in a shorthand form and would be attached to papers in a discoverable and searchable manner. Like post-publication curation, badges could be multidimensional. Authors could contribute to badging through structured citations—e.g. by flagging research and methods articles that were foundational for a given article, similar to the practice at the Current Opinions journals, which mark references that are “of outstanding interest” and “of special interest.” (For more ideas on making citations more useful, see [25,26].) Curation journals could contribute to badging by signaling that the selected paper satisfies the journal’s editorial standards, conveying the same information as current publishing decisions. Other badges could be generated directly on publishing platforms by aggregating peer reviewer scores or by including short summary statements that distill key aspects of the paper—originality, significance, key findings, remaining reviewer concerns, target audience, etc.

Finally, some badges could take full advantage of internet capabilities and be generated automatically through crowdsourcing and analytics over time (citations, readability, data usage, etc.). Altmetrics represent one existing example of article-specific metrics that aggregate citations, downloads, social media mentions, etc. [27]. A disadvantage of usage and citation metrics is that they are lagging indicators that take a long time to accrue. To compete with a leading indicator like the JIF, it would be important to start post-publication curation and badging soon after publication while still taking advantage of community input. Because curation and badges are not “written in stone” like a publishing decision, they can be revised over time, providing a mechanism for modifying or correcting earlier judgments.
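As a rough sketch of how article-level signals might be aggregated into a single badge-style number (the class, function name, and weights here are arbitrary assumptions for illustration; real altmetric providers use their own proprietary weighting schemes):

```python
from dataclasses import dataclass

@dataclass
class ArticleMetrics:
    citations: int
    downloads: int
    mentions: int  # e.g., social media mentions

def altmetric_style_score(m: ArticleMetrics,
                          weights=(3.0, 1.0, 0.5)) -> float:
    """Toy weighted aggregate of article-level signals.

    Citations are weighted most heavily, downloads less so,
    and social mentions least; the weights are illustrative only.
    """
    w_cite, w_down, w_ment = weights
    return w_cite * m.citations + w_down * m.downloads + w_ment * m.mentions

print(altmetric_style_score(ArticleMetrics(10, 200, 40)))  # 250.0
```

Because such a score is recomputed from live data rather than fixed at publication, it naturally supports the revisable, "not written in stone" badging the text describes.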


We envision a publishing model in which dissemination and curation of scientific work are separated, making both processes more efficient and effective [28,29]. To achieve a transition to open publishing platforms and post-publication curation, the scientific community needs the self-awareness and courage to make a significant cultural shift.

Successful scientists have a vested interest in the current system. The notion that publication itself serves as a trusted “stamp of quality” is deeply ingrained, and scientists adhere instinctively to a more or less agreed-upon hierarchy of journals for judgment, reading preferences, and evaluations. As authors, they choose journals based on prestige and quality but are shielded from associated publishing costs because their libraries pay for subscription licenses. As peer reviewers, they provide critical and time-consuming services and, in exchange, gain privileged access both to unpublished work and to the “right” editors. Scientists may also interpret a call for change as an implicit criticism that they don’t execute their role well enough despite investing significant effort. It is easy to blame individuals, like editors or the “third reviewer,” for mistakes in publishing decisions. But the problems we describe here are systemic and not resolved by tackling individual misjudgments that will always be part of scientific evaluations.

Over time, however, we believe that the scientific community will come to support a progressive open publishing model that accelerates discovery and empowers scientists. Authors would spend less time and resources on getting their work published, and peer reviewers might need to review less often. Even post-publication curation could turn out to be effort neutral if it grows at the expense of pre-publication curation. To move forward, we encourage the community to push for progress on core issues such as the following.

How can we optimize the structure of peer review and the selection of peer reviewers for platform publishing? How do we determine what level of peer review an article needs in the first place—none, basic, premium? How can the peer review reports be structured—with scores and short statements of key features—to contribute most effectively to subsequent post-publication curation and badges?

How do we set up an infrastructure and culture for post-publication curation? How do we decide on suitable categories for selection? How should we identify work that didn’t stand the test of time?

Finally, what business models are best suited to support sharing of primary research articles on platforms and post-publication curation?

Publishers, scientific societies, academic institutions and their libraries, and funders can play critical roles in addressing these issues. Publishers can experiment with publishing platforms. Scientific societies can use the expertise of their members to orchestrate fee-for-service peer review on publishing platforms and subscription-based curation services. Libraries may be able to support curation journals when publication of primary research articles shifts towards cheaper publishing platforms, liberating funds that are currently spent on traditional subscription journals.

Research funders are uniquely positioned to promote change because we sit at the nexus of two interconnected functions—the sharing of research outputs and the evaluation of scientists. Many funders see it as our responsibility to support practices that disseminate research outputs openly and efficiently and evaluate scientists’ work based on intrinsic merit. The evaluation of scientists in academia places heavy emphasis on where and how much they publish, rather than what they publish. If these incentives don’t change, scientists will continue to publish in a manner that perpetuates the current problems. Changes in academic incentives cannot come from publishers. Funders and academic institutions need to commit to evaluate science and scientists independent of the publication venue [1]. Developing and sharing principles on how to evaluate scientists and learning from each other how to implement them will set us on a path to better incentives and rewards for rigorous and enduring research. One example of work in this area is the Open Research Funders Group, a community of practice.

In addition to supporting changes in the academic incentive system, funders can catalyze changes in publishing by encouraging and supporting publishing platforms, pilot studies on peer review, and new forms of post-publication curation. Such pilots should measure their impact on authors, reviewers, and readers and should be scalable. Their outputs should contribute to the evaluation of scientists and scientific work.

By fostering an environment for experiments in publication and evaluation and continuously assessing and building on effective practices, we can together develop services that best support science in the digital age. We stand to gain fairer, more effective ways to communicate findings, share data, and develop the next generation of scientists. At Howard Hughes Medical Institute, we believe this is the future of publishing. We are moving toward it.


WebmedCentral is a post-publication review biomedical web portal launched in July 2010. It aims to "eliminate bias, increase transparency, empower authors, improve speed and accountability, and encourage free exchange of ideas." There is no pre-publication screening, although the instructions for authors imply some oversight for issues such as patient consent. Authors may submit revised versions. Articles can be read for free on the website, where they may be reviewed both by reviewers solicited by the authors and by readers. There is a list of "Scholarly Reviewers" on the site. Readers may also rate articles. Biomedical videos are also published. The journal has ISSN 2046-1690, but articles do not appear to have DOIs. It is not indexed in PubMed, but the articles are indexed on Google Scholar. The site aims to host other open access, open peer reviewed journals.

Content: Primary scientific research, case reports, and reviews make up the bulk of the articles, alongside opinion, hypotheses, and outright fringe science. None have been peer reviewed before publication.

Usability: The site has a category listing, browse by date, featured articles, popular articles, most reviewed articles, RSS feeds, basic and advanced search, latest reviews. The PDF is only available via a Javascript link.

Cost: Free to read and publish, unless the author opts for the US$50 Premium Upload service.

Licensing: Authors retain copyright. Personal non-commercial use, digital archiving, and self-archiving are allowed, though no standard license is used and the details are confusing.

Free to read and publish, the journal aims to receive income from advertising and sponsorship. They offer a "premium upload service" for $50 per article that allows authors to simply email their submission to the journal. Scholarly Reviewers who post three reviews can obtain a free "premium upload".

The simplest formulation is that "Authors keep copyright to the article but our readers will be freely able to read, copy, save, print and privately circulate the article." However, the details are less clear. At one point they say authors "are free to publish it elsewhere" but also say elsewhere that "we require … an exclusive license". They also say that users have a "free, irrevocable, worldwide, perpetual right of access for personal non-commercial use, subject to proper attribution of authorship and ownership of rights" but then say users may "view or download a single copy of the material on this website solely for your personal, non-commercial use". But they allow self-archiving: "WebmedCentral allows the final version of all published research articles to be placed in any digital archive immediately on publication. Authors are free to archive articles themselves." The precise freedom all this gives users to reproduce the text is unclear, but calling WebmedCentral "open access" would be misleading.

The approach of WebmedCentral is reminiscent of Google Knol, which is where PLoS Currents is hosted, or of a preprint server, except there is an active post-publication peer review system.

Open peer review and community peer review are not new ideas. A similar approach to that of WebmedCentral was tried by Philica in recent years without great success; the site rapidly filled with crank publications. Another was 'E-Biomed', which was stifled and instead became PubMed Central. Although such models were anticipated a decade ago, biomedical publishing has been wary of preprints and other proposals to remove or reduce pre-publication peer review. BioMed Central's Genome Biology had a preprint server, but it closed in January 2006. A humanities institute is experimenting with community review on Shakespeare Quarterly, though it is using a hybrid model rather than abandoning invited pre-publication review. More generally, MediaCommons argue for community peer review in their book "Planned Obsolescence". They are far from naïve, noting that

The journal requires appropriate ethical approval for human and animal studies and will remove studies if they are found to fail ethical standards. Articles may also be removed in cases of scientific misconduct or plagiarism. They suggest that authors seek statistical advice, and ask authors to adhere to reporting standards such as CONSORT. They ask authors of clinical trials to adhere to the Good Publication Practice guidelines, but do not specifically mention trial registration. They endorse the ICMJE criteria for authorship, and the use of medical writers should be declared. Funding and competing interests should be declared, though there is no definition of a competing interest. They ask authors to suggest at least three reviewers, not to pick only "friendly reviewers", and say they may invite further reviewers. How these policies are enforced, and by whom, is not clear.

Technical issues:
Previous versions of an article should be linked, but this fails. The journal allows digital archiving and digital preservation by LOCKSS members. Some test articles can be found as Word documents that are not visible via the search, which raises questions about site security. Figures are presented in a sidebar, sometimes without even a thumbnail, though the pop-up view is user-friendly. The referencing could be improved, with clearer formatting and hyperlinks down to the references. Some of the formatting of the reviews is poor, with changes in font and font size, and several reviews are double-posted.

Publication volume:
There are 366 published articles as of 30/01/2011. Submission rates appear to have peaked following publicity in August, and have since declined (see figure).

There is currently no indication on the articles that they have received no pre-publication review. As might be expected given the lack of pre-publication review, some of the articles are fringe science: aliens, homeopathy, prayer, and telepathy are all represented. There is an account of chiropractic care of a patient with fibromyalgia, an opinion article on the evidence for homeopathy in acute upper respiratory tract infections by Peter Fisher and colleagues, a study linking 'emotional quotient' and telepathy that has the obligatory mention of quantum theory, an article on the hunt for alien life that takes in the Higgs Boson, the Bermuda Triangle, and alien implants, a virtually content-free account of acupuncture in rats, and an intercessory prayer study. The latter is, thankfully, a deliberate satire.

When you get this kind of opportunity to publish without a filter, sex always seems to come to the fore: step forward, a hypothesis on why women don't sleep with the first man they see when they ovulate, two case reports of priapism, an institutional review of Peyronie's disease, and a case report of penile fracture. As pointed out by two reviewers, the last contained an unfortunate typo in the title: corpus callosum (in the brain) rather than corpus cavernosum. It was therefore republished, demonstrating that the article version system is not working.

Many of the articles are unpublishable in any biomedical journal: a rant about academic exploitation; a review of the biological activities of a herb that the author seems to have forgotten to write; an account of a trauma registry that is confused and sketchy; a review of oral health and inequality whose recommendations section appears to be lifted verbatim from Nunn et al. 2008, who are not cited. It would be interesting to see how many more of the articles contain plagiarism.

On the more positive side, there are a series of interesting articles by three authors: Leonid Perlovsky has published a series of mainly hypothetical papers, e.g. on language and cognition; William Maloney, a New York dentist, has published a series of overviews and historical accounts, e.g. the medical legacy of Babe Ruth; and Uner Tan has published a series of articles on his observations and theories of quadrupedal locomotion in humans, e.g. these two cases.

Other interesting reads are a survey of the role of hairdressers and bartenders as informal emotional support following the 9/11 attacks and their responses to this role, a study by Robert Dellavelle on how journals don't require ethics approval for meeting abstracts, and a series of witty anecdotes by an Israeli psychiatrist of cases of "curing demons" in his patients.

Around a quarter of the articles are case reports. The insatiable demand of hospital doctors to publish case reports has clashed with a reluctance of medical journals to publish what are often "me too" publications offering little generalisable insight, and which are often poorly presented and incomplete. The recent crop of open access case report journals - BMJ Case Reports, Cases Journal, Journal of Medical Case Reports, Clinical Medicine Insights: Case Reports, Case Reports in Ophthalmology etc. from Karger, Case Reports in Medicine from Hindawi, and the American Journal of Case Reports (free, not OA) - doesn't appear to be matching demand.

There are also 58 reviews, 31 opinion articles, and at least 15 of the "original articles" are not research articles: less than half of the articles on WebmedCentral are primary research.

Some of the reviewers are published researchers, but they usually have only a handful of publications and they would be unlikely to be selected as peer reviewers by a mainstream biomedical journal editor – this could be seen as a positive or a negative. There are pages listing reviewer details, but the reviews by a single reviewer are not listed.

Relatively few articles have received an insightful review or comment. Around 55% (201 articles) have received a review of some kind, and the most any article has received is six reviews (see right hand panel of the figure). 138 reviews were unsolicited and 211 were solicited by the authors. The quality of the reviews is usually low. Just over half of both solicited and unsolicited reviews contain critical analysis, i.e. at least some mention of improvements the authors could make to their article, meaning that probably less than 25% of all articles receive any degree of critical analysis. Many reviews are sycophantic; for example, one case report is said to be "the best ever article publishe[sic] so far". Many merely state what the article is about - one author-invited reviewer spends 358 words reiterating what the article says and telling us that it is a "must read" - or give the views of the reviewer on the subject rather than the article - another reviewer devotes a mere 23 words of a 430 word review to even mentioning the paper. Most reviews are very short: the average is only ~115 words for both author-suggested and unsolicited reviewers; the longest is just over 1500 words (see left hand panel of the figure for the length distribution). Comments with critical analysis are much longer (~175 words) than those without (~50 words). If I were to see reviews like most of those on WebmedCentral during standard peer review, I would never use that reviewer again.
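The "less than 25%" estimate above can be sanity-checked with a rough upper bound (a sketch using the counts quoted in this post; the true figure is lower than the bound because critical reviews cluster on the same articles):

```python
total_articles = 366      # published articles as of 30/01/2011
reviewed_articles = 201   # articles with at least one review (~55%)
critical_share = 0.5      # just over half of reviews contain critical analysis

# Even if every critical review landed on a different article, at most
# this fraction of all articles could receive any critical analysis:
upper_bound = (reviewed_articles / total_articles) * critical_share
print(f"{upper_bound:.0%}")  # 27%
```

Since multiple critical reviews often attach to the same article, the actual share falls below this 27% ceiling, consistent with the "less than 25%" figure.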

Some of the reviews include comments such as "this is suitable for publication" or "I hope it is accepted", which indicate a lack of awareness of the publishing model. One author has even reviewed his own paper. An article I consider unpublishable received the reviews, and I quote them in full, "good" and "No comment".

There are some examples where robust review has taken place. The concerns raised by the reviewers on this paper, including a lack of mention of ethics or consent, would lead most editors to reject such a paper – but WebmedCentral has no routine mechanism for doing this. Authors responded to reviews only on a handful of papers. A lively debate developed around a physician's self-case report, but this was a rare exception. I found one example of what appears to be functional peer review, with the authors revising their work and the reviewer stating that they are happy with the revisions.

In Bambi, Thumper's parents taught him that "If you can't say something nice, don't say nothing at all", but I think that the opposite applies in peer review. If you can't come up with critical comments about a paper, you're probably missing something: every paper has something wrong with it. The sycophantic nature of many of the reviews in WebmedCentral might be inherent to open (named) peer review, but in my experience and according to published studies, open peer review increases the length of reviews and makes them more polite, but has no effect on review quality. Another factor may be that many of the authors and reviewers of WebmedCentral are from India: R. A. Mashelkar argued in Science that "India must free itself from a traditional attitude that condemns irreverence", and Nikhil Kumar and Shirish Ranade argued in Current Science that "it is a preponderance of obsequious reverence and sycophancy that has placed the science in the country on a downhill slope." Are we seeing this unwillingness to criticise in action?

Overall assessment:
This is an interesting experiment in post-publication peer review, which both indicates the possibilities – instant publication, open community review – and the perils – unsound science, unbalanced opinion, and substandard writing being presented as part of the scientific literature.

Building a functioning publishing platform from scratch is no easy matter, and several hundred publications in seven months is an impressive figure. There has been a noticeable engagement from the community, with over 365 submissions and a total of nearly 350 reviews in seven months, 40% of them by reviewers not suggested by the authors. However, the submission rate is declining and the coverage and quality of reviews is not nearly high enough to functionally replace pre-publication review.

The onus is on the authors to obtain reviews: the journal states that it will obtain reviews, but this is not in evidence - just under half of the papers have no reviews, and 30% have only one review. More effort needs to be put into gaining reviews from qualified experts.

Reviews are essentially worthless if nobody pays any attention to them, be that an editor, the authors or the readers. Pre-publication peer review is not merely a filter, but it also acts to improve articles. On WebmedCentral there is no pressure for articles to be revised in accordance with any critical reviews, perhaps other than author embarrassment. As reviewers see a lack of response to their comments, they may lose enthusiasm.

Without a clear indication that reviewers have criticised an article, and no indication that the articles are not peer reviewed, readers may view the work uncritically. If reviewers state, for instance, that the work is not sound, this should be clearly flagged up to readers near the top of the page, and articles should be sortable based on the answers given in the review form and the rating given by reviewers and readers. Another layer should be added, allowing articles to be promoted by agreement of their 'Scholarly Reviewers' to a "publication standard" level, giving authors an incentive to revise their work. "Featured articles" do exist, but the criteria used are not revealed. WebmedCentral are forming an "Advisory Board" of "eminent scientists"; perhaps this board will increase the rigour of the site.

Without the oversight of an editor choosing diverse reviewers, and because most scientists are unaware of the site, it may become a closed community of the same authors positively reviewing each other's work – the precise opposite of the aim of the journal. Unless the process is reformed, WebmedCentral is likely to remain a "Cargo Cult science" journal, which in the main publishes articles that only superficially resemble the peer-reviewed literature, and that are reviewed in a manner that is only a pale imitation of pre-publication peer review.

How to Get Your Book Published

If you’re not sure how to go about getting your book published by a reputable publisher, make sure to follow these steps.

1. Know Your Genre

You need to know the specific genre and subgenre of your book. Some publishers only publish romance novels, while others may only publish nonfiction business books. By understanding your genre, you’ll know which publishers to pursue and which to ignore.

2. Find an Appropriate Publisher for Your Book

Different publishers focus on different genres, so your personal list should include only publishers who are actively publishing books in your genre. Otherwise, you're just wasting your time.

3. Research and Prepare

Once you’ve got your own personal list of potential publishers, visit their websites, check out their submission guidelines, read reviews about them online and on author forums, and find out as much as you can to make sure they’ll be a good fit for you to work with.

Some publishing companies accept open submissions from any author, while others only accept submissions from literary agents. If you need or want to work with a literary agent, you should browse our list of literary agents to find one that will be a good fit for you.

4. Submit Your Manuscript

Once you've identified a specific publishing house that accepts open submissions from authors, make sure you read their submission guidelines and follow their process exactly. If you don't follow their guidelines, chances are your submission will be automatically rejected or ignored.

If the publisher only accepts submissions from literary agents, you’ll need to speak with your agent about submitting your book to them.


Sexual reproduction often takes a lot of effort. Finding a mate can be an expensive, risky and time-consuming process. Courtship, copulation and caring for the newborn offspring may also take up a lot of time and energy. From this point of view, asexual reproduction may seem a lot easier and more efficient. But another important consideration is that the individuals with the highest fitness are more likely to find a mate and reproduce, which increases the chance of producing offspring of higher fitness. The Vicar of Bray hypothesis proposes that sexual reproduction is more beneficial than asexual reproduction, despite the cost of time and effort.

The hypothesis is named after the Vicar of Bray, a semi-fictionalized cleric who retained his ecclesiastic office by quickly adapting to the prevailing religious winds in England, switching between various Protestant and Catholic rites as the ruling hierarchy changed. [7] The figure described was Simon Aleyn, vicar between 1540 and 1588. The main work of Thomas Fuller (d. 1661), Worthies of England, describes this man: [8]

The vivacious vicar [of Bray] living under King Henry VIII, King Edward VI, Queen Mary, and Queen Elizabeth, was first a Papist, then a Protestant, then a Papist, then a Protestant again. He had seen some martyrs burnt (two miles off) at Windsor and found this fire too hot for his tender temper. This vicar, being taxed [attacked] by one for being a turncoat and an inconstant changeling, said, "Not so, for I always kept my principle, which is this – to live and die the Vicar of Bray." [9] – Worthies of England, published 1662

The hypothesis was first expressed in 1889 by August Weismann [10] and later by Guenther (1906). [11] It was subsequently formulated in terms of population genetics by Fisher (1930) [12] and Muller (1932), [13] and with greater mathematical formalism by Muller (1958, 1964) [14] [15] and Crow and Kimura (1965). [16] Doubts about the validity of the Vicar of Bray hypothesis led to the emergence of alternative hypotheses such as:

  • Best-Man (Williams, 1966; Emlen, 1973; Treisman, 1976): The Best-Man hypothesis proposes that, on average, sexually produced offspring may be of somewhat lower fitness than asexually produced offspring, but the much greater diversity of the sexually produced offspring implies that they will include a few individuals of extraordinarily high fitness. If these individuals have a high chance of surviving and reproducing, their offspring might also be of high fitness. Over multiple generations, the proportion of individuals with high fitness might increase so rapidly that it is more than sufficient to offset the cost of sex.
  • Tangled Bank (Ghiselin, 1974; Burt and Bell, 1987; Ridley, 1993): The Tangled Bank hypothesis proposes that sexual reproduction is beneficial when there exists intense competition for space, food and resources. It argues that genetically diverse offspring from sexually reproducing individuals are able to extract more food from their environment than genetically identical offspring from asexually reproducing individuals.
  • Red Queen (Van Valen, 1973; Hamilton, 1975; Levin, 1975; Charlesworth, 1976; Glesener and Tilman, 1978; Glesener, 1979; Bell, 1982; Bell and Maynard Smith, 1987; Ridley, 1993; Peters and Lively, 1999, 2007; Otto and Nuismer, 2004; Kouyos et al., 2007; Salathé et al., 2008): The Red Queen hypothesis proposes that organisms must constantly adapt and evolve in order to survive. If a species does not adapt to its evolving natural enemies and changing environment, it will go extinct. This hypothesis also proposes that sexual reproduction is beneficial, but in contrast to the Vicar of Bray hypothesis, it states that sexual reproduction benefits not only the population as a whole but individual genes directly. [7]

Mathematical models have been used to try to prove or disprove these hypotheses. However, a mathematical model requires assumptions: about the size of the population, the breeding process, the environment, natural enemies and so on. That is why there will always be populations for which a model does not apply. Some models are better at explaining the 'average' population, while others better explain smaller populations or populations living in more extreme environments. A good way to decide which model is best might be to compare the expected results from the model with data from natural observations. [17]

Critics of the Vicar of Bray hypothesis (and of all other hypotheses that propose sexual reproduction has an advantage over asexual reproduction) say that sexual reproduction might be beneficial in some situations, but not always, which is why both modes of reproduction still exist. If either sexual or asexual reproduction were much more beneficial, evolution should result in one of the two modes disappearing and the other persisting.


BMC Biology is recruiting new Editorial Board Members

We are looking for Editorial Board Members in all fields of biology. If you are interested in becoming an EBM please see this page.

Portable peer review

BMC Biology supports portable peer review by sharing reviews and evaluating papers based on existing reports. Learn more here.

COVID-19 and impact on peer review

As a result of the significant disruption caused by the COVID-19 pandemic, we are very aware that many researchers will have difficulty meeting the timelines that apply to our peer review process in normal times. Please do let us know if you need additional time. Our systems will continue to remind you of the original timelines, but we intend to be highly flexible at this time.

BMC Biology is a member of the Neuroscience Peer Review Consortium.

Cogent Education Impact Score 2020-2021

The impact score (IS) 2020 of Cogent Education is 1.24, computed in 2021 as per its definition. The IS of Cogent Education increased by 0.35, an approximate percentage change of 39.33% compared to the preceding year 2019, which shows a rising trend. The impact score (IS), also denoted Journal impact score (JIS), of an academic journal is a measure of the yearly average number of citations to recent articles published in that journal. It is based on Scopus data.
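The percentage change quoted above follows directly from the year-on-year scores (a minimal arithmetic check; the 2019 score of 0.89 is inferred from the stated increase of 0.35):

```python
is_2020 = 1.24                # Cogent Education impact score for 2020
increase = 0.35               # stated year-on-year increase
is_2019 = is_2020 - increase  # inferred 2019 score: 0.89

# Percentage change is the increase relative to the preceding year's score
pct_change = increase / is_2019 * 100
print(round(pct_change, 2))   # 39.33
```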

Cogent Education Impact Score 2021 Prediction

IS 2020 of Cogent Education is 1.24. If the same upward trend persists, the impact score of Cogent Education may rise in 2021 as well.
