“Open access won’t solve all our problems”

The MDC has asked Prof. Morano to represent the center in the Helmholtz Association’s Open Science working group. We took this as an opportunity to talk to Prof. Morano about open science and open access.

How do you see your role as the MDC’s representative in the Helmholtz Open Science working group?

On the one hand, I represent the MDC’s views and the interests of its scientists. On the other, I bring information from the working group to the MDC so that the MDC and the Helmholtz Association are synchronized in matters related to open science. Within the working group, I am involved in producing joint recommendations on open science. At the MDC, I am one of three contacts for questions relating to open science – the other two are my deputy Wolf Schröder Barkhausen, who works in the library, and Dr. Dorothea Busjahn, who is head of the library.


Prof. Ingo Morano represents the MDC in the Helmholtz Open Science working group. Photo: D. Ausserhofer/MDC

The group also tackles the issue of open access. What sort of developments do you see there?

Open access (OA) was defined by the Budapest Open Access Initiative of 2002 and in the Berlin Declaration of 2003, which was signed by the Helmholtz Association among others. OA means that the public should have unrestricted, free electronic access to scholarly literature and information anywhere in the world. OA has since become well established in the research community. Open access publishing operates roughly like this: authors submit their (usually publicly funded) articles as manuscripts and then pay an article processing charge (also usually publicly funded) if the manuscript is accepted for publication. As a rule, the publisher edits the accepted manuscript and then immediately makes the article freely available online as an original version in PDF format. This is known as gold open access, or a “pay to publish, read for free” model.

The traditional publishing business model does things the other way around.

Precisely. Authors make their manuscripts available to the publishing companies free of charge – though in some cases they do have to pay. The authors also transfer the rights to their publication to the publishing companies, which means the manuscripts can no longer be so readily reused. The research institutions then have to buy back the articles, either individually or via licensing contracts and subscription fees. Otherwise they can only access the short abstracts, not the entire journal. So, put simply, the business model employed by traditional publishing houses is a “publish for free, pay to read” model. In addition, we often come across “pay to publish, pay to read” – and even “pay twice to read,” which is when a traditional publisher also makes the articles publicly available online for a fee. Other names for this are “hybrid model” or “double dipping.”

Researchers are often highly critical of publishers’ pricing policies.

That’s right. In the past, traditional publishers funded their services by increasing prices for no justifiable reason – and this is still happening. The resulting library crisis has already entered the history books of library science. Today’s libraries primarily make articles available to researchers in electronic format; they have unsubscribed from the print versions, which now hardly play any role at all in STM (science, technology, and medicine). Many journals are no longer accessible because institutions can no longer pay the license fees. These conditions are hardly acceptable, but they are still common practice and can be explained by the fact that our research system depends on the publishing system. This is not just an economic paradox – it also runs contrary to the legitimate calls from scholars and the public for unrestricted access to data and information that were financed by the public sector.

And open access can help with that?

The model that has developed over recent years solves some, but not all, of the above-mentioned problems arising from the “publish for free, pay to read” system used by traditional publishers. The electronic articles that OA publishers make available can indeed be accessed for free, and can be reused in accordance with the applicable license. However, with regard to the first part of the OA business model – pay to publish – we have seen a totally unacceptable explosion in costs of the kind that also happened in the traditional publishing model. In addition to putting a strain on library budgets, the rise is also a threat to the entire open access movement and even has global implications.


Published for open access: One of the most recent publications from the Berlin Institute for Medical Systems Biology at the MDC (BIMSB). Screenshot: MDC

What do you mean by global?

The article processing charges imposed by OA publishers are a financial obstacle for our colleagues in newly industrialized countries and make it harder for them to access the research system. Another disturbing development is the leap in the number of unprofessional publishers who perform little or no quality control via peer review of the submitted manuscripts, and instead merely enrich themselves from the article processing charges. I’d just like to quickly mention Jeffrey Beall here, who keeps a list of these predatory publishers. All that glitters is not gold, after all.

Do you see any alternative to the current publishing industry that has laid claim to a quality-control and “gatekeeper” function via its journals?

There is no question that the current publishing industry, whether traditional or open access, has made an enormous contribution to developing research and disseminating information through its skills and experience as a gatekeeper and through its activities in peer review, archiving, and bibliometrics. In my experience, we are very well served in that regard.

Do you think something other than the journals could perform the gatekeeper function (e.g., open peer review)?

As the peer review process currently stands, the authors of a submitted manuscript are known to the reviewers, but the reviewers remain anonymous. This model has been questioned for as long as it has existed.

What are your specific views of open peer review?

When Nature launched a trial for open peer review in 2006, the readership showed a disappointing lack of interest in getting involved. It is also worth noting that elements of open peer review are already being practiced. Many journals allow their readers to comment on published articles, and the comments themselves will be published if they are constructive. Personally, I still think the established peer review model with elements of open peer review is sufficient. I am much more worried about the steady decline in peer-review quality that I and many of my colleagues are observing. I believe action needs to be taken on this issue.

Going back to the publishers’ pricing policies, do you think there is a need for intervention here? And if so, who should do it?

The STM field is dominated by the Big Four, whose market position grew again this year with the merger of Macmillan (the publisher of Nature) and Springer. I can’t say at the moment to what extent market regulatory authorities can directly intervene in publishers’ pricing policies. But it is possible to regulate basic aspects of publishing and to try to equalize competition in the STM publishing industry. For instance, a bolder revision of copyright law that upgraded the open access movement’s “green route” – publishing the second version of an article in institutional repositories – could be very helpful. At the moment, our copyright law only allows authors to republish their articles a year after first publication, and even then only in a manuscript version. The original, edited version of the article remains reserved for the traditional publishing company. I fear little of that will change, even after a revision of copyright law.

Republishing in manuscript form has several disadvantages for the readership – the main one being that the manuscript version and the original may not be exactly the same. A reliable final manuscript version often does not exist, because the last corrections are written directly into the manuscript in the publisher’s software. So it would be helpful if we were allowed to publish the original version the second time around. I also think that the one-year wait I just mentioned is far too long, given the speed at which modern science develops. A three-month wait would be more realistic. After a scientifically acceptable amount of time had passed, the original versions could then be republished in the repositories via the green route and replace the manuscript versions.
This type of upgraded green route would effectively strengthen the open access movement, make traditional publishers’ pricing policies more bearable, and give open access publishers a clear signal that they should radically rethink their article processing charges.

What can researchers themselves do?

As customers of the publishing houses, we could effect change – for instance, by introducing appropriate upper limits for the article processing charges paid to open access publishers. Some institutions are already defining upper limits, but I think they regularly set them much too high. The German Research Foundation, for instance, financed article processing charges of up to €2,000 per gold open access article. Given that the actual publishing costs do not even reach €300 per article, this basically paves the way for publishers to charge similarly high fees. An upper limit of, let’s say, €500 per gold open access article would send a clear signal from us as customers to the publishers and would make for tougher competition in the industry.


This is what our repository looks like. You can reach it at http://edoc.mdc-berlin.de/. Screenshot: MDC

The MDC has its own repository for publications by our working groups. Do you think that it is well enough known inside the MDC?

From my own experience, I know that awareness of our repository is low and that our scientists rarely use manuscripts from the green route. There are doubtless many reasons for this. On the one hand, the usual system of researching literature via PubMed still serves us extremely well in every respect, and we only need additional help in exceptional cases. On the other hand, the manuscript versions that are republished in the repositories present practical and scientific problems, as I have already mentioned.

Do you think the big research-funding organizations should insist on open access and make additional resources available for it?

Within the open access community, you often hear people saying that scholars should be obliged to publish gold open access. Quite apart from the fact that I personally find these calls rather off-putting, I also think they are very problematic from a legal perspective. They encroach on the freedom of research and teaching – which includes the freedom to publish – set out in Germany’s constitution. This wise provision of the German Basic Law has supported the sciences and arts for many decades and has never once stood in the way of their rapid development. I wouldn’t want to start tinkering with it now. I prefer the idea of researchers deciding on their own mix of traditional and open access publishing. Opinions at the MDC are in agreement on this – or, to borrow the words of the head of our library, Dr. Busjahn: “We are keen to distance ourselves from this black-and-white view, and believe that coexistence makes more sense for the time being.”

With regard to financial resources for publications, however, there is currently an imbalance in favor of the traditional publishing houses. While institutions provide full funding for licenses, the article processing charges imposed by gold open access publishers receive very limited funding from just a few institutions. This means that authors mostly cover the charges from their own budgets or third-party funding. I am calling on institutions to make additional money available for gold OA publications via an open access fund. That would also be in accordance with the wishes of the Helmholtz Association’s Open Science working group.

What would you think of a shared Berlin repository for articles (and possibly data too)?

Obviously, I welcome the city-state of Berlin’s 2014 decision to develop its own open access strategy. The motion tabled by the Pirate Party expressly recommends networking Berlin’s repositories. I don’t think that setting up a joint Berlin repository – similar to the Helmholtz Association’s, for instance – will bring any added value over and above the repositories that already exist. The only real aim can be to network repositories internationally and make them as easy for scholars to access as is already the case with, say, PubMed Central; there are already initiatives working toward this goal. Such efforts would really support open access and strengthen the position of institutions vis-à-vis the publishing houses.

Biomedical research in particular is generating more and more data that exceed the limits of conventional journal articles (with us, for instance, it is MRI data and data from the omics technologies). How would it be possible to create a digital infrastructure capable of accommodating these kinds of data-intensive publications?

In addition to open access to publications, open access to research data – open data – is a central requirement of open science. The Alliance of Science Organisations in Germany drew up a set of principles for handling research data in 2010, and the Helmholtz Association was among the signatories. Open data is an enormous challenge, because intelligent reuse of research data must fulfill highly complex requirements: the data must be standardized and furnished with sufficient metadata, new technical and organizational infrastructure has to be set up, and computer scientists, researchers, and librarians have to work together. All of this has to happen in a subject-specific way, protect the scientific and legal interests of the researchers, and safeguard personal data – to name just a few of the challenges. The Helmholtz Association set up the Open Science working group in 2014 to support this process.

Elements of open data are already being practiced – for instance, in the supplements that are increasingly being attached to articles and published with them. There are publicly accessible databases, such as that of the National Center for Biotechnology Information, and journals that specialize in publishing large volumes of primary data. In view of the challenges presented by open science, we at the MDC have agreed on a strategy of small steps. We initially want to set up a data repository for MDC publications. This will improve the quality control of the publications and will also serve as the nucleus for building up and maintaining much larger volumes of MDC research data.

Are there any areas where you think the MDC needs to take urgent action?

I’d like to kick-start three things: setting up an open access publication fund, adopting a more proactive approach when promoting gold open access publications to scientists, and getting the data repository up and running faster.