The Pace Layers of Scholarly Publishing

I've been working on a post roughly on this topic for a while, and it's been getting long and meandering - so I've split it up. I'm expecting the following to be part one of several.

Scholarly communication as it is now is far from ideal. In what ways could it change to make things better? One way I have been thinking about it recently is in the context of Stewart Brand's "pace layers" diagram; the Long Now Foundation just posted an interesting audio recording of a discussion about its origin and uses. The idea is that systems are composed of layers with differing rates of change:

The fast layers at the top are where innovation and experimentation happens, while the slow layers at the bottom bring stability. Changing things too quickly at the bottom isn't safe; forcing the top to slow down is equally harmful. Even our nature as human beings changes slowly over time, but for the most part that layer is given. Let's examine the other layers of the scholarly communication system a little:

Culture. We as a global civilization for the most part value knowledge, scholarship, and science in particular. Institutions of many different kinds support it. The public sees benefits from technological and medical advances, and translates those advances into public support for research of all kinds. However, there are also significant elements in the world today that dispute authority, including that of scientific institutions. The low barriers to communication today make almost every voice equal; as communicators shout ever louder, the public pays less and less attention. But good science writing is still widely valued, particularly if there's a story to tell.

The culture of scientific research itself is one of strong traditions that have proved highly successful in terms of the growth of human understanding of the world around us. Recognition in science comes from conducting research that proves fruitful - things that others can build on either as research questions or in practical application. Communication and some degree of openness are critical to allowing research to bear fruit. Scientific training is still largely a matter of "apprenticeship" - the undergraduate - graduate student - postdoc "pipeline" that takes about a decade beyond an undergraduate degree to reach proficiency in a scientific discipline. I've written before about the necessity of ineffable knowledge in science; perhaps there is a little less need of this than in the past, given our increased powers of representation and communication? But we can still expect only gradual change here.

Governance. At one level, the governing institutions of scholarship are our universities and colleges, along with the industrial and governmental facilities of various kinds where research is conducted. The culture of academic freedom ensures that, at least within universities, scholars largely govern themselves in regard to what they focus their efforts on. However, any research requiring financial support needs some sort of sponsor; these may be nonprofit, commercial, or governmental agencies, and their processes for awarding grants are a central concern. The structure of research funding can change significantly over time. In the United States there was a radical shift to strong growth in government-funded research after World War II and the creation of the National Science Foundation, the National Institutes of Health, the Department of Energy, NASA, etc., which was accompanied by considerable growth in research faculty and staff. Hiring decisions now tend to follow funding prospects, so the most important decision-making at this time is likely that made by funding bodies. What can funding bodies use to help make those decisions? They have certain specific goals in mind - perhaps guided by legislation or corporate interests. But to a large degree they couple those goals with the judgment of other researchers - peer review. And they rely on metrics.

Infrastructure. The predominant metric to measure the fruitfulness of a piece of research is a count of citations. There are variations on this, but at heart each citation adds to the cited author's reputation. Years ago this was hard to gather from all the paper records around, and secondary publishers sprang up to provide this as an essential piece of infrastructure for science - the "Science Citation Index" being the best known. With online publishing now essentially ubiquitous this information is much easier to gather automatically, and services like "Google Scholar" provide something close to an accurate metric now for free. Note that this depends on underlying infrastructure: to be citable, research needs to be communicated in some fixed form at a permanent location.

For centuries scholars have used scholarly journals as the main "permanent location" for communicating their research. The journals and their associated standards for referencing and citation are one of the central pieces of infrastructure in modern science. New pieces of infrastructure are being added in the digital era, such as DOIs and ORCID identifiers - standards that allow unique identification of research and researchers. New free publication alternatives like arXiv or institutional repositories also allow research to be communicated in a fixed form at a permanent location, and should be considered citable. Google Scholar includes arXiv papers in its index, for instance.

But scholarly journals historically have had a second function - critical evaluation. Not everything submitted to a scholarly journal gets published. In some cases only a small fraction is accepted. Here is a second layer of decision-making in the scholarly system - either by a journal editor directly, or after consultation with other scholars (peer review). The criteria here are tied to the aim and scope of the journal as well as an evaluation of the quality and novelty of the research presented in the submitted article. And for most scholarly journals, whether owned by commercial companies or nonprofit organizations, the measure of success of that editorial and peer-review decision-making process is simple: journal income greater than expenses.

Commerce. Which brings us to the marketplace. In the old print journal system, journals (and other scholarly publications) were purchased by researchers either directly or through their libraries. Scholars need to read the work of others for their own research, so there was great value in those subscriptions. The greater the value of the content provided, the greater the demand, both via a broad base of subscribers and a willingness to pay higher prices. Editors had a strong incentive to select work for publication that would meet the needs of their readers - work that would be built upon and highly cited (at least within a specialized area of research) in the future. This system naturally aligned market incentives with the value system of scholarship, although there were certainly many instances where work that should have been built upon was long lost in obscurity, and others where what was accepted proved worthless. This was sometimes the fault of editors and reviewers not recognizing value up front, and sometimes the fault of authors, for example in their choice of publication venue.

The scholarly publishing marketplace has changed radically in recent years - though not as much as publishing in other areas (general newspapers and news magazines, for example). Journals are no longer essential for communication; they are not needed for scholars to learn of new research in their areas. Interestingly, the result has been a number of new entrants trying new business models, among them the collection of businesses surveyed by Research Information in an interesting discussion of problems with peer review "scams" and predatory journals.

Researchers should no longer need the journals, yet authors overwhelmingly still want to publish in them. In principle, the best journals still have the role of highlighting the best research and introducing it to a wide audience. But why do authors want to publish even in low-quality journals? The rise of "predatory" publishers (and other "scams") is a strong indication of a market responding to perceived needs. I suspect all this ferment is only a temporary, troubled transition period - a result of our last layer, "fashion".

Fashion. Fields of research fall in and out of favor over time, and some of these changes happen very quickly. A new discovery or some other kind of groundbreaking research can lead to a wide range of new developments as the work is extended. New journals may also be formed to focus on research areas experiencing rapid growth, satisfying the need for identifying the best research in a new area. Journals themselves, even long-established ones, go in and out of favor over time, and it becomes "fashionable" to attempt to publish in certain highly favored ones - choices that vary from field to field. The new "social" internet is subject to waves of fashion in communication styles and methods: blogs, blog networks, Twitter, and other social media have all played varying roles in science communication in recent years. "Open science" in one form or another is widely talked about; author-pays open access journal publishing seems to be the most widely adopted of these, but even there uptake has been slow, and it is still ignored by the majority of working scientists.

So here we find a puzzle: we have new modes of communication that bypass research journals; they are essentially free, they have permanent locations, and they can be cited and have those citations indexed. They've been promoted and hyped in all the fashionable ways that should attract researchers' attention. Post-publication peer review is available in a variety of comment and social media systems if really needed. Why do authors still insist on publishing in essentially traditional scholarly journals? The problems are not limited to the issues with scams and predatory journals. When the main incentive is to please the author, referees become increasingly overburdened, editors fight to do more with less, and mistakes naturally become more common. Trust in journals and in scientific authority is harmed in the process.

I'll be returning to this question in a follow-on post or two; my essential point here, in the "pace layer" context, is that the stability of this pattern of behavior strongly suggests its origin lies in one of the more slowly-moving layers of our system - either a governance or a cultural issue. At the governance layer, is there an expectation of, or insistence on, large numbers of journal publications? To receive funding, a long publication list has always been seen as an advantage, but do those publications really have to be in traditional journals? If citations are what matters now, and arXiv-style publication can provide citation metrics, do funders really care whether the paper was actually published in a regular journal? Perhaps this is part of the explanation. But I think the real issue lies deeper, at the cultural level, where there is a deep feeling among the bulk of researchers - an intuition, perhaps - regarding the need for something like traditional peer review.

If we can better understand the cultural motivations it may suggest some new things to try in the fast-moving layers to satisfy those needs in a more rational fashion. Ideally this would lead to changes in infrastructure and governance to finally address the problems in scholarly communication that have become so plainly manifest lately. I don't think anything that's been tried so far is quite in the right direction, though some may be close. More on all this later...