Open Access: Looking Back

Eli Rabett writes some thoughts on open access for the scientific literature, presumably spurred by the RealClimate discussion on making data and code available. Eli's comments center around the 2004 Wellcome Trust report on costs of academic publishing.

This spurred my own recollections of old discussions on the topic at the still-running American Scientist Forum on Open Access, going back 10 years now. Not much has really changed...

One comment of mine there that I thought worth re-reading now was this one, responding to Andrew Odlyzko:

On Sun, 16 Dec 2001, Andrew Odlyzko wrote:

[On shifting costs back to authors' institutions]
Bringing back secretaries to do basic typesetting does not make sense, as almost all scholars find it easier to do this themselves. On the other hand, I feel there will be increasing pressure to provide expert Web design as well as editorial assistance to make articles easy to access and read. As papers are increasingly accessed in their electronic preprint formats (as is documented in various places, including my paper "The rapid evolution of scholarly communication," which is available, along with other papers, at ), the incentive for scholars will be to make those forms attractive for readers.

But the reality is that we have an enormous range of authors who send papers, many of whom may have the time, resources, and capability to "make articles easy to access and read", but many of whom do not. A look at the statistics on articles we receive:

http://ridge.aps.org/STATS/00geographic.html

shows that some of our journals have as little as 21% of submissions coming from US authors, and less than 35% from authors in even nominally English-speaking countries (a good number of these come from India, with rather variable quality of presentation). 15-20% or more come from Asia (mostly China and Japan). Even papers received from US institutions can vary quite widely in consistency. I don't know comparable statistics for arXiv.org, but you can see there quite a variety of presentation styles and skills (a sample paper I just brought up had all the figures upside down, for example), and the range of "raw materials" we receive seems to be even wider than what is on display there.

Now one of the things we try to do in copy-editing (along with bringing everything to a common tagged format) is to bring the articles we publish to some minimal "quality" level in the presentation, English/physics terminological usage, etc. I can't say this is done perfectly, but on the other hand I believe the consistency in format and presentation in the final published articles goes a long way to making sure that the relative merits of articles to the readers can be judged primarily on the content, not on enormous differences in presentation. As Andrew notes:

[...] Already [...] scholars in some areas where getting a paper into a prestigious conference was more important than publishing it (theoretical computer science being the prime example of that) were putting a lot of effort into making their submissions look nice.

But is this a good thing for science? Should authors with the resources to do so be "selling" their research with flashy presentations, while other authors who invest their resources in actual research get ignored? We need to level the playing field somewhere; doing so at the point of publication through funds extracted from readers (or sponsors, no particular bias on my part there) ensures that authors from less privileged institutions are given equal billing, where the actual research performed warrants it.

In general, as we move towards a continuum of publication, it makes less and less sense to concentrate the copyediting and other costs at the formal publication stage. What I expect scholars will want is provision of "clearly readable research" (in Arthur's words) from the very beginning. It really is a "war for the eyeballs," in scholarly publishing as well as in more commercially-oriented areas, as my papers and those of Steve Lawrence demonstrate.

My argument is simply that going in that direction is a bad idea for scholarly research, because it misdirects the resources and attention of scholars into issues of presentation, when their real focus should be the content of their scholarly research, and it penalizes researchers who focus on the latter at the expense of the former, or who may have no resources or skills to devote to it. Let a third party take care of the presentation aspects; perhaps not a publisher doing peer review, though peer review seems to me like an ideal way to judge whether an article warrants "equal billing" with other good research, or not.

Now it can be argued how well we are actually doing in this area. Actual changes to the text of a manuscript are often very minimal. However, even steps such as getting the figures right-side up and positioning them logically within the text, making sure acronyms and uncommon terms are clearly spelled out somewhere, and of course our tagging efforts at linking citations etc., can make a huge difference to the reader, so that time devoted to understanding the article is well spent.

Is this really something we want to lose, in favor of an all-out "war for the eyeballs"? My imagination conjures up images of physicists plastering their results on billboards in an escalating war of presentation over content - but maybe there's an equilibrium "detente" point that doesn't actually take that much effort on the part of the author? The prospect does make me uncomfortable, but as Andrew points out, in some areas it seems to be already happening. What does experience teach us there? How is "science" actually faring under these conditions? Has anybody analyzed this sort of thing?

Comments

I'd suggest that this fits

I'd suggest that this fits the normal S-curve of effort versus reward (a rough sketch follows the list below):

X: effort
Y: benefits

1) First, if very little effort is put into the communication part, an article may not communicate very well, and it's almost not worth doing.

2) Then, as X increases, there is an inflection period where improvements are really worthwhile.
[This is a good place to be, with how far up the curve you get traded off against the cost.]

3) Finally, there is a second inflection point, where there are diminishing returns, and one is just polishing for no benefit. [One rarely wants to go beyond that second inflection.]
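A rough sketch of that curve, assuming a logistic form purely for illustration (the exact functional shape is not the point):

    Y(X) = Y_max / (1 + e^{-k (X - X_0)})

Here X_0 sits in the middle of the worthwhile region (2) and k sets how quickly the payoff turns on. Strictly speaking a logistic has only one inflection point, at X_0; the two turning points described in (2) and (3) correspond to the knees of the S, where the curve bends most sharply on either side of X_0.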

I've certainly seen papers at all positions, including ones where I thought:

"There might be something good in here, but it just takes too much work to tell"

and at the other extreme, people using complex 3D-graphics to show something trivial.

Hi John - first comment here,

Hi John - first comment here, thanks :-) I'm still trying to figure out this blog software (drupal-based) so let me know anything you see that doesn't work!

So do you think publishers serve some purpose in "normalizing" papers to a roughly equivalent level of understandability, or is it really all on the authors in the end?

Seems to work OK. Re:

Seems to work OK.

Re: normalizing papers
That's not the way I'd put it.

1) I assume one has some audience in mind, meaning a distribution of knowledge.

2) A paper can either pick some range within that distribution, and it is always easiest to pick a narrow range and target that. Of course, if the target is "expert", that means many readers may not get beyond the title, as in the current issue of Science, in which I find:

The Hallucinogen N,N-Dimethyltryptamine (DMT) Is an Endogenous Sigma-1 Receptor Regulator

3) Or it may be that at least the abstract and intro cover a wider audience, and then it gets deep.

4) Finally (and I think this is hard, and probably only appropriate for longer works), if one is writing for a broader audience range, the most disconcerting thing is to be reading along at a general level and then suddenly, without warning, dive to a deeper level that seems hard to skip, since later (but still general) sections refer back to it.

A good counter-example, where it's done well, is Hennessy & Patterson's "Computer Architecture - A quantitative approach". There one reads each chapter until the going gets hard, skips to the chapter summary, and then goes to the next chapter. Hence, a book used by serious computer architects can be recommended to some financial analysts.

5) I would think journals ought to be able to add value to this in terms of setting expectations for authors, and depending on the publication, helping the authors a little on this. Brian Kernighan & I once did a paper for IEEE Computer that we thought was already in pretty good shape, but one of their editors definitely helped improve it.