Most popular ESE articles in December

In our monthly look at the top five most-read papers from our journal European Science Editing, we have three from our current issue, published in November: Ksenija Baždarić's editorial on questionnaire structure, the viewpoint on the Joint Position Statement on the role of professional medical writers, and an original article addressing journals' guidelines about titles, abstracts and keywords.

In addition to those, Andrew Woods' study of terminology, usage and style remains a popular paper, as does the 'Basket of Metrics' article from 2015. Following publication of that paper, we spoke to author Lisa Colledge, from the research metrics team at Elsevier, about the details in it and other work Elsevier were doing at the time. Readers interested in that paper might like to read the interview blog post here.

Questionnaire structure – how much do editors need to know?
Ksenija Baždarić
44(4) November 2018. Editorial

Conducting a survey, ie collecting data with a questionnaire, is a very popular research method, particularly in medicine and the social sciences, and the results are frequently published. The data may be collected by self-report, ie the participant fills in the questionnaire alone, now often online, or by a researcher. The use of a questionnaire as a quantitative method was introduced by the Statistical Society of London in the first half of the 19th century. Since then, the methodology of using questionnaires (development, measurement and validation) has mainly been established by psychologists. New questionnaires, often called scales, are still being created and described in the literature.

As an editor, author and reviewer, I have noticed that the method is widely (mis)used and the results are frequently misinterpreted. I would like to address some typical issues of which journal editors (with or without a statistical editor or reviewer) should be aware when reading a manuscript describing a (novel) questionnaire.

The development and uptake of the Joint Position Statement on the role of professional medical writers
44(4) November 2018. Viewpoint
Art Gertel, Christopher Winchester, Karen Woolley, Yvonne Yarker

Members of the European Association of Science Editors (EASE) and professional medical writers share a common goal – to publish research that readers trust and value. Medical writers can help EASE members as they strive to follow the EASE Golden Rules for Scholarly Journal Editors.1 Various guidelines are available regarding the role of professional medical writers,2 and adherence to these guidelines is best accomplished if the fundamental points are presented concisely and widely endorsed. This has been achieved with the release of the Joint Position Statement (JPS) on the Role of Professional Medical Writers,3 which has been endorsed by leading medical writing associations from around the world and other key stakeholders. The purpose of this article is to highlight to editors the development and uptake of the JPS, and how this guideline aligns with and supports the EASE Golden Rules.

3D or 3-D: a study of terminology, usage and style
39(3) August 2013. Original article
Andrew J. Woods

The terms “3D” and “3-D” are two alternative abbreviations of the term “three-dimensional”. Both variants are commonly used in the published literature, but what is the derivation of the two forms, and what are the drivers of usage? This paper surveys the published stereoscopic literature and examines publication-style policies to understand the forces and trends.

Journals’ guidelines about title, abstract and keywords: an overview of Information Science and Communication Science areas
Mariângela Spotti Lopes Fujita, María-del-Carmen Agustín-Lacruz, Ana Lúcia Terra
44(4) November 2018. Original article

Objective: The purpose of this exploratory study was to observe and analyse guidelines for authors on writing their papers’ title, abstract and keywords.

Methods: The sample consisted of 64 journals indexed in the Journal Citation Reports (JCR): 32 (50%) Library & Information Science (LIS) journals and 32 (50%) Communication Science (CS) journals. A spreadsheet was used for data collection, containing 36 items grouped into four categories: identification data, guidelines for the title, the abstract, and the keywords of the scientific article. Then, in each category, the LIS journals were compared with CS journals, to verify how specific aspects of knowledge organisation and representation are reflected in editorial policies.
Results: The majority of CS journals (27, 84%) but less than a quarter of LIS journals (7, 22%) referred to a style guide on their website. Specific guidelines for the title were presented in 17 (53%) LIS journals and 23 (72%) CS journals, mainly concerning word count. Twenty-three (72%) LIS journals and 31 (97%) CS journals included guidelines for writing abstracts, focusing on word count and the structure of abstracts. Instructions for keywords were presented in 21 (66%) LIS journals and 28 (88%) CS journals, defining the number of keywords and the use of controlled vocabulary.
Conclusion: There is a tendency to standardise general indications and criteria about titles, abstracts and keywords. Guidelines on writing titles, abstracts and keywords have a smaller presence in the editorial policies of LIS journals than in those of CS journals.

A “basket of metrics”—the best support for understanding journal merit
41(3) August 2015. Original article
Lisa Colledge, Chris James

Aim: To survey opinion on the assertion that useful metric-based input requires a “basket of metrics” to allow more varied and nuanced insights into merit than is possible by using one metric alone.
Methods: A poll was conducted to survey opinions (N=204; average response rate=61%) within the international research community on using usage metrics in merit systems.
Results: “Research is best quantified using multiple criteria” was the most frequently selected reason (chosen by 40% of respondents) that usage metrics are valuable, and 95% of respondents indicated that they would be likely or very likely to use usage metrics in their assessments of research merit if they had access to them. There was a similar degree of preference for simple and sophisticated usage metrics, confirming that one size does not fit all and that a one-metric approach to merit is insufficient.
Conclusion: This survey demonstrates a clear willingness and a real appetite to use a “basket of metrics” to broaden the ways in which research merit can be detected and demonstrated.