A Multi-match Approach to the Author Uncertainty Problem

Authors: Stephen F. Carley, Alan L. Porter, Jan L. Youtie


The ability to identify the scholarship of individual authors is essential for performance evaluation. A number of factors hinder this endeavor. Common and similarly spelled surnames make it difficult to isolate the scholarship of individual authors indexed on large databases.

Variations in the spelling of individual scholars' names further complicate matters. Common family names in scientific powerhouses like China make it problematic to distinguish between authors possessing ubiquitous and/or anglicized surnames (as well as the same or similar first names).

The assignment of unique author identifiers provides a major step toward resolving these difficulties. We maintain, however, that in and of themselves, author identifiers are not sufficient to fully address the author uncertainty problem.

In this study we build on the author identifier approach by considering commonalities in fielded data between authors who share the same surname and first initial of their first name. We illustrate our approach using three case studies.


The approach we advance in this study is based on commonalities among fielded data in search results. We cast a broad initial net—i.e., a Web of Science (WOS) search for a given author’s last name, followed by a comma, followed by the first initial of his or her first name (e.g., a search for ‘John Doe’ would assume the form: ‘Doe, J’).

Results for this search typically contain all of the scholarship legitimately belonging to this author in the given database (i.e., all of his or her true positives), along with a large amount of noise, or scholarship not belonging to this author (i.e., a large number of false positives).

From this corpus we proceed to iteratively weed out false positives and retain true positives. Author identifiers provide a good starting point—e.g., if ‘Doe, J’ and ‘Doe, John’ share the same author identifier, this would be sufficient for us to conclude these are one and the same individual.

We find email addresses similarly adequate—e.g., if two author names which share the same surname and same first initial have an email address in common, we conclude these authors are the same person.

Author identifier and email address data is not always available, however. When this occurs, other fields are used to address the author uncertainty problem.

Commonalities among author data other than unique identifiers and email addresses are less conclusive for name consolidation purposes. For example, if ‘Doe, John’ and ‘Doe, J’ have an affiliation in common, do we conclude that these names belong to the same person?

They may or may not; an institution may employ two or more faculty members sharing the same surname and first initial. Similarly, it’s conceivable that two individuals with the same last name and first initial publish in the same journal, publish with the same co-authors, and/or cite the same references.

Should we then ignore commonalities among these fields and conclude they’re too imprecise for name consolidation purposes? It is our position that such commonalities are indeed valuable for addressing the author uncertainty problem, but more so when used in combination.

Our approach makes use of automation as well as manual inspection, relying initially on author identifiers, then commonalities among fielded data other than author identifiers, and finally manual verification.

To achieve name consolidation independent of author identifier matches, we have developed a procedure that is used with bibliometric software called VantagePoint (see www.thevantagepoint.com). While our technique does not depend exclusively on VantagePoint, it is the software we found most efficient for this study.

The script we developed implements this name disambiguation procedure in a way that significantly reduces manual effort on the user’s part.

Those who seek to replicate our procedure independent of VantagePoint can do so by manually following the method we outline, but we note that the manual application of our procedure takes a significant amount of time and effort, especially when working with larger datasets.

Our script begins by prompting the user for a surname and a first initial (for any author of interest). It then prompts the user to select a WOS field on which to consolidate author names.

After this the user is prompted to select the authors field, and finally to identify a specific author name (referred to by the script as the primary author) within this field whom the user knows to be a true positive. A suggested approach is to point to an author name associated with a record that has the author’s ORCID iD or email address attached to it.

The script proceeds to identify and combine all author names that share the primary author’s surname and first initial of his or her first name and that have commonalities in the selected WOS field.

This typically results in a significant reduction in the initial dataset size. After the procedure completes, the user is usually left with a much smaller (and more manageable) dataset to manually inspect (and/or apply additional name disambiguation techniques to).
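The matching step described above can be illustrated with a minimal Python sketch. This is not the VantagePoint script itself; the record structure, field names, and data below are hypothetical, and the sketch shows only the core idea of merging name variants that share a value in the chosen match field with a known true-positive "primary" name.

```python
# Minimal sketch of the multi-match consolidation step.
# NOT the VantagePoint script; records and field names are hypothetical.

def consolidate(records, primary, match_field):
    """Merge author-name variants that share a value in `match_field`
    with the growing set of names attributed to the primary author."""
    confirmed = {primary}
    confirmed_values = set()
    for rec in records:
        if rec["author"] in confirmed:
            confirmed_values.update(rec.get(match_field, []))
    changed = True
    while changed:  # iterate until no new variant can be merged
        changed = False
        for rec in records:
            name = rec["author"]
            values = set(rec.get(match_field, []))
            if name not in confirmed and values & confirmed_values:
                confirmed.add(name)
                confirmed_values |= values
                changed = True
    return confirmed

# Hypothetical records returned by a 'Doe, J' search, matched on email.
records = [
    {"author": "Doe, John", "email": ["jdoe@uni.edu"]},
    {"author": "Doe, J", "email": ["jdoe@uni.edu"]},
    {"author": "Doe, Jane", "email": ["jane.doe@other.org"]},
]
merged = consolidate(records, "Doe, John", "email")
# 'Doe, Jane' is retained as a separate author (no shared email)
```

Running the same routine over several match fields in turn (emails, then co-authors, then affiliations, and so on) yields the multiple rounds of reduction the procedure describes.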

Research limitations

Match field coverage can be an issue. When field coverage is paltry, dataset reduction is less significant, which requires more manual inspection on the user’s part. Our procedure also does not accommodate scholars who have had a legal family name change (after marriage, for example).

Moreover, the technique we advance may struggle with scholars who have changed careers or fields dramatically, as well as scholars whose work is highly interdisciplinary.

Practical implications

The procedure we advance has the ability to save a significant amount of time and effort for individuals engaged in name disambiguation research, especially when the name under consideration is a more common family name. It is more effective when match field coverage is high and a number of match fields exist.


The procedure combines preexisting approaches with more recent ones, harnessing the benefits of both.


Our study applies the name disambiguation procedure we advance to three case studies. Ideal match fields are not the same for each case study, and we find that match field effectiveness is in large part a function of field coverage. The case studies also differ in original dataset size, in the timeframe analyzed, and in the subject areas in which the authors publish.

Our procedure is most effective when applied to our third case study, both in terms of list reduction and 100% retention of true positives. We attribute this to excellent match field coverage (especially in the more specific match fields), as well as to a more modest, manageable number of publications.

While machine learning is considered authoritative by many, we do not see it as practical or replicable. The procedure advanced herein is practical, replicable, and relatively user-friendly.

It might be categorized into a space between ORCID and machine learning. Machine learning approaches typically look for commonalities among citation data, which is not always available, structured or easy to work with.

The procedure we advance is intended to be applied across numerous fields in a dataset of interest (e.g. emails, coauthors, affiliations, etc.), resulting in multiple rounds of reduction. Results indicate that effective match fields include author identifiers, emails, source titles, co-authors and ISSNs.

While the script we present is not likely to result in a dataset consisting solely of true positives (at least for more common surnames), it does significantly reduce manual effort on the user’s part. Dataset reduction (after our procedure is applied) is in large part a function of (a) field availability and (b) field coverage.

URL : A Multi-match Approach to the Author Uncertainty Problem

DOI : https://doi.org/10.2478/jdis-2019-0006

Large publishing consortia produce higher citation impact research but co-author contributions are hard to evaluate

Author : Mike Thelwall

This paper introduces a simple agglomerative clustering method to identify large publishing consortia with at least 20 authors and 80% shared authorship between articles. Based on Scopus journal articles 1996-2018, under these criteria, nearly all (88%) of the large consortia published research with citation impact above the world average, with the exceptions being mainly the newer consortia for which average citation counts are unreliable.
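The clustering criterion can be sketched in a few lines of Python. The overlap measure used here (the fraction of the smaller article's author list that is shared) and the data are assumptions for illustration; the paper's exact agglomerative rule may differ.

```python
# Illustrative sketch: group articles into publishing consortia by
# shared authorship (>= 20 authors, >= 80% overlap). The overlap
# definition is an assumption, not necessarily the paper's exact rule.

from itertools import combinations

def consortia(articles, min_authors=20, min_overlap=0.8):
    """articles: list of author-name lists, one per article."""
    parent = list(range(len(articles)))

    def find(i):  # union-find root with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Agglomerative step: merge article pairs with high author overlap.
    for i, j in combinations(range(len(articles)), 2):
        a, b = set(articles[i]), set(articles[j])
        if len(a & b) / min(len(a), len(b)) >= min_overlap:
            parent[find(i)] = find(j)

    clusters = {}
    for i in range(len(articles)):
        clusters.setdefault(find(i), set()).update(articles[i])
    # Keep only clusters large enough to count as a consortium.
    return [c for c in clusters.values() if len(c) >= min_authors]
```

For example, two articles sharing 22 of 23 authors merge into one consortium, while a two-author article is filtered out by the size threshold.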

On average, consortium research had almost double (1.95) the world average citation impact on the log scale used (Mean Normalised Log Citation Score). At least partial alphabetical author ordering was the norm in most consortia.

The 250 largest consortia were for nuclear physics and astronomy around expensive equipment, and for predominantly health-related issues in genomics, medicine, public health, microbiology and neuropsychology.

For the health-related issues, except for the first and last few authors, authorship seems primarily to indicate contributions to the shared project infrastructure necessary to gather the raw data.

It is impossible for research evaluators to identify the contributions of individual authors in the huge alphabetical consortia of physics and astronomy, and problematic for the middle and end authors of health-related consortia.

For small scale evaluations, authorship contribution statements could be used, when available.

URL : https://arxiv.org/abs/1906.01849

Intellectual contributions meriting authorship: Survey results from the top cited authors across all science categories

Authors : Gregory S. Patience, Federico Galli, Paul A. Patience, Daria C. Boffito

Authorship is the currency of an academic career for which the number of papers researchers publish demonstrates creativity, productivity, and impact. To discourage coercive authorship practices and inflated publication records, journals require authors to affirm and detail their intellectual contributions but this strategy has been unsuccessful as authorship lists continue to grow.

Here, we surveyed close to 6000 of the top cited authors in all science categories with a list of 25 research activities that we adapted from the National Institutes of Health (NIH) authorship guidelines.

Responses varied widely from individuals in the same discipline, same level of experience, and same geographic region. Most researchers agreed with the NIH criteria and grant authorship to individuals who draft the manuscript, analyze and interpret data, and propose ideas.

However, thousands of the researchers also value supervision and contributing comments to the manuscript, whereas the NIH recommends discounting these activities when attributing authorship.

People value the minutiae of research beyond writing and data reduction: researchers in the humanities value these activities less than those in pure and applied sciences, and individuals from Far East Asia and the Middle East and Northern Africa value them more than anglophones and northern Europeans.

While developing national and international collaborations, researchers must recognize differences in people’s values when assigning authorship.

URL : Intellectual contributions meriting authorship: Survey results from the top cited authors across all science categories

DOI : https://doi.org/10.1371/journal.pone.0198117

Correcting duplicate publications: follow up study of MEDLINE tagged duplications

Authors : Mario Malički, Ana Utrobičić, Ana Marušić


As MEDLINE indexers tag similar articles as duplicates even when journals have not addressed the duplication(s), we sought to determine the reasons behind the tagged duplications, and if the journals had undertaken or had planned to undertake any actions to address them.

Materials and methods

On 16 January 2013, we extracted all tagged duplicate publications (DPs), analysed published notices, and then contacted MEDLINE and editors regarding cases unaddressed by notices.

For non-respondents, we compared full text of the articles. We followed up the study for the next 5 years to see if any changes occurred.


We found 1011 indexed DPs, which represented 555 possible DP cases (in MEDLINE, both the original and the duplicate are assigned a DP tag). Six cases were excluded as we could not obtain their full text.

An additional 190 (35%) cases were incorrectly tagged as DPs. Of the 359 actual cases of DPs, 200 (54%) were due to publishers’ actions (e.g. identical publications in the same journal), and 159 (46%) were due to authors’ actions (e.g. article submission to more than one journal). Of the 359 cases, 185 (52%) were addressed by notices, but only 25 (7%) were retracted.

Following our notifications, MEDLINE corrected 138 (73%) incorrectly tagged cases, and editors retracted 8 articles.


Despite clear policies on how to handle DPs, just half (54%) of the DPs in MEDLINE were addressed by journals and only 9% retracted. Publishers, editors, and indexers need to develop and implement standards for better correction of duplicate published records.

URL : Correcting duplicate publications: follow up study of MEDLINE tagged duplications

DOI : https://doi.org/10.11613/BM.2019.010201

Authorship Distribution and Collaboration in LIS Open Access Journals: A Scopus based analysis during 2001 to 2015

Authors : Barik Nilaranjan, Jena Puspanjali

The present study is a bibliometric analysis of some selected open access Library and Information Science (LIS) journals indexed in the Scopus database during the period 2001 to 2015. The study covers 10 LIS open access journals with 5208 publications, characterizing the pattern of authorship, research collaboration, collaboration index, degree of collaboration, collaboration coefficient, author productivity, ranking of prolific authors, etc. in these journals.

Lotka’s inverse square law has been applied to examine the scientific productivity of authors. Results show that the covered LIS open access journals are dominated by single authorship.

The values of the Collaborative Index (0.73), Degree of Collaboration (0.72), and Collaboration Coefficient (0.29) do not show a trend toward collaboration. Lotka’s law of author productivity fits the present data set.
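These three indicators are conventionally computed from the distribution of authors per paper, using the standard definitions (Lawani's Collaborative Index, Subramanyam's Degree of Collaboration, Ajiferuke et al.'s Collaboration Coefficient). A short Python sketch, with illustrative counts rather than the study's data:

```python
# Standard bibliometric collaboration indicators, computed from the
# number of authors on each paper. Counts below are illustrative.

def collaboration_indicators(authors_per_paper):
    n = len(authors_per_paper)
    ci = sum(authors_per_paper) / n                      # mean authors per paper (Lawani)
    dc = sum(1 for j in authors_per_paper if j > 1) / n  # share of multi-authored papers (Subramanyam)
    cc = 1 - sum(1 / j for j in authors_per_paper) / n   # credit-weighted coefficient (Ajiferuke et al.)
    return ci, dc, cc

ci, dc, cc = collaboration_indicators([1, 1, 2, 3, 2])
# ci = 1.8, dc = 0.6, cc = 1/3
```

A CC near 0 indicates predominantly single authorship, consistent with the study's reported 0.29.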

The country-wise distribution of authorship, based on the country of origin of the corresponding author, shows that 83 countries across the globe actively publish their research in LIS open access journals. The United States of America (USA) leads, alone producing 2822 (54.19%) of the authors.

URL : https://digitalcommons.unl.edu/libphilprac/2033/

How to counter undeserving authorship

Authors: Stefan Eriksson, Tove Godskesen, Lars Andersson, Gert Helgesson

The average number of authors listed on contributions to scientific journals has increased considerably over time. While this may be accounted for by the increased complexity of much research and a corresponding need for extended collaboration, several studies suggest that the prevalence of non-deserving authors on research papers is alarming.

In this paper a combined qualitative and quantitative approach is suggested to reduce the number of undeserving authors on academic papers: 1) ask scholars who apply for positions to explain the basics of a random selection of their co-authored papers, and 2) in bibliometric measurements, divide publications and citations by the number of authors.
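The quantitative half of the proposal (dividing publications and citations by the number of authors, i.e. fractional counting) can be sketched as follows; the data are illustrative:

```python
# Fractional counting as suggested: each publication and its citations
# are divided by the number of authors. Data here are illustrative.

def fractional_scores(papers):
    """papers: list of (num_authors, citations) tuples for one researcher."""
    pubs = sum(1 / n for n, _ in papers)
    cites = sum(c / n for n, c in papers)
    return pubs, cites

pubs, cites = fractional_scores([(1, 10), (4, 20), (5, 0)])
# pubs = 1 + 0.25 + 0.2 = 1.45; cites = 10 + 5 + 0 = 15.0
```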

URL : How to counter undeserving authorship

DOI : http://doi.org/10.1629/uksg.395

The Social Structure of Consensus in Scientific Review

Authors : Misha Teplitskiy, Daniel Acuna, Aida Elamrani-Raoult, Konrad Kording, James Evans

Personal connections between creators and evaluators of scientific works are ubiquitous, and the possibility of bias ever-present. Although connections have been shown to bias prospective judgments of (uncertain) future performance, it is unknown whether such biases occur in the much more concrete task of assessing the scientific validity of already completed work, and if so, why.

This study presents evidence that personal connections between authors and reviewers of neuroscience manuscripts are associated with biased judgments and explores the mechanisms driving the effect.

Using reviews from 7,981 neuroscience manuscripts submitted to the journal PLOS ONE, which instructs reviewers to evaluate manuscripts only on scientific validity, we find that reviewers favored authors close in the co-authorship network by ~0.11 points on a 1.0 – 4.0 scale for each step of proximity.

PLOS ONE’s validity-focused review and the substantial amount of favoritism shown by distant vs. very distant reviewers, both of whom should have little to gain from nepotism, point to the central role of substantive disagreements between scientists in different “schools of thought.”

The results suggest that removing bias from peer review cannot be accomplished simply by recusing the closely-connected reviewers, and highlight the value of recruiting reviewers embedded in diverse professional networks.

URL : https://arxiv.org/abs/1802.01270