The academic community is under great pressure to publish, and that pressure is compounded by high rejection rates at many journals. A more recent trend is for some journals to send researchers unsolicited invitations to submit a manuscript. Many researchers find these invitations annoying and are unsure how best to respond to them. We collected electronic invitations to submit a manuscript to a journal between April 1, 2014, and March 31, 2015.
We analyzed their content and cross-tabulated them against journals listed in Beall’s list of potential predatory journals. During this period, 311 invitations were received for 204 journals; most invitations came from journals in Beall’s list (n = 244; 79%). Invitations arrived throughout the calendar year, and some journals sent up to six. The majority of invitations claimed the journal provided peer review (n = 179; 57.6%), although none mentioned an expedited review process. Similarly, more than half claimed the journal was open access (n = 186; 59.8%). The majority of invitations included an unsubscribe link (n = 187; 60.1%). About half of the invitations came from biomedical journals (n = 179).
We discuss strategies researchers and institutions can use to reduce the number of invitations received, and ways to handle those that reach the recipient’s inbox, thus helping to maintain the credibility and reputation of researchers and institutions.
URL : You are invited to submit…
Related URL : http://www.biomedcentral.com/1741-7015/13/180
This study informs efforts to improve the discoverability of and access to biomedical datasets by providing a preliminary estimate of the number and type of datasets generated annually by research funded by the U.S. National Institutes of Health (NIH). It focuses on those datasets that are “invisible” or not deposited in a known repository.
We analyzed NIH-funded journal articles that were published in 2011, cited in PubMed and deposited in PubMed Central (PMC) to identify those that indicate data were submitted to a known repository. After excluding those articles, we analyzed a random sample of the remaining articles to estimate how many and what types of invisible datasets were used in each article.
About 12% of the articles explicitly mention deposition of datasets in recognized repositories, leaving 88% whose datasets are invisible. Among articles with invisible datasets, we found an average of 2.9 to 3.4 datasets per article, suggesting that approximately 200,000 to 235,000 invisible datasets were generated from NIH-funded research published in 2011. Approximately 87% of the invisible datasets consist of data newly collected for the research reported; 13% reflect reuse of existing data. More than 50% of the datasets were derived from live human or non-human animal subjects.
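The 200,000 to 235,000 figure follows from simple multiplication of the article count, the invisible share, and the per-article average. As a rough illustration only: the number of NIH-funded 2011 articles below is an assumption (it is not stated in this summary), chosen so the product lands in the reported range; the other figures come from the text.

```python
# Back-of-envelope reconstruction of the invisible-dataset estimate.
n_articles = 78_000          # ASSUMED count of NIH-funded 2011 articles
invisible_share = 0.88       # share of articles with no repository deposit
low, high = 2.9, 3.4         # avg. invisible datasets per such article

lo_estimate = n_articles * invisible_share * low
hi_estimate = n_articles * invisible_share * high
print(f"{lo_estimate:,.0f} to {hi_estimate:,.0f} invisible datasets")
# → 199,056 to 233,376 invisible datasets
```

Any assumed article count near this value reproduces the order of magnitude; the study itself derived its estimate from a random sample rather than a single fixed article count.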
In addition to providing a rough estimate of the total number of datasets produced per year by NIH-funded researchers, this study identifies further issues that must be addressed to improve the discoverability of and access to biomedical research data: the definition of a “dataset,” determination of which (if any) data are valuable for archiving and preservation, and better methods for estimating the number of datasets of interest. Lack of consensus among annotators about the number of datasets in a given article reinforces the need for a principled way of identifying and characterizing biomedical datasets.
URL : Sizing the Problem of Improving Discovery and Access to NIH-Funded Data: A Preliminary Study
DOI : 10.1371/journal.pone.0132735
Librarians have embraced the open access movement. They work to raise awareness of issues surrounding scholarly communication, to educate faculty about authors’ rights, and to help implement and maintain institutional repositories (IRs). But for all of the research and commentary from librarians about the importance of IRs and of making research freely available, there still exists the glaring contradiction that few librarians and Library and Information Science (LIS) authors provide free access to their own research publications.
In this study, we examine the open access availability of articles from the top 20 closed-access LIS journals and discuss factors that may explain the discrepancy between LIS authors’ attitudes towards open access and their own self-archiving practices.
Although the open scholarship movement has successfully captured the attention and interest of higher education stakeholders, researchers currently lack an understanding of the degree to which open scholarship is enacted in institutions that lack institutional support for openness. I help fill this gap in the literature by presenting a descriptive case study that illustrates the variety of open and sharing practices enacted by faculty members at a North American university. Open and sharing practices enacted at this institution revolve around publishing manuscripts in open ways, participating on social media, creating and using open educational resources, and engaging with open teaching.
This examination finds that certain open practices are favored over others. Results also show that even though faculty members often share scholarly materials online for free, they frequently do so without associated open licenses (i.e. without engaging in open practices). These findings suggest that individual motivators may significantly affect the practice of openness, but that environmental factors (e.g., institutional contexts) and technological elements (e.g., YouTube’s default settings) may also shape open practices in unanticipated ways.
URL : A Case Study of Scholars’ Open and Sharing Practices
Related URL : http://openpraxis.org/index.php/OpenPraxis/article/view/206
« This report presents the findings and recommendations of the Independent Review of the Role of Metrics in Research Assessment and Management. The review was chaired by Professor James Wilsdon, supported by an independent and multidisciplinary group of experts in scientometrics, research funding, research policy, publishing, university management and administration.
This review has gone beyond earlier studies to take a deeper look at potential uses and limitations of research metrics and indicators. It has explored the use of metrics across different disciplines, and assessed their potential contribution to the development of research excellence and impact. It has analysed their role in processes of research assessment, including the next cycle of the Research Excellence Framework (REF). It has considered the changing ways in which universities are using quantitative indicators in their management systems, and the growing power of league tables and rankings. And it has considered the negative or unintended effects of metrics on various aspects of research culture. »
URL : https://microblogging.infodocs.eu/wp-content/uploads/2015/07/2015_metric_tide.pdf
Related URL : http://www.hefce.ac.uk/media/HEFCE,2014/Content/Pubs/Independentresearch/2015/The,Metric,Tide/2015_metric_tide.pdf