Open Access. Powered by Scholars. Published by Universities.®
Keywords
- Ablation studies (1)
- Algorithms (1)
- Artificial intelligence (1)
- Classification (1)
- Credibility (1)
- Economics (1)
- Integrity in scientific work (1)
- Leadership studies (1)
- Legal studies (1)
- Machine learning understanding of retraction (1)
- Metascience (1)
- Organization development (1)
- Political science (1)
- Psychology (1)
- Public affairs (1)
- Public policy and public administration (1)
- Published work (1)
- Random forest classifier (1)
- Replicability (1)
- Reproducibility (1)
- Retraction Watch database (1)
- Retractions (1)
- Scientific papers (1)
- Social sciences (1)
- Sociology (1)
Articles 1 - 2 of 2
Full-Text Articles in Education
Systematizing Confidence In Open Research And Evidence (Score), Nazanin Alipourfard, Beatrix Arendt, Daniel M. Benjamin, Noam Benkler, Michael Bishop, Mark Burstein, Martin Bush, James Caverlee, Yiling Chen, Chae Clark, Anna Dreber Almenberg, Timothy M. Errington, Fiona Fidler, Nicholas Fox, Aaron Frank, Hannah Fraser, Scott Friedman, Ben Gelman, James Gentile, Jian Wu, Et Al., Score Collaboration
Computer Science Faculty Publications
Assessing the credibility of research claims is a central, continuous, and laborious part of the scientific process. Credibility assessment strategies range from expert judgment to aggregating existing evidence to systematic replication efforts. Such assessments can require substantial time and effort. Research progress could be accelerated if there were rapid, scalable, accurate credibility indicators to guide attention and resource allocation for further assessment. The SCORE program is creating and validating algorithms to provide confidence scores for research claims at scale. To investigate the viability of scalable tools, teams are creating: a database of claims from papers in the social and behavioral …
Understanding And Predicting Retractions Of Published Work, Sai Ajay Modukuri, Sarah Rajtmajer, Anna Cinzia Squicciarini, Jian Wu, C. Lee Giles
Computer Science Faculty Publications
Recent increases in the number of retractions of published papers reflect heightened attention and increased scrutiny of the scientific process, motivated in part by the replication crisis. These trends motivate computational tools for understanding and assessing the scholarly record. Here, we sketch the landscape of retracted papers in the Retraction Watch database, a collection of 19k records of published scholarly articles that have been retracted for various reasons (e.g., plagiarism, data error). Using metadata as well as features derived from full text for a subset of retracted papers in the social and behavioral sciences, we develop a random forest classifier …
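The abstract above describes training a random forest classifier on metadata and full-text features. A minimal sketch of that kind of setup, using scikit-learn with synthetic stand-in features (the feature names and data here are illustrative assumptions, not taken from the paper):

```python
# Hypothetical sketch of a retraction classifier in the spirit of the
# setup described above. The features and labels are synthetic stand-ins;
# the real pipeline would derive them from Retraction Watch records and
# parsed full text (e.g. author count, citation count, readability).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 500 toy "papers", each with 4 numeric features.
n = 500
X = rng.normal(size=(n, 4))
# Synthetic label: retracted (1) vs. not (0), loosely tied to features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

A random forest suits this task because it handles heterogeneous feature types without scaling and exposes per-feature importances (`clf.feature_importances_`), which helps interpret which metadata or text signals drive predictions.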