#reproducibility


A summary of this year's Open Science Barcamp is out! ✨
Check it out here:

zbw-mediatalk.eu/2025/08/barca

🙏 Great work from the @ZBW_MediaTalk team and @GuidoScherp who organized the participant feedback.

💬 Thanks to everyone who joined this #oscibar2025 and made it fruitful and fun! Hope to see many #OpenScience enthusiasts next year (again).

🗓️ Mark your calendars - the #oscibar2026 will take place on 10 June in Berlin @wikimediaDE

Retractions and failures to replicate are signs of weak research. But they're also signs of laudable and necessary efforts to identify weak research and improve future research. The #Trump admin is systematically weaponizing these efforts to cast doubt on science as such.

"Research-integrity sleuths say their work is being ‘twisted’ to undermine science."
nature.com/articles/d41586-025


We invite staff and students at the University of #Groningen to share how they are making #research or #teaching more open, accessible, transparent, or reproducible, for the 6th annual #OpenResearch Award.

Looking for inspiration?
Explore the case studies submitted in previous years:
🔗 rug.nl/research/openscience/op

More info:
🔗 rug.nl/research/openscience/op

#OpenScience #OpenEducation #OpenAccess #Reproducibility
@oscgroningen


Jack Taylor is now presenting a new #Rstats package: "LexOPS: A Reproducible Solution to Stimuli Selection". Jack bravely did a live demonstration based on a German corpus ("because we're in Germany") that generated matched stimuli that certainly made the audience giggle... let's just say that one match involved the word "Erektion"... 😂

There is a paper about the package: link.springer.com/article/10.3 and a detailed tutorial: jackedtaylor.github.io/LexOPSd. Also a #Shiny app for those who really don't want to use R, but that allows code download for #reproducibility: jackedtaylor.github.io/LexOPSd #WoReLa1 #linguistics #psycholinguistics


7/ Wei Mun Chan, Research Integrity Manager

With 10+ years in publishing and data curation, Wei Mun ensures every paper meets our high standards for ethics and #reproducibility. From image checks to data policies, he’s the quiet force keeping the scientific record trustworthy.

🎙"#Reproducibility isn’t just about repeating results, it’s about making the #research process transparent, so others can follow the path you took and understand how you got there."

🎧Listen to our new OpenScience podcast with Sarahanne Field @smirandafield

🔗 rug.nl/research/openscience/po

⏳ In this 10 min episode, Sarahanne reimagines reproducibility for #qualitative research.
She addresses challenges in ethical #data sharing of transcripts, and the importance of clear methodological reporting.

TODAY (Monday) 16-17:30 CEST #ReproducibiliTea in the HumaniTeas goes qualitative! ✨ Nathan Dykes (Department of #DigitalHumanities and Social Studies @FAU) will give a 20-min input talk entitled "Beyond the gold standard: Transparency in qualitative corpus analysis", followed by a 60-min open discussion on applying the principles of #OpenScience to qualitative research. 🤓

Everyone is welcome, whether on-site @unibibkoeln (where you can also enjoy a range of teas and snacks) or online via Zoom. Please join our mailing list to receive the Zoom link: lists.uni-koeln.de/mailman/lis (or DM me if you read this after 14:00 CEST). 🫖🍪

New study: #ChatGPT is not very good at predicting the #reproducibility of a research article from its methods section.
link.springer.com/article/10.1

PS: Five years ago, I asked this question on Twitter/X: "If a successful replication boosts the credibility of a research article, then does a prediction of a successful replication, from an honest prediction market, do the same, even to a small degree?"
x.com/petersuber/status/125952

What if #LLMs eventually make these predictions better than prediction markets? Will research #assessment committees (notoriously inclined to resort to simplistic #metrics) start to rely on LLM reproducibility predictions?

SpringerLink: ChatGPT struggles to recognize reproducible science (Knowledge and Information Systems)
The quality of answers provided by ChatGPT matters, with over 100 million users and approximately 1 billion monthly website visits. Large language models have the potential to drive scientific breakthroughs by processing vast amounts of information in seconds and learning from data at a scale and speed unattainable by humans, but recognizing reproducibility, a core aspect of high-quality science, remains a challenge. Our study investigates the effectiveness of ChatGPT (GPT-3.5) in evaluating scientific reproducibility, a critical and underexplored topic, by analyzing the methods sections of 158 research articles. In our methodology, we asked ChatGPT, through a structured prompt, to predict the reproducibility of a scientific article based on the extracted text from its methods section. The findings reveal significant limitations: of the assessed articles, only 18 (11.4%) were accurately classified, while 29 (18.4%) were misclassified, and 111 (70.3%) faced challenges in interpreting key methodological details that influence reproducibility. Future advancements should ensure consistent answers for similar or identical prompts, improve reasoning for analyzing technical, jargon-heavy text, and enhance transparency in decision-making. Additionally, we suggest the development of a dedicated benchmark to systematically evaluate how well AI models can assess the reproducibility of scientific articles. This study highlights the continued need for human expertise and the risks of uncritical reliance on AI.
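The protocol described in the abstract (a structured prompt wrapping a paper's methods section, with the model's free-text reply mapped to a verdict) can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: the prompt wording, the three labels, and the `query_llm` stub are all assumptions.

```python
# Rough sketch of the study's setup (hypothetical prompt and labels,
# not the published implementation).

def build_prompt(methods_text: str) -> str:
    """Wrap a paper's methods section in a structured classification prompt."""
    return (
        "You are assessing scientific reproducibility.\n"
        "Based only on the methods section below, answer with exactly one of: "
        "REPRODUCIBLE, NOT_REPRODUCIBLE, UNCLEAR.\n\n"
        f"Methods section:\n{methods_text}"
    )

def parse_prediction(reply: str) -> str:
    """Map a free-text model reply onto the three labels above."""
    reply = reply.upper()
    # Check the longer label first: "NOT_REPRODUCIBLE" contains "REPRODUCIBLE".
    if "NOT_REPRODUCIBLE" in reply:
        return "NOT_REPRODUCIBLE"
    if "REPRODUCIBLE" in reply:
        return "REPRODUCIBLE"
    return "UNCLEAR"

def query_llm(prompt: str) -> str:
    """Placeholder for a GPT-3.5 API call; stubbed out in this sketch."""
    raise NotImplementedError("plug in your LLM client here")
```

Comparing `parse_prediction(query_llm(build_prompt(text)))` against a replication-based ground truth over many articles is essentially what the study's accuracy figures (11.4% correct) measure.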

On #Reproducibility.

I've had a few conversations over the years, one recently, where I've been working with a published model, interacting with others who have used it as well, and asking questions about their methods. I've received many responses like "that was x years ago, I don't know" or "it's too long ago, I can't remember". Folks, code doesn't change over time. You should be able to go back to what you did and tell me.

I feel like I'm somehow resented for asking these questions.

🎙️"#Reproducibility should be a key factor in all your #research, in all your projects. It costs time, but it's also shifting the time, and in the end it can save time again."

🎧Listen to the latest episode of our #OpenScience Bites podcast with Michiel de Boer of the @Dutch_Reproducibility_Network
🔗 rug.nl/research/openscience/po

⏳Open Science Bites is a series of short #podcast episodes - each around 10 minutes long - focusing on one specific open science practice.

rug.nl/opensciencebites