#ref

A brilliant new paper in #ResearchEvaluation 📄 compares how the UK, Norway, and Poland implemented research impact assessment - with very different results:

🇬🇧 UK built infrastructure, #REF became a strategic tool.
🇳🇴 Norway took a soft, formative approach.
🇵🇱 Poland copy-pasted, spent big - got confusion, “point-chasing” and no culture shift.

DOI: doi.org/10.1093/reseval/rvaf01

🇺🇦 A lesson Ukraine risks ignoring again.


Update. Here's an unrefereed letter to the editor leaving the false impression that the UK #REF requires researchers to publish in #APC-based #OpenAccess journals. (It doesn't require publishing in OA journals; authors may choose #GreenOA instead; and when they do choose to publish in OA journals, they are free to choose no-APC or #DiamondOA journals.)
nature.com/articles/s41415-025

Nature: Open access and peer review - why do I have to pay twice? - British Dental Journal

This was the first time I'd heard of the Research Excellence Framework (REF). On the one hand, I'm impressed by the framework; on the other, I see many challenges. I'm not so sure every researcher in the UK likes it, even though it is very prestigious to have work rated 4* in the REF.


Update. This study compared #ChatGPT assessments to human #REF reviews. "Although other explanations are possible, esp because REF score profiles are public, the results suggest that #LLMs can provide reasonable research quality estimates in most areas of science and particularly the physical and health sciences…even before citation data is available…[Note that] the ChatGPT scores are only based on titles and abstracts, so cannot be research evaluations."
arxiv.org/abs/2409.16695

arXiv.org: In which fields can ChatGPT detect journal article quality? An evaluation of REF2021 results

Time spent by academics on research quality assessment might be reduced if automated approaches can help. Whilst citation-based indicators have been extensively developed and evaluated for this, they have substantial limitations and Large Language Models (LLMs) like ChatGPT provide an alternative approach. This article assesses whether ChatGPT 4o-mini can be used to estimate the quality of journal articles across academia. It samples up to 200 articles from all 34 Units of Assessment (UoAs) in the UK's Research Excellence Framework (REF) 2021, comparing ChatGPT scores with departmental average scores. There was an almost universally positive Spearman correlation between ChatGPT scores and departmental averages, varying between 0.08 (Philosophy) and 0.78 (Psychology, Psychiatry and Neuroscience), except for Clinical Medicine (rho=-0.12). Although other explanations are possible, especially because REF score profiles are public, the results suggest that LLMs can provide reasonable research quality estimates in most areas of science, and particularly the physical and health sciences and engineering, even before citation data is available. Nevertheless, ChatGPT assessments seem to be more positive for most health and physical sciences than for other fields, a concern for multidisciplinary assessments, and the ChatGPT scores are only based on titles and abstracts, so cannot be research evaluations.
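For intuition, here is a minimal sketch (in Python, not the authors' code) of the comparison the abstract describes: correlating per-department averages of LLM quality scores with the public departmental REF averages using a Spearman rank correlation. All numbers and variable names below are invented for illustration.

```python
# Minimal sketch of the abstract's correlation step, under assumed data:
# per-department means of ChatGPT quality scores (REF-style 1*-4* scale,
# estimated from titles and abstracts only) versus public departmental
# REF average scores. Values are hypothetical.
from scipy.stats import spearmanr

chatgpt_dept_means = [2.9, 3.4, 3.1, 2.6, 3.8, 3.2]  # invented LLM averages
ref_dept_means = [3.0, 3.5, 2.9, 2.7, 3.7, 3.3]      # invented REF averages

# Spearman's rho compares the rank orderings of the two score lists,
# so it tolerates the scales being calibrated differently.
rho, p_value = spearmanr(chatgpt_dept_means, ref_dept_means)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```

The paper reports this statistic for each Unit of Assessment, with rho ranging from -0.12 (Clinical Medicine) to 0.78 (Psychology, Psychiatry and Neuroscience).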