@zazzoo Just say "no" AND reiterate that "no" every visit.
Otherwise, will the in-room components of the system properly be disabled between patients?
An excellent article by @Iris on LLMs’ erosion of human knowledge, with particular reference to Elsevier’s ScienceDirect platform:
https://irisvanrooijcogsci.com/2025/08/12/ai-slop-and-the-destruction-of-knowledge/
Does anyone know of any research into how well #LLMs actually re-level or re-word #k12 readings? Every #education oriented #AI tool has re-leveling and re-wording built in, but is it actually any good?
I think I need to do this research, unless there is already some out there.
"I have a hard time seeing the contemporary model of social media continuing to exist under the weight of #LLMs and their capacity to mass-produce false information or information that optimizes these social network dynamics. We already see a lot of actors—based on this monetization of platforms like X—that are using AI to produce content that just seeks to maximize attention. So misinformation, often highly polarized information as AI models become more powerful, that content is going to take over. I have a hard time seeing the conventional social media models surviving that."
Study: Social media probably can’t be fixed https://arstechni.ca/zRAc #agent-basedmodeling #socialpsychology #socialsciences #polarization #socialmedia #socialworks #Features #Science #LLMs
A new technique for LLMs has just landed: Explainable training!
Let me *explain*.
Normal supervised training works like this: you show ground-truth inputs and outputs to a model, and then you backpropagate the error into the model weights. Everything in this process is an opaque black box. If you train on data which contains, for example, personally identifiable information (PII) or copyrighted content, those will plausibly be stored verbatim in the model weights.
What if we do it like this instead:
Let's write initial instructions to an LLM for generating synthetic data which resembles real data.
Then we go to the real data and, one by one, show an LLM an example of the real data, an example of the synthetic data, and the instructions used to generate the synthetic data. Then we ask it to iteratively refine those instructions so that the synthetic data resembles the real data more closely, in the features and characteristics which matter.
You can also add reasoning steps, and instructions not to copy PII as such into the synthetic data generation instructions.
This is just like supervised learning, but explainable! The result is a document of refined instructions for generating better synthetic data, informed by the real data, but now human-readable and explainable!
You can easily verify that this relatively small document doesn't contain, for example, PII, and you can use it to generate any volume of synthetic training data while guaranteeing that critical protected details in the real data do not leak into the trained model!
This is the next level of privacy protection for training AIs!
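Below is a minimal sketch in Python of the refinement loop described above. The call_llm() helper is a hypothetical placeholder for whatever LLM API you use, and n_rounds is an assumed knob; this is an illustration of the idea, not a reference implementation.

```python
# Minimal sketch of the "explainable training" loop described above.
# call_llm() is a hypothetical placeholder: plug in your LLM provider.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("swap in a real LLM API call here")

def refine_instructions(real_examples, instructions, n_rounds=1):
    """Iteratively refine synthetic-data-generation instructions
    against real examples, one example at a time."""
    for _ in range(n_rounds):
        for real_example in real_examples:
            # Generate a synthetic example from the current instructions.
            synthetic = call_llm(
                "Follow these instructions to generate one synthetic example:\n"
                + instructions
            )
            # Ask the LLM to rewrite the instructions so the synthetic data
            # better matches the real data, without copying PII into them.
            instructions = call_llm(
                "Instructions for generating synthetic data:\n" + instructions
                + "\n\nA real example:\n" + real_example
                + "\n\nA synthetic example produced from the instructions:\n"
                + synthetic
                + "\n\nRewrite the instructions so the synthetic data better "
                  "matches the real data in the features and characteristics "
                  "that matter. Never copy names, addresses, or other PII "
                  "from the real example into the instructions."
            )
    return instructions  # a human-readable, auditable artifact
```

The final instruction document can then be audited by a human for PII or copyrighted content before being used to generate synthetic training data at any scale.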
My thoughts on GPT-5. I think it shows that the development of #LLMs has plateaued and that means that we finally have time to be thoughtful and critical about what decisions to make about its use in our lives and workspaces, particularly in #HigherEducation and #Academia #philosophy #AIChatbot #ai
https://matthewbarnard.phd/posts/2025-08-11-llms-have-plateaued/
"AI – the ultimate bullshit machine – can produce a better 5PE than any student can, because the point of the 5PE isn't to be intellectually curious or rigorous, it's to produce a standardized output that can be analyzed using a standardized rubric.
I've been writing YA novels and doing school visits for long enough to cement my understanding that kids are actually pretty darned clever. They don't graduate from high school thinking that their mastery of the 5PE is in any way good or useful, or that they're learning about literature by making five marginal observations per page when they read a book.
Given all this, why wouldn't you ask an AI to do your homework? That homework is already the revenge of Goodhart's Law, a target that has ruined its metric. Your homework performance says nothing useful about your mastery of the subject, so why not let the AI write it. Hell, if you're a smart, motivated kid, then letting the AI write your bullshit 5PEs might give you time to write something good.
"Teachers aren't to blame here. They have to teach to the test, or they will fail their students (literally, because they will have to assign a failing grade to them, and figuratively, because a student who gets a failing grade will face all kinds of punishments). Teachers' unions – who consistently fight against standardization and in favor of their members' discretion to practice their educational skills based on kids' individual needs – are the best hope we have:"
https://pluralistic.net/2025/08/11/five-paragraph-essay/#targets-r-us
"The incredible demand for high-quality human-annotated data is fueling soaring revenues of data labeling companies. In tandem, the cost of human labor has been consistently increasing. We estimate that obtaining high-quality human data for LLM post-training is more expensive than the marginal compute itself1 and will only become even more expensive. In other words, high-quality human data will be the bottleneck for AI progress if these trends continue.
(Figure: the revenue of major data labeling companies and the marginal compute cost of training frontier models for major AI providers in 2024.)
To assess the proportion of data labeling costs within the overall AI training budget, we collected and estimated both data labeling and compute expenses for leading AI providers in 2024:
- Data labeling costs: We collected revenue estimates of major data labeling companies, such as Scale AI, Surge AI, Mercor, and LabelBox.
- Compute costs: We gathered publicly reported marginal costs of compute associated with training top models released in 2024, including Sonnet 3.5, GPT-4o, DeepSeek-V3, Mistral Large, Llama 3.1-405B, and Grok 2.
We then calculate the sum of costs in a category as the estimate of the market total. As shown above, the total cost of data labeling is approximately 3.1 times higher than total marginal compute costs. This finding highlights clear evidence: the cost of acquiring high-quality human-annotated data is rapidly outpacing the compute costs required for training state-of-the-art AI models."
https://ddkang.substack.com/p/human-data-is-probably-more-expensive
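A minimal sketch of the estimation method quoted above, in Python: sum each category, then take the ratio. The figures below are illustrative placeholders only, not the post's actual estimates; they are chosen merely so the arithmetic reproduces the ~3.1x figure.

```python
# Sketch of the ratio estimate described above. All values are
# hypothetical placeholders, not the post's real revenue/cost data.

labeling_revenue_musd = [1000, 800, 100, 50]      # e.g. Scale AI, Surge AI, Mercor, LabelBox (placeholders)
compute_cost_musd = [100, 100, 60, 50, 170, 150]  # e.g. the six 2024 models listed above (placeholders)

ratio = sum(labeling_revenue_musd) / sum(compute_cost_musd)
print(f"data labeling / marginal compute ≈ {ratio:.1f}x")  # ≈ 3.1x
```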
"Imagine if you had infinite resources to build something that might topple the established world order, and all you could come up with was… an automatic internet troll10."
Plato vs Grok - by M. F. Robbins - The Value of Nothing
https://martinrobbins.substack.com/p/plato-vs-grok?utm_medium=ios
#techbro
#LLMS
#Society
#Psychology
#criticalthinking
#Philosophy
Using generative AI in most capacities is wrong for the exact same reason using steroids in sports or at work is wrong (also for additional bonus reasons, too, of course).
We may one day invent safer tools, but that's not meaningfully an objective of any of the biggest players right now.
Thanks to #LLMs, the public now views "text" as an acceptable means of interacting with the computer. We #IT practitioners must seize this rare opportunity and revert to the #TUI when implementing internal enterprise applications used only by a few in-house business experts.
Imagine how much time, money, effort, and aggravation we would save by ditching those bloated #GUI frameworks.
OpenAI's GPT-5 seems to be underwhelming: "the dominant reaction was major disappointment".
Reviewers published multiple posts showing that GPT-5's performance on benchmarks was subpar, and found the new automatic "routing" mechanism to be a mess. "Big promises, stupid errors." https://garymarcus.substack.com/p/gpt-5-overdue-overhyped-and-underwhelming #ChatGPT #LLMs #GPT5 #AI #GenAI #Hallucinations
Found one way around students using #LLMs in an online-only microeconomics class - ask them about *local* negative externalities. For students mainly on Long Island, the responses have been mostly tirades about traffic, even about particular roads and interchanges. Clearly hit a nerve there! #academicChatter
The worst part of this giant AI push is that folks don't seem to realize this is a skill capture play. Either that or they're too fearful to resist. Developers are slowly growing more dependent on these tools. At some point you will morph completely from a skilled engineer to a dependent consumer. #AI #LLMs #LLM #Commons #Tech
"To borrow some technical terminology from the philosopher Harry Frankfurt, “ChatGPT is bullshit.” Paraphrasing Frankfurt, a liar cares about the truth and wants you to believe something different, whereas the bullshitter is utterly indifferent to the truth. Donald Trump is a bullshitter, as is Elon Musk when he makes grandiose claims about the future capabilities of his products. And so is Sam Altman and the LLMs that power OpenAI’s chatbots. Again, all ChatGPT ever does is hallucinate — it’s just that sometimes these hallucinations happen to be accurate, though often they aren’t. (Hence, you should never, ever trust anything that ChatGPT tells you!)
My view, which I’ll elaborate in subsequent articles, is that LLMs aren’t the right architecture to get us to AGI, whatever the hell “AGI” means. (No one can agree on a definition — not even OpenAI in its own publications.) There’s still no good solution for the lingering problem of hallucinations, and the release of GPT-5 may very well hurt OpenAI’s reputation."
https://www.realtimetechpocalypse.com/p/gpt-5-is-by-far-the-best-ai-system
My early impressions of the ChatGPT Web UI with GPT-5 are pretty negative for an #RStats project I’m working on. Code that doesn’t work, not understanding context & follow-up questions. Am guessing I was routed to the less capable mini or nano models at times.
I still like Claude Opus 4.1, but I bump up against Web limits quickly. Google Gemini 2.5 Pro is promising with a lot of context and instructions. Its context window is 5X larger than Opus's.
#GenAI #LLMs
What's the best emoji combination to refer to #LLMs?
The gap between student GenAI use and the support students are offered
I argued a couple of days ago that the sector is unprepared for our first academic year in which the use of generative AI is completely normalised amongst students. HEPI found 92% of undergraduates using LLMs this year, up from 66% the previous year, which matches Advance HE's finding of 62% using AI in their studies "in a way that is allowed by their university" (huge caveat). This largely accords with my own experience: last year LLMs became mainstream amongst students, and this year their use has become a near uniform phenomenon.
The problem arises from the gap between near uniform use of LLMs in some way and the lack of support being offered. Only 36% of students in the HEPI survey said they had been offered support by their university: a 56-point gap. Only 26% of students say their university provides access to AI tools: a 66-point gap. This is particularly problematic because we have evidence that wealthier students tend to use LLMs more, and in more analytical and reflective ways. They are more likely to use LLMs in a way that supports rather than hinders learning.
How do we close that gap between student LLM use and the support students are offered? My concern is that centralised training is going to tend towards either banality or irrelevance, because the objective of GenAI training for students needs to be how to learn with LLMs rather than how to outsource learning to them. There are general principles which can be offered here, but the concrete questions which have to be answered for students are going to vary between disciplinary areas.
Furthermore, answering these questions is a process taking place in relation to changes in the technology and the culture emerging around it. Even if those changes are now slowing down, they are certainly not stopping. We need infrastructure for continuous adaptation in a context where the sector is already in crisis for entirely unrelated reasons. Furthermore, that infrastructure has to willingly enrol academics in a way consistent with their workload and outlook. My sense is we have to find ways of embedding this within existing conversations and processes. The only way to do this, I think, is to genuinely give academics a voice within the process, finding ways to network existing interactions so that norms and standards emerge from practice, rather than the institution expecting practice to adapt to yet another centrally imposed policy.