Search engines should let us report AI slop and exclude it from the results.
Brain rot, the cognitive decline and mental exhaustion experienced by people (particularly adolescents and young adults) after excessive exposure to low-quality online material, can also harm LLMs, causing cognitive decline, reduced reasoning ability, and degraded memory. The models also became less ethically aligned and more psychopathic according to two measures. Ouch!
Researchers pretrained LLMs with junk data, and the results show that data quality is a causal driver of LLM capability decay. The declines include worse reasoning, poorer long-context understanding, diminished adherence to ethical norms, and emergent socially undesirable personalities. https://llm-brain-rot.github.io/ #AI #LLMs #BrainRot #CognitiveDecline #Pretraining #SocialMedia
I love how this guy thinks.
I do use LLMs for work, and I have a neutral view of them; to me they're advanced autopredict. But here is one thing I've learned about how people use LLMs: they often don't use them in their most useful or cognitively rewarding ways.
For example, I learned to use LLMs the way Nate describes below through much experimentation and thinking. Interestingly, it has taught me a great deal about the writing craft, because when you have to give instructions to an AI, you need to deconstruct how you create an article, a technical document, or even meeting minutes. I started being more mindful and aware of how I put collateral together. A lot of what I do is unconscious, but by thinking through my writing process and my cognitive process in putting a document together, I realized I could improve on those steps.
It helps me to be even more aware and conscious about my weaknesses in crafting certain collaterals. It helps me to target what areas I need to improve.
For example, in my experimentation using AI to write fiction, I realized I could do a bit more work to improve the way I describe settings. I could also try to add more sensory details to enrich a scene, and use body movements to show an emotion rather than tell it.
However, I also realize that without the expertise I have built over decades of writing both fiction and business documents, I wouldn't have been able to tell whether an AI's writing sucked or not. Without that instinct, you end up producing bad work because you simply don't recognize what good work looks like.
This is the same for coding. I used to code for fun when I was younger; I built websites by writing HTML and CSS by hand in Notepad. Why didn't I use any of the programs, the web builders and so on? Because I took a look at the code they produced and realized it was extremely bloated. I preferred to write leaner code, and I wouldn't have recognized what bloated code looks like without the expertise I had. Writing is very much the same way.
16/ When asked for grammar mistakes, it simply outputs the same text again, along with the claim that it contains several grammar mistakes.
I suspect this task would require taking a meta-perspective, and that simply isn't possible, because it's all just about sequence probabilities. For something like that to work, you would have to train the #LLMs on learner texts or junk of some kind. But nobody wants that, since the goal isn't to produce cheating software. So the output you get corresponds to the statistical average, i.e. error-free, because errors are too rare in the overall training material.
If anyone knowledgeable is around, please share your theory.
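A toy sketch of that "statistical average" point (my own illustration, not from the thread, and assuming a trivial bigram model rather than a real LLM): if a grammar error is rare in the training data, greedy decoding will simply never reproduce it.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": the corpus contains the error "he go" once
# and the correct "he goes" 99 times. Greedy decoding always picks the most
# probable continuation, so the rare error never surfaces in the output --
# the model regresses to the statistical average of its training data.
corpus = ("he goes home . " * 99 + "he go home . ").split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def greedy_next(word):
    """Most probable next word after `word`."""
    return counts[word].most_common(1)[0][0]

print(greedy_next("he"))  # -> "goes", never the rare "go"
p_error = counts["he"]["go"] / sum(counts["he"].values())
print(f"P('go' | 'he') = {p_error:.2f}")  # 0.01 -- too rare to win greedy decoding
```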
"Largest study of its kind shows #AI assistants misrepresent #news content 45% of the time – regardless of language or territory."
https://www.bbc.com/mediacentre/2025/new-ebu-research-ai-assistants-news-content
I just read the study and was struck by this point at p. 60: "A set of 30 'core' news questions was developed, which were used by all participating organizations. These were based on actual audience search queries…Additional prompting strategies were not used to try to improve accuracy…because the questions reflected our best understanding of how audiences are currently asking questions about the news."
https://www.bbc.co.uk/mediacentre/documents/news-integrity-in-ai-assistants-report.pdf
That is, this study didn't test the possibility that some kinds of prompts reduce hallucinations. That's an obvious follow-up question to explore.

The BBC, NRK, and 20 other public broadcasters allowed AI assistants to scrape their websites; the assistants were then tested on their ability to answer questions about news content.
Today they jointly published the research and issued a warning about using AI assistants to answer questions about news, citing factual errors and misrepresentation of source material by the assistants.
https://nrkbeta.no/2025/10/22/ki-assistenter-testet-halvparten-av-svarene-inneholdt-feil/

'TildeOpen LLM – an open-source foundational large language model with over 30 billion parameters, built to support all European languages. You can fine-tune it to your own needs and deploy it securely – locally or in the cloud – to build trustworthy AI that actually speaks your language' https://tilde.ai/tildeopen-llm/
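For context, loading an open-weight model like this for local inference or further fine-tuning typically looks something like the sketch below (my illustration, not from Tilde's page; the Hugging Face repo id is an assumption, so check tilde.ai for the actual name, and a 30B model needs serious hardware or quantization).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TildeAI/TildeOpen-30b"  # assumed repo id -- verify before use
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Quick smoke test: generate a short continuation in a non-English language.
inputs = tok("Labdien! Šodien mēs runāsim par", return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=40)[0]))
```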

Whole video of the 75th anniversary of the Turing test event at the Royal Society, placed to start at @garymarcus 's talk:
https://www.youtube.com/live/GmnBTCKocZI?t=5977s
Comment from yet another "The AI bubble is soon going to burst" video:
"Sam Altman makes Zuckerberg look like a human."
The danger of #AI dependency
The hidden cost of outsourcing human judgment.
This week's @merseyskeptics' Skeptics With a K podcast on the #TuringTest was super interesting!
Have a listen:
https://www.merseysideskeptics.org.uk/podcasts/skeptics-with-a-k/feed.xml
#AI #LLMs
"People worry that not being seen as mindless, uncritical AI cheerleaders will be a career-limiting move in the current environment of enforced conformity within tech, especially as tech leaders are collaborating with the current regime to punish free speech, fire anyone who dissents, and embolden the wealthy tycoons at the top to make ever-more-extreme statements, often at the direct expense of some of their own workers." https://www.anildash.com/2025/10/17/the-majority-ai-view/

"Today, students might use AI to write college-entrance essays so that they can get into college, where they use AI to complete assignments on their way to degrees, so they can use AI to cash out those degrees in jobs, so they can use AI to carry out the duties of those jobs. The best one can do—the best one can hope for—is to get to the successive stage of the process by whatever means necessary and, once there, to figure out a way to progress to the next one. Fake it ’til you make it has given way to Fake it ’til you fake it.
Nobody has time to question, nor the power to change, this situation. You need to pay rent, and buy slop bowls, and stumble forward into the murk of tomorrow. So you read what the computer tells you to say when asked why you are passionate about enterprise B2B SaaS sales or social-media marketing. This is not an earnest question, but a gate erected between one thing and the next. Using whatever mechanisms you can to get ahead is not ignoble; it’s compulsory. If you can’t even get the job, how can you pretend to do it?"
https://www.theatlantic.com/technology/2025/10/ai-cheating-job-interviews-fraud/684568/

"If the strengths of A.I. are to truly be harnessed, the tech industry should stop focusing so heavily on these one-size-fits-all tools, and instead concentrate on narrow, specialized A.I. tools engineered for particular problems. Because, frankly, they’re often more effective.
Until the advent of chatbots, most A.I. developers focused on building special-purpose systems, for things like playing chess or recommending books and movies to consumers. These systems were not nearly as sexy as talking to a chatbot, and each project often took years to get right. But they were often more reliable than today’s generative A.I. tools, because they didn’t try to learn everything from scratch and were often engineered on the basis of expert knowledge.
Take chess. If you ask a large language model (the kind of A.I. that powers a chatbot like ChatGPT) to play a game of chess, it struggles to play well and often makes illegal moves, never fully grasping the rules of the game, even after exposure to huge amounts of relevant training data.
Special-purpose programs for chess, in contrast, are programmed from the outset to follow a built-in set of rules, and structured around core notions such as board structure and a tree of possible moves. Such systems never make illegal moves, and the best special-purpose chess systems can easily beat even the most skilled humans. Remarkably, an Atari 2600, using custom A.I. software built in the 1970s, was recently reported to have beaten a large language model."
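To make that contrast concrete, here is a minimal sketch (my own, not from the article, using Nim instead of chess to keep it short) of the special-purpose structure described above: explicit, built-in rules generate only legal moves, and play is decided by searching the tree of possible moves, so an illegal move can never be produced.

```python
from functools import lru_cache

# Minimal "special-purpose engine" for the game of Nim: the rules are
# hard-coded, so only legal moves are ever generated, and play is decided
# by exhaustively searching the tree of possible moves.

def legal_moves(piles):
    """Every legal move: take 1..n stones from a single pile."""
    for i, n in enumerate(piles):
        for take in range(1, n + 1):
            yield (i, take)

def apply_move(piles, move):
    i, take = move
    return tuple(p - take if j == i else p for j, p in enumerate(piles))

@lru_cache(maxsize=None)
def wins(piles):
    """True if the player to move can force a win (normal play: last stone wins)."""
    return any(not wins(apply_move(piles, m)) for m in legal_moves(piles))

def best_move(piles):
    for m in legal_moves(piles):
        if not wins(apply_move(piles, m)):
            return m                 # a move that leaves the opponent in a lost position
    return next(legal_moves(piles))  # losing position: any legal move will do

print(best_move((3, 4, 5)))  # (0, 2): take 2 from the first pile, leaving (1, 4, 5)
```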

"The open source models are a big deal. They're already capable of doing really impressive things, like transcription, image generation, and natural language-based data transformation, running on commodity hardware. I run several models on the laptop I'm typing this on – a computer that doesn't even have a GPU.
What's more, there are a lot of ways to improve these models within easy reach. The US AI companies that threw these models over the transom after irrevocably licensing them as free software had very little impetus to improve their efficiency by optimizing them. Remember, they're spending money as a way to "prove" that AI has a future.
Shipping a model that runs badly – that needs more data-centers and energy to run – is a way to convince investors that it's doing something really advanced (after all, look how much compute and energy it's consuming!). It's a scaled-up version of a scam that Elon Musk used to pull on investors when he was shopping his startup Zip2 around: he put the regular PC his demo ran on inside a gigantic hollow case that he would wheel in on a dolly, announcing that his code ran on a "supercomputer." Yes, investors really are that dumb.
Even modest efforts at optimization can yield incredible performance gains. Deepseek, the legendary Chinese open source AI model, consumes a fraction of the resources gobbled up by the likes of OpenAI."
https://pluralistic.net/2025/10/16/post-ai-ai/#productive-residue
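As a rough illustration of the "commodity hardware" point (mine, not Doctorow's), running a quantized open-weight model on a CPU-only laptop with the llama-cpp-python bindings looks roughly like this; the GGUF file name is a placeholder for whatever model you have downloaded.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="models/some-open-model.Q4_K_M.gguf",  # placeholder path
    n_ctx=4096,   # context window
    n_threads=8,  # CPU threads; no GPU required
)

# Example of "natural language-based data transformation" done locally.
out = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "Turn this into JSON: name Ada Lovelace, born 1815, field mathematics."}],
)
print(out["choices"][0]["message"]["content"])
```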
James Gosling
This is going to hurt.... And we're all already hurting. When #AI magical thinking took over the tech senior leadership several years ago, I was astonished and saddened: I was seeing good, solid, valuable projects get shitcanned, which is playing out badly. Now my fear is that the impending backlash will damage the great work underneath the toxic AI umbrella. Even #LLMs have an interesting future, but it will take dozens of PhD theses to replace the ocean-boiling brute-force techniques that are dooming the concept. With new technologies, it's always a bad idea to massively ramp up teams too fast. It pours concrete on a structure that needs to be nimble.

Real-life Her, but make it Black Mirror.
I find that @bert_hubert is spot on in his article (as always).
AI/LLMs will be around post-collapse. The tech is useful. The hype and the ethics violations are harmful, but not inherent.
But I'd love to call out this quote in particular:
"[...] selling to clever people is not a trillion dollar opportunity."
Because it captures so much about the world at large.

Interesting (and long) article on the history and evolution of #LLMs, going all the way back to the 1980s: https://gregorygundersen.com/blog/2025/10/01/large-language-models/?utm_source=Mastodon
I haven't read it all yet at the time of this post, because it's *long*. I'll probably read it in dribs and drabs throughout the day. :P