
#datapoisoning


This new data poisoning tool lets artists fight back against generative AI

A new tool lets artists add invisible changes to the pixels in their art before they upload it online so that if it’s scraped into an AI training set, it can cause the resulting model to break in chaotic and unpredictable ways.
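The mechanical half of this idea, making changes to an image's pixels that are too small for the eye to notice, can be shown with a toy sketch. To be clear: the real tool described in the article computes carefully optimized perturbations aimed at a model's feature space, while the snippet below (all names and parameters are my own illustration) only adds bounded random noise, demonstrating how small an "invisible" change can be.

```python
# Toy sketch only: bounded, visually imperceptible pixel noise.
# The actual tool in the article chooses each perturbation
# adversarially; this just illustrates the "invisible changes
# to the pixels" part.
import numpy as np

def perturb(image: np.ndarray, epsilon: float = 2.0, seed: int = 0) -> np.ndarray:
    """Return a copy of `image` with every pixel shifted by at most
    +/- epsilon (in 0-255 units), far below what the eye notices."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)
    return np.clip(image.astype(np.float64) + noise, 0, 255).astype(np.uint8)

img = np.full((8, 8, 3), 128, dtype=np.uint8)   # flat grey test image
poisoned = perturb(img)
max_change = int(np.abs(poisoned.astype(int) - img.astype(int)).max())
```

With an epsilon of 2 out of 255 the change is invisible to a viewer, which is exactly why such perturbations can ride along into scraped training sets unnoticed.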

technologyreview.com/2023/10/2

MIT Technology Review · This new data poisoning tool lets artists fight back against generative AI. By Melissa Heikkilä

Anyone who enjoys the many great information resources on the internet should know:

#KI 🤖 is running riot on the net – #Admins are pushing back so that we humans can browse undisturbed.

Read how admins describe their utterly frustrating yet invisible defensive work against AI – on the blog of @campact:
👉 blog.campact.de/2025/05/ki-ran

🙏 @flberger

❗Don't forget: July 25 is #SysAdminDay

Campact Blog · Admins gegen KI-Bots. AI is practically worshipped by some, while others damn it outright – and explicitly warn against it. In the Campact blog, Friedemann Ebelt compiles a veritable manifesto from system admins that reflects their collective frustration with AI.
The number of questions being asked on StackOverflow is dropping rapidly

Like cutting down a forest without growing new trees, the AI corporations seem to be consuming the natural raw material of their money-making machines faster than it can be replenished.

Natural, human-generated information – whether works of art or conversations about factual matters like how to write software – is the source of training data for Large Language Models (LLMs), which is what people are calling "artificial intelligence" nowadays. LLM shops spend untold millions on curating the information they harvest to ensure this data is strictly human-generated and free of LLM-generated content. If they do not, the non-factual "hallucinations" (fictional content) that these LLMs generate may come to dominate the factual, human-made training data, making the answers the LLMs generate ever more prone to hallucination.

The Internet is already so full of LLM-generated content that it has become a major problem for these companies. New LLMs are more and more often trained on fictional LLM-generated content that passes as factual and human-made, which is rapidly making LLMs less accurate as time goes on – a vicious downward spiral.
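The downward spiral has a simple statistical caricature: fit a model to data, train the next generation on samples drawn from that model, and repeat. The sketch below is my own illustration (not from the post or any published code); the "model" is just a Gaussian, and with small samples per generation the fitted spread collapses toward zero, an analogue of models losing the diversity of the original human data when trained on their own output.

```python
# Toy caricature of model collapse: each "generation" is a Gaussian
# fitted to a small sample drawn from the previous generation's model.
import random
import statistics

def next_generation(mu: float, sigma: float, n: int, rng: random.Random):
    """Fit mean/stddev to n samples from the previous generation."""
    samples = [rng.gauss(mu, sigma) for _ in range(n)]
    return statistics.fmean(samples), statistics.pstdev(samples)

rng = random.Random(42)
mu, sigma = 0.0, 1.0            # generation 0: ground-truth "human" data
for generation in range(50):
    mu, sigma = next_generation(mu, sigma, n=5, rng=rng)
# With only n=5 samples per generation, the fitted sigma drifts
# toward zero: later generations retain ever less of the original
# distribution's diversity.
```

The small sample size exaggerates the effect for illustration; the qualitative point is that each generation can only learn what survived the previous one's sampling.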

But it gets worse. Thanks to all of the AI hype, everyone is asking questions of LLMs nowadays and not of other humans. So the sources of these LLMs' training data, websites like StackOverflow and Reddit, are no longer recording as many questions from humans to other humans. If that human-made information disappears, so does the source of natural resources that makes it possible to build these LLMs.

Even worse still, if there are any new innovations in science or technology, unless humans are asking questions of the human innovators, the LLMs can't learn about these innovations either. Everyone will be stuck in this churning maelstrom of AI "slop," asking only questions that have been asked by millions of others before, and never receiving any true or accurate answers about new technology. And nobody, neither the humans nor the machines, will be learning anything new at all, while the LLMs become more and more prone to hallucination with each new generation of AI released to the public.

I think we are finally starting to see the real limitations of this LLM technology come into clear view; the rate at which it is improving is simply not sustainable. Clearly, pouring more and more money and energy into scaling up these LLM projects will not lead to increased return on investment, and will certainly not lead to the "singularity" in which machine intelligence surpasses human intelligence. So how long before the masses finally realize they have been sold a bill of goods by these AI corporations?

The Pragmatic Engineer · Stack Overflow is almost dead. Today, Stack Overflow has almost as few questions asked per month as when it launched back in 2009. A recap of its slow, then rapid, downfall.
#tech #AI #Slop

Hi #Admins 👋,

Can you give me quotes that explain your fight against #AIScraping? I'm looking for (verbal) images, metaphors, comparisons, etc. that explain to non-techies what's going on. (efforts, goals, resources...)

I intend to publish your quotes in a text on @campact 's blog¹ (DE, German NGO).

The quotes should make your work 🙏 visible in a generally understandable way.

¹ blog.campact.de/author/friedem

Campact Blog · Friedemann Ebelt. Friedemann Ebelt campaigns for digital fundamental rights. In the Campact blog he writes about how digitalization can be made fair, free, and sustainable. He studied ethnology and communication sciences and is interested in everything that happens at the intersection of politics, technology, and society. His preliminary conclusion: we need to digitalize better!
heise+ | Security: Protection against data poisoning and other attacks on AI systems

Faulty data can mislead machine-learning systems into consequential errors. A practical example shows how this can be prevented.
heise online · Security: Protection against data poisoning and other attacks on AI systems. By Mirko Ross
Replied in thread

@mhoye The thought occurs: #chaffing / #DataPoisoning.

If we're going to live in a world in which every utterance and action is tracked, issue and utter as much as possible.

Wire up a speech-aware-and-capable GPT-3 to your phone and have it handle telemarketers, scammers, and political calls, simply to tie up their time.

Create positive-emotive socmed bots to #pumpUp your #socialcredit score.

Unleash bots on your political opposition's media channels. Have them call in to talk radio, and #ZoomBomb calls and conferences.

Create plausible deniability. Post selfies from a dozen, or a thousand, places you're not.

Create #DigitalSmog to choke the #FAANGs.

Fight fire with fire.