LOL apparently one of my daughter's projects in school was to fool an AI/machine learning engine to think a photo of one of the pet rabbits here was a coyote. (which apparently worked great). #cybersecurity #datapoisoning
This new data poisoning tool lets artists fight back against generative AI
A new tool lets artists add invisible changes to the pixels in their art before they upload it online so that if it’s scraped into an AI training set, it can cause the resulting model to break in chaotic and unpredictable ways.
https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai/
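Not the actual technique from the article (tools like Nightshade optimize their perturbations against a model's feature extractor), but a minimal stdlib sketch of the general idea of "invisible changes to the pixels": keep every per-pixel edit below a tiny amplitude, so the art looks unchanged to a human while the file a scraper ingests is no longer the original.

```python
import random

random.seed(42)

# A stand-in 8x8 grayscale "artwork" (pixel values 0-255).
image = [[random.randint(0, 255) for _ in range(8)] for _ in range(8)]

EPS = 2  # max per-pixel change, far below what the eye notices

# Craft a structured perturbation. A real tool optimizes this pattern
# against a model; here a fixed checkerboard stands in for that
# optimized signal, purely for illustration.
poisoned = [[max(0, min(255, px + (EPS if (r + c) % 2 == 0 else -EPS)))
             for c, px in enumerate(row)]
            for r, row in enumerate(image)]

# Verify the edit stays imperceptibly small at every pixel.
max_delta = max(abs(a - b) for ra, rb in zip(image, poisoned)
                for a, b in zip(ra, rb))
print("max per-pixel change:", max_delta)
```

The interesting part is the asymmetry: a change bounded by a couple of gray levels is invisible to people, but a perturbation shaped against a model's features can still push training far off course.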
Anyone who enjoys the many great information resources on the Internet should know:
#KI (AI) is running riot on the net – #Admins are pushing back so that we humans can browse undisturbed.
Read how admins describe their absolutely frustrating yet invisible defensive work against AI – on @campact's blog: https://blog.campact.de/2025/05/ki-randaliert-im-netz-admins-halten-dagegen/
Don't forget: July 25 is #SysAdminDay
Like cutting down a forest without growing new trees, the AI corporations seem to be consuming the natural raw material of their money-making machines faster than it can be replenished.
Natural, human-generated information, whether works of art or conversations about factual matters like how to write software, is the source of training data for Large Language Models (LLMs), which is what people are calling "artificial intelligence" nowadays. LLM shops spend untold millions on curating the information they harvest to ensure this data is strictly human-generated and free of other LLM-generated content. If they fail to do this, the non-factual "hallucinations" (fictional content) that these LLMs generate may come to dominate the factual human-made training data, making the answers the LLMs generate ever more prone to hallucination.
The Internet is already so full of LLM-generated content that it has become a major problem for these companies. New LLMs are more and more often trained on fictional LLM-generated content that passes as factual and human-made, which is rapidly making LLMs less accurate as time goes on: a vicious downward spiral.
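A toy simulation makes this spiral concrete (my own illustrative sketch, not drawn from any specific study): each "generation" fits a simple statistical model to the previous generation's output and, like a model decoding toward its most likely outputs, underrepresents the rare cases. The diversity of the data collapses within a few generations.

```python
import random
import statistics

random.seed(0)

n = 500
# Generation 0: "human-made" data, drawn from a rich distribution.
data = [random.gauss(0.0, 1.0) for _ in range(n)]

for gen in range(1, 8):
    # The next "model" can only learn what the previous output contains.
    mu, sigma = statistics.fmean(data), statistics.stdev(data)
    # Like an LLM favoring its most probable tokens, it drops the tails:
    # samples are clipped to +/- 1.5 standard deviations.
    data = [min(max(random.gauss(mu, sigma), mu - 1.5 * sigma),
                mu + 1.5 * sigma) for _ in range(n)]
    print(f"generation {gen}: spread = {statistics.stdev(data):.3f}")
```

Each pass shrinks the spread by a roughly constant factor, so after a handful of generations the "data" clusters tightly around the mean, a statistical analogue of every answer converging on the same bland, increasingly unreliable output.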
But it gets worse. Thanks to all the AI hype, everyone is asking questions of LLMs nowadays instead of other humans. So the sources of these LLMs' training data, websites like StackOverflow and Reddit, are no longer recording as many questions from humans to other humans. If that human-made information disappears, so does the natural resource that makes it possible to build these LLMs.
Even worse still, when there are new innovations in science or technology, unless humans are asking questions of the human innovators, the LLMs can't learn about those innovations either. Everyone will be stuck in this churning maelstrom of AI "slop," asking only questions that have been asked by millions of others before, and never receiving any true or accurate answers about new technology. And nobody, neither the humans nor the machines, will be learning anything new at all, while the LLMs become more and more prone to hallucinations with each new generation of AI released to the public.
I think we are finally starting to see the real limitations of this LLM technology come into clear view; the rate at which it is improving is simply not sustainable. Clearly, pouring more and more money and energy into scaling up these LLM projects will not lead to increased return on investment, and will definitely not lead to the "singularity" in which machine intelligence surpasses human intelligence. So how long before the masses finally realize they have been sold a bill of goods by these AI corporations?
Generating content with AI to counter the excess of AI searches. What could go wrong?
On the blog: Blocking AI searches by (also) polluting data
Hi #Admins,
Can you give me quotes that explain your fight against #AIScraping? I'm looking for (verbal) images, metaphors, comparisons, etc. that explain to non-techies what's going on (efforts, goals, resources...).
I intend to publish your quotes in a text on @campact 's blog¹ (DE, German NGO).
The quotes should make your work visible in a generally understandable way.
Does anyone know of an existing open source project working on AI model poisoning or style cloaking, in the vein of #glaze and #nightshade?
I'm interested in this tech but they both seem to be proprietary, and I'd like to see if there is any work being done on the open source side of things.
Now this is interesting!
"This new data poisoning tool lets artists fight back against generative AI"
https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai
This new data poisoning tool lets artists fight back against generative AI (7 min): an interesting way to fight back against companies that scrape art online. I wonder how long it will be before AI companies counter this. Sounds like an eternal battle.
https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai/
#GenerativeAI #DataPoisoning
"Data poisoning" is the sand in the gears of artificial intelligence - Edition Zukunft - derStandard.at › https://www.derstandard.at/story/3000000192919/data-poisoning-ist-der-sand-im-getriebe-der-kuenstlichen-intelligenz #KI #DataPoisoning
Hey, I'm putting together a practical guide on personal data pollution. You can find it at the link below.
I'd love suggestions and feedback on what's there—Issues and PRs welcome!
---
#Data #Privacy #DataPrivacy #DataPollution #DataPoisoning #Misinformation #Anonymity
@mhoye The thought occurs: #chaffing / #DataPoisoning.
If we're going to live in a world in which every utterance and action is tracked, issue and utter as much as possible.
Wire up a speech-aware-and-capable GPT-3 to your phone, have it handle telemarketers, scammers, and political calls. Simply to tie up their time.
Create positive-emotive socmed bots to #pumpUp your #socialcredit score.
Unleash bots on your political opposition's media channels. Have them call in to talk radio, and #ZoomBomb calls and conferences.
Create plausible deniability. Post selfies from a dozen, or a thousand, places you're not.
Create #DigitalSmog to choke the #FAANG companies.
Fight fire with fire.