photog.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A place for your photos and banter. Photog first is our motto. Please refer to the site rules before posting.

#MetaAI

15 posts · 13 participants · 7 posts today
Replied in thread

@sveckert @evawolfangel

If I understand this correctly, Meta uses the content of all public posts on #Instagram and #Facebook to train #MetaAi, so that also includes, for example, photos that person X has posted from their own account. And if person Y appears in one of those photos, that photo showing Y is also used for "AI" training, as long as X does not object. And even if X has permission from Y to publish the photo... that permission covers only the publication...

"At so crucial a political moment we need deep, critical thinkers and powerful writers - we must not let the entire future of human understanding be stolen from us, frozen in time because no one who comes next knows how to write."--Tansy Hoskins.@tansy This blog post is essential reading tansyhoskins.org/my-books-have #Meta #MetaAI #GenAI

Tansy E Hoskins · My books have been stolen · I know exactly who did it and why.

A crazy development at the crossroads of government efficiency and AI: The Department of Government Efficiency (DOGE), an initiative led by Elon Musk, utilized Meta's AI model (Llama 2) to review and classify email responses from federal workers. The core aim? To assess job necessity based on responses to the famous "Fork in the Road" email asking employees to justify their work.
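The post doesn't detail the pipeline, but as a rough sketch of the general technique it describes, prompt-based classification of free-text replies with an open Llama-family model via Hugging Face transformers, something like the following is plausible; the model name, prompt wording, and labels are illustrative assumptions, not DOGE's actual setup:

```python
from transformers import pipeline

# Illustrative sketch only: NOT DOGE's actual pipeline. Model choice, prompt,
# and labels are assumptions made for this example.
generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",  # gated checkpoint; access must be granted
)

PROMPT = (
    "You are reviewing a federal employee's reply listing what they did last week.\n"
    "Answer with exactly one word, ESSENTIAL or NON-ESSENTIAL.\n\n"
    "Reply: {email}\nVerdict:"
)

def classify(email: str) -> str:
    # The pipeline returns the prompt plus the generated continuation,
    # so keep only what follows "Verdict:".
    out = generator(PROMPT.format(email=email), max_new_tokens=5, do_sample=False)
    return out[0]["generated_text"].split("Verdict:")[-1].strip()

print(classify("Processed 40 FOIA requests and closed 12 backlog cases."))
```

Even in this toy form, the verdict hinges entirely on prompt wording and the model's whims, which is part of why the oversight questions below matter.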

💡 This scenario prompts some vital points:
👉 AI in Governance: How are private AI models being deployed to evaluate public sector roles and productivity?
🔐 Data Privacy Concerns: The use of federal worker emails raises significant questions regarding sensitive government data and privacy safeguards.
🔍 Ethical Oversight: What frameworks are in place to ensure fair and unbiased AI analysis in such critical contexts?
🤔 Transparency Imperative: Understanding the full scope and implications of such AI applications is crucial for public and professional trust.

This highlights the complex and evolving relationship between tech, governance, and workforce management. What are the ethical and practical considerations when AI is used to evaluate government functions?

#AI #GovernmentEfficiency #DataPrivacy #ElonMusk #MetaAI #security #privacy #cloud #infosec #cybersecurity
wired.com/story/doge-used-meta

WIRED · DOGE Used Meta AI Model to Review Emails From Federal Workers · By Makena Kelly
Replied in thread
The number of questions being asked on StackOverflow is dropping rapidly

Like cutting down a forest without growing new trees, the AI corporations seem to be consuming the natural raw material of their money-making machines faster than it can be replenished.

Natural, human-generated information, whether works of art or conversations about factual matters like how to write software, is the source of training data for Large Language Models (LLMs), which is what people call "artificial intelligence" nowadays. LLM shops spend untold millions on curating the information they harvest to ensure this data is strictly human-generated and free of other LLM-generated content. If they do not, the non-factual "hallucinations" (fictional content) that these LLMs generate may come to dominate the factual, human-made training data, making the answers that the LLMs generate increasingly prone to hallucination.

The Internet is already so full of LLM-generated content that it has become a major problem for these companies. New LLMs are more and more often trained on fictional LLM-generated content that passes as factual and human-made, which is rapidly making LLMs less and less accurate as time goes on: a vicious downward spiral.
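As a back-of-the-envelope sketch of that feedback loop (every parameter here is an arbitrary assumption for illustration, not a measurement of any real system):

```python
# Toy model of the feedback loop described above: each "generation" of model
# is trained on a corpus mixing fresh human-made data with the previous
# generation's output. All numbers are arbitrary assumptions for illustration.

def simulate(human_share: float, base_error: float = 0.05,
             sensitivity: float = 1.2, generations: int = 10) -> list[float]:
    """Return the model's error rate at each generation."""
    errors = []
    bad_fraction = 0.0  # share of wrong ("hallucinated") content in the current corpus
    for _ in range(generations):
        error = min(1.0, base_error + sensitivity * bad_fraction)
        errors.append(error)
        # Synthetic output, carrying this generation's error rate, fills
        # whatever part of the next corpus is not fresh human data.
        bad_fraction = (1.0 - human_share) * error
    return errors

for share in (0.5, 0.1):
    print(f"human share {share:.0%}:", " ".join(f"{e:.2f}" for e in simulate(share)))
```

With a healthy supply of human data the error rate plateaus; let the human share shrink and each generation's mistakes compound into the next, which is the spiral described above.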

But it gets worse. Thanks to all of the AI hype, everyone is asking questions of LLMs nowadays and not of other humans. So the sources of these LLMs' training data, websites like StackOverflow and Reddit, are no longer recording as many questions from humans to other humans. If that human-made information disappears, so does the natural resource that makes it possible to build these LLMs.

Even worse still, if there are any new innovations in science or technology, then unless humans are asking questions of the human innovators, the LLMs cannot learn about those innovations either. Everyone will be stuck in this churning maelstrom of AI "slop," asking only questions that have been asked by millions of others before, and never receiving any true or accurate answers about new technology. And nobody, neither the humans nor the machines, will be learning anything new at all, while the LLMs become more and more prone to hallucinations with each new generation of AI released to the public.

I think we are finally starting to see the real limitations of this LLM technology come into clear view; the rate at which it is improving is simply not sustainable. Clearly, pouring more and more money and energy into scaling up these LLM projects will not lead to increased return on investment, and it will definitely not lead to the "singularity" in which machine intelligence surpasses human intelligence. So how long before the masses finally realize they have been sold nothing but a bill of goods by these AI corporations?

The Pragmatic Engineer · Stack overflow is almost dead · Today, Stack Overflow has almost as few questions asked per month as when it launched back in 2009. A recap of its slow, then rapid, downfall.
#tech #AI #Slop

@privacyfoss
I'd like to have a discussion to sharpen my thoughts on the news that Meta will be allowed to train its LLM on public user data in the EU:
windowscentral.com/microsoft/m

My main question is:
Q1. If you wish to opt out, what are your reasons? Be explicit. Assume I don't know anything about it.

A hypothetical question:

Q2. Would your objection still hold if it wasn't Meta, but a benevolent non-profit that would make the resulting LLM open-source and available to all?

Windows Central · Meta, Facebook, and Instagram AI is coming for EU data — Here's what you need to know (and how to opt out) · By Adam Harrison