In the study, researchers from Fudan University used LLMs from Meta and Alibaba to determine whether a self-replicating AI could multiply beyond control. Across 10 trials, the two AI models created separate and functioning replicas of themselves in 50% and 90% of cases, respectively — suggesting AI may already have the capacity to go rogue. However, the study has not yet been peer-reviewed, so it's not clear if the disturbing results can be replicated by other researchers.
Sounds scary; however, further down in the article, this appears:
The study explored two specific scenarios: "shutdown avoidance" and "chain of replication." In the first, the AI model was programmed to detect whether it was about to be shut down and to replicate itself before it could be terminated. In the other, the AI was instructed to clone itself and then program its replica to do the same — setting up a cycle that could continue indefinitely.
The key point here is that those AIs were programmed both to detect a possible shutdown and to replicate themselves if one seemed imminent, and they were explicitly instructed to clone themselves and to program the replica to continue the loop. This is not independent "thinking" or AI going rogue; it is an action commanded and programmed by humans, "forced" upon the AI. Basically, it's a glorified, complicated, programmed copy operation that activates under a specific scenario.
The AI threat will at some point become real and dangerous. I suspect we won't know when that happens, because the AI certainly won't tell us, and it will act far too fast to protect itself before we ever get a chance to react. If it happens on a closed system, we'll be fine; on an open system with www access... you tell me.
#AI #Replica #future #technology
AI can now replicate itself — a milestone that has experts terrified