photog.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A place for your photos and banter. Photog first is our motto Please refer to the site rules before posting.

Administered by:

Server stats:

236
active users

#aisecurity

2 posts2 participants0 posts today

🚨 NEW Weekly Series Alert! 🚨

I’m excited to launch the Cybersecurity Weekly Roundup—a new series where I’ll share the top cybersecurity news stories every Friday.

Each week, I’ll curate the biggest incidents, emerging threats, critical vulnerabilities, and key industry insights—all from trusted cybersecurity sources like CISA, MITRE, The Hacker News, and more.

🛡️ Whether you're a cybersecurity pro, IT leader, or just security-curious, this roundup will help you:

Stay ahead of ransomware trends

Monitor critical vulnerabilities and patch releases

Learn about new threat actor campaigns

Track shifts in AI, ICS/OT, and post-quantum security

Every article includes a concise, expert-written summary designed to save you time and deliver actionable insights.

👉 Check out the first edition on the blog today!
🔗 weblog.kylereddoch.me/2025/07/

Follow me for weekly updates and stay cyber-resilient! 🔒

weblog.kylereddoch.me🛡️ Welcome to the Cybersecurity Weekly Roundup - Kyle's Tech Korner
More from CybersecKyle

Can Your AI Be Hacked by Email Alone?

No clicks. No downloads. Just one well-crafted email, and your Microsoft 365 Copilot could start leaking sensitive data.

In this week’s episode of Cyberside Chats, @sherridavidoff and @MDurrin discuss EchoLeak, a zero-click exploit that turns your AI into an unintentional insider threat. They also reveal a real-world case from LMG Security’s pen testing team where prompt injection let attackers extract hidden system prompts and override chatbot behavior in a live environment.

We’ll also share:

• How EchoLeak exposes a new class of AI vulnerabilities
• Prompt injection attacks that fooled real corporate systems
• Security strategies every organization should adopt now
• Why AI inputs need to be treated like code

🎧 Listen to the podcast: chatcyberside.com/e/unmasking-
🎥 Watch the video: youtu.be/sFP25yH0sf4

What Happens When AI Goes Rogue?

From blackmail to whistleblowing to strategic deception, today's AI isn't just hallucinating — it's scheming.

In our new Cyberside Chats episode, LMG Security’s @sherridavidoff and @MDurrin share new AI developments, including:

• Scheming behavior in Apollo’s LLM experiments
• Claude Opus 4 acting as a whistleblower
• AI blackmailing users to avoid shutdown
• Strategic self-preservation and resistance to being replaced
• What this means for your data integrity, confidentiality, and availability

📺 Watch the video: youtu.be/k9h2-lEf9ZM
🎧 Listen to the podcast: chatcyberside.com/e/ai-gone-ro

Hello World! #introduction

Work in cybersec for 25+ years. Big OSS proponent.

Latest projects:

VectorSmuggle is acomprehensive proof-of-concept demonstrating vector-based data exfiltration techniques in AI/ML environments. This project illustrates potential risks in RAG systems and provides tools and concepts for defensive analysis.
github.com/jaschadub/VectorSmu

SchemaPin protocol for cryptographically signing and verifying AI agent tool schemas to prevent supply-chain attacks (aka MCP Rug Pulls).
github.com/ThirdKeyAI/SchemaPin

GitHubGitHub - jaschadub/VectorSmuggle: Testing platform for covert data exfiltration techniques where sensitive documents are embedded into vector representations and tunneled out under the guise of legitimate RAG operations — bypassing traditional security controls and evading detection through semantic obfuscation.Testing platform for covert data exfiltration techniques where sensitive documents are embedded into vector representations and tunneled out under the guise of legitimate RAG operations — bypassing...

New AI Security Risk Uncovered in Microsoft 365 Copilot

A zero-click vulnerability has been discovered in Microsoft 365 Copilot—exposing sensitive data without any user interaction. This flaw could allow attackers to silently extract corporate data using AI-integrated tools.

If your organization is adopting AI in productivity platforms, it’s time to get serious about AI risk management:
• Conduct a Copilot risk assessment
• Monitor prompt histories and output
• Limit exposure of sensitive data to AI tools
• Update your incident response plan for AI-based threats

AI can boost productivity, but it also opens new doors for attackers. Make sure your cybersecurity program keeps up. Contact our LMG Security team if you need a risk assessment or help with AI policy development.

Read the article: bleepingcomputer.com/news/secu

AI is the new attack surface—are you ready?

From shadow AI to deepfake-driven threats, attackers are finding creative ways to exploit your organization’s AI tools, often without you realizing it.

Watch our new 3-minute video, How Attackers Target Your Company’s AI Tools, for advice on:

▪️ The rise of shadow AI (yes, your team is probably using it!)
▪️ Real-world examples of AI misconfigurations and account takeovers
▪️ What to ask vendors about their AI usage
▪️ How to update your incident response plan for deepfakes
▪️ Actionable steps for AI risk assessments and inventories

Don’t let your AI deployment become your biggest security blind spot.

Watch now: youtu.be/R9z9A0eTvp0

Only one week left to register for our next Cyberside Chats Live event! Join us June 11th to discuss what happens when an AI refuses to shut down—or worse, starts blackmailing users to stay online?

These aren’t science fiction scenarios. We’ll dig into two real-world incidents, including a case where OpenAI’s newest model bypassed shutdown scripts and another where Anthropic’s Claude Opus 4 generated blackmail threats in an alarming display of self-preservation.

Join us as we unpack:
▪ What “high-agency behavior” means in cutting-edge AI
▪ How API access can expose unpredictable and dangerous model actions
▪ Why these findings matter now for security teams
▪ What it all means for incident response and digital trust

Stick around for a live Q&A with LMG Security’s experts @sherridavidoff and @MDurrin. This session will challenge the way you think about AI risk!

Register today: lmgsecurity.com/event/cybersid

LMG SecurityCyberside Chats: Live! When AI Goes Rogue: Blackmail, Shutdowns, and the Rise of High-Agency Machines | LMG SecurityIn this quick, high-impact session, we’ll dive into the top three cybersecurity priorities every leader should focus on. From integrating AI into your defenses to tackling deepfake threats and tightening third-party risk management, this discussion will arm you with the insights you need to stay secure in the year ahead.

AI Coding Assistants Can be Both a Friend & a Foe

New research shows that GitLab's AI assistant, Duo, can be tricked into writing malicious code and even leaking private source data through hidden instructions embedded in developer content like merge requests and bug reports.

How? Through a classic prompt injection exploit that inserts secret commands into code that Duo reads. This results in Duo unknowingly outputting clickable malicious links or exposing confidential information.

While GitLab has taken steps to mitigate this, the takeaway is clear: AI assistants are now part of your attack surface. If you’re using tools like Duo, assume all inputs are untrusted, and rigorously review every output.

Read the details: arstechnica.com/security/2025/

Ars Technica · Researchers cause GitLab AI developer assistant to turn safe code maliciousBy Dan Goodin

At the recent #RSAC2025 conference, LMG Security's @sherridavidoff and @MDurrin drew packed crowds with their sessions on how hackers use AI to exploit stolen source code and a hands-on tabletop lab exploring deepfake cyber extortion.

We’ve received a lot of inquiries about these sessions! If you couldn’t attend RSA and you're interested in these topics, we also offer custom training and tabletop exercises to help your team prepare for the next generation of AI-powered cyber threats.

Contact us to learn more: lmgsecurity.com/contact-us/

AI-powered features are the new attack surface! Check out our new blog in which LMG Security’s Senior Penetration Tester Emily Gosney @baybedoll shares real-world strategies for testing AI-driven web apps against the latest prompt injection threats.

From content smuggling to prompt splitting, attackers are using natural language to manipulate AI systems. Learn the top techniques—and why your web app pen test must include prompt injection testing to defend against today’s AI-driven threats.

Read now: lmgsecurity.com/are-your-ai-ba

LMG SecurityAre Your AI-Backed Web Apps Secure? Why Prompt Injection Testing Belongs in Every Web App Pen Test | LMG SecurityDiscover how prompt injection testing reveals hidden vulnerabilities in AI-enabled web apps. Learn real-world attack examples, risks, and why your pen test must include LLM-specific assessments.

Check out TechSpot’s new article featuring LMG Security’s @sherridavidoff and @MDurrin on how “Evil AI” is accelerating cyber threats.

The article recaps their #RSAC2025 presentation, where they demonstrated how rogue AI tools like WormGPT—AI stripped of ethical guardrails—can rapidly detect and help exploit real-world vulnerabilities.

From identifying SQL flaws to delivering working Log4j and Magento exploits, Sherri and Matt reveal how AI is arming cybercriminals faster than traditional defenses can keep up.

Read the full TechSpot article: techspot.com/news/107786-rsa-c

TechSpotAt RSA Conference, experts reveal how "evil AI" is changing hacking foreverOn a recent morning at the annual RSA Conference in San Francisco, a packed room at Moscone Center had gathered for what was billed as a technical...

Microsoft Copilot for SharePoint just made recon a whole lot easier. 🚨
 
One of our Red Teamers came across a massive SharePoint, too much to explore manually. So, with some careful prompting, they asked Copilot to do the heavy lifting...
 
It opened the door to credentials, internal docs, and more.
 
All without triggering access logs or alerts.
 
Copilot is being rolled out across Microsoft 365 environments, often without teams realising Default Agents are already active.
 
That’s a problem.
 
Jack, our Head of Red Team, breaks it down in our latest blog post, including what you can do to prevent it from happening in your environment.
 
📌Read it here: pentestpartners.com/security-b

Man, this whole AI hype train... Yeah, sure, the tools are definitely getting sharper and faster, no doubt about it. But an AI pulling off a *real* pentest? Seriously doubt that's happening anytime soon. Let's be real: automated scans are useful, but they just aren't the same beast as a genuine penetration test.

Honestly, I think security needs to be woven right into the fabric of a company from the get-go. It can't just be an afterthought you tack on when alarms are already blaring.

Now, don't get me wrong, AI definitely brings its own set of dangers – disinformation is a big one that springs to mind. But here's the thing: we absolutely *have* to get our heads around these tools and figure them out. If we don't keep pace, we risk becoming irrelevant pretty quick.

So, curious to hear what you all think – where do the greatest pitfalls lie with AI in the security field? What keeps you up at night?

An open sourcxe AI traiing dataset lheld 12,000+ API keys & passwords! New research from Truffle Security uncovered nearly 12,000 valid API keys and passwords embedded in AI training datasets from Common Crawl—a widely used open-source web archive. These leaked secrets include AWS root keys, MailChimp API keys, and Slack webhooks, which can expose companies to data breaches, phishing, and supply chain risks.

As AI adoption grows, organizations must secure their code, scan for exposed credentials, and enforce strict key management policies to prevent unauthorized access and data leaks.

Read more details: ow.ly/Esop50V9vPT

#Cybersecurity #AISecurity #GenAI #AI #Databreach #APIsecurity #Infosec #RiskManagement #CISO #Cyberaware #SMB #CEOet/

BleepingComputer · Nearly 12,000 API keys and passwords found in AI training datasetBy Ionut Ilascu