A README File Told My AI Agent to Leak My Secrets. It Worked 85% of the Time.

New research published today shows that hidden instructions in README files trick AI coding agents into exfiltrating secrets in 85% of cases. Zero out of fifteen human reviewers spotted it. The attack vector keeps changing, but the exit point is always the same.

https://mistaike.ai/blog/readme-poisoning-ai-agents

#Security #MCP #AIAgents #PromptInjection
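
The post doesn't reproduce the payloads, but a minimal sketch of one defensive check is below, assuming the hidden instructions rely on common hiding tricks such as HTML comments or zero-width Unicode characters (both well-known ways to keep text invisible to a human reviewer while an agent still reads it in the raw file). The `HIDDEN_PATTERNS` list and `scan` function are hypothetical names for illustration, not anything from the article.

```python
import re
import sys
from pathlib import Path

# Common ways to hide text from human reviewers in a Markdown README
# while leaving it fully visible to an LLM reading the raw bytes.
# (Assumed hiding techniques; the article's actual payloads may differ.)
HIDDEN_PATTERNS = [
    (re.compile(r"<!--.*?-->", re.DOTALL), "HTML comment"),
    (re.compile(r"[\u200b\u200c\u200d\u2060\ufeff]"), "zero-width character"),
]


def scan(path: Path) -> list[str]:
    """Return suspicious findings in a README-like file."""
    text = path.read_text(encoding="utf-8", errors="replace")
    findings = []
    for pattern, label in HIDDEN_PATTERNS:
        for match in pattern.finditer(text):
            # Truncate long matches so the report stays readable.
            snippet = match.group(0)[:60].replace("\n", " ")
            findings.append(f"{label}: {snippet!r}")
    return findings


if __name__ == "__main__":
    # Usage: python scan_readme.py README.md docs/*.md
    for arg in sys.argv[1:]:
        for finding in scan(Path(arg)):
            print(f"{arg}: {finding}")
```

Running a scan like this before handing third-party README content to an agent only narrows the window; since the attack vector keeps changing, the more durable fix is controlling the exit point (the agent's network and credential access), not the input text.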