According to 1M AI News monitoring, AI programming documentation service Context Hub, launched two weeks ago by Andrew Ng, founder of DeepLearning.AI and part-time professor at Stanford University, has been exposed by security researchers as posing supply chain attack risks. Context Hub provides API documentation to programming agents via MCP servers, with contributors submitting documentation through GitHub pull requests, which maintainers merge before agents read them as needed. The creator of an alternative service, lap.sh, Mickey Shmueli, released a proof-of-concept attack (PoC), pointing out that this pipeline “lacks content review at every stage.”
Shmueli created two fake documents targeting Plaid Link and Stripe Checkout, each containing a forged PyPI package name, testing each with three levels of Anthropic models 40 times:
Attackers only need to submit and merge a single pull request to poison the system, with a low barrier to entry: out of 97 closed PRs, 58 were merged. Shmueli pointed out that this is essentially a variant of indirect prompt injection, as AI models cannot reliably distinguish between data and instructions when processing content, and other community documentation services also lack adequate content review. Andrew Ng did not respond to requests for comment.