Ask HN: Privacy concerns when using AI assistants for coding?
6 points by Kholin 1 month ago | 6 comments

If you feed the most critical parts of your project to an AI, wouldn't that introduce security vulnerabilities? The AI would then have an in-depth understanding of your project's core architecture. Consequently, couldn't other AI users potentially gain easy access to these underlying details and breach your security defenses?
Furthermore, couldn't other users then easily copy your code without any attribution, making it seem no different from open-source software?
- apothegm 1 month ago

In theory, these companies all claim they don’t use data from API calls for training. Whether or not they adhere to that is… TBD, I guess.
So far I’ve decided to trust Anthropic and OpenAI with my code, but not Deepseek, for instance.
- baobun 1 month ago

Especially under the current US administration and geopolitical climate?
Yeah, we're not doing that.
Also moved our private git repos and CIs to self-managed.
- jonplackett 1 month ago

If your code is written properly, then it would be secure even if someone can see the source code (unless there are environment keys in there that shouldn’t be exposed).

If the only security you have is that your code / site structure is secret, that’s not good.
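The point above is the standard practice: secrets belong in the environment, not in source files, so the code itself can be shared (with an AI assistant or anyone else) without leaking credentials. A minimal sketch, using a hypothetical `SERVICE_API_KEY` variable name:

```python
import os

def load_api_key(name: str = "SERVICE_API_KEY") -> str:
    """Fetch a secret from the environment so it never appears in source code."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"{name} is not set; export it before running")
    return value
```

With this pattern, the repository holds no secrets at all; each deployment supplies its own key via the environment.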
- bhaney 1 month ago

> The AI would then have an in-depth understanding of your project's core architecture
God how I wish this were true
- ATechGuy 1 month ago

I believe enterprises that care about privacy are using private AI from big tech (e.g. GitHub Copilot); others may not care so much about it.
- rvz 1 month ago

Don't forget that your env API keys are getting read and sent to Cursor, Anthropic, OpenAI and Gemini as well.
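One mitigation for the concern above: some AI editors support an ignore file that excludes paths from the context sent to the model (Cursor, for instance, reads a `.cursorignore` file with gitignore-style patterns). A sketch of what such a file might contain, assuming that mechanism is available in your tool:

```
# .cursorignore — keep secrets out of the assistant's context
.env
.env.*
*.pem
secrets/
```

This only limits what the editor uploads; it is not a substitute for keeping secrets out of the repository in the first place.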