Guides to LLMs: Your data, your terms

LLMs like ChatGPT, Claude, and open source models can make use of your data in ways that aren't always clear. These guides help you adjust key settings to protect your privacy, manage access, and delete your chats.

LLMs and the chatbots built on them, such as ChatGPT, Claude, and a growing range of open source models, are becoming a regular part of how we work, learn, and communicate. But using them raises questions about privacy, data use, and control over your interactions.

These guides walk you through key settings and steps you can take to strengthen your privacy and security when using LLMs. These include:

  • protecting access to your account (by enabling two-factor authentication),
  • checking whether your chats are private (by managing if and how your chats are shared),
  • limiting how these 'assistants' and 'agents' can access your other accounts (by reviewing the access they have to your data in third-party apps),
  • stopping these firms' models from learning about you (by preventing your conversations from being used to train models), and
  • limiting access to your prior interactions (by deleting your chat history).

These tools are often designed for easy access, not user control. The settings you need may be hard to find or even missing entirely. Where they do exist, these guides will help you locate and understand them.

We believe it's wrong that companies treat protecting privacy as a burden that falls on users alone. That is why we push for stronger protections by default, regulation of corporate conduct, and limits on government access. These guides offer some immediate steps you can take until platforms do better.