Disclaimer: I am not a lawyer, and this document does not constitute legal advice. It is provided for informational and educational purposes only, based on publicly available information. Always consult qualified legal counsel before making decisions related to privacy, security, or contractual obligations.
This AI Tool Checklist was created to help users, leaders, and procurement teams evaluate AI tools responsibly. It is vendor-neutral and designed to surface risk, not to promote or discourage any specific product.
Data Collection & Usage
- What data does the tool collect by default (content, metadata, usage patterns, device information, communications, etc.)?
- Is data collection limited to what is strictly necessary to operate the service?
- Does the contract clearly define what “Usage Data” includes?
- Is anonymization clearly defined (e.g., what is removed and whether it is reversible), or merely asserted without a definition?
AI Training & Machine Learning
- Can user data be used to train internal AI or machine-learning systems?
- Is such training automatic or optional?
- Does the contract explicitly prohibit training third-party AI models with user data, or is this stated only in marketing materials?
- If AI models improve over time, whose data contributes to that improvement?
Third-Party Access & Subprocessors
- Which third parties process user data (cloud providers, AI model vendors, analytics providers, etc.)?
- Are subprocessors specifically named or referenced only in general terms?
- Can the vendor change subprocessors without notice?
- Are subcontractors required to meet the same privacy and security obligations as the vendor?
Security Practices
- Which security controls are contractually guaranteed (not just described in documentation)?
- Is encryption specified in detail (e.g., in transit, at rest), or described only with general terms such as “enterprise-grade”?
- Who controls encryption keys — the vendor or the customer?
- Are independent audits or certifications current, verifiable, and scoped?
Liability & Risk Allocation
- What happens contractually if data is lost, breached, or accessed without authorization?
- Is the vendor’s liability capped, and if so, at what amount?
- Are indirect or consequential damages (such as reputational harm or regulatory fines) excluded?
- Does the liability cap reasonably reflect the sensitivity and value of the data involved?
Legal & Regulatory Compliance
- Who is responsible for compliance with privacy and data protection laws — the vendor, the user, or both?
- Are data protection roles clearly defined (e.g., controller vs processor)?
- Does the vendor contractually commit to compliance, or merely require the user to comply?
- If recording or monitoring features exist, who is responsible for obtaining lawful consent?
Data Retention, Deletion & Exit
- How long is user data retained during active use of the service?
- What happens to user data after termination of the service?
- Is data deletion automatic, optional, or at the vendor’s discretion?
- Can users retrieve their data before deletion, and are there costs or time limits?
Marketing Claims vs Contractual Reality
- Which assurances appear in the contract, and which appear only in blogs, FAQs, or sales pages?
- In the event of a dispute, which document legally governs?
- Would a regulator or court likely treat a specific claim as enforceable, or dismiss it as marketing?
Fit-for-Purpose Assessment
- Is this tool appropriate for sensitive, confidential, or regulated data?
- Is it intended to be a productivity aid or a system of record?
- Would a data incident involving this tool create legal, financial, or reputational harm?
- Has risk acceptance been explicitly reviewed and documented internally?
If you cannot confidently answer at least 80% of these questions, you do not yet fully understand the risk profile of the AI tool. That does not necessarily mean the tool should not be used — but it does mean it should be used deliberately, with documented guardrails and informed consent.
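The 80% rule above can be made concrete for internal tracking. A minimal sketch, assuming a simple mapping of question identifiers to answers (the question names, data structure, and 0.8 cutoff below are illustrative assumptions, not part of any standard):

```python
# Hypothetical sketch: score a completed checklist against the 80% threshold
# described above. Unanswered questions (None) count against coverage.

def checklist_coverage(answers):
    """Return the fraction of questions with a confident (non-None) answer."""
    if not answers:
        return 0.0
    answered = sum(1 for value in answers.values() if value is not None)
    return answered / len(answers)

def risk_understood(answers, threshold=0.8):
    """True when at least `threshold` of the questions are answered."""
    return checklist_coverage(answers) >= threshold

# Illustrative example: 4 of 5 questions answered -> 80% coverage.
answers = {
    "data_collected_by_default": "content, metadata",
    "training_opt_out": "yes, via admin console",
    "subprocessors_named": "yes, listed in the DPA",
    "liability_cap": None,  # unanswered
    "retention_after_termination": "30 days",
}
print(checklist_coverage(answers))  # 0.8
print(risk_understood(answers))     # True
```

A spreadsheet works just as well; the point is that coverage is measured and recorded, not estimated from memory.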
