
"Tiered Trust" AI Evaluation Process


Overview

Pepperdine University's Information Technology department created the Tiered Trust AI Evaluation Process to guide the University community in considering the adoption of any artificial intelligence tool (e.g., generative AI, agentic AI) for work at Pepperdine. This approach categorizes AI tools by the classification of the data they handle, allowing a more agile vetting process for lower-risk tools while ensuring rigorous review for high-risk ones.

Before considering any third-party tools, all community members should familiarize themselves with the University's Computer and Network Responsible Usage Policy, Information Classification and Protection Policy, and the Information Classification and Protection Policy Schedules.

For inquiries or comments about this evaluation process, please contact Jonathan See at jonathan.see@pepperdine.edu or Tim Bodden at tim.bodden@pepperdine.edu.

Stewarding the University's Resources

The University Code of Ethics calls all of us to manage our resources wisely. Beyond data security and privacy considerations, and because Pepperdine already offers Google Gemini for Education to the community, we ask that every employee (faculty and staff) carefully consider the academic or business purpose of any expense, including AI-related tools and services.

 


Tier 1: Low-Risk AI Tools 
(No PII, Public Data Only, No LLM Training)

  • Definition: Low-risk AI tools process only publicly available data or data that contains no personally identifiable information (PII), and the vendor explicitly guarantees that University data is not used to train any large language models (LLMs).
  • Examples: Using an AI tool for generic report writing, conducting general research (e.g., on climate change), or generating basic images, media, or presentations.
  • Disclosure Required? Yes. Please use the AI Tool Evaluation Form (Pepperdine login required).
  • Approval Level: Tier 1 requires only supervisory approval. Although disclosure is required, review and approval by Information Technology and/or the Office of General Counsel are not.
  • Vetting Steps:
    1. Explicit No-Training Clause: Verify the vendor's public or contractual statement that no University data (even if public) will be used to train their LLMs.
    2. Basic Data Security: Confirm standard encryption for data in transit and at rest (a quick transport-layer spot check is sketched after this list).
    3. Privacy Policy Review: Briefly review the vendor's privacy policy for any red flags.
    4. Purpose Alignment: Confirm that the tool clearly aligns with a defined, low-risk University use case.
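
For step 2, encryption in transit can be spot-checked from any workstation. The following is a minimal sketch, assuming Python 3 and a hypothetical vendor endpoint; it verifies only the TLS handshake (protocol version and certificate validation). Encryption at rest cannot be tested externally and should be confirmed through vendor documentation or a SOC 2 report.

    import socket
    import ssl

    def check_tls(hostname: str, port: int = 443) -> None:
        """Report the TLS version negotiated with a vendor endpoint.

        ssl.create_default_context() validates the certificate chain and
        hostname, so a successful handshake also confirms a trusted cert.
        """
        context = ssl.create_default_context()
        with socket.create_connection((hostname, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                print(f"{hostname}: {tls.version()}, cipher: {tls.cipher()[0]}")

    # Hypothetical vendor host, for illustration only.
    check_tls("vendor.example.com")

A modern result (e.g., TLSv1.3) satisfies the in-transit half of this step; a handshake failure or a legacy protocol version is a red flag worth raising with the vendor.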


Tier 2: Medium-Risk AI Tools 
(Limited PII, Operational Data, No LLM Training)

  • Definition: Medium-risk AI tools handle limited PII or operational University data (e.g., aggregated attendance data, internal administrative data without direct identifiers), and the vendor explicitly guarantees that University data is not used to train any LLMs.
  • Examples: Gemini for Google Workspace, Zoom AI Companion.
  • Disclosure Required? Yes. Please use the AI Tool Evaluation Form (Pepperdine login required).
  • Approval Level: Tier 2 requires review and approval by Information Technology and/or the Office of General Counsel.
  • Vetting Steps:
    1. All Tier 1 Steps.
    2. SOC 2 Type 2† Report Review: Request and review the SOC 2 Type 2 report for the relevant Trust Services Criteria (Security, Confidentiality).
    3. GDPR‡ Data Processing Agreement (DPA): Ensure a DPA is in place, outlining data controller/processor roles and responsibilities.
    4. Access Controls & User Management: Review how user access is managed and authenticated, including integration requirements with SecureConnect (powered by Duo) and/or University single sign-on (a basic standards check is sketched after this list).
    5. Data Minimization: Assess if the tool collects only the data absolutely necessary for its function.
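
For step 4, one lightweight way to confirm that a vendor supports standards-based single sign-on is to look for an OpenID Connect discovery document, which compliant identity integrations publish at a well-known path. This is a minimal sketch, assuming Python 3 and a hypothetical vendor issuer URL; actual SecureConnect (Duo) and SSO integration requirements should be confirmed with Information Technology.

    import json
    import urllib.request

    def check_oidc_discovery(issuer: str) -> None:
        """Fetch the OpenID Connect discovery document, if the vendor publishes one."""
        url = issuer.rstrip("/") + "/.well-known/openid-configuration"
        with urllib.request.urlopen(url, timeout=10) as resp:
            config = json.load(resp)
        # A standards-compliant vendor advertises its endpoints and flows here.
        print("authorization_endpoint:", config["authorization_endpoint"])
        print("token_endpoint:", config["token_endpoint"])
        print("scopes_supported:", config.get("scopes_supported"))

    # Hypothetical issuer URL, for illustration only.
    check_oidc_discovery("https://auth.vendor.example.com")

Absence of a discovery document does not by itself disqualify a vendor (SAML-based SSO is also common), but it is a useful early signal when scoping the integration review.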


Tier 3: High-Risk AI Tools
(Restricted Data, Sensitive PII, Research Data, Core Operations, No LLM Training)

  • Definition: High-risk AI tools handle RESTRICTED data (e.g., data covered by FERPA or HIPAA), sensitive PII (e.g., financial aid data, protected research data), or are critical to core University operations, with an absolute guarantee of no LLM training.
  • Examples: Oracle, Blackbaud, or any third-party vendor developing an AI agent that integrates with our PeopleSoft ERP, Raiser’s Edge, or other critical University systems.
  • Disclosure Required? Yes. Please use the AI Tool Evaluation Form (Pepperdine login required).
  • Approval Level: Tier 3 poses the highest risk and requires review and approval by Information Technology, the Office of General Counsel, Insurance & Risk, impacted Data Owners, and the AI Oversight Committee.
  • Vetting Steps:
    1. All Tier 1 & 2 Steps.
    2. In-Depth SOC 2 Type 2 Audit: Thoroughly review the entire SOC 2 Type 2 report, potentially involving internal security experts.
    3. Comprehensive GDPR Compliance Check: This includes a mandatory Data Protection Impact Assessment (DPIA), a detailed review of data subject rights mechanisms, and confirmation of data residency.
    4. Signed No-Training Addendum: A separate, legally binding addendum to the contract specifically prohibiting the use of University data for LLM training.
    5. Third-Party Subprocessor Vetting: Require the vendor to provide details on all subprocessors and their compliance/security postures.
    6. Incident Response Plan Review: Detailed review of the vendor's incident response plan and breach notification procedures.
    7. Data Deletion Verification: Request proof or demonstration of secure data deletion processes upon contract termination.
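
The three tier definitions above reduce to a simple triage. The sketch below is an illustrative condensation only, assuming simplified yes/no inputs and treating any LLM training on University data as disqualifying; final tier assignment always rests with the reviewers named in each tier.

    def suggest_tier(handles_restricted_or_sensitive_data: bool,
                     handles_limited_pii_or_operational_data: bool,
                     critical_to_core_operations: bool,
                     vendor_trains_on_university_data: bool) -> str:
        """Map the data a tool handles to a suggested review tier.

        Mirrors the definitions above: restricted/sensitive data or
        core-operations impact -> Tier 3; limited PII or operational
        data -> Tier 2; public or non-PII data only -> Tier 1.
        """
        if vendor_trains_on_university_data:
            return "Ineligible: every tier requires a no-LLM-training guarantee"
        if handles_restricted_or_sensitive_data or critical_to_core_operations:
            return "Tier 3: IT, OGC, Insurance & Risk, Data Owners, AI Oversight Committee"
        if handles_limited_pii_or_operational_data:
            return "Tier 2: Information Technology and/or Office of General Counsel"
        return "Tier 1: supervisory approval plus disclosure"

    # Example: a tool that processes internal rosters without direct identifiers.
    print(suggest_tier(False, True, False, False))  # -> Tier 2 ...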

Footnotes

† A SOC 2 Type 2 report is an independent audit report that assesses a service organization's "system and organization controls" (SOC) over a period of time, typically 3 to 12 months, to determine their effectiveness in meeting specific security, availability, processing integrity, confidentiality, and privacy criteria.
‡ The GDPR, or General Data Protection Regulation, is a European Union (EU) law focused on data privacy and protection. It sets strict rules for how organizations handle the personal data of individuals within the EU, regardless of where the organization is located.

Acknowledgements

The Tiered Trust AI Evaluation Process was prepared for the University by Jonathan See and Tim Bodden on June 4, 2025. It was reviewed and adopted by the Pepperdine AI Oversight Committee in Fall 2025.


