Kintera AI Policy

December 2025

Purpose

Kintera is committed to using Artificial Intelligence (AI) responsibly to enhance our services in real estate sustainability and climate risk assessment. This policy outlines our principles, governance, and safeguards for AI use.

Scope

This policy applies to:

  • All AI tools and models used internally by Kintera.

  • AI-driven features integrated into client-facing services.

  • Third-party AI solutions adopted by Kintera.

Our Principles

  1. Transparency
    We disclose when AI is used in our services and provide clear explanations of its role.

  2. Fairness & Non-Discrimination
    AI systems must be tested to minimise bias and ensure equitable outcomes for all stakeholders.

  3. Privacy & Data Protection
    We comply with the GDPR and other applicable data protection regulations. Personal and sensitive data is never used for AI training without explicit consent.

  4. Accountability
    Human oversight is maintained for all AI-driven decisions. AI does not replace professional judgment.

  5. Security
    AI systems are monitored for vulnerabilities and protected against misuse.

How Kintera Uses AI in Climate Risk Assessments

We leverage AI to:

  • Analyse large datasets on weather patterns, flood zones, and temperature projections.

  • Model climate scenarios to assess potential impacts on real estate assets.

  • Identify vulnerabilities in building design and infrastructure under future climate conditions.

  • Support decision-making with insights for resilience strategies and capital expenditure (capex) planning.

AI outputs are always reviewed by qualified professionals before being included in reports or recommendations.

Acceptable Use

  • AI may be used for data analysis, scenario modelling, and reporting.

  • AI must not be used for decisions that could harm individuals or misrepresent asset performance.

  • Employees must follow internal guidelines when using generative AI tools.

Disclaimer on Generative AI Outputs

Generative AI tools may be used for drafting content or summarising data. These outputs:

  • Are not final and require human review for accuracy and compliance.

  • Should never be presented to clients without validation.

  • May contain errors or omissions; Kintera accepts no liability for unverified AI-generated content.

Governance

  • Kintera’s Responsible AI Committee reviews AI projects.

  • Vendors providing AI solutions must meet Kintera’s ethical and security standards.

Continuous Improvement

We review and update this policy regularly to keep pace with evolving regulation and best practice, including the EU AI Act and relevant industry standards.

Contact

For questions about this policy, contact joannawhitear@kintera.co.uk.