AI Use Policy

01. Purpose

This policy provides guidance on the ethical, lawful, and effective use of AI at Computools LLC. It aims to protect our clients and our intellectual property and to ensure alignment with industry best practices. As AI technology evolves, this policy will be reviewed and updated regularly.

02. Scope

This policy applies to all Computools employees and contractors involved in the development, deployment, or sale of AI-powered solutions.

03. Ethical Use of AI

3.1 Prohibited Practices

Computools does not build or deploy AI systems that:

  • Manipulate or deceive users;
  • Exploit vulnerabilities arising from age, disability, or socio-economic status;
  • Discriminate against individuals or groups;
  • Use biometric data to classify people based on personal attributes or beliefs;
  • Create facial recognition databases from scraped images or video without consent;
  • Detect emotions in workplace or educational settings (except for safety/medical reasons);
  • Mimic a person’s voice or likeness without their explicit consent;
  • Violate copyright laws.

3.2 Transparency

  • Users must be informed when interacting with an AI system.
  • Information must be provided about the AI system’s purpose, capabilities, limitations, and intended use.
  • Users should be guided on how to interpret AI outputs in decision-making processes.

3.3 Fairness & Non-Discrimination

  • AI solutions must undergo bias and fairness assessments before deployment.
  • Datasets used must be inclusive and representative.
  • Measures must be in place to detect and mitigate potential bias in outputs (see the illustrative sketch below).
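
As a purely illustrative example rather than a mandated tool, the sketch below shows one way a delivery team might measure a demographic-parity gap in model outputs before deployment. The field names, sample records, and threshold are hypothetical; each project defines its own fairness metrics and acceptance criteria.

    from collections import defaultdict

    def selection_rate_gap(records, group_key="group", outcome_key="approved"):
        """Return the largest difference in positive-outcome rates between groups.

        `records` is an iterable of dicts such as {"group": "A", "approved": True};
        the field names are illustrative only.
        """
        totals = defaultdict(int)
        positives = defaultdict(int)
        for record in records:
            group = record[group_key]
            totals[group] += 1
            positives[group] += bool(record[outcome_key])
        rates = {group: positives[group] / totals[group] for group in totals}
        return max(rates.values()) - min(rates.values())

    # Hypothetical model outputs grouped by a protected attribute.
    sample = [
        {"group": "A", "approved": True},
        {"group": "A", "approved": True},
        {"group": "B", "approved": True},
        {"group": "B", "approved": False},
    ]

    # The 0.2 threshold is a project-level decision, not a policy requirement.
    if selection_rate_gap(sample) > 0.2:
        print("Potential bias detected: escalate for a fairness review.")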

3.4 Privacy & Data Protection

  • All AI solutions must comply with Computools’ Data Protection and Privacy Policies.
  • Personal and sensitive data must be anonymised before use in AI development or testing.
  • Anonymisation must remove all identifiers, including personal and business-related information (an illustrative sketch follows this list).
  • Customer data from one tenant must not be used to train AI models for another.
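
As a purely illustrative example rather than a prescribed procedure, the sketch below shows one simple way direct identifiers might be stripped from records before they are used in AI development or testing. The identifier list and field names are hypothetical; each project must derive its own from its data classification, and removing listed fields alone may not be sufficient where quasi-identifiers could still re-identify a person.

    # Fields treated as direct identifiers in this illustration only; a real
    # project would derive this list from its own data classification.
    IDENTIFIER_FIELDS = {"name", "email", "phone", "company"}

    def anonymise_record(record):
        """Return a copy of the record with direct identifier fields removed."""
        return {key: value for key, value in record.items() if key not in IDENTIFIER_FIELDS}

    # Example with a hypothetical record: only non-identifying fields remain.
    raw = {"name": "Jane Doe", "email": "jane@example.com", "company": "Acme", "usage_minutes": 42}
    print(anonymise_record(raw))  # {'usage_minutes': 42}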

3.5 Responsibility & Human Oversight

  • All AI tools must be purpose-driven and aligned with business goals defined by humans.
  • Human oversight must be built into the design, development, and deployment processes.
  • The product team is responsible for assessing the potential impact of AI solutions before release.

04. Review and Continuous Improvement

  • Key impact assessments must be reviewed by the Compliance & Security Officer.
  • High-risk cases may be escalated to the Computools Intelligence Steering Committee.
  • Computools remains committed to continuously improving its AI practices in line with legal requirements and public expectations.

05. Updates to This Document

This document will be reviewed every six months, or sooner if regulatory or technological changes require it.

Last Updated: June 19, 2025

Next Planned Update: December 2025