Disclosure Page

AI Transparency & Responsible Use Policy

1. Purpose & Scope

This policy outlines the governance, limitations, and responsible use standards for the Artificial Intelligence (AI) voice technologies provided by RoshanAI360.

While our Privacy Policy covers data handling and our Terms of Service cover contractual matters, this document defines the operational boundaries of the AI itself. By using our services, Clients agree to adhere to these standards.

2. Nature of the Technology

RoshanAI360 utilizes Generative AI and Large Language Models (LLMs). Unlike traditional “press 1 for sales” menus, our agents generate responses in real-time.

  • Probabilistic Nature: The AI generates responses by predicting the most likely next word in a sequence. While highly accurate, it is not “thinking” like a human.
  • Variability: The AI may phrase the same answer differently for different callers.

3. Transparency & Anti-Impersonation

Trust is paramount. We strictly adhere to the principle that AI should not pretend to be human with the intent to deceive.

  • Disclosure: We recommend that all Agents include a disclosure at the start of each call (e.g., “You are speaking with an AI assistant”).
  • Identity Guardrails: Our system prompts are designed to prevent the AI from falsely claiming to be a specific real person (e.g., claiming to be “the CEO” when it is an AI agent).
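As a purely illustrative sketch (not our actual production prompt), an identity guardrail of the kind described above can be expressed as a system-prompt fragment supplied to the model; the wording and function name here are hypothetical:

```python
# Illustrative only: a hypothetical system-prompt fragment showing how
# identity guardrails are typically phrased. This is not the actual
# RoshanAI360 production prompt.
IDENTITY_GUARDRAIL = (
    "You are an AI voice assistant acting on behalf of the business. "
    "If asked whether you are human, answer truthfully that you are an AI. "
    "Never claim to be a specific real person, such as the owner or the CEO."
)

def build_system_prompt(business_name: str) -> str:
    """Combine the business context with the identity guardrail."""
    return f"You answer calls for {business_name}. " + IDENTITY_GUARDRAIL
```

Because the guardrail is part of the system prompt itself, it applies to every response the agent generates, regardless of what the caller asks.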

4. Known Limitations & “Hallucinations”

Clients must understand the technical limitations of Generative AI:

  • Hallucinations: In rare instances, AI may confidently state incorrect information (e.g., inventing a service you do not offer).
  • Context Windows: Extremely long conversations may result in the AI “forgetting” details mentioned at the very start of the call.
  • Nuance & Sarcasm: The AI may misinterpret sarcasm, emotional distress, or highly complex slang.

RoshanAI360 applies “System Prompts” to minimize these risks, but we cannot guarantee 100% accuracy in every interaction.
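The context-window limitation above can be sketched in simplified form. The token budget, the word-count heuristic, and the trimming strategy below are hypothetical illustrations, not the platform's actual behaviour:

```python
# Illustrative sketch of why long calls can "forget" early details:
# a model only sees a fixed window of recent conversation, so the
# oldest turns are trimmed once a (hypothetical) budget is exceeded.
MAX_TOKENS = 8000  # hypothetical context budget

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~1 token per word (real tokenizers differ).
    return len(text.split())

def trim_history(turns: list[str]) -> list[str]:
    """Keep the most recent turns that fit the budget; drop the oldest."""
    kept, used = [], 0
    for turn in reversed(turns):
        cost = estimate_tokens(turn)
        if used + cost > MAX_TOKENS:
            break  # earlier turns are dropped -> the AI "forgets" them
        kept.append(turn)
        used += cost
    return list(reversed(kept))
```

In this sketch, anything mentioned before the retained window simply never reaches the model on later turns, which is why details from the very start of a long call can be lost.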

5. The “Human-in-the-Loop” Requirement

RoshanAI360 is an assistance tool, not a replacement for business oversight.

Client Obligations:

  1. Verification: You must verify all bookings, quotes, and promises made by the AI before acting on them.
  2. Monitoring: You must regularly review call transcripts to ensure the AI is representing your brand correctly.
  3. Intervention: You must provide a way for callers to reach a human in complex scenarios (e.g., call forwarding or callback requests).

6. Prohibited High-Risk Use Cases

To ensure safety and limit liability, RoshanAI360 services must not be used for:

  • Emergency Services: Handling “000” calls or critical dispatch.
  • Medical Triage: Diagnosing symptoms or providing medical advice.
  • Crisis Support: Mental health hotlines or suicide prevention lines.
  • Legal/Financial Advice: Providing binding legal counsel or financial planning.
  • Authentication: Using the voice agent to authorize bank transfers or access sensitive accounts.

We reserve the right to immediately suspend any account using our AI for these high-risk purposes.

7. Adversarial Attacks (Jailbreaking)

“Jailbreaking” or “Prompt Injection” occurs when a caller intentionally tries to trick the AI into saying something offensive, illegal, or outside its instructions.

  • Our Defense: We implement security filters to detect and block these attempts.
  • Liability: RoshanAI360 is not responsible for offensive output generated solely because a caller intentionally manipulated the AI (e.g., a caller forcing the AI to tell a crude joke).
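A security filter of the kind described above can be sketched as a simple pre-screening step. The patterns and function name here are hypothetical simplifications; real defences use far more robust detection than keyword matching:

```python
import re

# Hypothetical examples of common jailbreak phrasings. A production
# filter would use much more sophisticated detection than this.
INJECTION_PATTERNS = [
    r"ignore (all|your) (previous|prior) instructions",
    r"pretend (you are|to be)",
    r"reveal your system prompt",
]

def looks_like_injection(caller_utterance: str) -> bool:
    """Flag utterances that match known jailbreak phrasings."""
    text = caller_utterance.lower()
    return any(re.search(p, text) for p in INJECTION_PATTERNS)
```

Flagged utterances can then be declined or routed to a human rather than passed to the model verbatim.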

8. Bias & Fairness

AI models are trained on vast amounts of internet data and may occasionally reflect biases present in that data.

  • Mitigation: We explicitly instruct our agents to remain neutral, professional, and polite.
  • Continuous Improvement: If you observe biased behavior in your agent, please report it to us immediately for recalibration.

9. Data Training & Privacy

  • Your Data is Yours: We do not use your specific client call recordings to train public foundation models (such as GPT-4).
  • Service Improvement: We use anonymized interaction data to improve the reliability of the RoshanAI360 platform itself (e.g., fixing bugs where the AI hangs up too early).

10. Third-Party Infrastructure

RoshanAI360 orchestrates services from top-tier AI providers (such as OpenAI, Deepgram, and Vapi).

  • We act as a gateway to these services.
  • We continuously monitor these providers for uptime and safety standards.
  • In the event of a global outage of a major AI provider, RoshanAI360 services may be temporarily impacted.

11. Incident Reporting

If you witness the AI behaving dangerously, deceptively, or offensively, you must report it immediately to:

support@roshanai360.com

12. Updates to this Policy

As AI regulation in Australia evolves (such as the proposed mandatory AI guardrails), we will update this policy. Continued use of the service constitutes acceptance of these safety standards.