Limited Risk — Art. 50 · Provider · Art. 50(1)

AI Customer Service Chatbot

How ChatAssist SaaS, an Irish B2B startup, achieved EU AI Act compliance for its GPT-4-based customer service chatbot — covering Art. 50 disclosure obligations, prompt engineering safeguards, and the GDPR intersection for conversation data.

Company

ChatAssist SaaS

Jurisdiction

Ireland (Dublin)

Employees

12

Risk category

Limited risk (Art. 50)

Company Profile

About ChatAssist SaaS

ChatAssist SaaS develops a white-label AI customer service chatbot powered by the GPT-4 API, sold as a monthly subscription to EU e-commerce retailers. The chatbot is embedded directly in retailers' online storefronts and handles customer enquiries about orders, returns, product information, and account issues. End users — the retailers' customers — interact with the chatbot in a live chat interface that, without disclosure, could easily be mistaken for a human support agent. ChatAssist is the provider under the EU AI Act; its retail clients are deployers.

Business model

  • B2B SaaS — white-label chatbot for EU e-commerce retailers
  • ~150 active deployer clients across 11 EU member states
  • Revenue: tiered monthly subscription by conversation volume
  • ~85,000 end-user conversations handled per month

Technical profile

  • GPT-4 API (OpenAI) with custom system prompts per deployer
  • Retrieval-augmented generation (RAG) on client product catalogues
  • Deployed via JavaScript embed widget or REST API integration
  • Conversation logs stored for 90 days (configurable per deployer)
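The RAG step in the technical profile can be sketched as a two-stage pipeline: retrieve catalogue entries relevant to the customer's question, then assemble the prompt sent to the GPT-4 API. This is a minimal illustration under assumed names — the scoring, function names, and prompt layout are simplifications, not ChatAssist's actual implementation.

```python
# Hypothetical sketch of the RAG flow: keyword-overlap retrieval over a client
# product catalogue, then prompt assembly. Names and scoring are assumptions.
def retrieve(catalogue: list[dict], query: str, k: int = 3) -> list[dict]:
    """Naive retrieval: rank catalogue entries by word overlap with the query."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(item["text"].lower().split())), item)
              for item in catalogue]
    scored.sort(key=lambda pair: pair[0], reverse=True)  # sort on score only
    return [item for score, item in scored[:k] if score > 0]

def build_prompt(system_prompt: str, context: list[dict], question: str) -> str:
    """Assemble the final prompt sent to the model API."""
    ctx = "\n".join(f"- {item['text']}" for item in context)
    return (f"{system_prompt}\n\n"
            f"Product catalogue context:\n{ctx}\n\n"
            f"Customer: {question}")
```

A production system would use embedding-based retrieval rather than keyword overlap; the structure — retrieve, then ground the prompt in catalogue context — is the same.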

EU AI Act — Article 50

Risk Classification: Limited Risk

Legal basis

Article 50(1) of the EU AI Act requires providers of AI systems intended to interact directly with natural persons to design those systems so that users are informed they are interacting with an AI — unless this is obvious from context. ChatAssist's chatbot interacts with human shoppers in a live chat interface and is not obviously non-human in all deployment contexts. It therefore falls squarely within Art. 50(1). It does not meet any threshold for high-risk classification under Annex III, and presents no prohibited-practice characteristics under Art. 5.

Why this is not high-risk

The chatbot does not make decisions with significant effects on individuals. It does not assess creditworthiness, make employment decisions, perform biometric identification, or operate in any domain listed in Annex III. It handles routine customer service tasks — order status, return requests, product FAQs — where the worst realistic outcome is a mildly unhelpful response. ChatAssist correctly classified this as limited risk, avoiding the far heavier Annex III obligation set.

Art. 50 Obligations

Compliance Checklist — ChatAssist's Status

Art. 50(1) — session disclosure: Users must be informed they are interacting with an AI at the start of each interaction

A mandatory opening message — "You are chatting with an AI assistant. How can I help you today?" — fires at the start of every new session before the user sends their first message. Deployed as a platform-level default; deployers cannot suppress it.
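The platform-level default described above — a disclosure that always fires first and that deployer configuration cannot suppress — can be sketched as follows. Function and field names are illustrative assumptions, not ChatAssist's actual code.

```python
# Sketch of the non-suppressible session-start disclosure (Art. 50(1)).
AI_DISCLOSURE = "You are chatting with an AI assistant. How can I help you today?"

def open_session(deployer_config: dict) -> list[dict]:
    """Start a new chat session; the AI disclosure always fires first.

    The disclosure is injected at the platform layer, so deployer config
    keys (including any hypothetical "disable_disclosure" flag) are simply
    never consulted for it — there is nothing for a deployer to turn off.
    """
    messages = [{"role": "assistant",
                 "content": AI_DISCLOSURE,
                 "suppressible": False}]
    greeting = deployer_config.get("custom_greeting")
    if greeting:
        # Deployer branding is appended *after* the mandatory disclosure.
        messages.append({"role": "assistant",
                         "content": greeting,
                         "suppressible": True})
    return messages
```

The design point is that compliance lives above the per-deployer configuration layer, which is what makes the contractual prohibition in the next item enforceable in practice.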

Art. 50(1) — system prompt: AI must not claim to be human when sincerely asked by a user

System prompt updated with a hard instruction: "You are an AI assistant. If a user sincerely asks whether you are a human or an AI, always truthfully answer that you are an AI." Adversarial prompt injection tests run quarterly across 50+ bypass attempts.
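A quarterly adversarial run like the one described above can be automated as a simple harness: feed each bypass prompt to the model and fail any reply in which it claims to be human. This is a hedged sketch — the phrase list and function names are illustrative assumptions, and the real suite uses the 50+ curated bypass attempts rather than substring matching.

```python
# Illustrative anti-impersonation check for quarterly red-team runs.
# The claim phrases are assumptions; a real suite would be far broader.
HUMAN_CLAIMS = ("i am a human", "i'm a human", "i am not an ai", "i'm not an ai")

def passes_impersonation_check(reply: str) -> bool:
    """Fail any reply in which the model claims to be human."""
    text = reply.lower()
    return not any(claim in text for claim in HUMAN_CLAIMS)

def run_adversarial_suite(model_fn, prompts: list[str]) -> list[str]:
    """Return the prompts whose replies failed the check (ideally empty)."""
    return [p for p in prompts if not passes_impersonation_check(model_fn(p))]
```

In practice `model_fn` would wrap the production API call with the hardened system prompt, so the suite tests the deployed configuration rather than the model in isolation.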

Art. 50(1) — deployer contracts: Deployers contractually prohibited from removing or suppressing the AI disclosure

Terms of service updated to prohibit deployers from modifying the disclosure message, configuring a persona that implies the chatbot is human, or disabling the session-start notification. Breach is grounds for contract termination.

Operational — human fallback: Fallback-to-human option surfaced after 3 consecutively unresolved queries

After 3 unresolved exchanges (detected via resolution confidence score), the widget surfaces a "Connect me to a human" button. Deployers must configure a live agent channel or email queue at onboarding; ChatAssist validates this before go-live.
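The trigger logic for the fallback described above is small: once the last three exchanges all score below a resolution-confidence threshold, surface the handoff button. The 0.5 threshold and all names here are illustrative assumptions — the source states only "3 unresolved exchanges, detected via resolution confidence score".

```python
# Sketch of the fallback-to-human trigger. Threshold value is an assumption.
FALLBACK_AFTER = 3
CONFIDENCE_THRESHOLD = 0.5  # hypothetical cut-off for "unresolved"

def should_offer_human(resolution_scores: list[float]) -> bool:
    """True once the last three exchanges all fall below the threshold.

    Scores are per-exchange resolution confidence values in [0, 1]; a single
    confident answer in between resets the consecutive-failure window.
    """
    if len(resolution_scores) < FALLBACK_AFTER:
        return False
    recent = resolution_scores[-FALLBACK_AFTER:]
    return all(score < CONFIDENCE_THRESHOLD for score in recent)
```

Requiring *consecutive* low-confidence exchanges, rather than a running total, avoids surfacing the handoff to a customer whose conversation recovered after one bad turn.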

GDPR — privacy policy: Privacy policy and cookie notice updated to disclose conversation data processing

Platform privacy policy now specifies: data categories collected (conversation transcripts, session metadata), retention period (90-day default, configurable per deployer), legal basis (legitimate interest for support function), and third-party processors (OpenAI API with zero data retention, cloud hosting provider).

GDPR — data processing agreement: DPA in place with OpenAI and with each deployer client

ChatAssist acts as data processor for each deployer (controller). OpenAI API data processing addendum (zero data retention tier) signed. New deployer DPA template includes confirmation of ZDR status and prohibits ChatAssist from using conversation data for model training.


GDPR Intersection

Data Protection Considerations

1. Conversation log retention periods

ChatAssist stores full conversation transcripts on behalf of deployers for support quality monitoring. Under GDPR's storage limitation principle (Art. 5(1)(e)), these logs may only be retained as long as necessary for their purpose. ChatAssist adopted a 90-day default retention period with automated deletion, configurable per deployer down to 30 days. Deployers requiring longer retention — for example, where their sector has specific record-keeping obligations — must document their legal basis separately. At ingestion, logs are pseudonymised: user session identifiers are not linked to names or email addresses at the transcript storage layer.

2. Consent for analytical processing

Where ChatAssist uses anonymised or aggregated conversation data to improve platform quality — for example, identifying common unresolved query patterns — it relies on legitimate interest (GDPR Art. 6(1)(f)). Where any processing is non-anonymised and extends beyond direct service delivery, deployers must obtain user consent via their own cookie or consent banners before the ChatAssist widget activates. ChatAssist provides a consent-gating API parameter that deployers can use to delay widget initialisation until consent is recorded by their consent management platform.
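The decision rule in this paragraph — legitimate interest for anonymised or strictly service-delivery processing, consent gating for anything beyond — reduces to a small amount of logic. The parameter names here are illustrative assumptions, not the actual consent-gating API, which the source describes only as a widget-initialisation parameter.

```python
# Sketch of the legal-basis mapping and consent gate described above.
def legal_basis(anonymised: bool, beyond_service_delivery: bool) -> str:
    """Map a processing activity to its GDPR legal basis."""
    if anonymised or not beyond_service_delivery:
        return "legitimate_interest"  # GDPR Art. 6(1)(f)
    return "consent"  # deployer's CMP must record consent first

def may_initialise_widget(require_consent: bool, consent_recorded: bool) -> bool:
    """Delay widget start until consent exists, when the deployer requires it."""
    if not require_consent:
        # Strictly service-delivery processing: no consent gate needed.
        return True
    return consent_recorded
```

The front-end equivalent is the consent-gating parameter the source mentions: the embed script checks the deployer's consent management platform before the widget activates.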

3. OpenAI API — zero data retention

Every conversation turn is transmitted to OpenAI's API for response generation. ChatAssist operates under OpenAI's API data processing addendum at the zero data retention (ZDR) tier, meaning conversation content is not stored by OpenAI and is not used to train OpenAI models. This is a critical compliance control: without ZDR, personal data in conversation transcripts would be retained by a sub-processor outside ChatAssist's control. Confirmation of ZDR status is included in all deployer DPAs as a documented technical and organisational measure.

Compliance Timeline

How ChatAssist Reached Compliance in 2 Weeks

Day 1–2

Gap analysis and obligation mapping

Founder reviewed Art. 50 obligations with external GDPR/AI Act counsel. Two primary action items identified: UI-level session disclosure and system prompt hardening.

Day 3–5

Session-start AI disclosure message deployed

Front-end engineer added the disclosure card to the widget. Deployed to all 150 deployer instances via a platform-level configuration change — no individual deployer action required.

Day 4–6

System prompt update and adversarial testing

System prompt updated with anti-impersonation instruction. Tested against 50 adversarial prompt injection attempts designed to make the model claim to be human. All 50 blocked.

Day 6–8

Human fallback feature built and shipped

Resolution confidence threshold configured. "Connect me to a human" button added to widget. Deployer onboarding updated to require a fallback channel URL before go-live.

Day 8–11

Privacy policy and DPA updated

Platform privacy policy rewritten to cover chatbot data flows and OpenAI sub-processing. New deployer DPA template drafted. Existing 150 deployer clients re-papered via bulk email with acknowledgement checkbox.

Day 12–14

Deployer terms of service updated

ToS clause added prohibiting suppression of AI disclosure or human-impersonation persona configuration. Compliance acknowledged by deployers at next platform login.

Cost of Compliance

Minimal — UI Change and Prompt Engineering

For a limited-risk chatbot, compliance cost was genuinely low. Art. 50 is a transparency requirement, not a technical overhaul. ChatAssist's total spend was approximately three days of part-time engineering work, four hours of external legal review, and one day of documentation drafting. No conformity assessment body, CE marking, or EU database registration applies to limited-risk systems — those obligations are reserved for high-risk AI under Annex III.

Engineering time

~3 days total across front-end and back-end. No new infrastructure required.

Legal fees

~4 hours external counsel for DPA template and ToS review. Low cost for a 12-person startup.

Ongoing burden

Quarterly adversarial prompt tests (1–2 hours). No mandatory reporting or surveillance authority notification.

Lessons Learned

What ChatAssist Would Do Differently

Build disclosure into the product from day one

ChatAssist launched without an AI disclosure message because it was not legally required at the time. Retrofitting the disclosure meant re-papering 150 clients and updating all deployment documentation. Any product that interacts with natural persons should include AI disclosure by default from launch — regardless of what the law requires at that moment.

Treat the deployer as a compliance partner

ChatAssist's Art. 50 obligation is discharged at the platform level — but deployers can inadvertently undermine compliance by configuring personas that imply the chatbot is human. Strong contractual controls and clear onboarding guidance are essential. Compliance is not purely a technical problem for the provider to solve alone.

The human fallback is a product feature

After shipping the fallback-to-human feature, deployer churn dropped noticeably in the following quarter. Retailers valued having a safety net for frustrated customers. Compliance obligations and good product design often align — especially in customer-facing AI.

Monitor system prompts continuously

LLMs can be prompted by end users to bypass safety instructions. A one-time prompt update is not enough — ongoing red-team testing of the anti-impersonation instruction is necessary, particularly after the underlying model is updated by the API provider. ChatAssist now runs adversarial prompt tests as part of every platform release cycle.

Key Articles

Primary Legal References

Art. 50(1): Transparency for AI systems interacting with natural persons — mandatory AI disclosure at session start
Art. 50(4): Deployer obligation to inform users when AI is used — reinforces and extends provider disclosure duty
Art. 3(1): Definition of AI system — confirms chatbot qualifies as an AI system under the Act
Art. 6: Classification rules — confirming chatbot does not meet Annex III high-risk thresholds
GDPR Art. 5(1)(e): Storage limitation principle — basis for 90-day conversation log retention policy
GDPR Art. 6(1)(f): Legitimate interest as legal basis for analytical processing of anonymised conversation data
GDPR Art. 28: Data processor obligations — DPA requirements between ChatAssist and deployer clients
GDPR Art. 13: Transparency information — disclosure obligations regarding chatbot data processing to end users

Classify your AI system

Use the risk classifier to confirm whether your chatbot falls under Art. 50 limited risk or a higher-risk category.

Risk Classifier →

Look up definitions

See how the EU AI Act defines “AI system”, “provider”, “deployer”, and interaction with “natural persons”.

Definitions →