High-Risk — Annex III Cat. 5(b) · Provider · Art. 9 · 10 · 13 · 14 · 15 · 17 · 43 · 47–49

Credit Scoring AI

How FinScore GmbH, a German B2B fintech, achieved EU AI Act compliance for its credit scoring model — covering risk management, training data governance, explainability, and conformity assessment under Annex III Category 5(b).

Company

FinScore GmbH

Jurisdiction

Germany (Munich)

Employees

180

Risk category

High-risk (Annex III)

Company Profile

About FinScore GmbH

FinScore GmbH develops and licenses a machine-learning credit scoring API to retail banks, credit unions, and buy-now-pay-later providers across the EU. The model ingests an applicant's financial history, payment behaviour, and open-banking data to produce a creditworthiness score and an associated approval recommendation. Lenders integrate the API into their loan origination workflows, with the model's output directly influencing whether credit is extended and on what terms.

Business model

  • B2B API licensed to ~40 EU credit institutions
  • ~2.3 million credit decisions per month
  • Revenue: API call volume + annual licensing
  • Markets: DE, AT, NL, PL, FR

Technical profile

  • Gradient-boosted ensemble model (XGBoost + LightGBM)
  • Trained on 8 years of proprietary lending data
  • 140+ input features (open-banking, bureau, behavioural)
  • Output: score 0–1000 + approval/decline + reason codes
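The scored output described above can be sketched as a response schema. This is an illustrative shape only — the field and class names (`ScoreResponse`, `ReasonCode`, `recommendation`) are assumptions, not FinScore's published API:

```python
from dataclasses import dataclass, field

@dataclass
class ReasonCode:
    code: str          # machine-readable identifier
    description: str   # plain-language explanation for deployer staff

@dataclass
class ScoreResponse:
    score: int                      # creditworthiness score, 0-1000
    recommendation: str             # "approve" or "decline"
    reason_codes: list[ReasonCode] = field(default_factory=list)

    def __post_init__(self):
        # Enforce the documented score range at the API boundary.
        if not 0 <= self.score <= 1000:
            raise ValueError("score must be within 0-1000")

resp = ScoreResponse(
    score=712,
    recommendation="approve",
    reason_codes=[ReasonCode("RC014", "Low utilisation of existing credit lines")],
)
```

Validating the range at construction time means malformed scores fail loudly in the provider's code rather than silently reaching a deployer's loan origination workflow.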

EU AI Act — Annex III, Category 5(b)

Why This System Is High-Risk

Legal basis

Annex III, Category 5(b) of the EU AI Act explicitly lists “AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score” as high-risk AI systems. FinScore's model falls squarely within this definition: it produces a credit score for individual natural persons and its output directly determines access to credit — a fundamental economic resource. The fact that a human loan officer technically reviews the output does not remove the high-risk classification; the AI system's output is the primary input to a consequential decision.

Common misconception

Some providers argue that if a human makes the final decision, the AI is merely “assistive” and not high-risk. The EU AI Act does not support this view. The classification depends on the intended purpose and the nature of the output, not on whether human review occurs downstream. FinScore correctly concluded that its system is high-risk regardless of deployer workflow.

Provider Obligations

Compliance Checklist — FinScore's Status

✓ Art. 9 · Risk management system — documented, iterative, covering entire lifecycle

Formal risk register established Q1 2025; updated quarterly

✓ Art. 10 · Training, validation, and test data governance — bias analysis, data quality measures

Data governance policy adopted; demographic parity testing automated in CI/CD
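An automated demographic parity check of the kind described here can be gated in CI/CD. The sketch below is a minimal illustration, not FinScore's actual test suite; the function name and the 0.40 threshold are assumptions for the example:

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """decisions: iterable of (group, approved: bool) pairs.
    Returns the max difference in approval rate between any two groups."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# A CI gate might fail the build when the gap exceeds a policy threshold:
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
gap = demographic_parity_gap(sample)  # |2/3 - 1/3| = 1/3
assert gap <= 0.40, "demographic parity gap exceeds policy threshold"
```

Running this on every candidate model, with the threshold set by the compliance team, turns the Art. 10 bias requirement into a failing build rather than a quarterly report.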

✓ Art. 11 + Annex IV · Technical documentation — Annex IV compliant technical file

96-page technical file covering architecture, training data, evaluation, and risk management

✓ Art. 12 · Record-keeping — automatic logging of system operation for post-market monitoring

Immutable audit log of all model outputs and input feature hashes retained for 10 years
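Logging a hash of the input features, rather than the raw features, is one way to make an audit record verifiable without retaining personal data in the log itself. A minimal sketch, assuming canonical JSON serialisation and SHA-256 (the details of FinScore's actual log format are not public):

```python
import datetime
import hashlib
import json

def audit_entry(applicant_features: dict, score: int, recommendation: str) -> dict:
    """Build one audit-log record: model output plus a digest of the inputs.

    Sorting keys before serialising gives a canonical byte string, so the
    same feature values always produce the same hash regardless of dict order.
    """
    canonical = json.dumps(applicant_features, sort_keys=True).encode("utf-8")
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "feature_hash": hashlib.sha256(canonical).hexdigest(),
        "score": score,
        "recommendation": recommendation,
    }
```

Later, a market surveillance authority (or the deployer) can recompute the hash from the features held in the lender's own systems and confirm the logged decision corresponds to exactly those inputs.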

✓ Art. 13 · Transparency — instructions for use provided to deploying institutions

Deployer manual covers intended use, known limitations, required human oversight procedures

✓ Art. 14 · Human oversight — design measures enabling deployers to monitor, understand, override

API returns SHAP-based reason codes; override logging required by contract

~ Art. 15 · Accuracy, robustness, and cybersecurity — performance benchmarks maintained

Robustness testing framework 80% complete; adversarial input testing still in development

✓ Art. 17 · Quality management system — documented QMS covering design, development, and monitoring

ISO 9001-aligned QMS adopted; covers model development lifecycle and change management

✓ Art. 43 · Conformity assessment — self-assessment (no notified body required for this category)

Internal conformity assessment completed and signed off by CTO and legal counsel

✓ Art. 47 · EU Declaration of Conformity signed and filed

DoC signed 28 July 2025, retained and available to market surveillance authorities

✓ Art. 48 · CE marking affixed to product and documentation

CE marking applied to API documentation, product page, and contractual materials

~ Art. 49 · Registration in the EU database

Database portal not yet operational at time of writing; registration pending portal launch

~ Art. 72 · Post-market monitoring — systematic collection of data on system performance

Monitoring dashboard built; contractual data-sharing obligations with deployers being negotiated

✓ Complete · ~ In progress · ✗ Not started

Key Challenges

What Made Compliance Difficult

1. Training data bias (Art. 10)

FinScore's training data spanned 2015–2023 — a period that included COVID-19 credit moratoria, which disproportionately affected certain demographic groups. Initial bias analysis revealed a statistically significant disparity in approval rates between applicants in eastern versus western EU member states, and between applicants aged under 30 versus those aged 45–55. The team had to re-weight training samples, remove proxy features correlated with protected characteristics, and adopt a fairness constraint during model training (equalized odds). These changes reduced the model's Gini coefficient by roughly 1.2 percentage points — a trade-off the compliance team documented explicitly in the technical file.
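Equalized odds requires that true-positive and false-positive rates be (approximately) equal across groups. A minimal metric for auditing that constraint can be sketched as follows — an illustrative helper, not FinScore's training-time implementation, which the case study does not detail:

```python
from collections import defaultdict

def equalized_odds_gaps(records):
    """records: iterable of (group, y_true: bool, y_pred: bool).
    Returns (max TPR gap, max FPR gap) across groups; both should be
    close to zero if the equalized-odds constraint holds."""
    tp, fn, fp, tn = (defaultdict(int) for _ in range(4))
    for group, y_true, y_pred in records:
        if y_true and y_pred:
            tp[group] += 1
        elif y_true:
            fn[group] += 1
        elif y_pred:
            fp[group] += 1
        else:
            tn[group] += 1
    groups = set(tp) | set(fn) | set(fp) | set(tn)
    tpr = {g: tp[g] / (tp[g] + fn[g]) for g in groups if tp[g] + fn[g]}
    fpr = {g: fp[g] / (fp[g] + tn[g]) for g in groups if fp[g] + tn[g]}
    tpr_gap = max(tpr.values()) - min(tpr.values()) if tpr else 0.0
    fpr_gap = max(fpr.values()) - min(fpr.values()) if fpr else 0.0
    return tpr_gap, fpr_gap
```

In practice the constraint is enforced during training (for example via re-weighting or a fairness-aware loss), and a metric like this one is used afterwards to verify that the gaps actually shrank — and to quantify the accompanying performance trade-off.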

2. Explainability requirements (Art. 13 + 14)

The EU AI Act requires that high-risk AI systems produce outputs that are interpretable by deployers and, where required by other law, by affected individuals. FinScore's original model returned only a raw score — deployer bank staff had no insight into the drivers of any individual score. The team rebuilt the API response to include SHAP (SHapley Additive exPlanations) values for the top five contributing features, presented in plain-language reason codes (e.g., “High number of recent credit enquiries in last 90 days”). This took significant engineering effort — approximately four months for a two-person ML engineering team — including validating that the explanations were faithful to the model's actual decision logic.
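The mapping from per-applicant SHAP contributions to plain-language reason codes might look like the sketch below. The catalogue entries and function are hypothetical illustrations (the example assumes SHAP values have already been computed upstream, e.g. by a tree explainer over the gradient-boosted ensemble):

```python
# Hypothetical reason-code catalogue; the real codes and wording would be
# defined jointly by the compliance and product teams.
REASON_CODES = {
    "recent_enquiries_90d": "High number of recent credit enquiries in last 90 days",
    "utilisation_ratio": "High utilisation of existing credit lines",
    "account_age_months": "Short credit history",
}

def top_reason_codes(shap_values: dict, k: int = 5) -> list:
    """shap_values: feature name -> SHAP contribution for one applicant.
    Returns plain-language codes for the k features that pushed the
    score down the most (most negative contributions first)."""
    negative = [(f, v) for f, v in shap_values.items() if v < 0]
    negative.sort(key=lambda fv: fv[1])  # most negative first
    return [REASON_CODES.get(f, f) for f, _ in negative[:k]]
```

Restricting the codes to negative contributions reflects their purpose under Art. 13/14: a declined applicant (or the loan officer reviewing the case) needs to know what drove the score down, not what helped it.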

3. GDPR Article 22 — automated decision-making

Credit decisions that rely solely or primarily on automated processing are subject to GDPR Art. 22, which gives data subjects the right to: (a) not be subject to a solely automated decision with significant effects, (b) obtain human review, (c) express their point of view, and (d) contest the decision. FinScore had to redesign its deployer integration to ensure that: (1) the model output is never the sole basis for a credit decision without a human touch-point, (2) deployers' privacy notices disclose the use of automated scoring, and (3) deployers implement a documented human review process for any applicant who requests it. FinScore now includes mandatory Art. 22 compliance clauses in all deployer contracts.

4. Post-market monitoring data flows (Art. 72)

To monitor model drift and detect emerging discrimination, FinScore needs outcome data from deployers — i.e., whether loans approved by the model were repaid. Many deployer banks refused to share this data, citing their own GDPR obligations and competitive sensitivity. FinScore's solution was to develop a privacy-preserving federated monitoring approach: deployers run a local monitoring agent that computes aggregate fairness metrics without sharing individual loan outcomes, and transmits only the aggregated statistics to FinScore. This is still in deployment as of the time of writing.
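The local monitoring agent's core job — aggregating outcomes by group and transmitting only statistics — can be sketched as below. This is a simplified illustration of the approach described above, not FinScore's agent; the minimum-cell-size suppression is an assumed privacy safeguard, and a production system would likely add further protections:

```python
from collections import defaultdict

def local_fairness_report(outcomes, min_cell: int = 50):
    """Runs inside a deployer's environment.

    outcomes: iterable of (group, approved: bool, repaid: bool).
    Returns only aggregate counts per group; cells smaller than min_cell
    are suppressed so individual loan outcomes cannot be inferred from
    the statistics transmitted back to the provider.
    """
    agg = defaultdict(lambda: {"n": 0, "approved": 0, "repaid": 0})
    for group, approved, repaid in outcomes:
        cell = agg[group]
        cell["n"] += 1
        cell["approved"] += int(approved)
        cell["repaid"] += int(repaid)
    return {g: dict(c) for g, c in agg.items() if c["n"] >= min_cell}
```

Because only group-level counts leave the bank, the deployer can satisfy its own GDPR obligations while still giving the provider the drift and fairness signals Art. 72 monitoring needs.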

Compliance Timeline

How FinScore Structured Its Compliance Programme

Q3 2024

Initial gap analysis against EU AI Act obligations

External legal counsel conducted a 6-week gap analysis. Output: 47 action items.

Q4 2024

Data governance and bias remediation

New data pipeline with bias testing integrated into CI/CD. Fairness metrics defined and baselined.

Q1 2025

Technical documentation (Annex IV) drafted

96-page technical file produced in collaboration with legal, ML, and product teams.

Q1 2025

Risk management system established (Art. 9)

Formal risk register with identified residual risks, mitigations, and review cadence.

Q2 2025

Explainability API rebuild (Art. 13/14)

SHAP-based reason codes integrated into API response. Deployer documentation updated.

Q2 2025

QMS established (Art. 17)

ISO 9001-aligned quality management procedures adopted across the model development lifecycle.

Q3 2025

Internal conformity assessment (Art. 43)

Self-assessment completed. Declaration of Conformity signed 28 July 2025. CE marking applied.

Q4 2025

Post-market monitoring rollout (Art. 72)

Federated monitoring agents deployed with first two deployer banks. Broader rollout ongoing.

Lessons Learned

What FinScore Would Do Differently

Start bias testing earlier

Retrofitting fairness constraints into an existing model is far harder than designing for fairness from the start. FinScore recommends building bias detection into the data pipeline before initial model training, not after.

Negotiate monitoring rights upfront

Art. 72 post-market monitoring requires outcome data from deployers. Trying to add contractual data-sharing clauses retroactively with 40 established clients was extremely difficult. These clauses should be in the initial contract.

Explainability is a product feature

Deployer bank staff said the SHAP reason codes were the most useful new feature in years — they could actually explain declined applications to customers. Compliance work created genuine product value.

The technical file is a living document

Every model retrain, feature change, or performance threshold adjustment requires a technical file update. FinScore now treats Annex IV documentation updates as a required step in every model release process.

Key Articles

Primary Legal References

Art. 9 · Risk management system — lifecycle-wide iterative process
Art. 10 · Data and data governance — training data quality, bias, representativeness
Art. 11 + Annex IV · Technical documentation — comprehensive Annex IV file
Art. 13 · Transparency and provision of information — instructions for use to deployers
Art. 14 · Human oversight — design for effective human control
Art. 15 · Accuracy, robustness, and cybersecurity requirements
Art. 17 · Quality management system — documented QMS
Art. 43 · Conformity assessment — self-assessment for Annex III Cat. 5 systems
Art. 47 · EU Declaration of Conformity
Art. 48 · CE marking obligations
Art. 49 · Registration in EU AI Act database
GDPR Art. 22 · Right not to be subject to solely automated decisions with significant effects
