MTH AI Documentation

Context

Move To Happiness (MTH) wishes to draw up an information document that transparently informs existing and potential customers about the use of AI in the context of MTH's services.

We refer to the annex to this document for the draft. The content can be incorporated into a document with MTH branding.

We recommend expanding the AI documentation in the future with a customer-facing FAQ on AI. Such an FAQ could, for example, address the following questions:

  • Are our employees profiled?
  • Does MTH have access to individual conversations?
  • Are AI systems trained on our data?
  • Is AI used for HR decisions?

Annex

1. Purpose of this AI documentation

This document provides a transparent and concise overview of the use of artificial intelligence within the services of Move To Happiness and in particular the Move To Happiness platform ("MTH").

The document is intended to inform customers, as well as their legal, privacy and security teams in the context of due diligence, contractual evaluations and compliance checks.

This document describes:

  • which AI functionalities are operational today;
  • which AI components are still under development;
  • how MTH deals with data, privacy, security and AI governance;
  • what safeguards currently exist, including human intervention, explainability, and risk mitigation.

MTH develops and uses AI in a human-centric and controlled way, with a particular focus on transparency, safety, risk reduction and governance. The AI functionalities within the platform are not designed as high-risk AI systems within the meaning of the AI Act and are not intended to result in autonomous or binding decision-making about individuals.

Although the AI Act does not directly impose all of its obligations on MTH’s AI functionalities, MTH deliberately applies the underlying principles of the AI Act as a best practice reference in the design and deployment of its AI systems. This includes transparency towards users, human-centric design, appropriate safeguards, and responsible governance around risks and use.

This document does not aim to provide a formal legal qualification of the platform under the AI Act, but instead provides a factual and principled framework that allows customers to correctly understand and assess the use, scope and impact of the AI functionalities within MTH.

Where relevant, this document is substantively consistent with the transparency requirements set out in the AI Act, including the principles stemming from Articles 13 and 50, without implying that these provisions are directly applicable in all cases.

This document is confidential and is under no circumstances intended for further dissemination by the customer.

2. MTH's AI strategy

Move To Happiness uses AI as a supportive tool to make wellbeing accessible to organisations at scale, without replacing individual human coaches.

The AI components within MTH are designed according to three core principles:

  • Human-centred: AI supports, but does not make binding decisions about individuals.
  • Privacy-first: individual interactions remain confidential; organisations receive only aggregated insights.
  • Functionally limited: AI systems operate within predefined use cases and technical guardrails.

AI is used within the MTH service provision exclusively for wellbeing, guidance and the generation of insights, and not for autonomous or binding decision-making about individuals. The AI components within the platform are functionally limited to support, reflection and preventive guidance, and are not designed to make or enforce individual workforce decisions.

In addition to AI functionalities, the platform also provides non-AI-driven dashboards and reports that operate on the basis of predefined indicators and structured data, such as insights about wellbeing, presence or engagement at the individual, team or organisation level. These dashboards can be used by organisations to support reflection, coaching or broader human performance initiatives, but do not use AI-driven decision-making and do not generate autonomous recommendations or conclusions about individual employees.

Use of AI output for individual personnel measures is not foreseen and falls outside the intended use of the platform. Where insights relate to individual employees, this is done exclusively through transparent, non-AI-based visualisations or indicators, with interpretation and any follow-up remaining the responsibility of human actors.

3. Overview of AI components

3.1. Scope and structure

This chapter provides a brief overview of the AI components deployed or under development within the Move To Happiness platform.

For each component, the following aspects are described:

  • the intended purpose and the functional role within the platform;
  • the type of interaction with users or organisations;
  • the main characteristics and functional limitations;
  • the main technical and organisational safeguards.

This overview is descriptive and does not aim to provide a technical or legal qualification of the systems, but provides customers with a functional framework to understand the deployment of AI within the MTH service provision.

The following AI components are discussed:

  • User-level AI buddies, which function as interactive wellbeing chatbots;
  • North Star Agent (Corporate strategist), which operates exclusively with aggregated insights.

3.2. AI buddies

MTH uses multiple AI buddies that act as digital wellbeing assistants for individual end users.

3.2.1. General functioning

All AI buddies:

  • work on the same technical basis;
  • use the same underlying language model (GPT-4.1 via Microsoft Azure, with planned upgrades);
  • operate within central Microsoft guardrails and additional MTH instructions;
  • are clearly identifiable to the user as AI chatbots, in accordance with Article 50(1) of the AI Act.

The AI buddies function as conversational chatbots that support self-reflection and behavioural change in the context of wellbeing. They do not make binding decisions, do not formulate medical diagnoses and do not replace professional care.

3.2.2. Personalisation and differentiation

The differences between the AI buddies are not in the AI model itself, but in:

  • specific instructions and personas;
  • a defined knowledge base that is relevant to the theme of the buddy;
  • the available tools and contextual input.

Depending on the buddy, the AI can use, within predetermined limits:

  • previous interactions with the user;
  • time context and progress within a trajectory;
  • data voluntarily provided by the user;
  • in certain cases external context such as wearable data or nutrition information.

This personalisation is functional and limited, and takes place exclusively within the framework of wellbeing support. No profiling with legal or similarly significant effects is performed.

All AI buddies:

  • are available 24/7;
  • are judgment-free and confidential;
  • do not provide medical diagnoses or therapy;
  • do not issue binding opinions.

The AI buddies do not use subliminal, manipulative or deceptive techniques and do not exploit user vulnerabilities. Users retain autonomy over their choices at all times.

3.3. North Star Agent

In addition to individual AI buddies, MTH offers one organisational-level AI component, the North Star Agent (Corporate Strategist).

3.3.1. General functioning

The North Star Agent:

  • works exclusively with aggregated and anonymised signals;
  • cannot generate or reconstruct individual-level output;
  • is technically and organisationally separate from user-level AI buddies.

3.3.2. Purpose and use

The North Star Agent is intended to:

  • support strategic wellbeing insights;
  • identify patterns and trends at organisational level;
  • provide input to general wellbeing policies and preventive initiatives.

The output of North Star is intended for management, policy or C-level use, and is neither suitable nor intended for individual follow-up, evaluation or personnel decisions.

3.3.3. Safeguards

By combining:

  • aggregation and anonymisation;
  • Microsoft technical guardrails;
  • strict instructions and demarcated datasets;
  • lack of individual identifiers,

individualisation or re-identification of specific persons is technically excluded. The North Star Agent cannot answer questions about individual employees, nor make recommendations about recruitment, dismissal, promotion or evaluation.
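As a purely illustrative sketch (not MTH's actual implementation), an aggregation safeguard of the kind described above can be expressed as a minimum-group-size rule: aggregates are suppressed entirely whenever the group is too small to prevent tracing results back to individuals. The threshold of 5 below is a hypothetical example.

```python
# Illustrative sketch only: a minimum-group-size guard for aggregated
# wellbeing output. The threshold value is hypothetical, not MTH's
# actual configuration.

def aggregate_wellbeing(scores: list[float], min_group_size: int = 5) -> dict:
    """Return an aggregate only when the group is large enough.

    Groups below the threshold are suppressed entirely, so no
    individual-level signal can be reconstructed from the output.
    """
    if len(scores) < min_group_size:
        return {"status": "suppressed", "reason": "group below minimum size"}
    return {
        "status": "ok",
        "group_size": len(scores),
        "average": sum(scores) / len(scores),
    }
```

Under such a rule, a small team yields no output at all, while a sufficiently large group yields only a group-level average.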

3.4. Supporting AI tools

In addition to the AI components described above, MTH makes limited use of supporting tools such as HumanDesign.ai.

HumanDesign.ai is used within the MTH platform as a supportive personality and reflection tool, based on user-provided data, such as date of birth and place of birth.

The output of this tool provides individual insights into preferences, energy profiles and personality traits, similar to other reflection models such as Big Five or DISC. These insights can be used by the AI buddies as contextual input to help tailor guidance and interactions to the user.

The use of HumanDesign.ai:

  • does not lead to autonomous or binding decisions;
  • is used in a coaching and wellbeing context;
  • does not result in automated HR decisions;
  • is not used by MTH for recruitment, promotion, dismissal or disciplinary action.

3.5. Non-AI functionalities

The MTH platform also includes dashboards and reports that operate in a rule-based manner on predetermined indicators, such as absenteeism or presenteeism. These functionalities:

  • do not use autonomous AI;
  • do not qualify as AI systems within the meaning of Article 3(1) of the AI Act, as they do not perform inference and only apply deterministic rules;
  • are distinguished from the AI components described in this document.
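To illustrate the distinction, a rule-based indicator of this kind applies fixed, predefined thresholds to structured data and performs no inference. The sketch below is illustrative only; the thresholds and band names are assumptions, not the platform's actual rules.

```python
# Illustrative sketch: a deterministic absenteeism indicator. The
# thresholds are hypothetical; the point is that the logic is a fixed
# rule applied to structured data, not a learned model performing
# inference.

def absenteeism_band(absence_days: int, period_days: int = 30) -> str:
    """Classify an absence rate into a fixed, predefined band."""
    rate = absence_days / period_days
    if rate < 0.05:
        return "low"
    if rate < 0.15:
        return "elevated"
    return "high"
```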

4. Data processing in the context of AI

4.1. Data categories

Depending on the use, AI can process the following categories:

  • textual input from users;
  • contextual wellbeing information;
  • metadata (such as time and type of interaction).

Health data is processed only with the user's permission, for example when the user links a wearable.

4.2. Data flows

High-level flow:

  • User interacts with AI via the app
  • The prompt is processed within the defined AI system (see the three layers of control in section 5.1)
  • Output is generated and returned to the user
  • Logging is done within a European hosted environment

There is no automatic transfer of (non-aggregated) personal data of the employee (user) to their employer or third parties.
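The four steps above can be sketched as a simple pipeline. This is illustrative only; the function names (`apply_guardrails`, `run_guarded_model`) are assumptions for the sketch, not MTH's actual code.

```python
# Illustrative sketch of the high-level data flow. All names are
# hypothetical. Logging stays within an EEA-hosted store, and nothing
# is forwarded to the employer or third parties.

def handle_user_prompt(prompt: str, eu_log: list) -> str:
    guarded = apply_guardrails(prompt)           # central guardrails + MTH instructions
    output = run_guarded_model(guarded)          # defined AI system generates output
    eu_log.append({"prompt": guarded, "output": output})  # EEA-hosted logging only
    return output                                # returned to the user, nobody else

def apply_guardrails(prompt: str) -> str:
    # Placeholder: real guardrails would filter inappropriate input.
    return prompt.strip()

def run_guarded_model(prompt: str) -> str:
    # Placeholder: stands in for the hosted language model call.
    return f"[coaching response to: {prompt}]"
```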

5. Technical and organisational measures

5.1. AI within an evidence-based framework

Our AI components do not simply draw information from the internet. This distinguishes them from unconstrained generative AI, which carries the risk of inconsistent or incorrect answers, unintended medical claims, and black-box behaviour.

Our AI components operate within an evidence-based framework that allows us to mitigate the risks of generative AI. In practice, this means that the output of our AI components is limited and guided by science-based methodologies selected by our experts.

For this, we implement three layers of control:

Layer 1

Authorised content as a source

Our AI components base responses solely on MTH's content library. These are videos, articles and exercises developed by psychologists, burnout coaches and vitality experts.


Layer 2

Methodological guidance

The interaction is based on positive psychology with a focus on personality, resilience and growth, according to the methodologies that are central within the MTH platform.


Layer 3

Filtering

Before an answer reaches the user, we check whether the answer is in line with the protocols of MTH. This ensures that every answer is evidence-based and strictly within the preventive frameworks of the MTH platform.
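A filtering step of this kind can be sketched as follows. The blocked-term list and fallback message are assumptions for illustration; MTH's actual protocols are not reproduced here.

```python
# Illustrative sketch of an output filter: before an answer reaches the
# user, it is checked against protocol rules. The term list below is a
# hypothetical stand-in for MTH's actual protocols.

BLOCKED_TERMS = ("diagnosis", "prescribe")  # hypothetical examples

def filter_answer(answer: str) -> str:
    """Pass the answer through only if it stays within preventive scope."""
    lowered = answer.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return ("Let's stay within coaching; for medical questions, "
                "please consult a professional.")
    return answer
```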


5.2. AI technology

MTH uses AI as a controlled tool to promote wellbeing, where privacy and technical integrity are top priorities. MTH uses large language models (LLMs), including:

  • OpenAI GPT-4.1 via European hosting: MTH uses Microsoft Azure infrastructure with hosting within the European Economic Area (EEA);
  • Specialised models: additional algorithms optimised for speed and accuracy are used for specific tasks.

Key safeguards:

  • User data is not used to train OpenAI or Microsoft models;
  • Prompts are processed solely to generate the requested response;
  • Each AI agent can only call predefined functions or perform predefined actions within the secure environment of the platform;
  • No autonomous query generation: to mitigate security risks, the AI never generates queries on MTH core systems on its own.

Monitoring and accountability:

  • Application of Microsoft AI Guardrails to proactively filter inappropriate content, bias or malicious output.
  • Monitoring: All AI interactions are logged for safety and quality purposes.
  • Human validation: AI output is primarily intended for in-app guidance and internal insights, and not for public dissemination or external communication without human assessment.

5.3. Monitoring and safeguards

MTH applies a combination of technical, procedural and organisational measures to manage risks associated with the use of AI. These safeguards aim to support users, prevent inappropriate or harmful outputs and ensure safety, without individual AI interactions being continuously monitored by humans.

5.3.1. Human control and supervision

The AI functionalities within the MTH platform do not function autonomously, but within a human-designed and managed framework. The CTO and the product team determine the instructions, knowledge sources, authorised functionalities and guardrails of the AI systems and ensure their correct functioning. AI output is generated within predefined parameters and use cases, keeping the scope and impact of the systems functionally limited.

There is no real-time human validation of individual responses. However, there is human control at the system level, where design choices, updates and adjustments to the AI components are under human responsibility.

5.3.2. Escalation and referral procedures

For situations where AI signals indicate possible serious mental or medical problems, MTH uses a fixed referral protocol per AI buddy. These protocols determine when an AI system should limit its role as a digital coach and actively encourage the user to seek additional or professional support.

For example, when signals indicate severe depressive symptoms, suicidality, trauma or other red flags, the interaction is not limited to coaching within the AI system. In such cases, the user is explicitly reminded of the importance of professional help, such as contact with a general practitioner, psychologist or other suitable healthcare provider. The AI does not take on a diagnostic or therapeutic role and avoids further substantive guidance that could replace professional care.

This approach ensures that safety takes precedence and that AI systems are not deployed outside their preventive and supportive objective.

5.3.3. Explainability and bias mitigation

The AI functionalities are functionally limited and designed to generate understandable, non-technical output. There is no use of black-box decision logic towards customers or organisations. The systems do not carry out profiling with legal or similarly significant effects and do not take decisions that affect users’ rights or obligations.

By combining design choices, guardrails, continuous monitoring at system level and fixed referral procedures, MTH reduces the risk of unwanted bias, use outside the intended context, and unintended impact on users.

5.4. Security measures

MTH applies appropriate security measures, including:

  • Role and access management;
  • logging of AI responses through Microsoft Foundry;
  • separation of individual and organisational data;
  • European hosting.

6. AI governance

6.1. Responsible use of AI

We ensure the security of our AI functionalities by including them in the central MTH security framework. This ensures that the underlying controls are in place, allowing customers to focus on wellbeing delivery.

MTH provides internal procedures and training for employees involved in the development, support and management of AI functionalities, paying attention to responsible use, risk awareness and functional boundaries of AI systems.

Although the AI functionalities of MTH do not qualify as high-risk AI systems and the associated AI literacy obligations of the AI Act do not formally apply, MTH voluntarily applies principles of transparency and conscious use. Customers and administrators therefore receive clear guidelines on the nature and role of AI functionalities within the platform, so that they are deployed in an informed and appropriate manner.

6.2. Innovation with safeguards

MTH strives for continuous improvement of its wellbeing platform. For each new AI application, we use a ‘Safety-by-Design’ process before it is made available:

Step 1

Risk assessment

We carry out a prior risk assessment as standard to ensure ethical and technical safety.


Step 2

DPIA review

If the processing requires it, we carry out a data protection impact assessment (DPIA) to optimally protect your data.


Step 3

Compliance

Each new feature is assessed against the most up-to-date regulations, including the EU AI Act.


Step 4

Transparency

We proactively inform you about new functions, their operation and how they contribute to the wellbeing of your employees.


7. Questions

Do you have any questions about this document or our use of AI? Please contact Kenneth Van Daele by email at Kenneth.van.daele@movetohappiness.com.