Sprinklr and the EU AI (Artificial Intelligence) Act
Sprinklr’s product offering has been powered by Artificial Intelligence (AI) for several years. In light of technological advancements, heightened media scrutiny and evolving regulations, Sprinklr recognizes the need for greater transparency and explainability, reflecting our continued commitment to Responsible AI.
This article provides an overview of the European Union’s Artificial Intelligence Act (EU AI Act), its impact on Sprinklr services and how we are preparing for the EU AI Act’s effective date.
Please note that this document is intended to be used for information purposes only and does not constitute legal advice. Customers should consult with their internal legal teams to assess the impact of the EU AI Act and other similar legislation on their processing activities and use of Sprinklr Services.
- What is the EU AI Act?
- What does the EU AI Act apply to and what is the focus?
- Who does the EU AI Act apply to?
- When does the EU AI Act apply?
- When does the EU AI Act not apply?
- How does the EU AI Act categorize various AI Systems in terms of risk?
- Unacceptable Risk
- High Risk
- Low Risk
- What is Sprinklr’s Position on the EU AI Act?
- What is Sprinklr doing in preparation for the EU AI Act?
What is the EU AI Act?
The EU AI Act is a ground-breaking, comprehensive legal framework regulating the use of AI in the European Union (EU). Its intention is to promote innovation whilst ensuring widespread trust in AI systems by advocating for the safe, ethical, human-centric and responsible development, use, marketing and sale of AI Systems. The EU AI Act was formally adopted by the European Council on May 21, 2024, and enters into force 20 days after its publication in the Official Journal of the EU. Here are some key implementation milestones:
- By 6 months after entry into force: Prohibitions on unacceptable risk AI apply.
- By 9 months after entry into force: Codes of practice for General Purpose AI (GPAI) must be finalized.
- By 12 months after entry into force: GPAI rules apply.
- By 24 months after entry into force: Obligations on high-risk AI systems listed in Annex III take effect.
- By the end of 2030: Obligations apply to certain AI systems within large-scale IT systems established by EU law.
What does the EU AI Act apply to and what is the focus?
The EU AI Act applies to AI Systems, defined as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
The EU AI Act focuses on AI systems placed on the market, rather than the development of the underlying models or components underpinning them. AI models require additional components, such as a user interface, before they can be marketed and used as AI systems. Therefore, while AI models are key components of AI systems, they do not amount to an AI system on their own. At Sprinklr, our products comprise AI systems – but our features can also be used independently, and therefore, for the purposes of this document, references to AI systems include our product features.
The EU AI Act has four main areas of focus:
⛔ Risk Classification: Classification of AI Systems based on their potential impact on human rights and values and the establishment of a risk-based approach to regulate AI technologies across various sectors.
➕ Development and Use Requirements: Requirements for the development and deployment of AI Systems, focusing on data quality, transparency, human oversight, and accountability.
⭐ Ethical Considerations: Ethical questions related to AI Systems, including clear values and rules to ensure safety whilst promoting innovation and investment in new technology.
✅ Compliance: Establishment of a European Artificial Intelligence Board to oversee implementation of the EU AI Act and ensure compliance.
Who does the EU AI Act apply to?
The EU AI Act applies to “Deployers” and “Providers” of AI systems, as well as “Importers,” “Distributors” and “Operators.”
- Providers are the developers of AI systems or models who place them on the market, i.e. Sprinklr.
- Deployers are the users of the AI system in a professional capacity, i.e. Sprinklr’s customers.
- Importers are those who place an AI system from a non-EU country on the market in the EU.
- Distributors are those in the supply chain, other than the Provider or Importer, who make the AI system available in the EU.
- Operators include any of the above.
While the EU AI Act is applicable to both Deployers and Providers, the primary responsibility for compliance rests with the Provider. Customers should consult with their own counsel on the scope of applicability for Deployers.
When does the EU AI Act apply?
The EU AI Act applies if the Provider or Deployer of the AI system is located in the EU, regardless of where the output of the AI system is used or who it impacts. In terms of extra-territoriality, the EU AI Act also applies to a Deployer, Provider, Importer, Distributor or Operator located outside of the EU if the AI system is offered to EU markets, or the output of the AI system is intended, or likely, to impact EU citizens. If the EU AI Act applies, a risk assessment should be conducted to properly review the level of risk and impact before the AI system is placed on the market. In circumstances where the risk is deemed high, a fundamental rights impact assessment will be mandatory.
When does the EU AI Act not apply?
Provider Applicability – AI systems in the EU market: The EU AI Act does not apply if the Provider does not use the output of the AI system in the EU and does not put the AI system into service or on the market in the EU. However, since all of Sprinklr’s AI systems are available in the EU, the EU AI Act will be directly applicable to Sprinklr.
Deployer Applicability – EU Operations: The EU AI Act does not apply if the Deployer is not based in the EU and the output of the AI system is not used in the EU. For example, the EU AI Act may not be applicable to some of Sprinklr’s customers who have no entity or operations in the EU and/or no EU customer base, though customers should always consult with their internal legal teams on questions of applicability.
Use Case Exceptions Potentially Relevant to Sprinklr: The EU AI Act will not apply in the following cases:
- Research & Innovation: The AI system is developed and used solely for scientific research and innovation.
- Testing & Development: The AI system is in testing, development or prototyping stages for product-oriented purposes and has not yet been placed on the market. Once the AI system is placed on the market, the EU AI Act will apply.
- Free and Open-Source AI Models: The AI Model is made available through free and open repositories. If provision of the open-source model is monetized in any way or integrated into an AI system, then the EU AI Act will apply.
In addition to the use cases above, there are other use cases in which the EU AI Act may not be applicable, such as when AI systems are used for national security or military purposes, by public authorities or law enforcement in cooperation with EU member states, or when AI systems are used by individuals solely in a personal, non-professional capacity.
How does the EU AI Act categorize various AI Systems in terms of risk?
Unacceptable Risk
AI Systems that pose clear threats to human safety or fundamental rights are prohibited.
Sprinklr does not currently develop, and at this time has no plans to develop, AI Systems which would likely be classified as “Unacceptable Risk.”
AI systems which are classified as “Unacceptable Risk” under the EU AI Act are not permitted, and the ban on such AI Systems takes effect six months after the Act enters into force.
Examples of Prohibited AI Systems Include:
- AI Systems that deploy subliminal, purposefully manipulative, or deceptive techniques which distort behaviour, causing individuals or groups to make decisions they would not otherwise have made, in a manner that is likely to cause significant harm.
- AI Systems that expand or create facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
- AI Systems that are used for real time biometric identification in public spaces by law enforcement, unless an exception applies, such as a targeted search for exploited or missing persons or the prevention of imminent threat to life or safety.
- AI Systems that exploit vulnerabilities (age, disability, social/economic status) with the objective or effect of materially distorting behaviour that is likely to cause significant harm.
- AI Systems that make risk assessments of individuals to predict the likelihood of an individual committing a criminal offence based on profiling or assessment of personality traits and characteristics.
- AI Systems that are used for biometric categorisation based on biometric data to infer sensitive characteristics such as race, sexual orientation, political opinions etc.
- AI Systems that are used to classify or evaluate social behaviour or personality characteristics resulting in social scoring leading to unrelated, unjustified, or disproportionate detrimental or unfavourable treatment.
- AI Systems that are used to infer emotions of individuals in the workplace or educational institutions unless for medical or safety reasons.
High Risk
AI Systems that may be considered a threat to human safety or fundamental rights and require completion of an impact assessment.
Based on our current review, it seems unlikely that many of Sprinklr’s AI Systems would be considered high risk. In the development and intended use of Sprinklr’s AI systems, there is no significant risk of harm to the health, safety or fundamental rights of natural persons.
In addition, many of Sprinklr’s AI Systems are intended for the following use cases, which the EU AI Act does not consider to be high-risk:
- Perform narrow procedural tasks
- Improve the result of activities previously completed by a human
- Detect decision making patterns or deviations from previous patterns (and not meant to replace or influence the previously completed human assessment)
- Perform a preparatory task for an assessment relevant to one of the high-risk use cases
However, please note that profiling of individuals by an AI System will always be considered high risk. Sprinklr’s AI Systems should not be used for profiling purposes.
Sprinklr will be conducting internal assessments of its AI Systems to determine whether any may fall into the high-risk category and, if so, what controls can be put in place. Where required for compliance with legal obligations, Sprinklr will share relevant, non-confidential information with customers, where available, to support customers in their own impact assessments.
Examples of High Risk AI Systems Include:
- AI Systems intended to be used as products or safety components requiring third party conformity assessment under EU harmonization laws.
- Remote biometric identification systems (not including AI Systems for biometric verification where the sole purpose is to confirm an individual is who he or she claims to be).
- AI Systems intended to be used for biometric categorisation according to sensitive characteristics.
- AI Systems intended to be used for emotion recognition.
- AI Systems intended to be used as safety components in critical infrastructure or the supply of gas, water, heating, or electricity.
- AI Systems intended to determine access or admission or assignment to educational or training levels or establishments.
- AI systems intended to be used to evaluate learning outcomes and influence the learning process.
- AI Systems intended to be used for assessing appropriate level of education an individual will receive.
- AI Systems intended to be used for monitoring and detecting prohibited behaviour of students in the context of vocational training.
- AI Systems intended to be used for recruitment or selection of natural persons, e.g. placing targeted job adverts, analysing candidates, filtering applications, or evaluating candidates.
- AI Systems intended to be used to make decisions affecting the terms of work-related relationships, promotions or terminations, or the allocation of tasks based on behaviour or personal traits or characteristics, or to monitor or evaluate the behaviour of persons in such relationships.
- AI Systems intended to be used by public authorities to evaluate eligibility of individuals to public services and benefits.
- AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, except for AI systems used for the purpose of detecting financial fraud.
- AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance.
- AI Systems intended to be used by law enforcement, migration, asylum, and border control management or in the administration of justice or democratic processes.
Low Risk
AI Systems that are not high risk but are subject to specific transparency requirements.
Most of Sprinklr’s AI Systems are likely to fall into the low or minimal risk categories based on our current assessment. The EU AI Act does not provide much guidance on low-risk AI Systems but suggests most general-purpose AI systems (i.e., AI Systems based on general purpose models) will fall into this category unless integrated into high-risk systems. There are certain obligations to comply with, including model evaluations, documentary evidence and the provision of transparency notices.
Low Risk AI Systems may include:
- AI for direct interaction with people, e.g. chatbots.
- Spam filters.
- AI-powered video games.
- Algorithms/recommender systems.
- Predictive text.
- General purpose AI.
What is Sprinklr’s Position on the EU AI Act?
Sprinklr’s product and service offerings have been powered by AI for well over a decade, and Sprinklr has always been passionately committed to fair and ethical AI practices. With that in mind, Sprinklr welcomes and supports the regulation of AI systems and commends the EU for leading in this regulatory space.
The security and privacy of individuals’ data and the protection of individuals’ rights are integral to Sprinklr’s business model. Trust is a core value at Sprinklr, and adopting legislative principles and guardrails around AI will be critical to cementing that trust in our AI, which we know can deliver significant benefits for us and our customers, their businesses and their customers, as well as wider society.
Based on our current assessment, Sprinklr does not provide Prohibited AI Systems, and the majority of Sprinklr’s AI Systems are unlikely to fall into the high-risk category. Sprinklr’s AI Systems are not intended to have a negative or harmful effect on individuals or their rights or safety. Sprinklr develops and deploys customer experience management technology intended to make our customers happier and, in turn, to make their customers happier.
Sprinklr is a service provider and processor for our customers and a “Provider” under the EU AI Act. Conversely, our customers are controllers and “Deployers” of AI Systems and have sole discretion as to how Sprinklr’s AI Systems are ultimately used. It is imperative that our customers use our AI Systems in accordance with the intended use case, our commercial terms, including Sprinklr’s Acceptable Use Policy, and any specific terms governing use of Sprinklr’s AI-powered features or products.
All customers should consult with their own legal counsel to understand the impact of, and obligations stemming from, the EU AI Act. Such consultation should include a review of the intended use case of the Sprinklr AI System, as well as the customer’s proposed use cases, to ensure that the proposed use of the AI System complies with the requirements of the EU AI Act.
What is Sprinklr doing in preparation for the EU AI Act?
- Sprinklr has established an internal AI Governance Committee to oversee and promote the fair and ethical use of AI, in accordance with our core values and regulatory requirements.
- We are actively reviewing our AI Systems to assess whether any present high risks.
- We plan to create model and feature cards following our internal reviews, where appropriate, so we can provide legally required information to customers if requested.
- We are enhancing guardrails for new product development that may include AI Systems to flag and evaluate associated risks under the EU AI Act framework.
- We will consider potential use cases for our AI Systems as well as potential unintended use cases that could turn low risk AI Systems into high-risk AI Systems.
- We plan to carry out internal training on new processes and legislation to promote internal awareness.
- We plan to create customer facing collateral for greater transparency and explainability.
As part of Sprinklr’s ongoing commitment to transparency with our customers, we will also be producing more customer documentation, blogs and thought leadership in this space, both to be intentional in our practices and to provide our customers with resources and tools that support their independent management of the regulatory changes affecting the use of AI Systems.
As always, we welcome the feedback from, and partnership with, our customers as we embark on this exciting journey.
If you have further questions about Sprinklr’s compliance with the EU AI Act or other pending regulations concerning the use of AI, please visit Sprinklr’s Trust Centre or reach out to your success manager.