
Fintech’s AI/Machine Learning Gamble: A Non-Compliant Technology Embraced for Compliance Catalysis?

Abstract:

The interplay of AI and machine learning has equipped the flourishing field of fintech to ensure ethical, legal, and regulatory adherence, yet the technology has struggled to establish a sound self-compliance framework of its own, one robust enough to survive the storm of data protection regulation. This research paper, titled "Fintech’s AI/Machine Learning Gamble: A Non-Compliant Technology Embraced for Compliance Catalysis?", examines the paradox of burdening a still-immature, non-compliant mechanism with compliance duties. The first section of the paper looks at AI-driven compliance and AI compliance, with an emphasis on how AI is being used in compliance processes and on how their outcomes compare with traditional compliance methods. The section on the AI model's core components breaks down the intricate factors at AI's genesis that cause this compliance failure. The paper then discusses the concerns raised over AI's non-compliance in fintech, namely bias, opacity, and data privacy, and how these compromise its ethical standing and user reliability, before proposing adaptable solutions and strategies to improve AI governance across sectors in line with evolving regulations and ethical standards. The objective of this paper is to provide a holistic understanding of the risks and opportunities associated with AI in fintech compliance, thereby offering actionable insights for better integration of AI technologies in the industry.

Research Methodology:

This paper examines the scope and drawbacks of using artificial intelligence and machine learning in fintech compliance through the analysis of secondary data drawn from a wide range of sources. It draws material from government resources such as legal databases, judgments, and academic publications, and from regulations such as the GDPR, the CCPA/CPRA, anti-money laundering laws, and the EU Artificial Intelligence Act, to gain a full understanding of the research material. The thematic analysis focuses on the compliance of AI itself, its effectiveness, and potential risks such as bias and data privacy concerns. The paper also reviews existing literature on AI's impact on fintech and suggests how AI governance and compliance may be improved.

Literature Review:

On AI and Compliance Challenges: Goodman and Flaxman (2017) throw light on the GDPR's restrictions on AI's "black-box" mechanisms in the EU in matters of compliance and transparency. Calo (2017) captures the difficulty regulators face in keeping pace with rapidly changing AI and presents a case for adaptive regulatory mechanisms that mitigate its risks while tapping into its benefits.

 

On Bias and Fairness in AI: Barocas, Hardt, and Narayanan (2019) provide an in-depth scholarly treatment of the risk of AI systems reinforcing existing biases. Their study gives critical insight into how socially biased algorithms might affect the outcomes of fintech-based systems. Lodge and Mennicken (2020) analyse how AI has substantially transformed the financial landscape, its pros and cons, and the regulatory practices that need to be amended with changing times.

Together, these works reinforce the view that AI sits at the heart of financial regulation and that its effects cut both ways, making it necessary to take a new approach to regulatory compliance for an unprecedented future.

Introduction:

Lately, the "urge to adopt" AI/ML within fintech has been a technological leap that has radically transformed many financial services, from customer onboarding to fraud detection and personalized financial advice. It has also opened a complex landscape of challenges in complying with a plethora of regulations directed at protecting consumers from inherent bias and privacy breaches. In those circumstances, a diligent, step-by-step roadmap for AI regulatory compliance becomes the anchor for ensuring ethical use.

Before diving into the solutions, the key regulations AI must adhere to should be set out. The European Union's General Data Protection Regulation (GDPR) stipulates lawful, fair, and transparent handling of personal data by organizations, grants the rights of access and erasure ("the right to be forgotten"), prioritizes the individual when it comes to personal data, and requires "data protection by design and by default", all of which bear on AI systems that process sensitive information. The California Consumer Privacy Act (CCPA), as amended by the California Privacy Rights Act (CPRA), creates an independent enforcement body, the California Privacy Protection Agency (CPPA), introduces the term "sensitive personal information", and establishes strict data protection rules under Sections 1798.100-1798.125: Californians have the right to know what kind of personal data is being collected and to whom it will be sold or passed, to request deletion of data held in an entity's custody, and to opt out should that entity decide to sell the data. The Payment Card Industry Data Security Standard (PCI DSS) contains guidelines for handling cardholder data, mandating encryption, secure storage, and regular monitoring and testing of networks hosting payment information to avoid breaches of sensitive data. The EU Artificial Intelligence Act, which recently came into force at the beginning of August, drastically transforms the way AI is applied across various industries, including fintech, by segmenting AI systems into categories of risk: unacceptable, high, limited, and minimal. For high-risk systems, which include credit scoring and identification processes, the requirements are tighter and the fines far steeper. This paper seeks to resolve the paradox between using AI- and ML-based technologies to enhance the compliance process within the fintech domain and wrestling with the inherent risks of non-compliance posed by these very technologies. In providing a framework for understanding the specific regulations guiding AI use, the research hopes to locate the nuanced intersection of technology, law, and, most vitally, ethics in the financial sector.

Discussion:

  1. AI Compliance and AI-Driven Compliance: Fintech has pushed new breakthroughs in technology, especially via blockchain, AI, and APIs, to improve financial services. It has truly emerged as a phenomenon for the finance industry, showing exponential growth over the last decade, with a market valued at $294.74 billion as of 2024 and expected to reach $340.10 billion at a CAGR of 18.5% during the forecast period of 2023-2030. Within the fintech industry itself, AI is the engine with the greatest potential to drive innovation and enhanced efficiency. The finding of the 2022 Deloitte survey that finance leads in AI adoption was corroborated by data: 69% of finance-related AI projects were owned by finance itself. Among AI-driven functions, compliance has carved a niche for itself as one of the integral ones: currently 15% of compliance roles are AI-driven, a figure expected to rise to 55% within the next two years. Thus, the analogy is this: fintech is the growing industry leading the entire finance sector; AI has proven to greatly enhance the efficacy of fintech; and compliance has been the major area in which AI has shown its potential, ensuring that this fast-paced growth is sustainable, ethical, and secure. Although it may appear a comical piece of wordplay, AI compliance as a term is the absolute antithesis of AI-driven compliance. The former refers to the practice of ensuring that AI-powered systems themselves operate within the bounds of relevant laws, regulations, and ethical standards, while the latter involves using AI to ensure that a company complies with applicable country-specific laws, regulatory requirements, and internal company directives. Throwing light on both is pertinent to conclude how the interplay between them affects fintech's overall performance and whether AI was a gamble in that respect.


  2. Outcomes of AI-Driven Compliance: Although it is quite evident that AI has been meddling with the law since its inception, its benefits outdo its drawbacks. The tasks associated with compliance, namely the prediction, detection, management, and mitigation of existing and future governance risks, irregularities, and fraud arising from violations of legislation and standards, are flexibly addressed and promptly updated to ensure continued adherence before they become material or escalate. This is achieved by implementing sophisticated algorithms that facilitate real-time monitoring: scanning, collecting, evaluating, and correlating vast numbers of past and present datasets (from internal and external sources) and identifying and learning patterns through machine learning, with its subcategories of supervised, unsupervised, and reinforcement learning, natural language processing, deep learning, and neural networks. These frameworks, many of them built on strong machine learning architectures, better position AI to keep up with shifting and nuanced regulatory landscapes, ensuring continued alignment with changing standards, developing scalable and customizable solutions tailored to the particular needs of different industries, and providing transparency and accountability through a clear understanding of how AI algorithms work.
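
The real-time, pattern-learning monitoring described above can be illustrated with a minimal sketch. The example below trains an unsupervised IsolationForest on hypothetical historical transactions and scores incoming ones for compliance review; the features, figures, and threshold are illustrative assumptions, not taken from any system discussed in this paper.

```python
# Minimal sketch: unsupervised anomaly detection for transaction monitoring.
# All features, figures, and thresholds below are hypothetical illustrations.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Hypothetical historical transactions: [amount, hour_of_day, txns_last_24h]
historical = np.column_stack([
    rng.lognormal(mean=4.0, sigma=1.0, size=5000),   # transaction amounts
    rng.integers(0, 24, size=5000),                  # hour of day
    rng.poisson(lam=3, size=5000),                   # recent activity count
])

# Learn what "normal" activity looks like from past data (unsupervised learning).
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(historical)

# Score new, incoming transactions as they arrive.
incoming = np.array([
    [55.0, 14, 2],       # ordinary daytime purchase
    [25000.0, 3, 40],    # large amount, odd hour, burst of activity
])
flags = model.predict(incoming)            # -1 = anomalous, 1 = normal
scores = model.decision_function(incoming)

for txn, flag, score in zip(incoming, flags, scores):
    status = "escalate for compliance review" if flag == -1 else "ok"
    print(f"txn={txn} score={score:.3f} -> {status}")
```

In practice, such a flag would typically feed a human-in-the-loop review queue rather than trigger an automated block, which is one way the goal of addressing irregularities before they become material or escalate is operationalised.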


  3. Outcomes of AI Compliance: While AI is increasingly being used to enhance compliance across sectors, it has been observed to escape accountability for being non-compliant itself. In fintech alone: the European Commission has itself admitted that, on preliminary estimates, nearly 60% of AI systems in use fail to live up to the transparency and accountability standards expected under its proposed AI Act. McKinsey estimates that 30% of deployed AI initiatives fail to attain the value intended of them, with failure to comply among the reasons. An AI Now Institute report suggests bias exists in 40% of already functioning AI systems and that they propagate unfair treatment. According to Gartner surveys, more than 50% of organizations with AI initiatives have not developed and put in place a fully formed AI governance framework. According to Forbes, nearly 45% of organizations that implement AI are unable to remain in compliance, as a result of changing regulations and internal controls. A few well-known cases illustrate the compliance problem with AI in fintech. ZestFinance, a fintech startup using machine learning to evaluate credit risk, came under fire for algorithms that used non-traditional data sources and therefore potentially led to inadvertent discrimination; the case raised concerns about fair lending laws and about the lack of transparency inherent in AI decision-making processes. Credit Karma faced a backlash over AI-driven advertising practices that combed through users' personal financial data to target them with specific financial products; questions of consent and of compliance with privacy regulations such as the GDPR and CCPA were raised over this use of data. Robinhood was criticized for using AI algorithms that analysed trading behaviour to offer personalized recommendations; the issues that arose concerned how transparent these algorithms actually were and whether they met the protection requirements of user privacy and financial regulations. Plaid, a fintech firm offering APIs for banking services, landed in trouble over accusations that it collected user data without proper consent, raising legal concerns over its data-sharing process. The neobank Chime has faced accusations over its use of AI to monitor transactions, mostly for fraud detection; while the aim is greater security, it has been questioned whether such algorithms create false positives and what the impact on privacy may be when they lack accompanying transparency. Transaction monitoring has likewise brought PayPal's privacy and compliance issues to light.


  4. Main Elements of the AI Model: Understanding the fundamental elements of the AI model is necessary to comprehend what could have caused such unfavourable results:

    i) Input Component: This stage gathers initial data and settings from a variety of sources, such as sensors, databases, and human input.

    ii)  Processing Component: During this stage, the model uses algorithms to analyse the incoming data and generate content, predictions, and judgments. Regular learning improves this processing.

    iii) Output Component: After processing, the results are converted from machine representations into user-facing forms such as reports, dashboards, charts, summaries, and other visualizations and insights tailored to specific requirements. Although the input, processing, and output parts can be grouped neatly in theory, the picture is much more complex in real life.

    The complexity of the data being processed, intricate pre-processing errors and presumptions, outdated system versions, and erroneous conversion and interpretation of machine output, layered on top of pre-existing over-simplification and bias mechanisms, all combine to skew the final output.
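
A minimal sketch of the three components just described, and of how a quiet pre-processing presumption can skew the final output, is given below. The pipeline, the scoring rule, and the figures are hypothetical illustrations only.

```python
# Minimal sketch of the input, processing, and output components described above,
# and of how an upstream pre-processing assumption skews the result.
# All names, rules, and figures are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Applicant:
    raw_income: str      # input often arrives untyped, e.g. from a form or API
    years_banked: int

def input_stage(record: dict) -> Applicant:
    # Input component: gather raw data and settings from sources (forms, databases, APIs).
    return Applicant(raw_income=record["income"], years_banked=record["years_banked"])

def processing_stage(a: Applicant) -> float:
    # Processing component: a toy scoring rule standing in for the model.
    # Pre-processing presumption: a blank income silently becomes 0, the kind
    # of quiet error the text above describes, which skews the output.
    income = float(a.raw_income) if a.raw_income.strip() else 0.0
    return 0.7 * min(income / 100_000, 1.0) + 0.3 * min(a.years_banked / 10, 1.0)

def output_stage(score: float) -> str:
    # Output component: translate the score into a human-readable decision.
    decision = "approve" if score >= 0.5 else "refer to manual review"
    return f"score={score:.2f} -> {decision}"

for record in [{"income": "85000", "years_banked": 6},
               {"income": "", "years_banked": 6}]:   # income missing, not truly zero
    print(output_stage(processing_stage(input_stage(record))))
```

The second applicant is identical to the first except for a missing income field, yet the silent default pushes the score below the threshold, a small-scale analogue of the skew described above.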


  5. Concerns about AI’s Compliance in Fintech: Being immensely vulnerable to legal and ethical lapses, AI presents serious compliance issues for the financial sector. A noteworthy occurrence that highlights this is the theft of US$2 million from 9,000 consumers, believed to be the largest cyber breach in Britain's financial history. Constantly replicating a biased system carries the bias forward, resulting in further faulty fraud detection and loan approvals and making the problem endemic to the entire AI mechanism. Besides, the opacity of the decision-making process, widely known as its "black box" characteristic, raises the threat of privacy breaches involving private data used without approval. In addition, a deficiency of diversity in training data may cause benign actions to be mistakenly labelled as dangerous, or risky ones to be overlooked entirely. Even the very self-learning and adaptability of AI raise additional ethical issues.


6. Solutions: To leverage AI as an asset for ensuring compliance, a comprehensive regulatory framework is essential. This involves using AI ethically to avoid biases and privacy infractions while reducing legal risks, thus fully capitalizing on AI’s benefits without incurring significant losses. Building consumer trust through transparency and accountability, strengthening data protection, safeguarding sensitive information, and fostering innovation in a secure environment are key to sustainable and responsible AI practices. An effective approach to AI compliance includes:


  • Comprehensive Assessment: Conduct a thorough review of your compliance processes, document management systems, and data sources; identify key documents and workflows that could benefit from AI; and determine the types and formats of the documents involved.

  • Understand Regulatory Requirements: Identify and understand the specific regulations and compliance standards relevant to your industry, such as GDPR, CCPA/CPRA, AML regulations, PCI DSS, and upcoming regulations like the EU Artificial Intelligence Act.

  • Define Compliance Processes and Goals: Break down compliance processes, identify where AI can be applied, and with that information define tailored compliance objectives.

  • Data Collection and Pre-processing: Collect, organize and prepare necessary data by digitizing, standardizing, and labelling it and ensure that it meets compliance requirements for AI analysis.

  • Choose Suitable AI: Evaluate AI platforms based on performance metrics, straight-through processing (STP) rates, and rule-based validation; choose machine learning algorithms (supervised or unsupervised) and/or generative AI models based on your specific needs, considering interpretability and applicability.

  • Integration: Integrate the AI solution with existing compliance systems, ensuring seamless communication and data exchange, and incorporate AI risk assessments into your risk management framework so that major issues are addressed.

  • Training, Monitoring and Evaluation of the Model: Divide data into training, validation, and test sets to train the model, then continuously monitor the AI's performance and assess its efficiency in meeting regulatory requirements (a minimal sketch of such a split, together with a simple bias check, follows this list).

  • Security and Confidentiality: Implement robust security measures to protect sensitive data and ensure compliance with data protection regulations.

  • Mitigate AI Risks: Evaluate third-party tools, ensure the AI system meets security and privacy requirements, ensure data quality, prevent biases, conduct periodic audits, monitor regulatory changes, and update AI models as needed.

  • Obtain Certifications: Acquire AI and Data Protection certifications to ensure compliance with relevant standards.

  • Adopt Pilot Projects and Train Employees/Personnel: Test AI solutions on a small scale to evaluate effectiveness before full deployment, and provide training on AI, its integration, functions, and best practices.

  • Establish Policies and Procedures: Develop clear policies and procedures for AI use and compliance.

  • Develop a Compliance and Risk Management Program along with an AI Governance Framework: Create a comprehensive program to address regulatory and compliance needs and manage risks, and establish a governance framework for overseeing AI implementation and compliance.

  • Establish an Audit Process for Reporting and Responding to Compliance Issues: Develop a process for auditing AI systems to ensure compliance and effectiveness, and for reporting and addressing any issues found.

  • Automated Monitoring: Use automated tools to monitor AI compliance and address issues promptly.
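
As referenced in the model-training and risk-mitigation steps above, the sketch below shows a training/validation/test split and one coarse bias check (the gap in approval rates across a sensitive group) on a hypothetical dataset. The features, the roughly 80/10/10 split, and the group labels are illustrative assumptions, not a prescribed methodology.

```python
# Minimal sketch: train/validation/test split plus a simple bias check.
# The dataset, features, split proportions, and group labels are hypothetical.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
n = 10_000

# Hypothetical applicants: two features, a sensitive group label, and an outcome.
X = rng.normal(size=(n, 2))            # e.g. scaled income, credit history length
group = rng.integers(0, 2, size=n)     # e.g. a protected attribute (0/1)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Split roughly 80/10/10: train the model, tune on validation, report on test.
X_tmp, X_test, y_tmp, y_test, g_tmp, g_test = train_test_split(
    X, y, group, test_size=0.10, random_state=0)
X_train, X_val, y_train, y_val, g_train, g_val = train_test_split(
    X_tmp, y_tmp, g_tmp, test_size=0.1111, random_state=0)   # ~10% of the total

model = LogisticRegression().fit(X_train, y_train)
print("validation accuracy:", round(model.score(X_val, y_val), 3))
print("test accuracy:      ", round(model.score(X_test, y_test), 3))

# Coarse bias check: compare approval rates across the sensitive groups on the
# held-out test set (a demographic-parity gap). Large gaps warrant an audit.
approved = model.predict(X_test)
rate_0 = float(approved[g_test == 0].mean())
rate_1 = float(approved[g_test == 1].mean())
print(f"approval rate, group 0: {rate_0:.2%}; group 1: {rate_1:.2%}; "
      f"gap: {abs(rate_0 - rate_1):.2%}")
```

A parity gap like this is only one rough fairness signal; in practice it would sit alongside the periodic audits, regulatory monitoring, and documentation steps listed above.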


Conclusion: 


Artificial intelligence and machine learning in the financial technology sector have challenged traditional compliance methods to their highest capacity and opened new avenues toward greater accuracy. Yet innumerable hurdles to error-free AI support remain (algorithmic biases, privacy concerns, ever-changing regulations) as fintech companies lean on AI a little more with every passing day to help them navigate a complex regulatory framework. The landscape of AI within fintech is fast-changing, underscoring the need for firms to have robust AI strategies in place: openness, monitoring of ethical impacts, and frequent updating are all important. An amalgamation of technological innovation and rigorous compliance holds the key to long-term stability, preservation of resources, and reduction of losses, not to forget the preservation of trust in the evolving relationship between traditional finance and progressive technology.

References:


  1. Jon Bailey, Johannes Wittmann, Felix Dietlmaier & Florian Jusuf, Crossing the Lines: How Fintech Is Propelling FS and TMT Firms Out of Their Lanes, Global Fintech Report 2019, PwC (Aug 6, 2024), https://www.pwc.com/gx/en/industries/financial-services/fintech-survey.html

  2. Cormier et al., Prudence, Profits, and Growth, Global Fintech 2024, BCG (Aug 8, 2024)  https://www.bcg.com/publications/2024/global-fintech-prudence-profits-and-growth

  3. Technology Considerations for Privacy Leaders at Gartner IT Symposium Toronto, Gartner (Aug 7, 2024), https://www.gartner.com/en/newsroom/press-releases/2020-02-25-gartner-says-over-40-percent-of-privacy-compliance-technology-will-rely-on-artificial-intelligence-in-the-next-three-years#:~:text=Gartner%20Analysts%20to%20Discuss%20Technology,%2C%20according%20to%20Gartner%2C%20Inc.

  4. The Governance, Risk Management & Compliance of A.I, A.I. GRC- GRC 2020 (Aug 6, 2024), https://www.credo.ai/blog/introducing-governance-risk-and-compliance-grc-for-ai

  5.   Artificial Intelligence and Privacy, OVIC, Office of Victorian Information Commissioner: Version: August 2018- D21/6354 (Aug 6, 2024), https://ovic.vic.gov.au/privacy/resources-for-organisations/artificial-intelligence-and-privacy-issues-and-challenges/

  6. AI Compliance: A Must-Read for FinTechs Using AI, InnReg (Aug 8, 2024), https://www.innreg.com/blog/ai-compliance-a-must-read-for-fintechs-using-ai

  7. FinTech Market - Forecast (2024 - 2030), Industry Arc (Aug 7, 2024), https://www.industryarc.com/Report/18381/fintech-market.html

  8. Michael Chui, The State of AI in 2023: Generative AI’s Breakout Year: Survey, McKinsey (Aug 8, 2023), https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year

  9. AI Compliance: What It Is and Why You Should Care, EXIN (Aug 6, 2024) https://www.exin.com/article/ai-compliance-what-it-is-and-why-you-should-care/

  10. Meredith Whittaker et al., Disability, Bias, and AI – Report 2019, AI Now Institute (Aug 6, 2024), https://ainowinstitute.org/publication/disabilitybiasai-2019

  11. AI for Compliance: What, Why and How, Kili (Aug 7, 2024), https://kili-technology.com/data-labeling/ai-for-compliance-what-why-how

  12. Matt Kunkel, Harnessing AI For Future-Proofing Regulatory Compliance, Forbes (Aug 7, 2024), https://www.forbes.com/councils/forbestechcouncil/2024/01/25/harnessing-ai-for-future-proofing-regulatory-compliance/


Cases:

  1. Titus v. ZestFinance, Inc., Case No. 18-5373 RJB (W.D. Wash. Oct. 18, 2018)

  2. United States of America v. Intuit Inc. and Credit Karma, Inc., Civil Action No. 1:20-cv-03441-ABJ (Amy Berman Jackson, J., Aug 2, 2021)


Publications:

  1.   Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a “right to explanation”. AI & Society, 38(3), 50-57. https://doi.org/10.1007/s00146-017-0787-1

  2.  Calo, R. (2017). Artificial Intelligence Policy: A Primer and Roadmap. UC Davis Law Review, 51(2), 399-435. https://lawreview.law.ucdavis.edu/issues/51/2/Articles/51-2_Calo.pdf

  3. Barocas, S., Hardt, M., & Narayanan, A. (2019). Fairness and Machine Learning: Limitations and Opportunities. MIT Press. https://fairmlbook.org

  4.  Lodge, M., & Mennicken, A. (2020). Regulation and Risk: The Role of AI in Financial Services. In The Oxford Handbook of Financial Regulation (pp. 211-234). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780198806616.013.16

Laws: 

  1. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation), 2016 O.J. (L 119) 1 (EU), https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32016R0679.

  2. Cal. Civ. Code §§ 1798.100-1798.125 (2018), https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=201720180AB375.

  3. Cal. Civ. Code §§ 1798.100-1798.199 (2020), https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201920200AB1281.

  4. Bank Secrecy Act of 1970, 31 U.S.C. §§ 5311-5330 (1970), https://www.fincen.gov/resources/statutes-regulations/bank-secrecy-act.

  5. Directive (EU) 2018/843 of the European Parliament and of the Council of 30 May 2018 on the prevention of the use of the financial system for the purposes of money laundering or terrorist financing, 2018 O.J. (L 156) 43 (EU), https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32018L0843.

  6. PCI Security Standards Council, PCI DSS: Payment Card Industry Data Security Standard (2022), https://www.pcisecuritystandards.org/documents/PCI_DSS_v4-0.pdf.

  7. Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), COM (2021) 206 final (April 21, 2021), https://ec.europa.eu/info/sites/default/files/proposal_regulation_ai_en.pdf.


Author:

Tiyasa Choudhury

Advocate


