
AI Regulations for Financial Services: Federal Reserve / Blogs / Perficient


Artificial intelligence (AI) is poised to affect every aspect of the world economy and to play a significant role in the global financial system, leading financial regulators around the world to take steps to address its impact on their areas of responsibility. The economic risks AI poses to the financial system range from consumer and institutional fraud to algorithmic discrimination and AI-enabled cybersecurity threats. The impacts of AI on consumers, banks, nonbank financial institutions, and the stability of the financial system are all concerns that regulators must investigate and potentially address.

The goal of Perficient’s Financial Services consultants is to give financial services executives, whether they lead banks, bank branches, bank holding companies, broker-dealers, financial advisors, insurance companies, or investment management firms, the knowledge they need to understand the status of AI regulation and its risk and regulatory trends, not only in the US but around the world, wherever their firms are likely to have investment and trading operations.

Federal Reserve

The largest of the federal banking agencies, the Federal Reserve has had four of its regional Federal Reserve Banks (Atlanta, Boston, New York, and San Francisco) set up offices to study financial innovation involving AI. These efforts focus on how regulators can use AI to assist in supervising financial institutions, as well as on better understanding how banks are using AI in their own activities.

While the FRB has not passed AI-specific regulations, it has named a Chief AI Officer, approved an AI policy, and established a risk-based review of AI programs and activities, the findings of which will be published and shared with the public.

As noted at the central Federal Reserve Bank level by Chief Artificial Intelligence Officer (“CAIO”) Anderson Monken, the FRB is committed to an artificial intelligence (AI) program for FRB (“Board”) staff that:

  • Promotes the responsible use of AI and enables AI-related innovation
  • Mitigates risks associated with AI use through robust governance and strong risk management practices
  • Complies with all applicable federal requirements related to AI use by federal agencies

As noted in the Federal Reserve System Compliance Plan for OMB Memorandum M-24-10, the Board recognizes the value of a comprehensive enterprise risk-management approach to ensure safe and responsible AI innovation.

Determining Which AI Use Is Presumed to Be Safety- or Rights-Impacting

The Board has implemented its enterprise-wide AI policy and corresponding review process to determine which current or planned AI use cases are determined to be safety- or rights-impacting.

  • Review process. Each current or planned AI use case undergoes a thorough review and assessment by the CAIO and the AI Program team to determine whether the use case meets the definition of safety- or rights-impacting AI as defined in section 6 of OMB M-24-10.
  • Criteria for assessment. FRB assessment criteria are based on the definitions of safety- and rights-impacting AI and examples of AI presumed to be safety- or rights-impacting in OMB M-24-10 section 6 and Appendix I, respectively. These criteria include whether the AI output would serve as a principal basis for a decision or action and real-world considerations of potential harm to protected or otherwise critical populations, entities, and resources.
  • Supplementary criteria. The Board may incorporate additional review criteria to assess safety and rights-impacting AI considerations in response to internal or external developments.
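The triage logic the bullets above describe can be sketched in code. The following is a minimal illustration, not the Board’s actual implementation: the class fields, function names, and the two example use cases are all hypothetical, and the real assessment in OMB M-24-10 section 6 and Appendix I involves many more criteria than the two captured here.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """Hypothetical record for a current or planned AI use case under review."""
    name: str
    output_is_principal_basis: bool      # would the AI output drive the decision or action?
    affects_protected_population: bool   # potential real-world harm to protected or critical
                                         # populations, entities, or resources?
    caio_approved: bool = False

def presumed_impacting(uc: AIUseCase) -> bool:
    """Presume a use case is safety- or rights-impacting when its output would
    serve as a principal basis for a decision AND it could harm protected or
    otherwise critical populations (a simplification of the M-24-10 criteria)."""
    return uc.output_is_principal_basis and uc.affects_protected_population

# Illustrative examples (hypothetical use cases):
chatbot = AIUseCase("internal research assistant", False, False)
underwriting = AIUseCase("loan-decision model", True, True)
assert not presumed_impacting(chatbot)
assert presumed_impacting(underwriting)
```

A real review would also weigh the supplementary criteria the Board may add in response to internal or external developments; the point of the sketch is only that each use case is evaluated against explicit, documented conditions before a determination is made.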

Implementation of Risk-Management Practices and Termination of Noncompliant AI

  • AI policy and review process. The FRB’s AI policy and review process prohibit any use of AI considered to be safety- or rights-impacting unless it has the CAIO’s approval, a waiver of one or more risk-management practices, or an approved OMB extension for meeting risk-management requirements. All safety- or rights-impacting AI use cases undergo a comprehensive risk impact assessment, including validation of all risk-management practices defined in OMB M-24-10 section 5(iv).
  • Enforceability and penalties. Unauthorized or improper use of AI may result in loss of, or limitations on, the use of Board IT resources and in disciplinary or other action, which could include separation from employment.
  • Technical controls. The Board has technical controls in place to deter, detect, and remediate policy violations. These controls include the ability to terminate instances of non-compliant AI on Board IT resources.
  • Communications and training. The Board’s AI Program team publishes and manages the AI policy through a regularly updated intranet site. The site provides guidance on the AI policy, the process for submitting a use case, and the criteria for determining the permissibility of a use case. The site also offers non-technical and technical AI training materials, a list of best practices for the responsible use of AI, and answers to policy FAQs.

Minimum Risk-Management Practices for Safety- or Rights-Impacting Uses

  • The Board is implementing a comprehensive environment of controls to encompass the risk management practices required by OMB M-24-10. The CAIO and AI Program team are responsible for ensuring that these controls are designed and operating effectively to provide sufficient assurance that the Board can mitigate risks from non-compliant AI uses.
  • Impact assessment. Every AI use case that is presumed to be safety- or rights-impacting undergoes a comprehensive risk impact assessment, which includes a review of controls and processes meeting or exceeding the minimum risk-management practices defined in OMB M-24-10 sections 5(c)(iv) and 5(c)(v). The review process assesses the quality and appropriateness of AI use cases, all data considered for those use cases, purpose of use, and potential harms to health, safety, privacy, security, rights, and opportunities as noted in the Board’s criteria for assessment. Considerations for resourcing, security controls, testing, and validation plans are also reviewed.
  • Determination process. The CAIO, in conjunction with the AI Program team and, as appropriate, senior Board officials, will review whether the AI use case, along with its impact assessment, satisfies the definitions of safety- or rights-impacting in section 6 of OMB M-24-10. The CAIO shall determine whether the AI use case matches the definition of safety- or rights-impacting after considering the conditions and context of the use case and whether the AI is serving as the principal basis for a decision or action.
  • Waiver process. In limited circumstances, waivers of minimum risk-management practices may be granted in accordance with OMB M-24-10 section 5(c)(iii). The AI Program will develop criteria to guide consistent decision making by the CAIO in waiving risk-management practices, ensuring that waivers are granted only when necessary. Any decision to grant or revoke a waiver will require documentation of the scope, justification, and supporting evidence. The AI Program team will establish procedures for issuing, denying, and revoking waivers, with oversight by the CAIO and the AI Enablement Working Group.
  • Documentation and validation. The CAIO is responsible for documenting and validating that current and planned risk-management practices for all safety- and rights-impacting AI use cases are designed and operating effectively. The AI Program team maintains detailed records of all AI use cases and of extension, waiver, and determination decisions to support consistent reviews, enable effective compliance and reporting, and promote transparency and accountability.
  • Publication and annual certification of waiver and determination actions. All materials related to a waiver or determination action will be reported to OMB within 30 days. An annual certification process of the ongoing validity of waivers and determinations will be conducted by the CAIO, the AI Program team, and the owners of relevant AI use cases. The AI Program team will develop procedures for certifying all waivers and determinations. A summary of the outcome of the annual certification process, detailing individual waivers and determinations along with justification, will be shared with OMB and the public in accordance with OMB M-24-10 section 5(a)(ii). If there are no active determinations or waivers, that information will be shared with the public and reported to OMB.
  • Implementation and oversight. The AI Program team has a dedicated workstream with responsibility for the implementation and oversight of risk-management practices. The workstream includes members specializing in relevant mission and compliance functions, including technology, security, privacy, legal, data, and enterprise risk management, and represents a diversity of enterprise perspectives. The group is responsible for promoting consistent and comprehensive AI risk management through the use case review and impact assessment processes. This workstream is also responsible for maintaining a register of enterprise AI risks and associated mitigations to promote active management and accountability across the FRB.
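The waiver record-keeping and certification cadence described above follows concrete deadlines: materials for a waiver or determination action go to OMB within 30 days, and ongoing validity is certified annually. A hedged sketch of how such records might be tracked is shown below; the field names and helper functions are hypothetical, not part of the Board’s actual systems.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class WaiverRecord:
    """Hypothetical record of a waiver or determination action."""
    use_case: str
    scope: str            # which risk-management practices are waived
    justification: str    # documented rationale and supporting evidence
    granted_on: date
    revoked: bool = False
    last_certified: Optional[date] = None

def omb_report_due(rec: WaiverRecord) -> date:
    # Materials related to the action must be reported to OMB within 30 days.
    return rec.granted_on + timedelta(days=30)

def needs_recertification(rec: WaiverRecord, today: date) -> bool:
    # Ongoing validity is certified annually; revoked waivers drop out.
    if rec.revoked:
        return False
    anchor = rec.last_certified or rec.granted_on
    return (today - anchor).days >= 365

# Illustrative example (hypothetical waiver):
w = WaiverRecord("loan-decision model", "testing phase only",
                 "validation pending", granted_on=date(2024, 9, 1))
assert omb_report_due(w) == date(2024, 10, 1)
assert needs_recertification(w, date(2025, 9, 2))
```

Keeping deadlines and certification status as structured data, rather than in free-form documents, is what makes the annual summary to OMB and the public (per M-24-10 section 5(a)(ii)) straightforward to assemble.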




