SEOSEO News

NTIA Receives Over 1,450 Comments On AI Accountability


The National Telecommunications and Information Administration (NTIA), a United States Department of Commerce division, called for public commentary on strategies to encourage accountability in trustworthy artificial intelligence (AI) systems.

The objective was to solicit stakeholder feedback to formulate suggestions for a forthcoming report on AI guarantee and accountability frameworks. These suggestions might have guided future federal and non-governmental regulations.

Promoting trustworthy AI that upholds human rights and democratic principles was a principal federal focus per the NTIA request. Nonetheless, gaps remained in ensuring AI systems were responsible and adhered to trustworthy AI rules about fairness, safety, privacy, and transparency.

Accountability mechanisms such as audits, impact evaluations, and certifications could offer assurance that AI systems adhere to trustworthy criteria. But, NTIA observed that implementing effective accountability still presented challenges and complexities.

NTIA discussed a variety of considerations around the balance between trustworthy AI goals, obstacles to implementing responsibility, complex AI supply chains and value chains, and difficulties in standardizing measurements.

Over 1,450 Comments On AI Accountability

Comments were accepted through June 12 to aid in shaping NTIA’s future report and steer potential policy developments surrounding AI accountability.

The number of comments exceeded 1,450.

Comments, which can be searched using keywords, occasionally include links to articles, letters, documents, and lawsuits about the potential impact of AI.

Tech Companies Respond To NTIA

The comments included feedback from the following tech companies striving to develop AI products for the workplace.

OpenAI Letter To The NTIA

In the letter from OpenAI, it welcomed NTIA’s framing of the issue as an “ecosystem” of necessary AI accountability measures to guarantee trustworthy artificial intelligence.

OpenAI researchers believed a ****** AI accountability ecosystem would consist of general accountability elements that apply broadly across domains and vertical elements customized to specific contexts and applications.

OpenAI has been concentrating on developing foundation ****** – broadly applicable AI ****** that learn from extensive datasets.

It views the need to take a safety-focused approach to these ******, irrespective of the particular domains they might be employed in.

OpenAI detailed several current approaches to AI accountability. It publishes “system cards” to offer transparency about significant performance issues and risks of new ******.

It conducts qualitative “red teaming” tests to probe capabilities and failure modes. It performs quantitative evaluations for various capabilities and risks. And it has clear usage policies prohibiting harmful uses along with enforcement mechanisms.

OpenAI acknowledged several significant unresolved challenges, including assessing potentially hazardous capabilities as model capabilities continue to evolve.

It discussed open questions around independent assessments of its ****** by third parties. And it suggested that registration and licensing requirements may be necessary for future foundation ****** with significant risks.

While OpenAI’s current practices focus on transparency, testing, and policies, the company appeared open to collaborating with policymakers to develop more robust accountability measures. It suggested that tailored regulatory frameworks may be necessary for competent AI ******.

Overall, OpenAI’s response reflected its belief that a combination of self-regulatory efforts and government policies would play vital roles in developing an effective AI accountability ecosystem.

Microsoft Letter To The NTIA

In its response, Microsoft asserted that accountability should be a foundational element of frameworks to address the risks posed by AI while maximizing its benefits. Companies developing and using AI should be responsible for the impact of their systems, and oversight institutions need the authority, knowledge, and tools to exercise appropriate oversight.

Microsoft outlined lessons from its Responsible AI program, which aims to ensure that machines remain under human control. Accountability is baked into their governance structure and Responsible AI Standard and includes:

  • Conducting impact assessments to identify and address potential harms.
  • Additional oversight for high-risk systems.
  • Documentation to ensure systems are fit for purpose.
  • Data governance and management practices.
  • Advancing human direction and control.
  • Microsoft described how it conducts red teaming to uncover potential harms and failures and publishes transparency notes for its AI services. Microsoft’s new Bing search engine applies this Responsible AI approach.

Microsoft made six recommendations to advance accountability:

  • Build on NIST’s AI Risk Management Framework to accelerate the use of accountability mechanisms like impact assessments and red teaming, especially for high-risk AI systems.
  • Develop a legal and regulatory framework based on the AI tech stack, including licensing requirements for foundation ****** and infrastructure providers.
  • Advance transparency as an enabler of accountability, such as through a registry of high-risk AI systems.
  • Invest in capacity building for lawmakers and regulators to keep up with AI developments.
  • Invest in research to improve AI evaluation benchmarks, explainability, human-computer interaction, and safety.
  • Develop and align to international standards to underpin an assurance ecosystem, including ISO AI standards and content provenance standards.
  • Overall, Microsoft appeared ready to partner with stakeholders to develop and implement effective approaches to AI accountability.

Microsoft, overall, seemed to stand ready to partner with stakeholders to develop and implement effective approaches to AI accountability.

Google Letter To The NTIA

Google’s response welcomed NTIA’s request for comments on AI accountability policies. It recognized the need for both self-regulation and governance to achieve trustworthy AI.

Google highlighted its own work on AI safety and ethics, such as a set of AI principles focused on fairness, safety, privacy, and transparency. Google also implemented Responsible AI practices internally, including conducting risk assessments and fairness evaluations.

Google endorsed using existing regulatory frameworks where applicable and risk-based interventions for high-risk AI. It encouraged using a collaborative, consensus-based approach for developing technical standards.

Google agreed that accountability mechanisms like audits, assessments, and certifications could provide assurance of trustworthy AI systems. But it noted these mechanisms face challenges in implementation, including evaluating the multitude of aspects that impact an AI system’s risks.

Google recommended focusing accountability mechanisms on key risk factors and suggested using approaches targeting the most likely ways AI systems could significantly impact society.

Google recommended a “hub-and-spoke” model of AI regulation, with sectoral regulators overseeing AI implementation with guidance from a central agency like NIST. It supported clarifying how existing laws apply to AI and encouraging proportional risk-based accountability measures for high-risk AI.

Like others, Google believed it would require a mix of self-regulation, technical standards, and limited, risk-based government policies to advance AI accountability.

Anthropic Letter To The NTIA

Anthropic’s response described the belief that a robust AI accountability ecosystem requires mechanisms tailored for AI ******. It identified several challenges, including the difficulty of rigorously evaluating AI systems and accessing sensitive information needed for audits without compromising security.

Anthropic supported funding for the following:

  • Model evaluations: Current evaluations are an incomplete patchwork and require specialized expertise. It recommended standardizing capability evaluations focused on risks like deception and autonomy.
  • Interpretability research: Grants and funding for interpretability research could enable more transparent and understandable ******. However, regulations demanding interpretability are currently infeasible.
  • Pre-registration of large AI training runs: AI developers should report large training runs to regulators to inform them of novel risks under appropriate confidentiality protections.
  • External red teaming: Mandatory adversarial testing of AI systems before release, either through a centralized organization like NIST or via researcher access. However, red-teaming talent currently resides within private AI labs.
  • Auditors with technical expertise, security consciousness, and flexibility: Auditors need deep machine learning experience while preventing leaks or *******, but must also operate within constraints that promote competitiveness.
  • Anthropic recommended scoping accountability measures based on a model’s capabilities and demonstrated risks, evaluated through targeted capabilities evaluations. It suggested clarifying IP ownership frameworks for AI to enable fair licensing and providing guidance on antitrust issues to allow safety collaborations.
  • Overall, Anthropic stressed the difficulties of rigorously evaluating and accessing information about advanced AI systems due to their sensitive nature. It argued that funding capabilities evaluations, interpretability research, and access to computational resources are critical to an effective AI accountability ecosystem that benefits society.

What To Expect Next

The responses to the NTIA request for comment shows that while AI companies recognize the importance of accountability, there are still open questions and challenges around implementing and scaling accountability mechanisms effectively.

They also indicate that both self-regulatory efforts by companies and government policies will play a role in developing a robust AI accountability ecosystem.

Going forward, the NTIA report is expected to make recommendations to advance the AI accountability ecosystem by leveraging and building upon existing self-regulatory efforts, technical standards, and government policies. The input from stakeholders through the comments process will likely help shape those recommendations.

However, implementing recommendations into concrete policy changes and industry practices that can transform how AI is developed, deployed, and overseen will require coordination among government agencies, tech companies, researchers, and other stakeholders.

The path to ****** AI accountability promises to be long and difficult. But these initial steps show there is momentum toward achieving that goal.


Featured image: EQRoy/Shutterstock





Source link

Related Articles

Back to top button
error

Enjoy Our Website? Please share :) Thank you!