At the 2023 Defcon hacker conference in Las Vegas, major AI companies partnered with algorithmic integrity and transparency groups to have thousands of attendees probe generative AI platforms for flaws. This "red-teaming" exercise, which had the backing of the US government, was a pivotal step in opening these consequential but opaque systems to outside scrutiny. Now the ethical AI and algorithmic assessment nonprofit Humane Intelligence is taking that model a step further. In a recent announcement, the group issued a call for participation, in partnership with the US National Institute of Standards and Technology, inviting any US resident to take part in the qualifying round of a nationwide red-teaming effort to evaluate AI office productivity software.

The qualifying round will take place online and is open to both developers and the general public as part of NIST's AI challenge series, known as Assessing Risks and Impacts of AI (ARIA). Participants who pass the qualifier will advance to an in-person red-teaming event in late October at the Conference on Applied Machine Learning in Information Security (CAMLIS) in Virginia. The goal is to expand capacity for rigorous testing of the security, resilience, and ethics of generative AI technologies. "The average individual utilizing one of these models lacks the capacity to ascertain whether the model is truly suitable for its intended purpose. Therefore, we strive to democratize the ability to carry out evaluations and ensure that all users of these models can independently assess whether the model aligns with their requirements," says Theo Skeadas, chief of staff at Humane Intelligence.

The final event at CAMLIS will split participants into a red team attacking the AI systems and a blue team working on defense. Participants will use NIST's AI 600-1 profile, part of the agency's AI Risk Management Framework, as a rubric for measuring whether the red team can produce outcomes that violate the systems' expected behavior. Rumman Chowdhury, founder of Humane Intelligence and a contractor at NIST's Office of Emerging Technologies, says the ARIA program draws on structured user feedback to understand how AI models behave in real-world applications. The ARIA team, she adds, is made up of experts in sociotechnical test and evaluation, and is using that background to push the field toward more systematic, scientific assessment of generative AI.

Chowdhury and Skeadas say the NIST partnership is just the first of a series of AI red-team collaborations that Humane Intelligence will announce in the coming weeks with US government agencies, international governments, and NGOs. The aim is to push the companies and organizations that develop what are currently black-box algorithms toward transparency and accountability, through mechanisms such as "bias bounty challenges" that reward individuals for finding problems and biases in AI models. Skeadas stresses that the community involved in testing and evaluating these systems should be broadened well beyond programmers: policymakers, journalists, civil society members, and people without technical expertise should all play a role in scrutinizing these systems.
