Congressional Committee Raises Concerns Over Controversial AI Collaboration: NIST and RAND Under Fire
Published on: March 10, 2024
Members of the House Science Committee, including bipartisan leaders, have sounded the alarm over a planned AI research partnership between the National Institute of Standards and Technology (NIST) and the RAND Corp. The collaboration, which involves an influential think tank with ties to tech billionaires and the 'effective altruism' movement, has raised questions about transparency and the competitive process for research grants.
In a letter dated Dec. 14, lawmakers criticized NIST for failing to announce a competitive process for the planned research grants related to the new U.S. AI Safety Institute. The letter, signed by figures such as House Science Chair Frank Lucas and ranking member Zoe Lofgren, highlighted concerns over the quality of AI safety research and the secrecy often associated with external research groups.
NIST, which plays a central role in President Joe Biden's AI strategy and is responsible for establishing the AI Safety Institute, faces resource constraints that require it to partner with external entities to fulfill its AI mandate. However, the undisclosed nature of potential grant recipients, including RAND, has further fueled concerns.
A recent RAND report on biosecurity risks posed by advanced AI models, cited in the House letter, exemplifies worries about research that lacks academic peer review. RAND's lack of response to inquiries about the partnership adds to the uncertainty.
Lawmakers are particularly wary of the influence of 'effective altruism,' a movement known for emphasizing catastrophic risks of AI and biotechnology, often without substantial evidence. The movement's ties to top AI firms and potential conflicts of interest have not gone unnoticed.
The House Science Committee's insistence on scientific merit and transparency echoes a broader demand for rigorous scientific standards in federal AI safety research. With NIST exploring options for a competitive research process, the outcome of the partnership and its alignment with the committee's concerns remain to be seen.
The situation underscores the complexities in AI governance and the importance of balancing innovative research with ethical considerations, transparency, and scientific integrity. As AI continues to evolve, the role of federal agencies and their choice of partners will be crucial in shaping the landscape of AI safety and ethics.