Commentary
Putting research into practice: Brookings' approach to the responsible use of generative AI
December 12, 2023

Since the viral launch of ChatGPT just over one year ago, Congress, the Biden administration, and the private sector have debated the responsible use of generative artificial intelligence (AI). Through reports, testimonies, convenings, and other conversations, Brookings scholars have weighed in extensively, providing high-quality, independent insights to help shape these discussions.
However, as at other research organizations, the technology's benefits and limitations have also disrupted our own work. To what extent are these novel tools being used? Does the use of AI inject bias into our research? And how might we internalize the responsible and ethical use of AI in our own research and operations?
To help guide the Institution's future use of generative AI, Brookings is issuing new provisional principles regarding the adoption of AI in research and other key activities.
These principles are the product of a deliberate process launched by Brookings Interim President Amy Liu and executive leadership, who formed the Emerging Technologies Advisory Group (ETAG). Co-chaired by the two of us, ETAG is made up of 17 staff members with varying degrees of technical proficiency, representing every program, business unit, and job level within the Institution.
ETAG’s mission is to leverage the collective expertise of the Brookings community to learn more about how generative AI is being used and to inform a set of standards for responsible AI use that align with our institutional values. In other words, ETAG sought not just to give policy advice on responsible AI but also to embody it through our own research and operational practices. ETAG is committed to making these standards publicly available, both to strengthen transparency and so that our principles can serve as a template for others.
To start, ETAG gathered information about how these tools were already being used and whether research organizations, academic institutions, and publishers had established policies on the institutional use of generative AI. Where guidance existed, it ranged from outright bans on generative AI to permitted use with strong external disclosure requirements; ETAG found little public guidance from peer think tanks.
Through an all-staff survey, ETAG also discovered that these tools were already being used across job levels and functions. From operational roles to research positions, staff viewed generative AI applications primarily as productivity-enhancing tools, assisting with tasks such as editing, summarizing, and generating event titles and marketing materials. ETAG also found a clear desire for best-practice guidance on their appropriate and responsible use.
Using the information it gathered, ETAG determined how the most common and riskiest use cases aligned with Brookings’s current policies and where additional guidance was necessary. Ultimately, ETAG developed four overarching principles to guide the use of generative AI tools across Brookings.
Our principles—linked above and outlined below—build upon Brookings’s existing policies and are intended to take advantage of generative AI’s tremendous benefits and guard against its risks, while remaining flexible enough to adapt to a rapidly shifting technological landscape.
Brookings principles for the use of generative AI
- Comply with existing Brookings Institution policies. The use of any AI application must comply with any relevant policy or guidelines already in place, including ones for quality review, plagiarism, research misconduct, and acceptable technology use.
- Review and validate outputs. Individuals and supervisors should always validate the output of any AI model or have it reviewed by colleagues with relevant expertise to ensure the quality and integrity of that product.
- Protect sensitive data and information. Individuals should not input any sensitive data or personally identifiable information into a generative AI tool unless they have confirmed that user inputs will be properly protected and that the work product is not considered high-risk.
- Disclose appropriately. Individuals should always disclose to their supervisor, reviewer, or editor if a work product has been created with the assistance of generative AI and whether it has been properly reviewed. For externally facing work products, disclosure is required in two cases: any published image created by a generative AI tool, in line with Brookings’s existing image attribution requirements, and any published text that includes an AI tool’s output verbatim at length, in which case the tool must be cited in line with Brookings’s quality review and plagiarism policies.
In the longer term, ETAG will serve as a resource to Brookings’s leadership in the development of a comprehensive strategy for the use of emerging technologies at the Institution. That process will require going beyond policy adherence to consider the broader effects of using generative AI at Brookings, including its social impacts and ethical implications. The principles outlined above represent the first step in that process, helping Brookings embrace generative AI’s opportunities while upholding our commitment to quality, independence, and impact.