The UC Berkeley AI Community: Governance and Community for AI at Berkeley
Artificial Intelligence (AI) is rapidly transforming the academic landscape, presenting unprecedented opportunities and complex challenges for companies and institutions everywhere. This post outlines how I am thinking about the ways governance and community work together to support a Berkeley AI strategy aligned with the University of California's AI Guiding Principles. I make the case for a first layer of foundational governance needed to get the ball rolling (establishing basic contracts, security and data reviews, and baseline risk assessments) and then getting the tools into people's hands to learn and experiment. We created the UC Berkeley AI Community to leverage the strength of our diverse population and perspectives, tapping the community to accelerate the exchange of ideas and information and to build a more sophisticated institutional understanding of AI. Community engagement speeds the path to effective functional governance; that is, having the right existing decision-making bodies and roles making intentional choices about where and when Berkeley does and does not use AI.
The University has already begun to use and deploy artificial intelligence in its many forms, from expert systems to machine and deep learning to generative AI. Making thoughtful decisions about when and how AI plays a role at Berkeley is an ongoing effort for all of us. Berkeley has a strong initial foundation for this work, built on the University of California's guiding principles for AI. These principles are not mere formalities; they are the bedrock of a values-driven institutional approach to AI adoption. Operationalizing the principles across our campuses means actively integrating them into our decision-making processes and daily activities. Some of the work underway now involves existing committees and people determining how to bring these principles into operational decisions. Key areas the UC strategy calls out for attention are Human Resources, Policing, Student Experience, and Health, and the UC AI Council has active subcommittees reviewing AI in each of these contexts.
What I'm calling foundational governance also includes current efforts to establish operative contracts with AI vendors and to make decisions about enabling access to tools and technologies, as well as setting basic guidelines and guardrails around their use. Procurement and legal professionals are central to enabling AI services, both individually and in their roles on committees, aligning review of new options with current policies and practices. The Compliance & Enterprise Risk Committee (CERC) and the Technology Foundations Committee (TFC) provide foundational governance, along with working groups chartered to take the lead on iteratively designing the frameworks needed to safely weigh the costs, risks, and benefits of AI. This foundation ensures that experimentation and innovation can happen, and that they happen within a secure and ethical environment.
The UC Berkeley AI Community
With these foundations in place, the second component of our approach involves engaging the wider campus through the UC Berkeley AI Community. Building an inclusive, community-based approach brings people together from various disciplines and functions within the University. The goal is to create a platform for knowledge exchange and for sharing diverse perspectives on AI, accelerating our collective knowledge and understanding. This community is not just about pooling knowledge; it is about building an institutional ecosystem around AI. Unlike a typical community of practice, the charter includes sponsoring senior leaders from across many functional areas of the University. This is an intentional construction, designed to create bi-directional communication between leadership and the wider community. Structured engagement from leadership brings their perspectives to the larger community, promoting alignment in an otherwise decentralized yet hierarchical environment. Engagement with leadership also fosters inclusion, as many community members may not have access to leadership perspectives in their day-to-day roles. It also allows information, including diverse perspectives from all parts of the University, to flow more readily to all levels of the organization, helping grow our collective knowledge and sophistication in a rapidly evolving space.
Addressing the size of the overall AI Community
In an organization with more than 60,000 community members (staff, faculty, and students), and for a topic as broad as artificial intelligence in all its forms, the overall UC Berkeley AI Community is a very big tent. The community is positioned to orient people to different aspects of AI and the implications of different types of AI for the University. While a large group promotes comprehensiveness, it brings inherent limitations for depth and intimacy. Like a funnel, the community gives people an accessible starting point to navigate the many opportunities at Berkeley and to discover and connect with others working on similar topics or issues. For our overall AI efforts to be successful, the community should coexist with and support subgroups in various categories (also aligned with helping Berkeley grow the expertise needed to enable functional governance as described above). Some potential groups could include:
- ethics / responsible AI / safety / legal
- technical topics (edge AI / open-source AI / LLMs)
As our understanding and application of AI evolve, so too will our approach to governance. It's important that we leverage existing committees and expertise in the University's various functional domains and build on existing policies and practices that align with our values. This is what I refer to as functional governance: over the long run, as community expertise with AI grows, these bodies will bring more nuanced and specialized guidance to Berkeley's many functional domains on the appropriate use of AI. Groups like Research, Teaching, and Learning; the Academic Senate; formal committees and units focused on maintaining campus safety; and others are best positioned to make informed decisions about AI use cases and priorities, ensuring Berkeley's approach to AI remains dynamic and responsive to a changing landscape.
Addressing the Risks of Inaction
Failing to adopt a thoughtful, structured approach to AI would lead to fragmentation and inefficiency, and pose a real risk to University data, akin to the early days of the Internet, when organic adoption and decision-making led over time to inordinate costs and redundancies (even with the best intentions). Uncoordinated AI adoption would similarly result in inequitable resource distribution, with some departments outpacing others and a disproportionate focus on certain disciplines.
In implementing this strategy, simplicity and efficiency are key. The aim is to use existing committees and policies to the fullest, avoiding unnecessary complexity such as creating special governance paths for AI instead of using existing ones. This approach ensures a smoother transition and integration of AI into our existing structures, harnessing our collective strengths to navigate this new frontier.
As we experiment with and deploy AI at Berkeley, our approach reflects our commitment to inclusivity, innovation, and responsible governance. By adopting a structured, community-centric strategy, we position Berkeley not just to adapt to the AI revolution but to lead it, ensuring that AI benefits all disciplines and departments. We also recognize the importance of working together with the leadership of the broader University of California system to integrate our approach with the wider UC effort.