The Center for Cybersecurity Policy and Law has been convening state legislators, universities, and technical leaders to understand how they are effectively deploying and securing AI systems. Past events were held with the University of Texas at Austin and the University of Colorado Colorado Springs; this most recent event was held in North Carolina with Duke University.
As states across the country grapple with how to adopt artificial intelligence responsibly, North Carolina offers a compelling case study - not because it has all the answers, but because it has built the institutional muscle to learn, adapt, and lead.
Over the past two years, North Carolina has taken a deliberate, bipartisan approach to AI governance and deployment. In 2025, Governor Josh Stein issued an executive order establishing an AI Leadership Council and an AI Accelerator within the Department of Information Technology (NCDIT), as well as AI Oversight Teams within each state agency. Together, these bodies are charged with “advis[ing] and support[ing] the Governor and state agencies on AI strategy, policy, and training to achieve the state’s goals of fostering innovation, advancing AI-driven industries, and preparing the workforce for the evolving technological landscape.”
At a recent convening hosted by Duke University’s Sanford School of Public Policy, the Center for Cybersecurity Policy and Law, and CrowdStrike, policymakers, technologists, and academic leaders came together to reflect on what has worked so far, and what remains unresolved. Several themes stood out over the course of the day that are directly relevant to CIOs, CISOs, governors, and legislators in other states.
1. Leadership Matters Across Branches and Institutions
North Carolina’s progress on AI is not the result of a single initiative or office. It reflects sustained leadership across the executive branch, the legislature, state agencies, and the university system.
The state benefits from strong technical leadership within the Department of Information Technology, a growing cybersecurity function, and universities with deep technical expertise. Just as importantly, there is visible political leadership that sees AI as a strategic priority, not only for efficiency and modernization, but also for economic development and competitiveness.
Despite divided party control in the legislature, leaders repeatedly emphasized a strong working relationship and a shared recognition that AI has the potential to improve government services and attract investment, while also introducing new and quickly evolving risks. Job automation and workforce retraining were cited as major concerns that require ongoing attention, while strong cybersecurity and data governance practices were described as essential fundamentals to have in place ahead of, and during, AI adoption.
Rather than rushing toward large-scale deployments, North Carolina has focused on building governance structures and talent capacity first — accepting that much of the risk landscape is still evolving.
Lesson for other states: Durable AI progress depends on trusted relationships and technical credibility inside government, not just executive mandates or pilot projects.
2. AI Adoption Works Best When Leaders Own Their Piece of the Problem
A defining feature of North Carolina’s approach is that AI is not treated as a single policy problem to be solved centrally. Instead, leaders are approaching AI through the lens of their own responsibilities. For example:
- Legislators are examining how AI affects education, constituent services, and economic development.
- The state CIO is prioritizing procurement reform and helping agencies understand where AI can responsibly improve operations.
- The state CISO is focused on secure-by-design principles, identity, data stewardship, and preventing unmanaged risk from fragmented adoption.
This division of labor has enabled each group to develop real expertise in the risks and opportunities most relevant to their domain, while coordinating through shared councils and task forces.
Lesson for other states: You do not need every leader to be an AI expert. You need leaders who understand how AI intersects with their mission and clear forums for coordination.
3. Universities Are Acting as Infrastructure, Not Just Advisors
North Carolina’s universities are playing an active role in operationalizing AI, not just studying it.
Institutions are investing in computing infrastructure, applied research, and partnerships aligned with state priorities such as healthcare, education, cybersecurity, and public administration. Rather than focusing on frontier model development or hypothetical long-term risks, academic work is largely grounded in real-world problems facing the state.
One standout example came from a Duke course that partnered directly with the state to reduce the time required to license homeschool educators. In just ten weeks, students identified the bottleneck, built an AI-enabled tool, and deployed it into production. The system is now live, giving the state firsthand experience with building AI tools in-house and evaluating the tradeoffs between risk, cost, and benefit.
These projects provide a low-risk environment for experimentation, while training future technologists and public servants who understand government constraints.
Lesson for other states: Universities can serve as neutral conveners, trusted delivery partners, and talent pipelines, especially when projects are tightly scoped and tied to real outcomes.
4. Lean Into AI to Stay Ahead of Evolving Threats
Rather than shying away from AI technologies, leaders emphasized the need to harness AI's capabilities to proactively protect critical operations and data in real time. Panelists also acknowledged that adversaries are already using AI to accelerate their attacks, making it clear that avoiding adoption is not a viable risk strategy. Instead, organizations must lean into AI-driven cybersecurity to keep pace with constantly evolving threats.
In addition to leveraging AI for cybersecurity, entities face the challenge of simultaneously securing AI systems themselves and the data those systems rely on. AI systems depend on a complex technology stack that includes software, cloud workloads, training data, and more. Each layer of this stack must be secured, by the organizations deploying it and the vendors that produce it, from development through deployment and use. Leaders noted that choosing the right AI tool — one with strong security principles built in — is critically important.
Lesson for other states: AI-driven cybersecurity practices are essential, not optional, given the current threat landscape facing the public sector. AI tools to improve business processes or drive efficiency should be evaluated for their security practices — security must be built directly into AI systems and tools themselves.
Top 5 Takeaways for State CIOs and CISOs
- Treat AI as a portfolio, not a product. Evaluate AI capabilities and risks holistically across agencies rather than tool by tool. Fragmented adoption creates blind spots in security, identity, and data governance.
- Invest early in identity protection, data governance, and cybersecurity. Clear data stewardship, strong identity controls, and a robust cybersecurity program are prerequisites, not nice-to-haves, when adding new AI tools to a technology stack.
- Adopt a consulting mindset with agencies. Many agencies do not yet know what they want from AI. CIO and CISO teams add the most value by helping define problems, map workflows, and assess risk–reward tradeoffs before tools are selected.
- Pilot in safe environments before scaling. Use accelerators, universities, and limited-scope deployments to test systems, build trust, and understand operational impacts before statewide rollout.
- Plan for new oversight roles, not just automation. AI will reduce some workloads but create new jobs in system monitoring, validation, and governance. Workforce planning should assume redeployment, not simple headcount reduction.
Building Toward Invisible, Trustworthy AI
Looking ahead, many speakers articulated a shared vision: a future where AI quietly makes the government more responsive, proactive, and humane, without being flashy or intrusive. Success will not be measured by how much AI is deployed, but by whether residents experience simpler, more trustworthy interactions with the government.
North Carolina’s experience shows that responsible AI adoption is not about moving fastest. It is about building the structures, talent, and trust needed to keep learning and experimenting as the technology evolves.