As we noted earlier, artificial intelligence (AI) is rapidly transforming our world, with a growing impact on industries ranging from healthcare to finance, transportation, and entertainment. Advancements in AI technology have accelerated at an unprecedented pace, driven by breakthroughs in machine learning, natural language processing, and computer vision.
AI systems can perform complex tasks once thought to be the exclusive domain of human intelligence, such as recognizing speech, understanding natural language, and making decisions based on large data sets. However, while AI holds enormous potential for improving our lives, it also presents significant challenges and ethical considerations, such as data privacy, bias, and accountability.
State governments are increasingly grappling with AI's implications for privacy, transparency, and accountability to ensure the technology is used carefully, ethically, and with consideration for potential unintended consequences. As noted previously, the National Conference of State Legislatures (NCSL) released a report on AI regulation in which 34 state lawmakers recommended various state-level AI policies. Additionally, several states took significant action on AI in 2023, including studies, commissions, task forces, and, in some cases, bans.
California Governor Signs EO to Prepare for AI
In early September 2023, Governor Gavin Newsom (D) signed an executive order to deploy Generative AI (GenAI) throughout state government ethically and responsibly, mitigate its potential harms, and keep California the “world’s AI leader.” In a press release, the governor noted that California is the hub for GenAI and that the state would take a measured approach, capturing the benefits of AI while protecting against its potential risks.
The order includes several provisions, among them: directing state agencies and departments to produce a risk-analysis report, establishing a procurement blueprint for AI, requiring agencies and departments to report on how AI could be beneficial and to identify its potential risks, and directing agencies to train their employees on using state-approved GenAI.
Oklahoma Governor Announces AI Task Force
In late September 2023, Governor Kevin Stitt (R) announced an executive order creating a task force to study AI’s potential uses, benefits, and vulnerabilities. The task force will study, evaluate, and recommend policies and administrative actions for deploying AI and GenAI. The state Chief Information Officer will chair the task force, which is to report its findings and recommendations to the governor by the end of 2023. Additionally, the executive order calls on every state agency and department to designate one person to become an AI and GenAI expert.
Pennsylvania Signs Order to Govern the Use of Generative AI
On September 20, 2023, Governor Josh Shapiro (D) signed an executive order establishing standards and a governance framework for the use of generative AI by Commonwealth agencies. In the order, Shapiro called on state agencies to govern their use of AI through ten “core values”: accuracy, adaptability, employee empowerment, equity and fairness, innovation, mission alignment, privacy, proportionality, safety and security, and transparency.
The order also established the Generative AI Governing Board in Pennsylvania, whose mission is to recommend guidance and direction on the use of generative AI in state agencies. The board, which meets monthly, comprises 12 members, including the governor’s chief of staff, director of digital strategy, and chief transformation and opportunity officer, among others.
Virginia Signs Directive on AI
On September 20, 2023, Governor Glenn Youngkin (R) signed an executive directive to “ensure public protections while recognizing opportunities in AI innovation.” In the directive, the governor notes that the Commonwealth needs standards and protocols for adopting AI across state government so it can take advantage of the fast-growing technology’s benefits while mitigating its associated risks.
Specifically, the directive calls for a review of the legal requirements under state law for using AI technology; the identification of policy standards for state agencies to use AI effectively; the identification of appropriate IT safeguards (e.g., cybersecurity measures and firewalls) to mitigate security and privacy risks; and efforts to teach students how to use the technology while protecting against the misuse of AI in schools.