Federal AI Action Plan - AI Analysis of Public Comment
On February 26, 2025, the federal government released a Request for Information to gather public input on the development of a Federal AI Action Plan. By the time the March 15th deadline arrived, over 10,000 responses had been received. I was curious about what came in via public comment, but didn't have time to read the many thousands of responses, so I used ChatGPT to help analyze the large volume of public comment. Please read on for an overview of the themes and insights that emerged.
Simon Szykman
4/29/2025 · 3 min read


On February 26, 2025, the federal government released a Request for Information to gather public input on the development of a Federal AI Action Plan. By the time the March 15th deadline arrived, over 10,000 responses had been received. Those responses were released to the public last week at www.nitrd.gov. I was curious about what came in via public comment, but didn't have time to read the many thousands of responses, so I used ChatGPT to help analyze the large volume of public comment.
Of the 10,068 responses, 755 were submitted by organizations, which were categorized into the following groupings: private sector (companies), industry councils and professional associations, academic institutions, nonprofits, and state and local government organizations. The remaining responses, which accounted for the overwhelming majority (92.5%) of the submissions, came from individual members of the general public. With ChatGPT's assistance, I was able to carry out some interesting analysis of these groups of submissions.
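For readers curious about how an analysis like this could be automated rather than done interactively, here is a minimal, hypothetical sketch of batching submissions through OpenAI's API to assign each one to a respondent category. This is not the exact workflow used for this post; the file handling, category list, prompt wording, and model choice are all illustrative assumptions.

```python
# Hypothetical sketch: classifying RFI submissions into respondent categories
# with the OpenAI API. Categories, prompt, and model are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CATEGORIES = [
    "private sector (company)",
    "industry council or professional association",
    "academic institution",
    "nonprofit",
    "state or local government",
    "individual member of the public",
]

def categorize(response_text: str) -> str:
    """Ask the model to assign a single submission to exactly one category."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": (
                    "Classify the following RFI submission into exactly one of these "
                    f"categories: {', '.join(CATEGORIES)}. Reply with the category name only."
                ),
            },
            # Truncate very long submissions to keep the request within context limits.
            {"role": "user", "content": response_text[:8000]},
        ],
    )
    return completion.choices[0].message.content.strip()
```

In practice, a loop over the downloaded submissions would call a function like this for each one and tally the results by category before moving on to per-group summarization.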
Across the full set of responding groups, there is broad consensus on the need for strong federal leadership, robust data governance, investment in secure and ethical AI infrastructure, and coordinated national policy. Virtually all groups champion innovation and research, call for responsible, inclusive deployment of AI, and highlight the importance of workforce development and education to ensure society is prepared for rapid technological change.
However, perspectives diverge sharply regarding intellectual property, economic impact, and the power of big tech companies. The submitting organizations largely adopt a constructive, partnership-oriented, and future-focused tone in their perspectives and recommendations, aiming to balance innovation with risk management, inclusion, and public trust. In contrast, individual members of the public voice deep skepticism and distrust, particularly regarding AI's impact on creative work, copyright, and job security, and call for strict regulation and protection.
Digging further below the surface, sector-specific nuances emerged:
Industry stresses infrastructure and operational needs.
Academia seeks open science and flexible policy.
Professional associations push for standards and public trust.
State/local governments emphasize economic development and equity for all communities.
Nonprofits advocate for marginalized groups.
Individuals focus on personal and creator rights.
Despite these differences, the shared priorities of leadership, ethical practice, and a national strategy for AI remain foundational across all groups. It was also interesting to examine the areas that were generally common across the various groups of respondents and those where they differed. Here are a few shared themes and priorities:
Demand for Strong Government Leadership & Policy: All groups call for robust, coordinated federal action and clear national standards for AI.
Data, Security, and Infrastructure: Every group places high importance on data governance, privacy, and security.
Responsible, Ethical, and Inclusive AI: Ethical use, public trust, and inclusion are cross-cutting priorities.
Innovation and Research: Support for innovation, research, and R&D investment is universal.
Education and Workforce Development: Almost all groups highlight workforce development, digital literacy, and education as critical.
In contrast, here are some of the distinct perspectives and key differences among the groups:
Intellectual Property, Copyright, and Big Tech Power: Individual responses are dominated by negative sentiment about copyright infringement, creative work exploitation, and distrust of AI companies. Other groups focus less on this issue.
Tone and Sentiment: Individuals are overwhelmingly negative/alarmist. Other groups are mostly positive/proactive, seeing AI as an opportunity—with constructive concerns about risks.
Sector-Specific Emphasis: Each group emphasizes different aspects (infrastructure, research, standards, economic development, vulnerable communities, creator rights).
Recommendations: Industry, professional associations, and academia advocate for partnerships. Individuals demand direct regulation and compensation.
This blog post is meant to serve as a quick read -- you can think of it as an aggregated analysis in executive summary form. If you are interested in a deeper dive, I have created stand-alone posts with a more detailed analysis for each of these categories, complete with a sentiment analysis, word cloud, list of topics common across many responses, recurring themes among recommendations to the government, and any notable observations. If you don't have time to read all of them but are interested in comparing and contrasting the most divergent perspectives, I suggest you take a look at the summary for industry and the one for individual members of the general public, as these are the two largest groups of respondents and also the ones that offered very different views in their input to the government.
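As an aside, word clouds like the ones in those deep-dive posts are straightforward to produce. Below is a minimal sketch using the open-source wordcloud package, assuming a group's responses have already been concatenated into a plain-text file; the file names and extra stop words are illustrative assumptions, not the exact settings used.

```python
# Hypothetical sketch: generating a word cloud for one respondent group.
# Input file name and stop-word additions are illustrative only.
from wordcloud import WordCloud, STOPWORDS

with open("individual_responses.txt", encoding="utf-8") as f:
    text = f.read()

cloud = WordCloud(
    width=1200,
    height=600,
    background_color="white",
    # Filter out terms that would dominate every group's cloud.
    stopwords=STOPWORDS | {"AI", "artificial", "intelligence"},
).generate(text)

cloud.to_file("individuals_wordcloud.png")
```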
Links to deep-dive analyses: