Analysis of Pillar 1 of America's new AI Action Plan
- Ksenia Laputko
- Jul 24
- 8 min read

Yesterday marked the release of America's new AI Action Plan. Given the current U.S. administration’s established stance on advancing “AI without borders,” I anticipated a document focused heavily on promoting innovation and leadership in AI—and the title alone confirms that direction.
That said, considering the U.S. is home to some of the world’s largest and most influential tech hubs, this plan carries significant weight both domestically and globally.
As an AI governance and privacy expert, I’ve identified several key takeaways from the document worth highlighting. Let’s explore them.
Section about recommended actions
"work with all Federal agencies to identify, revise, or repeal regulations, rules, memoranda, administrative orders, guidance documents, policy statements, and interagency agreements that unnecessarily hinder AI development or deployment"
Remember the Biden Executive Order on Safe, Secure, and Trustworthy AI—the one that directed federal agencies to develop concrete safeguards and standards for AI systems? Yes, the same order that was among the first to be rolled back.
In this new AI Action Plan, we see a clear shift: the intention appears to be removing many of the earlier boundaries and compliance expectations that were starting to take shape.
Is that a safer approach? Personally, I don’t think so. Removing foundational guardrails in the name of rapid progress may come at the cost of accountability, fairness, and long-term trust.
And it doesn't stop there. The AI Action Plan proposes that federal agencies with discretionary AI-related funding should, where legally allowed, take a state's AI regulatory environment into account when making funding decisions. If a state's AI regulations are seen as likely to hinder the effectiveness of federal funding or awards, agencies are encouraged to limit or withhold such funding.
And it goes even further: the plan includes a directive to review past Federal Trade Commission (FTC) actions related to AI in order to ease regulatory roadblocks:
a) FTC investigations initiated under the previous administration will be reviewed to ensure they are not based on overly burdensome theories of liability that could stifle AI innovation.
b) All final orders, consent decrees, and injunctions previously issued by the FTC will also be reassessed. Where appropriate, the government may seek to modify or set aside any enforcement actions deemed excessively restrictive to AI development.
In essence, the aim is to recalibrate past enforcement to better support innovation while maintaining consumer protections.
Sections on freedom of speech and "Empower American Workers in the Age of AI"
The good parts:
1."AI systems will play a profound role in how we educate our children, do our jobs, and consume media it is essential that these systems be built from the ground up with freedom of speech and expression in mind, and that U.S. government policy does not interfere with that objective.We must ensure that free speech flourishes in the era of AI and that AI procured by the Federal government objectively reflects truth rather than social engineering agendas."
2. Open-source and open-weight AI models, freely available for use and modification, offer significant advantages for innovation, especially for startups, academia, and entities with sensitive data. They reduce reliance on closed model vendors and support rigorous research. To maintain leadership and promote American values globally, the U.S. should foster an environment that supports the development and responsible release of open models, recognizing their strategic and economic value.
3. The "worker-first AI agenda" introduced by the Trump Administration offers a strategically sound approach by recognizing that while AI can boost productivity and create entirely new industries, it also brings fundamental changes to how work is performed. The emphasis on expanding AI literacy, developing new skills, and supporting workers through transitions signals a proactive stance toward ensuring that technological advancement does not come at the expense of the American workforce.
The initiatives, including the executive orders focused on AI education for youth and preparing Americans for high-paying skilled trade jobs, demonstrate a commitment to building future-ready talent. However, for this vision to succeed, it must be supported by more than just temporary pilots or symbolic actions. Meaningful implementation will require long-term funding, widespread access to training, and programs that are deeply aligned with labor market demands.
Equally important is the need to ensure that AI literacy programs are comprehensive—not limited to surface-level digital skills but inclusive of more advanced knowledge in data, automation, and ethical use. To avoid deepening the digital divide, retraining initiatives must also reach underserved and rural communities. Furthermore, close collaboration with industry will be necessary to ensure that the skills being developed are truly relevant.
In essence, while the agenda is promising and timely, its impact will depend heavily on consistent investment, inclusive design, and strong public-private cooperation.
Section "Enable AI Adoption"
"Today, the bottleneck to harnessing AI’s full potential is not necessarily the availability of models, tools, or applications. Rather, it is the limited and slow adoption of AI, particularly within large, established organizations. Many of America’s most critical sectors, such as healthcare, are especially slow to adopt due to a variety of factors, including distrust or lack of understanding of the technology, a complex regulatory landscape, and a lack of clear governance and risk mitigation standards. A coordinated Federal effort would be beneficial in establishing a dynamic, “try-first” culture for AI across American industry."
This statement rightly identifies the slow adoption of AI in critical sectors like healthcare but overlooks a crucial concern: the reason for this hesitancy is not just cultural inertia or regulatory complexity — it is also the very real need for safeguards. Encouraging a "try-first" culture without robust and enforceable legal and ethical frameworks is risky, especially in high-stakes domains like healthcare, where errors can cost lives and trust.
Critical sectors require more than cultural encouragement; they demand clear limitations on AI’s role, strict data protection rules, explainability standards, and accountability mechanisms. A "try-first" approach, if not paired with strong oversight, could lead to premature deployment of AI tools in environments where reliability, safety, and human oversight are non-negotiable.
Rather than accelerating adoption at any cost, the government should prioritize building a trustworthy AI governance framework that includes sector-specific guardrails — not bypass them in the name of innovation.
Section "Support Next-Generation Manufacturing"
This section, while emphasizing innovation and global competitiveness, raises significant concerns due to its uncritical endorsement of AI integration into high-risk, sensitive domains—particularly autonomous drones and military applications. The push to support next-generation manufacturing powered by AI glosses over the ethical, legal, and safety risks that such technologies pose, especially when deployed in areas like defense and surveillance.
Autonomous drones and AI-enabled robotics introduce serious dilemmas: issues of accountability in the event of malfunctions, the potential for disproportionate use of force, and the erosion of human oversight in life-and-death decisions. The mention of "applications to defense and national security" hints at militarization, yet fails to outline any safeguards or limitations to prevent misuse or escalation.
Rather than calling solely for investment, this strategy should emphasize strict regulation, risk assessments, and international coordination. Supporting innovation should not come at the cost of undermining human rights, global stability, or democratic oversight. The enthusiasm for a so-called "industrial renaissance" must be tempered by a clear framework of ethical responsibility and rigorous public debate, especially when such technologies can cause real-world harm.
Sections "Invest in AI Interpretability, Control, and Robustness Breakthroughs" and "Build an AI Evaluations Ecosystem"
The interpretability section stands out as one of the more thoughtful and logically grounded parts of the document. It rightly highlights the urgent need to invest in AI interpretability, control, and robustness—particularly in the context of high-stakes domains like defense and national security. Acknowledging that even leading experts often cannot fully explain or predict the outputs of large language models (LLMs) reflects a realistic understanding of the current limitations of AI.
Unlike earlier sections that promote rapid deployment of AI in sensitive sectors without adequate safeguards, this part recognizes that unpredictability and lack of transparency are real barriers to responsible AI use. It implicitly suggests a pause for foundational research before rushing into operational deployment, which is a welcome and necessary stance. At the same time, it sits uneasily with the document's earlier enthusiasm for deploying AI in defense and autonomous systems, which carried no equivalent emphasis on oversight or ethical guardrails.
Nonetheless, the recognition that we need scientific breakthroughs in interpretability before AI can be safely applied in life-critical domains is a crucial and commendable message. It reflects a maturity that should be consistently applied across the entire strategy.
The evaluations section is also one of the stronger and more constructive parts of the document. It emphasizes the essential role of rigorous evaluations in assessing the performance and reliability of AI systems—particularly in regulated industries where the stakes are high. Recognizing evaluations as a foundation for accountability and safety reflects a sound understanding of the need for measurable standards.
The suggestion that regulators should consider using evaluations as a tool when applying existing laws to AI systems is both practical and forward-thinking. It offers a bridge between current regulatory frameworks and emerging technologies, without immediately resorting to creating entirely new laws. This approach allows for more adaptive oversight and supports responsible innovation.

In contrast to sections that encourage rapid AI deployment with minimal safeguards, this point reinforces the importance of building trust through transparency and measurable benchmarks. It's a necessary component of any credible AI governance framework and deserves to be expanded and prioritized across the policy.
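To make the idea of measurable benchmarks concrete, here is a minimal sketch of what an evaluation harness might look like in practice. Everything in it (the EvalItem structure, the toy model, the exact-match scoring, the pass threshold) is a hypothetical illustration for this post, not anything prescribed by the Action Plan or any regulator.

```python
# Minimal sketch of an AI evaluation harness (hypothetical illustration only;
# not taken from the Action Plan). Each benchmark item pairs an input with an
# expected answer; the harness reports an aggregate score that an auditor or
# regulator could compare against a required threshold.

from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalItem:
    prompt: str
    expected: str

def run_evaluation(model: Callable[[str], str], items: list[EvalItem]) -> float:
    """Return the fraction of items the model answers correctly (exact match)."""
    passed = sum(1 for item in items if model(item.prompt).strip() == item.expected)
    return passed / len(items)

if __name__ == "__main__":
    # A stand-in "model" for illustration only.
    def toy_model(prompt: str) -> str:
        return "approved" if "low-risk" in prompt else "needs human review"

    benchmark = [
        EvalItem("low-risk permit renewal", "approved"),
        EvalItem("eligibility appeal with missing records", "needs human review"),
    ]
    score = run_evaluation(toy_model, benchmark)
    print(f"Benchmark score: {score:.0%}")  # e.g., a rule might require >= 95%
```

The point of even a toy harness like this is that it turns "the system works" into a number that can be audited, compared across vendors, and written into procurement or compliance requirements.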
Section "Accelerate AI Adoption in Government"
This provision, while seemingly beneficial on the surface, clearly requires careful limitations and oversight. Accelerating AI adoption in government has the potential to improve efficiency, but without strict safeguards, it also opens the door to serious risks.
When AI is used in public administration—particularly in decision-making that affects individuals' rights, benefits, or access to services—it must be subject to transparency, explainability, and strong accountability measures. Automating internal processes may streamline bureaucracy, but automating interactions with the public (such as eligibility determinations, permit approvals, or law enforcement functions) risks entrenching biases, reducing human oversight, and eroding public trust.
Moreover, the notion of a “highly responsive government” powered by AI could easily turn into a government that makes decisions faster but not necessarily more justly or accurately. To prevent this, AI use in government must be paired with clear limits, robust human-in-the-loop mechanisms, and legally binding rules to protect civil liberties and due process.
Thus, while modernization is welcome, the blanket acceleration of AI in government without specifying these guardrails could be dangerous and counterproductive.
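For illustration only, here is a minimal sketch of the kind of human-in-the-loop gate described above. The Decision fields, the confidence floor, and the routing rules are assumptions I am making for the example; the plan itself specifies no such mechanism.

```python
# Hypothetical sketch of a human-in-the-loop gate for an AI-assisted decision
# pipeline in public administration. The thresholds, fields, and routing rules
# are illustrative assumptions, not anything specified in the Action Plan.

from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str          # e.g., "approve" / "deny"
    confidence: float     # model's self-reported confidence, 0.0 to 1.0
    affects_rights: bool  # touches benefits, permits, or due process?

def route(decision: Decision, confidence_floor: float = 0.9) -> str:
    """Auto-apply only low-stakes, high-confidence decisions; escalate the rest."""
    if decision.affects_rights or decision.confidence < confidence_floor:
        return "escalate to human reviewer"  # human oversight is the default
    return "auto-apply with audit log"

# Example: an eligibility denial always goes to a human, however confident the model.
print(route(Decision("deny", confidence=0.97, affects_rights=True)))
# -> escalate to human reviewer
```

The design choice worth noticing is that escalation is the default path: the system has to earn the right to act autonomously, decision by decision, rather than the other way around.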
Section "Drive Adoption of AI within the Department of Defense"
While scholars such as Barry Scannell have rightly raised concerns about the ethical and legal risks of using AI in warfare—highlighting issues such as loss of human accountability, unpredictability of autonomous systems, and compliance with international humanitarian law—this section of the action plan seems to brush aside those warnings with almost unsettling confidence.
The idea of “aggressively adopting AI” in the Armed Forces, framed as a strategy to maintain “global military preeminence,” comes off as not only reckless but somewhat detached from the very real dangers experts have flagged. At a time when respected voices in the AI and legal communities are urging caution, international cooperation, and strict oversight, this provision reads more like a tech-enthusiast sales pitch than a carefully weighed national security policy.
It borders on the absurd to advocate for widespread AI militarization while the world hasn’t even reached consensus on banning autonomous weapons. Scannell and others highlight how AI, when applied in conflict, opens the door to catastrophic errors, untraceable decision-making, and destabilizing arms races. Ignoring these concerns or treating them as footnotes risks turning speculative fears into imminent crises.
Section "Combat Synthetic Media in the Legal System"
This provision stands out as one of the more thoughtful and necessary components of the action plan. It directly acknowledges the growing and serious threat that synthetic media—especially deepfakes—poses to the integrity of the legal system. Recognizing that AI-generated content, including fake videos, photos, and audio, could be weaponized to distort evidence or undermine judicial processes is a critical step.
The reference to the TAKE IT DOWN Act is a positive signal, but more importantly, the plan’s call for further legal safeguards reflects a realistic understanding that current protections are insufficient. Unlike other sections of the plan that promote rapid deployment of AI even in high-risk domains, this one appropriately calls for caution, regulation, and anticipatory measures. It’s a clear, logical, and necessary response to one of the most visible and damaging misuses of generative AI—and deserves support and further development.