In 2025, AI regulation has gained global momentum—especially within the EU and US—with governments rolling out new frameworks and plans to ensure ethical, transparent, and safe AI.
EU AI Act Clarifications Come into Focus
On July 18, 2025, the European Commission published new draft guidelines specifying how the EU AI Act applies to General-Purpose AI (GPAI) systems. These guidelines clarify compliance expectations related to lifecycle management, risk categorization, and developer responsibilities. They are designed to help companies demonstrate control and transparency as the regulations begin to take hold.
EU InvestAI Initiative Underscores Regulatory Balance
At the recent AI Action Summit, the EU announced InvestAI, a €200 billion investment initiative that includes €20 billion earmarked for AI gigafactories. Alongside this, France alone saw €110 billion pledged for AI development, and other global partners such as the UAE and Canada committed substantial funding. This indicates that Europe aims to pair regulation with large-scale innovation to maintain leadership.
US Lags as Pressure Builds for AI Diplomacy
A recent Time article (August 13, 2025) highlights that China, often assumed to be behind on AI governance, is aggressively pushing AI safety standards, enacting product reviews, safety assessments, and emergency-planning requirements. The piece calls for the U.S. to step up, particularly through technical collaboration and governance frameworks with China, to address global risks.
Summary Table
| Jurisdiction | Key Developments |
|---|---|
| EU | GPAI guidelines; InvestAI funding push |
| US | Calls for stronger AI safety policy and cooperation with China |
This convergence indicates growing regulatory momentum: the EU is coupling innovation with oversight, and the U.S. may be pressured into strategic coordination on safety. Your business should prepare for a complex regulatory mosaic of global transparency requirements, audit standards, and governance frameworks.