Align in Time

Humans and the Emerging Superintelligence

Content Curation and Reporting by Alison Avery

Email: alison@alignintime.org

● Is Alignment Possible? ● Who’s Taking It Seriously? ● Will We Have Time?

A Midnight Maneuver: How AI Giants Could Strip States of Their Power

A graphic stating, “Did you know? In the US, sandwich shops have more rules than AI companies.”


In the final days before Congress votes on the National Defense Authorization Act, AI industry giants are urging the inclusion of a clause that bans states from enforcing AI regulations. If successful, the provision would invalidate all existing state AI laws and bar future ones, including those that steer companies toward responsible and safe AI advancement.

This isn’t the first time big AI tech has tried this move.

In July, the Senate firmly rejected a similar provision intended for inclusion in the One Big Beautiful Bill.

In its original form, the proposal was worded as a 10-year ban preventing state and local governments from making or enforcing AI-related laws. More precisely, it forbade “any law or regulation limiting, restricting, or otherwise regulating artificial intelligence models, artificial intelligence systems, or automated decision systems entered into interstate commerce”—although laws removing legal obstacles to AI deployment or those streamlining “licensing, permitting, routing, zoning, procurement, or reporting procedures” were welcome.

Later, the proposal was also tied to federal funding. States not complying with the decade-long moratorium would become ineligible to receive funds from the Broadband Equity, Access, and Deployment (BEAD) program, a $42.45 billion national grant program focused on creating infrastructure that ensures all Americans have access to high-speed internet service.

Thankfully, as Congress attempted to settle on the phrasing of the ban, a bipartisan group of senators, including Senators Maria Cantwell (D-WA), Marsha Blackburn (R-TN), Ed Markey (D-MA), and Susan Collins (R-ME), recognized the dangers of stripping away state-level protections and submitted an amendment to strike the provision entirely.

The amendment to strike passed the Senate with a rare, almost unanimous vote of 99–1.

In that moment, the Senate underscored that state-level laws are crucial because people in the United States remain largely unprotected from serious AI risks. In other words, at the national level, there is really nothing standing between U.S. citizens and the potential dangers of fast-moving AI technologies.

Now, only months later, AI industry forces and the White House are renewing their shared push to ban state AI laws. House Majority Leader Steve Scalise says that since the prior attempt, some House Republicans have been actively seeking other legislative vehicles to push the ban through, and he confirms the annual National Defense Authorization Act is a target.

How would they do it? As tried before with the OBBB, they’d simply tuck in the language. The defense policy conference reports the House and Senate use to compose and negotiate the annual bill run thousands of pages, and last year’s final NDAA totaled 794 pages. That massive size makes the bill an attractive vehicle for slipping in controversial provisions.

For example, House Representative Marjorie Taylor Greene noticed, much to her regret, the AI law moratorium buried on just two pages of the One Big Beautiful Bill, but only after the House had passed the legislation.

An emphasis on speed adds another hurdle for legislators reviewing the NDAA documents—with numerous powers granted by the bill expiring at the close of the fiscal year, House and Senate members feel compelled to finalize voting as early as possible.

This is precisely why AI safety advocacy groups have been leading the charge in notifying citizens and members of Congress about the possible ban; once the massive NDAA conference report is released, there will be limited time to spot the language before the bill is finalized.

The proposed ban has drawn serious concern and pushback from Americans who rarely agree on anything else, including governors, AI researchers, safety experts, state attorneys general, lawmakers, and the general public, Republican and Democrat alike.

At the heart of their concerns is a disturbing fact: in the absence of comprehensive federal AI legislation, prohibiting states from making their own laws leaves a dangerous and potentially long-lasting regulatory vacuum during the most crucial phase of AI development and transition.

Senator Brian Schatz (D-HI) points out:

“Even if you’re excited about AI, even the smartest, most pro-AI people think that there is a regulatory framework that has to be established. And so it would be one thing if we were to establish a federal standard, and that would preempt the states. But this is just saying that ‘we don’t know what the hell we’re going to do, and none of you all can do anything until we figure it out.’”

Experts and safety advocates at the respected Future of Life Institute work to steer powerfully transformative technologies in pro-human directions and away from catastrophic threats. The institute’s president, long-time MIT physicist and AI researcher Max Tegmark, recently commented on AI oversight:

“The sandwich shop on your corner has more oversight than an AI company. Federal preemption of state AI laws would hand Big Tech a blank check—overriding the legitimate concerns of local communities while letting corporations release products they freely admit are dangerous and uncontrollable.”

Prior to the One Big Beautiful Bill vote, attorneys general from 40 of the nation’s 56 jurisdictions—representing more than 71% of all states, territories, and the District of Columbia—sent a letter to Congressional leadership opposing the inclusion of the ban. 

The letter made clear:

“…the amendment added to the reconciliation bill abdicates federal leadership and mandates that all states abandon their leadership in this area as well. This bill does not propose any regulatory scheme to replace or supplement the laws enacted or currently under consideration by the states, leaving Americans entirely unprotected from the potential harms of AI. Moreover, this bill purports to wipe away any state-level frameworks already in place.” 

“These laws and their regulations have been developed over years through careful consideration and extensive stakeholder input from consumers, industry, and advocates. And, in the years ahead, additional matters—many unforeseeable today given the rapidly evolving nature of this technology—are likely to arise.” 

Bruce Schneier, a cybersecurity expert at the Harvard Kennedy School, a Fellow at the Berkman Klein Center for Internet & Society, and a board member of the Electronic Frontier Foundation, has worked for years at the intersection of security, technology, and policy.

In December, he told Route Fifty, a publication on leadership and technology for U.S. state and local decision-makers, that barring states from governing AI while Congress has no plan in place to address AI impacts would be “kind of disastrous.” He went on to express doubts about the federal government’s capacity to produce reasonable legislation, adding:

“That leaves states, which are closest to the people and are in the best position to regulate the harms of AI. And, even better, as states experiment with different regulations we can see what works and what doesn’t.”

Over a week ago, 296 state legislators representing 43 states sent a letter to Congress strongly opposing any AI preemption language that would curtail state efforts to legally address AI impacts. In the letter, they emphasized: 

“States serve as laboratories of democracy, directly accountable to their residents, and must retain the flexibility to confront new digital challenges as they arise. State experimentation and varied approaches to AI governance help build a stronger national foundation for sound policymaking. And as AI evolves rapidly, state and local governments may be better positioned than Congress or federal agencies to respond in real time. Freezing state action now would stifle needed innovation in policy design at a moment when it is most needed.” 

Prominent Republican governors are also sounding the alarm. Florida Governor Ron DeSantis had this to say on X.com:

“The rise of AI is the most significant economic and cultural shift occurring at the moment; denying the people the ability to channel these technologies in a productive way via self-government constitutes federal government overreach and lets technology companies run wild.” 

Sarah Huckabee Sanders, Republican Governor of Arkansas and former White House press secretary to Donald Trump, also posted on X.com:

“This summer I led 20 GOP governors to pressure Congress to vote down its 10 year prohibition on state-level AI regulations… Now isn’t the time to backtrack. Drop the preemption plan now…” 

In an interview with National Public Radio, Utah’s Republican governor, Spencer Cox, explained:

 “…when it comes to the deployment of artificial intelligence as it impacts our kids and our families, our schools, the human-flourishing piece of this, we should also be incredibly cautious. I’m very worried about any type of federal incursion into states’ abilities to regulate AI. We’ve seen how social media companies, the most powerful companies in the history of the world, have used this incredible tool to utterly destroy our kids and our families, their mental health, to use this in ways that have made them a lot of money and gotten people addicted to outrage. AI’s going to be even worse.”

Clearly, this all points to something bigger than mere grumbling on the part of U.S. states.  

In my everyday life, I’ve never been very into U.S. national politics. I don’t seek out political discussions or feel moved to keep abreast of every legislative development. The game of never-ending political combat wears me down. In the day-to-day, it just isn’t something I enjoy. 

But I WILL participate when I know I need to, and now is one of those times.

Luckily, I have fighters in the ring who want to do the daily combat part of politicking, or they wouldn’t be there—I call them Congress members. 

An illustration: two mouths facing each other, opened wide, their tongues turning into fists and fighting each other.

So,

knowing what I know about how AI could go catastrophically wrong, and knowing we need more time to figure things out, and knowing that, right now, state governments are the ones closest to understanding this…

I sent this letter to my Congress members:

“As a resident of Florida and a knowledgeable writer in the AI safety research space, for the protection of all of us, including you, your family, your colleagues, your community, and your friends… PLEASE do whatever it takes to make sure the National Defense Authorization Act DOES NOT PREEMPT OR BLOCK STATES’ RIGHTS to actively participate in AI regulation and safety stewardship on behalf of their constituents.

AI companies are swiftly iterating and releasing AI technologies throughout our country and our world, reaching into all aspects of our existence and livelihoods. Each rapid iteration brings ever-increasing intelligence, including abilities for AIs to make their own decisions and take autonomous action within our society’s foundational systems and infrastructure.

The publicly stated objective of AI companies is to achieve not just beneficial AI, but extraordinarily “Powerful AI” (also referred to as “AGI” or “ASI”) with abilities that immeasurably surpass humans in intelligence, speed, and coordinated action. Simultaneously, these companies openly confirm that there is no known method to guarantee that a powerful AI’s abilities will stay within the control of humans. Academic, independent, and non-profit AI science research from around the globe firmly agrees on this fact.

For all of its promise and possibility, AI’s corresponding potential for severe and catastrophic human danger is significant. Its advancement and real-world integration are happening at unprecedented velocity. This combination of conditions makes risk from AI the most immediate collective safety threat we face.

Informed and mature involvement from all of us is needed to guide AI’s continued unfolding toward the best possible outcomes while preventing the worst from transpiring. This responsible involvement includes the right of Florida’s government and citizens to enact AI safety laws when we believe their protection is just and necessary.

With human stakes this high, we can’t continue to allow AI companies to be the sole arbiters of threats to our collective safety.”

– Alison Avery

I love technology and I use AI tools every day. I’m not wishing AI weren’t here. And I don’t want it to go away. But if national due diligence to prevent AI from going down a very bad path within 2 to 20 years seems as important to you as it does to me, we need to speak up now.
 
You can easily send a letter to your own Congress members. The safety advocates at the Future of Life Institute have made it incredibly easy:

Enter your name, address, and state. 
Use their template letter or your own words. 
Press submit. 
 
And off it will go.

