US Science & Tech

Scott Wiener on his fight to make Big Tech disclose AI's dangers

By News Desk · September 23, 2025

    This is not California state Senator Scott Wiener’s first attempt at addressing the dangers of AI.

    In 2024, Silicon Valley mounted a fierce campaign against his controversial AI safety bill, SB 1047, which would have made tech companies liable for the potential harms of their AI systems. Tech leaders warned that it would stifle America’s AI boom. Governor Gavin Newsom ultimately vetoed the bill, echoing similar concerns, and a popular AI hacker house promptly threw a “SB 1047 Veto Party.” One attendee told me, “Thank god, AI is still legal.”

Now Wiener has returned with a new AI safety bill, SB 53, which sits on Governor Newsom's desk awaiting his signature or veto sometime in the next few weeks. This time around, the bill is much more popular, or at least Silicon Valley doesn't seem to be at war with it.

    Anthropic outright endorsed SB 53 earlier this month. Meta spokesperson Jim Cullinan tells TechCrunch that the company supports AI regulation that balances guardrails with innovation and says “SB 53 is a step in that direction,” though there are areas for improvement.

    Former White House AI policy advisor Dean Ball tells TechCrunch that SB 53 is a “victory for reasonable voices,” and thinks there’s a strong chance Governor Newsom signs it.

    If signed, SB 53 would impose some of the nation’s first safety reporting requirements on AI giants like OpenAI, Anthropic, xAI, and Google — companies that today face no obligation to reveal how they test their AI systems. Many AI labs voluntarily publish safety reports explaining how their AI models could be used to create bioweapons and other dangers, but they do this at will and they’re not always consistent.

    The bill requires leading AI labs — specifically those making more than $500 million in revenue — to publish safety reports for their most capable AI models. Much like SB 1047, the bill specifically focuses on the worst kinds of AI risks: their ability to contribute to human deaths, cyberattacks, and chemical weapons. Governor Newsom is considering several other bills that address other types of AI risks, such as engagement-optimization techniques in AI companions.

    SB 53 also creates protected channels for employees working at AI labs to report safety concerns to government officials, and establishes a state-operated cloud computing cluster, CalCompute, to provide AI research resources beyond the big tech companies.

    One reason SB 53 may be more popular than SB 1047 is that it’s less severe. SB 1047 also would have made AI companies liable for any harms caused by their AI models, whereas SB 53 focuses more on requiring self-reporting and transparency. SB 53 also narrowly applies to the world’s largest tech companies, rather than startups.

    But many in the tech industry still believe states should leave AI regulation up to the federal government. In a recent letter to Governor Newsom, OpenAI argued that AI labs should only have to comply with federal standards — which is a funny thing to say to a state governor. The venture firm Andreessen Horowitz wrote a recent blog post vaguely suggesting that some bills in California could violate the Constitution’s dormant Commerce Clause, which prohibits states from unfairly limiting interstate commerce.

    Senator Wiener addresses these concerns: he lacks faith in the federal government to pass meaningful AI safety regulation, so states need to step up. In fact, Wiener thinks the Trump administration has been captured by the tech industry, and that recent federal efforts to block all state AI laws are a form of Trump “rewarding his funders.”

    The Trump administration has made a notable shift away from the Biden administration’s focus on AI safety, replacing it with an emphasis on growth. Shortly after taking office, Vice President J.D. Vance appeared at an AI conference in Paris and said: “I’m not here this morning to talk about AI safety, which was the title of the conference a couple of years ago. I’m here to talk about AI opportunity.”

    Silicon Valley has applauded this shift, exemplified by Trump’s AI Action Plan, which removed barriers to building out the infrastructure needed to train and serve AI models. Today, Big Tech CEOs are regularly seen dining at the White House or announcing hundred-billion-dollar data centers alongside President Trump.

    Senator Wiener thinks it’s critical for California to lead the nation on AI safety, but without choking off innovation.

    I recently interviewed Senator Wiener to discuss his years at the negotiating table with Silicon Valley and why he’s so focused on AI safety bills. Our conversation has been edited lightly for clarity and brevity. My questions are in bold, and his answers are not.

    Maxwell Zeff: Senator Wiener, I interviewed you when SB 1047 was sitting on Governor Newsom’s desk. Talk to me about the journey you’ve been on to regulate AI safety in the last few years.

    Scott Wiener: It’s been a roller coaster, an incredible learning experience, and just really rewarding. We’ve been able to help elevate this issue [of AI safety], not just in California, but in the national and international discourse.

We have this incredibly powerful new technology that is changing the world. How do we make sure it benefits humanity in a way where we reduce the risk? How do we promote innovation while also being very mindful of public health and public safety? It's an important — and in some ways, existential — conversation about the future. SB 1047, and now SB 53, have helped to foster that conversation about safe innovation.

    In the last 20 years of technology, what have you learned about the importance of laws that can hold Silicon Valley to account?

    I’m the guy who represents San Francisco, the beating heart of AI innovation. I’m immediately north of Silicon Valley itself, so we’re right here in the middle of it all. But we’ve also seen how the large tech companies — some of the wealthiest companies in world history — have been able to stop federal regulation.

    Every time I see tech CEOs having dinner at the White House with the aspiring fascist dictator, I have to take a deep breath. These are all really brilliant people who have generated enormous wealth. A lot of folks I represent work for them. It really pains me when I see the deals that are being struck with Saudi Arabia and the United Arab Emirates, and how that money gets funneled into Trump’s meme coin. It causes me deep concern.

    I’m not someone who’s anti-tech. I want tech innovation to happen. It’s incredibly important. But this is an industry that we should not trust to regulate itself or make voluntary commitments. And that’s not casting aspersions on anyone. This is capitalism, and it can create enormous prosperity but also cause harm if there are not sensible regulations to protect the public interest. When it comes to AI safety, we’re trying to thread that needle.

SB 53 is focused on the worst harms that AI could imaginably cause — death, massive cyberattacks, and the creation of bioweapons. Why focus there?

The risks of AI are varied. There is algorithmic discrimination, job loss, deepfakes, and scams. There have been various bills in California and elsewhere to address those risks. SB 53 was never intended to cover the field and address every risk created by AI. We're focused on one specific category: catastrophic risk.

    That issue came to me organically from folks in the AI space in San Francisco — startup founders, frontline AI technologists, and people who are building these models. They came to me and said, ‘This is an issue that needs to be addressed in a thoughtful way.’

    Do you feel that AI systems are inherently unsafe, or have the potential to cause death and massive cyberattacks?

I don't think they're inherently safe. I know there are a lot of people working in these labs who care very deeply about trying to mitigate risk. And again, it's not about eliminating risk. Life is about risk; unless you're going to live in your basement and never leave, you're going to have risk in your life. Even in your basement, the ceiling might fall down.

    Is there a risk that some AI models could be used to do significant harm to society? Yes, and we know there are people who would love to do that. We should try to make it harder for bad actors to cause these severe harms, and so should the people developing these models.

    Anthropic issued its support for SB 53. What are your conversations like with other industry players?

We've talked to everyone: large companies, small startups, investors, and academics. Anthropic has been really constructive. Last year, they never formally supported [SB 1047], but they had positive things to say about aspects of the bill. I don't think [Anthropic] loves every aspect of SB 53, but I think they concluded that on balance the bill was worth supporting.

I've had conversations with large AI labs who are not supporting the bill, but are not at war with it in the way they were with SB 1047. It's not surprising: SB 1047 was more of a liability bill, while SB 53 is more of a transparency bill. Startups have been less engaged this year because the bill really focuses on the largest companies.

    Do you feel pressure from the large AI PACs that have formed in recent months?

    This is another symptom of Citizens United. The wealthiest companies in the world can just pour endless resources into these PACs to try to intimidate elected officials. Under the rules we have, they have every right to do that. It’s never really impacted how I approach policy. There have been groups trying to destroy me for as long as I’ve been in elected office. Various groups have spent millions trying to blow me up, and here I am. I’m in this to do right by my constituents and try to make my community, San Francisco, and the world a better place.

    What’s your message to Governor Newsom as he’s debating whether to sign or veto this bill?

    My message is that we heard you. You vetoed SB 1047 and provided a very comprehensive and thoughtful veto message. You wisely convened a working group that produced a very strong report, and we really looked to that report in crafting this bill. The governor laid out a path, and we followed that path in order to come to an agreement, and I hope we got there.
