THE LATEST CRYPTO NEWS

Active Filters
# ai safety
#ai safety #ai legislation #california ai bill #sb 1047 #ai safety bill #newsom ai #gavin newsom ai #gavin newsom tech

Gavin Newsom has vetoed SB 1047, saying that "while well-intentioned," the bill could place unnecessary restrictions on emerging AI companies in California.

#law #startups #bills #ai safety #innovators #ai legislation #nancy pelosi #california ai bill #sb 1047 #scott wiener #tech regulation #california assembly

Senator Scott Wiener defends California’s AI bill, SB 1047, against criticism from Nancy Pelosi and other policymakers, emphasizing the need for oversight beyond tech companies.

#artificial intelligence #ai safety #open-source ai #google ai #ai transparency #gemma 2 #shieldgemma #gemma scope #generative ai models

These new models offer additional tools for developers and researchers, contributing to ongoing efforts toward a secure and transparent AI future.

#openai #ai safety #ai regulation #senate bills #future of ai innovation act #create ai act #nsf ai education act

OpenAI's support for these bills highlights a broader vision for AI that balances safety, accessibility, and the potential for educational progress.

#artificial intelligence #openai #ai safety #sec investigation #illegal ndas #whistleblowers #chuck grassley #non-disclosure agreements #openai response

The complainants called the matter "urgent," but it remains unclear whether the SEC will open an investigation.

#artificial intelligence #ai development #ai safety #ilya sutskever #ssi #safe superintelligence inc #daniel levy #daniel gross #palo alto #tel aviv

The new company will develop AI safety and capabilities in tandem.

#artificial intelligence #openai #anthropic #ai ethics #ai development #ai risks #ai safety #agi #whistleblower protections #deepmind #right to warn

Former OpenAI, Anthropic, and DeepMind employees urge AI companies to expand whistleblower protections to publicly address AI risks amid growing concerns over the “deprioritization” of safety.

#openai #ai safety #agi #ilya sutskever #jan leike #superalignment team #governance crisis #internal restructuring

Following the recent resignations, OpenAI has opted to dissolve its “Superalignment” team and integrate its functions into other research projects within the organization.

#artificial intelligence #quantum computing #ai risks #wef #ai safety #misinformation #global risks report #job market disruptions #frontier technologies #world economic forum

The WEF also noted that AI offers productivity benefits and breakthroughs in fields as diverse as healthcare, education, and climate change.