The OpenAI-Pentagon deal and the federal standoff with Anthropic signal the urgent need for a more developed AI safety industry to provide external security standards.
Anthropic, OpenAI, Google DeepMind and others have long promised to govern themselves responsibly. Now, in the absence of rules, there is little to hold them to those promises.
Anthropic’s chatbot Claude seems to have benefited from the attention around the company’s fraught negotiations with the Pentagon.
Block cut 40% of its workforce from a position of strength. Jack Dorsey says most CEOs are next. Here’s the three-layer framework every board needs before that happens.

When Xiangyi Cheng published her first journal paper as a principal investigator in IEEE Access in 2024, it marked more than a professional milestone. For Cheng, an IEEE member and an assistant professor of mechanical engineering at Loyola Marymount U…
AI lets small businesses compete by automating work, but it demands verified code, ethical data practices, and measurable ROI.
Leaked code reveals that Google Maps is testing Nano Banana AI to let users restyle Street View images, bringing generative AI to more than 2 billion users.
OpenAI’s CEO claims its new defense contract includes protections addressing the same issues that became a flashpoint for Anthropic.
Explore the companion relationship between humans and AI, what the research says about its genuine capacity to reduce loneliness, where it holds real promise for human wellbeing, and how to engage with it in ways that strengthen rather than replace t…