Salesforce on Tuesday launched an entirely rebuilt version of Slackbot, the company’s workplace assistant, transforming it from a simple notification tool into what executives describe as a fully powered AI agent capable of searching enterprise data, drafting documents, and taking action on behalf of employees.
The new Slackbot, now generally available to Business+ and Enterprise+ customers, is Salesforce’s most aggressive move yet to position Slack at the center of the emerging “agentic AI” movement — where software agents work alongside humans to complete complex tasks. The launch comes as Salesforce attempts to convince investors that artificial intelligence will bolster its products rather than render them obsolete.
“Slackbot isn’t just another copilot or AI assistant,” said Parker Harris, Salesforce co-founder and Slack’s chief technology officer, in an exclusive interview with VentureBeat. “It’s the front door to the agentic enterprise, powered by Salesforce.”
Harris was blunt about what distinguishes the new Slackbot from its predecessor: “The old Slackbot was, you know, a little tricycle, and the new Slackbot is like, you know, a Porsche.”
The original Slackbot, which has existed since Slack’s early days, performed basic algorithmic tasks — reminding users to add colleagues to documents, suggesting channel archives, and delivering simple notifications. The new version runs on an entirely different architecture built around a large language model and sophisticated search capabilities that can access Salesforce records, Google Drive files, calendar data, and years of Slack conversations.
“It’s two different things,” Harris explained. “The old Slackbot was algorithmic and fairly simple. The new Slackbot is brand new — it’s based around an LLM and a very robust search engine, and connections to third-party search engines, third-party enterprise data.”
Salesforce chose to retain the Slackbot brand despite the fundamental technical overhaul. “People know what Slackbot is, and so we wanted to carry that forward,” Harris said.
The new Slackbot runs on Claude, Anthropic’s large language model, a choice driven partly by compliance requirements. Slack’s commercial service operates under FedRAMP Moderate certification to serve U.S. federal government customers, and Harris said Anthropic was “the only provider that could give us a compliant LLM” when Slack began building the new system.
But that exclusivity won’t last. “We are, this year, going to support additional providers,” Harris said. “We have a great relationship with Google. Gemini is incredible — performance is great, cost is great. So we’re going to use Gemini for some things.” He added that OpenAI remains a possibility as well.
Harris echoed Salesforce CEO Marc Benioff’s view that large language models are becoming commoditized: “You’ve heard Marc talk about LLMs are commodities, that they’re democratized. I call them CPUs.”
On the sensitive question of training data, Harris was unequivocal: Salesforce does not train any models on customer data. “Models don’t have any sort of security,” he explained. “If we trained it on some confidential conversation that you and I have, I don’t want Carolyn to know — if I train it into the LLM, there is no way for me to say you get to see the answer, but Carolyn doesn’t.”
Salesforce has been testing the new Slackbot internally for months, rolling it out to all 80,000 employees. According to Ryan Gavin, Slack’s chief marketing officer, the results have been striking: “It’s the fastest adopted product in Salesforce history.”
Internal data shows that two-thirds of Salesforce employees have tried the new Slackbot, with 80% of those users continuing to use it regularly. Internal satisfaction rates reached 96% — the highest for any AI feature Slack has shipped. Employees report saving between two and 20 hours per week.
The adoption happened largely organically. “I think it was about five days, and a Canvas was developed by our employees called ‘The Most Stealable Slackbot Prompts,’” Gavin said. “People just started adding to it organically. I think it’s up to 250-plus prompts that are in this Canvas right now.”
Kate Crotty, a principal UX researcher at Salesforce, found that 73% of internal adoption was driven by social sharing rather than top-down mandates. “Everybody is there to help each other learn and communicate hacks,” she said.
During a product demonstration, Amy Bauer, Slack’s product experience designer, showed how Slackbot can synthesize information across multiple sources. In one example, she asked Slackbot to analyze customer feedback from a pilot program, uploaded an image of a usage dashboard, and had Slackbot correlate the qualitative and quantitative data.
“This is where Slackbot really earns its keep for me,” Bauer explained. “What it’s doing is not just simply reading the image — it’s actually looking at the image and comparing it to the insight it just generated for me.”
Slackbot can then query Salesforce to find enterprise accounts with open deals that might be good candidates for early access, creating what Bauer called “a really great justification and plan to move forward.” Finally, it can synthesize all that information into a Canvas — Slack’s collaborative document format — and find calendar availability among stakeholders to schedule a review meeting.
“Up until this point, we have been working in a one-to-one capacity with Slackbot,” Bauer said. “But one of the benefits that I can do now is take this insight and have it generate this into a Canvas, a shared workspace where I can iterate on it, refine it with Slackbot, or share it out with my team.”
Rob Seaman, Slack’s chief product officer, said the Canvas creation demonstrates where the product is heading: “This is making a tool call internally to Slack Canvas to actually write, effectively, a shared document. But it signals where we’re going with Slackbot — we’re eventually going to be adding in additional third-party tool calls.”
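For readers unfamiliar with how such tool calls work, the underlying pattern is simple: the model emits a tool name plus JSON arguments, and the host application routes them to a handler. The sketch below shows that generic dispatch pattern in Python; the create_canvas handler and its fields are hypothetical stand-ins, not Slack's actual internal API.

```python
# Illustrative only: a minimal tool-call dispatch pattern. The create_canvas
# handler and its fields are hypothetical, not Slack's internal API.
import json

def create_canvas(title: str, body: str) -> dict:
    """Hypothetical handler that would write a shared document."""
    return {"canvas_id": "demo-123", "title": title, "chars": len(body)}

TOOLS = {"create_canvas": create_canvas}

# An LLM tool call typically arrives as a tool name plus JSON-encoded arguments.
tool_call = {
    "name": "create_canvas",
    "arguments": json.dumps({
        "title": "Pilot feedback synthesis",
        "body": "Key themes from the qualitative and quantitative data...",
    }),
}

result = TOOLS[tool_call["name"]](**json.loads(tool_call["arguments"]))
print(result)
```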
Among Salesforce’s pilot customers is Beast Industries, the parent company of YouTube star MrBeast. Luis Madrigal, the company’s chief information officer, joined the launch announcement to describe his experience.
“As somebody who has rolled out enterprise technologies for over two decades now, this was practically one of the easiest,” Madrigal said. “The plumbing is there. Slack as an implementation, Enterprise Tools — being able to turn on the Slackbot and the Slack AI functionality was as simple as having my team go in, review, do a quick security review.”
Madrigal said his security team signed off “rather quickly” — unusual for enterprise AI deployments — because Slackbot accesses only the information each individual user already has permission to view. “Given all the guardrails you guys have put into place for Slackbot to be unique and customized to only the information that each individual user has, only the conversations and the Slack rooms and Slack channels that they’re part of—that made my security team sign off rather quickly.”
One Beast Industries employee, Sinan, the head of Beast Games marketing, reported saving “at bare minimum, 90 minutes a day.” Another employee, Spencer, a creative supervisor, described it as “an assistant who’s paying attention when I’m not.”
Other pilot customers include Slalom, reMarkable, Xero, Mercari, and Engine. Mollie Bodensteiner, SVP of Operations at Engine, called Slackbot “an absolute ‘chaos tamer’ for our team,” estimating it saves her about 30 minutes daily “just by eliminating context switching.”
The launch puts Salesforce in direct competition with Microsoft’s Copilot, which is integrated into Teams and the broader Microsoft 365 suite, as well as Google’s Gemini integrations across Workspace. When asked what distinguishes Slackbot from these alternatives, Seaman pointed to context and convenience.
“The thing that makes it most powerful for our customers and users is the proximity — it’s just right there in your Slack,” Seaman said. “There’s a tremendous convenience affordance that’s naturally built into it.”
The deeper advantage, executives argue, is that Slackbot already understands users’ work without requiring setup or training. “Most AI tools sound the same no matter who is using them,” the company’s announcement stated. “They lack context, miss nuance, and force you to jump between tools to get anything done.”
Harris put it more directly: “If you’ve ever had that magic experience with AI — I think ChatGPT is a great example, it’s a great experience from a consumer perspective — Slackbot is really what we’re doing in the enterprise, to be this employee super agent that is loved, just like people love using Slack.”
Amy Bauer emphasized the frictionless nature of the experience. “Slackbot is inherently grounded in the context, in the data that you have in Slack,” she said. “So as you continue working in Slack, Slackbot gets better because it’s grounded in the work that you’re doing there. There is no setup. There is no configuration for those end users.”
Salesforce positions Slackbot as what Harris calls a “super agent” — a central hub that can eventually coordinate with other AI agents across an organization.
“Every corporation is going to have an employee super agent,” Harris said. “Slackbot is essentially taking the magic of what Slack does. We think that Slackbot, and we’re really excited about it, is going to be that.”
The vision extends to third-party agents already launching in Slack. Last month, Anthropic released a preview of Claude Code for Slack, allowing developers to interact with Claude’s coding capabilities directly in chat threads. OpenAI, Google, Vercel, and others have also built agents for the platform.
“Most of the net-new apps that are being deployed to Slack are agents,” Seaman noted during the press conference. “This is proof of the promise of humans and agents coexisting and working together in Slack to solve problems.”
Harris described a future where Slackbot becomes an MCP (Model Context Protocol) client, able to leverage tools from across the software ecosystem — similar to how the developer tool Cursor works. “Slack can be an MCP client, and Slackbot will be the hub of that, leveraging all these tools out in the world, some of which will be these amazing agents,” he said.
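MCP is a JSON-RPC-based protocol, so a client of the kind Harris describes ultimately comes down to a handful of standard messages. The sketch below constructs the initialize, tools/list, and tools/call requests named in the public Model Context Protocol spec; the transport is omitted, and the search_docs tool name is an assumption for illustration.

```python
# A minimal sketch of the JSON-RPC messages an MCP client exchanges with a
# server, per the public Model Context Protocol spec. Transport details are
# omitted and the "search_docs" tool name is hypothetical.
import itertools
import json

_ids = itertools.count(1)

def rpc(method: str, params: dict) -> str:
    return json.dumps({"jsonrpc": "2.0", "id": next(_ids),
                       "method": method, "params": params})

# 1. Handshake: the client announces its protocol version and capabilities.
print(rpc("initialize", {"protocolVersion": "2024-11-05",
                         "capabilities": {},
                         "clientInfo": {"name": "demo-client", "version": "0.1"}}))

# 2. Discover which tools the server exposes.
print(rpc("tools/list", {}))

# 3. Invoke one of them.
print(rpc("tools/call", {"name": "search_docs",
                         "arguments": {"query": "Q3 pilot feedback"}}))
```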
But Harris also cautioned against over-promising on multi-agent coordination. “I still think we’re in the single agent world,” he said. “FY26 is going to be the year where we started to see more coordination. But we’re going to do it with customer success in mind, and not demonstrate and talk about, like, ‘I’ve got 1,000 agents working together,’ because I think that’s unrealistic.”
Slackbot is included at no additional cost for customers on Business+ and Enterprise+ plans. “There’s no additional fees customers have to do,” Gavin confirmed. “If they’re on one of those plans, they’re going to get Slackbot.”
However, some enterprise customers may face other cost pressures related to Salesforce’s broader data strategy. CIOs may see price increases for third-party applications that work with Salesforce data, as effects of higher charges for API access ripple through the software supply chain.
Fivetran CEO George Fraser has warned that Salesforce’s shift in pricing policy for API access could have tangible consequences for enterprises relying on Salesforce as a system of record. “They might not be able to use Fivetran to replicate their data to Snowflake and instead have to use Salesforce Data Cloud. Or they might find that they are not able to interact with their data via ChatGPT, and instead have to use Agentforce,” Fraser said in a recent CIO report.
Salesforce has framed the pricing change as standard industry practice.
The new Slackbot begins rolling out today and will reach all eligible customers by the end of February. Mobile availability will be complete by March 3, Bauer confirmed during her interview with VentureBeat.
Some capabilities remain works in progress. Calendar reading and availability checking are available at launch, but the ability to actually book meetings is “coming a few weeks after,” according to Seaman. Image generation is not currently supported, though Bauer said it’s “something that we are looking at in the future.”
When asked about integration with competing CRM systems like HubSpot and Microsoft Dynamics, Salesforce representatives declined to provide specifics during the interview, though they acknowledged the question touched on key competitive differentiators.
The Slackbot launch is Salesforce’s bet that the future of enterprise work is conversational — that employees will increasingly prefer to interact with AI through natural language rather than navigating traditional software interfaces.
Harris described Slack’s product philosophy using principles like “don’t make me think” and “be a great host.” The goal, he said, is for Slackbot to surface information proactively rather than requiring users to hunt for it.
“One of the revelations for me is LLMs applied to unstructured information are incredible,” Harris said. “And the amount of value you have if you’re a Slack user, if your corporation uses Slack — the amount of value in Slack is unbelievable. Because you’re talking about work, you’re sharing documents, you’re making decisions, but you can’t as a human go through that and really get the same value that an LLM can do.”
Looking ahead, Harris expects the interfaces themselves to evolve beyond pure conversation. “We’re kind of saturating what we can do with purely conversational UIs,” he said. “I think we’ll start to see agents building an interface that best suits your intent, as opposed to trying to surface something within a conversational interface that matches your intent.”
Microsoft, Google, and a growing roster of AI startups are placing similar bets — that the winning enterprise AI will be the one embedded in the tools workers already use, not another application to learn. The race to become that invisible layer of workplace intelligence is now fully underway.
For Salesforce, the stakes extend beyond a single product launch. After a bruising year on Wall Street and persistent questions about whether AI threatens its core business, the company is wagering that Slackbot can prove the opposite — that the tens of millions of people already chatting in Slack every day are not a vulnerability, but an unassailable advantage.
Haley Gault, a Salesforce account executive in Pittsburgh who stumbled upon the new Slackbot on a snowy morning, captured the shift in a single sentence: “I honestly can’t imagine working for another company not having access to these types of tools. This is just how I work now.”
That’s precisely what Salesforce is counting on.
In an impressive feat, Japanese startup Sakana AI's coding agent ALE-Agent recently secured first place in the AtCoder Heuristic Contest (AHC058), a competition built around hard combinatorial optimization problems. That makes it a more difficult, and perhaps more telling, challenge than benchmarks like HumanEval, which mostly test the ability to write isolated functions and which many AI models and agents now pass with ease, a phenomenon known as "benchmark saturation."
Sakana’s accomplishment with ALE-Agent hints at a shift toward agents capable of autonomously optimizing themselves to navigate and perform well in complex, dynamic systems such as enterprise software stacks, workflows, and operational environments.
In four hours, the agent used inference-time scaling to generate, test, and iterate over hundreds of solutions, solving a problem that typically requires deep intuition and time-consuming trial and error from human experts. It outperformed over 800 human participants, including top-tier competitive programmers.
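In rough outline, that inference-time scaling loop looks like the sketch below: propose a candidate with a reasoning model, score it against the contest's scorer, and feed the result back into the next proposal. The propose_solution and evaluate functions here are placeholders, not Sakana AI's actual implementation.

```python
# Schematic generate-test-iterate loop. propose_solution() stands in for a
# reasoning-model call and evaluate() for running the contest scorer; both are
# placeholders for illustration.
import random

def propose_solution(feedback: str) -> str:
    return f"candidate conditioned on: {feedback[:40]}"

def evaluate(candidate: str) -> float:
    return random.random()

best, best_score = None, float("-inf")
feedback = "initial problem statement"
for trial in range(200):              # the real run iterated for roughly four hours
    candidate = propose_solution(feedback)
    score = evaluate(candidate)
    if score > best_score:
        best, best_score = candidate, score
    feedback = f"trial {trial}: score {score:.3f}, best so far {best_score:.3f}"

print(round(best_score, 3))
```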
The challenge in AHC058 was a classic combinatorial optimization problem. Participants were tasked with managing a set of machines with hierarchical relationships, such as machines that produce apples, and other machines that build those apple-producing machines. The goal was to maximize output over a fixed number of turns.
In the enterprise world, this workflow usually follows a strict pattern: a domain expert works with a client to define an “objective function” (aka the Scorer), and then engineers build a software system to optimize it. These problems are notoriously difficult because they cannot be solved in a single stage. They require exploration, strategy, and the ability to pivot when a plan isn’t working.
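To make that workflow concrete, here is a deliberately tiny "Scorer" in the spirit of the apple-machine problem: on each turn a plan either builds another producer or harvests, and the objective is total apples after a fixed number of turns. It is a simplification for illustration, not the actual AHC058 scoring code.

```python
# Toy scorer: "build" spends a turn adding a producer; "harvest" has every
# producer make one apple. The objective is total apples after `turns` turns.
def score(plan: list[str], turns: int = 10) -> int:
    producers, apples = 1, 0
    for t in range(turns):
        action = plan[t] if t < len(plan) else "harvest"
        if action == "build":
            producers += 1
        else:
            apples += producers
    return apples

print(score(["build", "build"] + ["harvest"] * 8))   # invest early, then harvest: 24
print(score(["harvest"] * 10))                       # never invest: 10
```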
Human experts typically approach this using a two-stage strategy. First, they use a “Greedy” method (a lightweight solver that makes the best immediate choice at each step) to generate a decent baseline solution. Then, they apply “simulated annealing,” a technique that takes the existing plan and makes tiny, random adjustments to see if the score improves. However, this standard approach is rigid. If the initial Greedy plan heads in the wrong direction, simulated annealing can rarely fix it because it only looks for local improvements in a faulty area of the solution space.
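The sketch below shows that two-stage recipe on the toy scorer from the previous sketch: a greedy pass produces a baseline (which here always harvests, since building pays nothing immediately), and simulated annealing then mutates one decision at a time, accepting worse moves with a probability that shrinks as the "temperature" cools. It is illustrative only, not contest-grade code.

```python
# Two-stage baseline: greedy initialization followed by simulated annealing.
import math
import random

def score(plan, turns=10):            # toy scorer from the sketch above
    producers, apples = 1, 0
    for t in range(turns):
        if t < len(plan) and plan[t] == "build":
            producers += 1
        else:
            apples += producers
    return apples

def greedy(turns=10):
    # Myopic choice: harvesting always beats building for the current turn.
    return ["harvest"] * turns

def simulated_annealing(plan, iters=5000, temp=2.0, cooling=0.999):
    best, cur = plan[:], plan[:]
    for _ in range(iters):
        cand = cur[:]
        cand[random.randrange(len(cand))] = random.choice(["build", "harvest"])
        delta = score(cand) - score(cur)
        # Accept improvements outright; accept regressions with decaying probability.
        if delta >= 0 or random.random() < math.exp(delta / temp):
            cur = cand
            if score(cur) > score(best):
                best = cur[:]
        temp *= cooling
    return best

baseline = greedy()
improved = simulated_annealing(baseline)
print(score(baseline), score(improved))
```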
ALE-Agent’s innovation was transforming this static initialization tool into a dynamic reconstruction engine. Instead of relying on immediate value, the agent independently derived a concept it called “Virtual Power.” It assigned values to components that were not yet operational, treating them as if they already possessed value. By valuing potential future assets rather than just current ones, the agent capitalized on the “compound interest effect,” a concept it explicitly identified in its internal logs. Basically, it could look a few steps ahead and reason about the future instead of looking at the immediate feedback it was receiving from its environment.
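One loose way to picture Virtual Power: when evaluating an action greedily, credit a machine that is not yet operational with the output it could still generate over the remaining turns, rather than valuing it at zero. The formula in the sketch below is an assumption made for illustration, not the agent's actual derivation.

```python
# Contrast between a purely myopic value and a "virtual power"-style value that
# credits future output. The discounting rule here is an illustrative assumption.
def immediate_value(action: str, producers: int) -> int:
    return producers if action == "harvest" else 0   # building pays nothing now

def virtual_value(action: str, producers: int, turns_left: int) -> float:
    if action == "harvest":
        return producers
    # A new producer is worth roughly one apple per remaining turn after it exists.
    return max(turns_left - 1, 0)

producers, turns_left = 1, 9
for action in ("harvest", "build"):
    print(action,
          "immediate:", immediate_value(action, producers),
          "with virtual power:", virtual_value(action, producers, turns_left))
```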
Crucially, the agent needed to maintain this strategy over a four-hour window without losing focus, a common failure mode known as “context drift.” In comments provided to VentureBeat, the Sakana AI team explained that the agent generates textual “insights” by reflecting on each trial. It gathers this knowledge to prevent cycling back to previously failed strategies and creates a working memory that allows it to look a few steps ahead rather than just reacting to immediate feedback.
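Mechanically, that working memory can be as simple as a list of short notes appended to the next prompt, as in the sketch below; every function body here is a placeholder rather than the agent's real reflection logic.

```python
# Minimal reflection loop: record a textual insight after each trial and feed
# the accumulated notes into the next prompt. Trial results are hard-coded
# placeholders.
insights: list[str] = []

def run_trial(strategy: str) -> float:
    return {"pure greedy": 0.4, "greedy + virtual power": 0.9}.get(strategy, 0.1)

def reflect(strategy: str, result: float) -> str:
    verdict = "keep building on it." if result > 0.5 else "avoid repeating it."
    return f"'{strategy}' scored {result:.2f}; {verdict}"

for strategy in ["pure greedy", "greedy + virtual power"]:
    insights.append(reflect(strategy, run_trial(strategy)))

next_prompt = "Prior insights:\n" + "\n".join(insights) + "\nPropose the next solution."
print(next_prompt)
```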
Furthermore, the agent integrated Greedy methods directly into the simulated annealing phase to avoid getting stuck in local optima, using high-speed reconstruction to delete and rebuild large sections of the solution on the fly.
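That move resembles what optimization researchers call large neighborhood search: instead of nudging one decision at a time, wipe a contiguous chunk of the plan and regrow it with a forward-looking heuristic. The sketch below is an illustrative assumption about how such a move could look on the toy problem, not the agent's code.

```python
# Destroy-and-rebuild move: delete a segment of the plan and regrow it greedily
# using a forward-looking ("virtual power"-style) value. Heuristic details are
# illustrative assumptions.
import random

def forward_value(action: str, turns_left: int) -> float:
    return 1.0 if action == "harvest" else max(turns_left - 1, 0)

def rebuild_segment(plan: list[str], start: int, length: int, turns: int) -> list[str]:
    new_plan = plan[:]
    for t in range(start, min(start + length, turns)):
        turns_left = turns - t
        new_plan[t] = max(("build", "harvest"),
                          key=lambda a: forward_value(a, turns_left))
    return new_plan

plan = ["harvest"] * 10
start = random.randrange(len(plan))
print(rebuild_segment(plan, start, length=4, turns=10))
```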
This breakthrough fits directly into existing enterprise workflows where a scoring function is already available. Currently, companies rely on scarce engineering talent to write optimization algorithms. ALE-Agent demonstrates a future where humans define the “Scorer” (i.e., the business logic and goals) and the agent handles the technical implementation.
This shifts the operational bottleneck from engineering capacity to metric clarity. If an enterprise can measure a goal, the agent can optimize it. This has direct applications in logistics, such as vehicle routing, as well as server load balancing and resource allocation.
According to the Sakana AI team, this could democratize optimization. “It enables a future where non-technical clients can interact directly with the agent, tweaking business constraints in real-time until they get the output they desire,” they said.
The Sakana AI team told VentureBeat that ALE-Agent is currently proprietary and not available for public use, and the company is currently focused on internal development and proof-of-concept collaborations with enterprises.
At the same time, the team is already looking ahead to “self-rewriting” agents. These future agents could define their own scorers, making them feasible for ill-defined problems where human experts struggle to formulate clear initial metrics.
Running ALE-Agent was not cheap. The four-hour operation incurred approximately $1,300 in compute costs involving over 4,000 reasoning calls to models like GPT-5.2 and Gemini 3 Pro. While this price point might seem high for a single coding task, the return on investment for optimization problems is often asymmetric. In a resource-management setting, a one-time cost of a few thousand dollars can result in millions of dollars in annual efficiency savings.
However, enterprises expecting costs to simply drop might be missing the strategic picture. While the cost of tokens is falling, total spend may actually rise as companies compete for better answers, a concept known as the Jevons paradox.
“While smarter algorithms will drive efficiency, the primary value of AI is its ability to explore vast solution spaces,” the Sakana AI team said. “As inference costs fall, rather than simply banking the savings, enterprises will likely choose to leverage that affordability to conduct even deeper, broader searches to find superior solutions.”
The experiment highlights the immense value still to be unlocked through inference-time scaling techniques. As AI systems gain the ability to handle complex reasoning tasks across longer contexts, building better scaffolding and allocating larger budgets for “thinking time” allows agents to rival top human experts.