Apple Loop: iPhone 18 Specs Leak, Apple Challenges Adobe, New iPhone Trade-In Prices

This week’s Apple headlines: iPhone 17e upgrades, iPhone 18 specs leak, Apple challenges Adobe, hidden MacBook Pro release date and more…

Android Circuit: Galaxy S26 Ultra Disappointment, Xiaomi Extends Support, Pixel 10a Pricing

Taking a look back at this week’s news and headlines across the Android world, including Galaxy S26 Ultra disappointment, Pixel 10a pricing, Redmi Note 15 arrives in UK, Xiaomi extends support window and more…

Austrian Audio’s The Arranger Are Open-Back Premium Headphones With A New Price Point

The Arranger headphones offer more resolution, scale and refinement than Austrian Audio’s Hi-X range, while remaining more accessible than The Composer model.

Holywater Raises Additional $22 Million To Expand AI Vertical Video Platform

Fox-backed Holywater raises $22 million to expand its AI-powered vertical video streaming platform, scaling mobile-first episodic content, microdramas, and data-driven IP discovery.

Listen Labs raises $69M after viral billboard hiring stunt to scale AI customer interviews

Alfred Wahlforss was running out of options. His startup, Listen Labs, needed to hire over 100 engineers, but competing against Mark Zuckerberg’s $100 million offers seemed impossible. So he spent $5,000 — a fifth of his marketing budget — on a billboard in San Francisco displaying what looked like gibberish: five strings of random numbers.

The numbers were actually AI tokens. Decoded, they led to a coding challenge: build an algorithm to act as a digital bouncer at Berghain, the Berlin nightclub famous for rejecting nearly everyone at the door. Within days, thousands attempted the puzzle. 430 cracked it. Some got hired. The winner flew to Berlin, all expenses paid.

That unconventional approach has now attracted $69 million in Series B funding, led by Ribbit Capital with participation from Evantic and existing investors Sequoia Capital, Conviction, and Pear VC. The round values Listen Labs at $500 million and brings its total capital to $100 million. In nine months since launch, the company has grown annualized revenue by 15x to eight figures and conducted over one million AI-powered interviews.

“When you obsess over customers, everything else follows,” Wahlforss said in an interview with VentureBeat. “Teams that use Listen bring the customer into every decision, from marketing to product, and when the customer is delighted, everyone is.”

Why traditional market research is broken, and what Listen Labs is building to fix it

Listen’s AI researcher finds participants, conducts in-depth interviews, and delivers actionable insights in hours, not weeks. The platform replaces the traditional choice between quantitative surveys — which provide statistical precision but miss nuance — and qualitative interviews, which deliver depth but cannot scale.

Wahlforss explained the limitation of existing approaches: “Essentially surveys give you false precision because people end up answering the same question… You can’t get the outliers. People are actually not honest on surveys.” The alternative, one-on-one human interviews, “gives you a lot of depth. You can ask follow up questions. You can kind of double check if they actually know what they’re talking about. And the problem is you can’t scale that.”

The platform works in four steps: users create a study with AI assistance, Listen recruits participants from its global network of 30 million people, an AI moderator conducts in-depth interviews with follow-up questions, and results are packaged into executive-ready reports including key themes, highlight reels, and slide decks.
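The four steps above amount to a simple pipeline. The sketch below is purely illustrative; every name in it (create_study, recruit, interview, report, the Study fields) is invented for this example and is not Listen Labs’ actual API.

```python
# Illustrative sketch of the four-step study workflow described above.
# All function and field names are hypothetical, not Listen Labs' real API.
from dataclasses import dataclass, field


@dataclass
class Study:
    question: str
    participants: list = field(default_factory=list)
    transcripts: list = field(default_factory=list)


def create_study(question: str) -> Study:
    """Step 1: define the research question (done with AI assistance in the product)."""
    return Study(question=question)


def recruit(study: Study, panel: list, n: int) -> None:
    """Step 2: pull n participants from the panel."""
    study.participants = panel[:n]


def interview(study: Study) -> None:
    """Step 3: an AI moderator interviews each participant, asking follow-ups."""
    for p in study.participants:
        study.transcripts.append(f"{p}: open-ended response to '{study.question}'")


def report(study: Study) -> dict:
    """Step 4: package transcripts into an executive-ready summary."""
    return {
        "question": study.question,
        "n": len(study.transcripts),
        "themes": ["placeholder theme"],
    }


study = create_study("Would you buy this drinkware concept?")
recruit(study, panel=["p1", "p2", "p3"], n=2)
interview(study)
print(report(study)["n"])  # 2
```

The point of the shape, per the article, is that the slow human steps (recruiting, moderating, summarizing) are the ones being automated.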

What distinguishes Listen’s approach is its use of open-ended video conversations rather than multiple-choice forms. “In a survey, you can kind of guess what you should answer, and you have four options,” Wahlforss said. “Oh, they probably want me to be high income. Let me click on that button versus an open ended response. It just generates much more honesty.”

The dirty secret of the $140 billion market research industry: rampant fraud

Listen finds and qualifies the right participants in its global network of 30 million people. But building that panel required confronting what Wahlforss called “one of the most shocking things that we’ve learned when we entered this industry” — rampant fraud.

“Essentially, there’s a financial transaction involved, which means there will be bad players,” he explained. “We actually had some of the largest companies, some of them have billions in revenue, send us people who claim to be kind of enterprise buyers to our platform and our system immediately detected, like, fraud, fraud, fraud, fraud, fraud.”

The company built what it calls a “quality guard” that cross-references LinkedIn profiles with video responses to verify identity, checks consistency across how participants answer questions, and flags suspicious patterns. The result, according to Wahlforss: “People talk three times more. They’re much more honest when they talk about sensitive topics like politics and mental health.”
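As a rough illustration of how a check in this spirit might combine signals, here is a toy scoring function. Everything in it is invented for the example (the two signals, the weights, the 0.75 threshold); Listen’s actual quality guard cross-references LinkedIn profiles with video responses and is far more involved.

```python
# Toy sketch of a "quality guard"-style check: combine an identity signal
# with an answer-consistency signal and flag low scorers. The signals,
# weights, and threshold are invented for illustration only.

def quality_score(claimed_title: str, stated_title: str,
                  answers: list[str]) -> float:
    """Return a 0..1 score from two toy signals: does the stated role match
    the claimed one, and are the answers non-duplicated?"""
    identity = 1.0 if claimed_title.lower() == stated_title.lower() else 0.0
    consistency = 1.0 if len(set(answers)) == len(answers) else 0.0
    return 0.5 * identity + 0.5 * consistency


def is_flagged(claimed_title: str, stated_title: str, answers: list[str],
               threshold: float = 0.75) -> bool:
    """Flag a participant whose combined score falls below the threshold."""
    return quality_score(claimed_title, stated_title, answers) < threshold


# A "fake enterprise buyer" who repeats answers gets flagged; a consistent
# participant whose role checks out does not.
print(is_flagged("Enterprise Buyer", "Student", ["same", "same"]))          # True
print(is_flagged("Enterprise Buyer", "enterprise buyer", ["a", "b"]))       # False
```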

Emeritus, an online education company that uses Listen, reported that approximately 20% of survey responses previously fell into the fraudulent or low-quality category. With Listen, they reduced this to almost zero. “We did not have to replace any responses because of fraud or gibberish information,” said Gabrielli Tiburi, Assistant Manager of Customer Insights at Emeritus.

How Microsoft, Sweetgreen, and Chubbies are using AI interviews to build better products

The speed advantage has proven central to Listen’s pitch. Traditional customer research at Microsoft could take four to six weeks to generate insights. “By the time we get to them, either the decision has been made or we lose out on the opportunity to actually influence it,” said Romani Patel, Senior Research Manager at Microsoft.

With Listen, Microsoft can now get insights in days, and in many cases, within hours.

The platform has already powered several high-profile initiatives. Microsoft used Listen Labs to collect global customer stories for its 50th anniversary celebration. “We wanted users to share how Copilot is empowering them to bring their best self forward,” Patel said, “and we were able to collect those user video stories within a day.” Traditionally, that kind of work would have taken six to eight weeks.

Simple Modern, an Oklahoma-based drinkware company, used Listen to test a new product concept. The process took about an hour to write questions, an hour to launch the study, and 2.5 hours to receive feedback from 120 people across the country. “We went from ‘Should we even have this product?’ to ‘How should we launch it?’” said Chris Hoyle, the company’s Chief Marketing Officer.

Chubbies, the shorts brand, achieved a 24x increase in youth research participation — growing from 5 to 120 participants — by using Listen to overcome the scheduling challenges of traditional focus groups with children. “There’s school, sports, dinner, and homework,” explained Lauren Neville, Director of Insights and Innovation. “I had to find a way to hear from them that fit into their schedules.”

The company also discovered product issues through AI interviews that might have gone undetected otherwise. Wahlforss described how the AI “through conversations, realized there were like issues with the kids short line, and decided to, like, interview hundreds of kids. And I understand that there were issues in the liner of the shorts and that they were, like, scratchy, quote, unquote, according to the people interviewed.” The redesigned product became “a blockbuster hit.”

The Jevons paradox explains why cheaper research creates more demand, not less

Listen Labs is entering a massive but fragmented market. Wahlforss cited research from Andreessen Horowitz estimating the market research industry at roughly $140 billion annually, populated by legacy players — some with more than a billion dollars in revenue — that he believes are vulnerable to disruption.

“There are very much existing budget lines that we are replacing,” Wahlforss said. “Why we’re replacing them is that one, they’re super costly. Two, they’re kind of stuck in this old paradigm of choosing between a survey or interview, and they also take months to work with.”

But the more intriguing dynamic may be that AI-powered research doesn’t just replace existing spending — it creates new demand. Wahlforss invoked the Jevons paradox, the economic observation that when a technological advance makes a resource more efficient to use, overall consumption of it tends to rise rather than fall.

“What I’ve noticed is that as something gets cheaper, you don’t need less of it. You want more of it,” Wahlforss explained. “There’s infinite demand for customer understanding. So the researchers on the team can do an order of magnitude more research, and also other people who weren’t researchers before can now do that as part of their job.”

Inside the elite engineering team that built Listen Labs before they had a working toilet

Listen Labs traces its origins to a consumer app that Wahlforss and his co-founder built after meeting at Harvard. “We built this consumer app that got 20,000 downloads in one day,” Wahlforss recalled. “We had all these users, and we were thinking like, okay, what can we do to get to know them better? And we built this prototype of what Listen is today.”

The founding team brings an unusual pedigree. Wahlforss’s co-founder “was the national champion in competitive programming in Germany, and he worked at Tesla Autopilot.” The company claims that 30% of its engineering team are medalists from the International Olympiad in Informatics — the same competition that produced the founders of Cognition, the AI coding startup.

The Berghain billboard stunt generated approximately 5 million views across social media, according to Wahlforss. It reflected the intensity of the talent war in the Bay Area.

“We had to do these things because some of our, like early employees, joined the company before we had a working toilet,” he said. “But now we fixed that situation.”

The company grew from 5 to 40 employees in 2024 and plans to reach 150 this year. It hires engineers for non-engineering roles across marketing, growth, and operations — a bet that in the AI era, technical fluency matters everywhere.

Synthetic customers and automated decisions: what Listen Labs is building next

Wahlforss outlined an ambitious product roadmap that pushes into more speculative territory. The company is building “the ability to simulate your customers, so you can take all of those interviews we’ve done, and then extrapolate based on that and create synthetic users or simulated user voices.”

Beyond simulation, Listen aims to enable automated action based on research findings. “Can you not just make recommendations, but also create spawn agents to either change things in code or some customer churns? Can you give them a discount and try to bring them back?”

Wahlforss acknowledged the ethical implications. “Obviously, as you said, there’s kind of ethical concerns there. Of like, automated decision making overall can be bad, but we will have considerable guardrails to make sure that the companies are always in the loop.”

The company already handles sensitive data with care. “We don’t train on any of the data,” Wahlforss said. “We will also scrub any sensitive PII automatically so the model can detect that. And there are times when, for example, you work with investors, where if you accidentally mention something that could be material, non public information, the AI can actually detect that and remove any information like that.”

How AI could reshape the future of product development

Perhaps the most provocative implication of Listen’s model is how it could reshape product development itself. Wahlforss described a customer — an Australian startup — that has adopted what amounts to a continuous feedback loop.

“They’re based in Australia, so they’re coding during the day, and then in their night, they’re releasing a Listen study with an American audience. Listen validates whatever they built during the day, and they get feedback on that. They can then plug that feedback directly into coding tools like Claude Code and iterate.”

The vision extends Y Combinator’s famous dictum — “write code, talk to users” — into an automated cycle. “Write code is now getting automated. And I think like talk to users will be as well, and you’ll have this kind of infinite loop where you can start to ship this truly amazing product, almost kind of autonomously.”

Whether that vision materializes depends on factors beyond Listen’s control — the continued improvement of AI models, enterprise willingness to trust automated research, and whether speed truly correlates with better products. A 2025 MIT study found that 95% of AI pilots fail to move into production, a statistic Wahlforss cited as the reason he emphasizes quality over demos.

“I constantly have to emphasize, like, let’s make sure the quality is there and the details are right,” he said.

But the company’s growth suggests appetite for the experiment. Microsoft’s Patel said Listen has “removed the drudgery of research and brought the fun and joy back into my work.” Chubbies is now pushing its founder to give everyone in the company a login. Sling Money, a stablecoin payments startup, can create a survey in ten minutes and receive results the same day.

“It’s a total game changer,” said Ali Romero, Sling Money’s marketing manager.

Wahlforss has a different phrase for what he’s building. When asked about the tension between speed and rigor — the long-held belief that moving fast means cutting corners — he cited Nat Friedman, the former GitHub CEO and Listen investor, who keeps a list of one-liners on his website.

One of them: “Slow is fake.”

It’s an aggressive claim for an industry built on methodological caution. But Listen Labs is betting that in the AI era, the companies that listen fastest will be the ones that win. The only question is whether customers will talk back.

Kilo launches AI-powered Slack bot that ships code from a chat message

Kilo Code, the open-source AI coding startup backed by GitLab cofounder Sid Sijbrandij, is launching a Slack integration that allows software engineering teams to execute code changes, debug issues, and push pull requests directly from their team chat — without opening an IDE or switching applications.

The product, called Kilo for Slack, arrives as the AI-assisted coding market heats up with multibillion-dollar acquisitions and funding rounds. But rather than building another siloed coding assistant, Kilo is making a calculated bet: that the future of AI development tools lies not in locking engineers into a single interface, but in embedding AI capabilities into the fragmented workflows where decisions actually happen.

“Engineering teams don’t make decisions in IDE sidebars. They make them in Slack,” Scott Breitenother, Kilo Code’s co-founder and CEO, said in an interview with VentureBeat. “The Slackbot allows you to do all this — and more — without leaving Slack.”

The launch also marks a partnership with MiniMax, the Shanghai-based AI company that recently completed a successful initial public offering in Hong Kong. MiniMax’s M2.1 model will serve as the default model powering Kilo for Slack — a decision the company frames as a statement about the closing gap between open-weight and proprietary frontier models.

How Kilo for Slack turns team conversations into pull requests without leaving the chat

The integration operates on a simple premise: Slack threads often contain the context needed to fix a bug or implement a feature, but that context gets lost the moment a developer switches to their code editor.

With Kilo for Slack, users mention @Kilo in a Slack thread, and the bot reads the entire conversation, accesses connected GitHub repositories, and either answers questions about the codebase or creates a branch and submits a pull request.

A typical interaction might look like this: A product manager reports a bug in a Slack channel. Engineers discuss potential causes. Instead of someone copying the conversation into their IDE and re-explaining the problem to an AI assistant, a developer simply types: “@Kilo based on this thread, can you implement the fix for the null pointer exception in the Authentication service?”

The bot then spins up a cloud agent, reads the thread context, implements the fix, and pushes a pull request — all visible in Slack.

The company says the entire process eliminates the need to copy information between apps or jump between windows — developers can trigger complex code changes with nothing more than a single message in Slack.
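The thread-to-pull-request flow described above can be sketched in a few lines. The names here (handle_mention, run_agent) are hypothetical stand-ins, not Kilo’s implementation; the point is only the shape of the pipeline: whole-thread context in, un-merged pull request out, with review left to humans.

```python
# Illustrative sketch of the @Kilo flow described above. All names are
# hypothetical stand-ins, not Kilo's actual implementation.

def run_agent(context: str, repo: str) -> dict:
    """Stub standing in for the cloud coding agent that drafts a patch."""
    return {"title": f"Fix discussed in thread ({len(context)} chars of context)"}


def handle_mention(thread_messages: list[str], repo: str) -> dict:
    # 1. Gather the entire thread as context, not just the mention itself.
    context = "\n".join(thread_messages)
    # 2. Hand the context to a cloud agent to produce a patch.
    patch = run_agent(context, repo)
    # 3. Open a pull request; merging stays behind the team's existing
    #    review workflow, so nothing lands automatically.
    return {"repo": repo, "title": patch["title"], "merged": False}


pr = handle_mention(
    ["PM: login throws a null pointer exception",
     "@Kilo based on this thread, implement the fix in AuthenticationService"],
    repo="org/auth-service",
)
print(pr["merged"])  # False
```

The `merged: False` at the end mirrors the guardrail Breitenother describes later in the piece: the bot opens PRs but never merges them.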

Why Kilo says Cursor and Claude Code fall short when developers need multi-repo context

Kilo’s launch explicitly positions the product against two leading AI coding tools: Cursor, which raised $2.3 billion at a $29.3 billion valuation in November, and Claude Code, Anthropic’s agentic coding tool.

Breitenother outlined specific limitations he sees in both products’ Slack capabilities.

“The Cursor Slack integration is configured on a single-repository basis per workspace or channel,” he said. “As a result, if a Slack thread references multiple repositories, users need to manually switch or reconfigure the integration to pull in that additional context.”

On Anthropic’s offering, he added: “Claude Code documentation for Slack shows how Claude can be added to a workspace and respond to mentions using the surrounding conversation context. However, it does not describe persistent, multi-turn thread state or task-level continuity across longer workflows. Each interaction is handled based on the context included at the time of the prompt, rather than maintaining an evolving execution state over time.”

Kilo claims its integration works across multiple repositories simultaneously, maintains conversational context across extended Slack threads, and enables handoffs between Slack, IDEs, cloud agents, and the command-line interface.

Kilo picks a Chinese AI company’s model as its default — and addresses enterprise security concerns head-on

Perhaps the most provocative element of the announcement is Kilo’s choice of default model. MiniMax is headquartered in Shanghai and recently went public in Hong Kong — a lineage that may raise eyebrows among enterprise customers wary of sending proprietary code through Chinese infrastructure.

Breitenother addressed the concern directly: “MiniMax’s recent Hong Kong IPO drew backing from major global institutional investors, including Baillie Gifford, ADIA, GIC, Mirae Asset, Aspex, and EastSpring. This speaks to strong global confidence in models built for global users.”

He emphasized that MiniMax models are hosted by major U.S.-compliant cloud providers. “MiniMax M2-series are global leading open-source models, and are hosted by many U.S. compliant cloud providers such as AWS Bedrock, Google Vertex and Microsoft AI Foundry,” he said. “In fact, MiniMax models were featured by Matt Garman, the AWS CEO, during this year’s re:Invent keynote, showing they’re ready for enterprise use at scale.”

The company stresses that Kilo for Slack is fundamentally model-agnostic. “Kilo doesn’t force customers into any single model,” Breitenother said. “Enterprise customers choose which models they use, where they’re hosted, and what fits their security, compliance, and risk requirements. Kilo offers access to more than 500 models, so teams can always choose the right model for the job.”

The decision to default to M2.1 reflects Kilo’s broader thesis about the AI market. According to the company, the performance gap between open-weight and proprietary models has narrowed from 8 percent to 1.7 percent on several key benchmarks. Breitenother clarified that this figure “refers to convergence between open and closed models as measured by the Stanford AI Index using major general benchmarks like HumanEval, MATH, and MMLU, not to any specific agentic coding evaluation.”

In third-party evaluations, M2.1 has performed competitively. “In LMArena, an open platform for community-driven AI benchmarking, M2.1 achieved a number-four ranking, right after OpenAI, Anthropic, and Google,” Breitenother noted. “What this shows is that M2.1 competes with frontier models in real-world coding workflows, as judged directly by developers.”

What happens to your code when you @mention an AI bot in Slack

For engineering teams evaluating the tool, a critical question is what happens to sensitive code and conversations when routed through the integration.

Breitenother walked through the data flow: “When someone mentions @Kilo in Slack, Kilo reads only the content of the Slack thread where it’s mentioned, along with basic metadata needed to understand context. It does not have blanket access to a workspace. Access is governed by Slack’s standard permission model and the scopes the customer approves during installation.”

For repository access, he added: “If the request requires code context, Kilo accesses only the GitHub repositories the customer has explicitly connected. It does not index unrelated repos. Permissions mirror the access level granted through GitHub, and Kilo can’t see anything the user or workspace hasn’t authorized.”

The company states that data is not used to train models and that output visibility follows existing Slack and GitHub permissions.

A particularly thorny question for any AI system that can push code directly to repositories is security. What prevents an AI-generated vulnerability from being merged into production?

“Nothing gets merged automatically,” Breitenother said. “When the Kilo Slackbot opens a pull request from a Slack thread, it follows the same guardrails teams already rely on today. The PR goes through existing review workflows and approval processes before anything reaches production.”

He added that Kilo can automatically run its built-in code review feature on AI-generated pull requests, “flagging potential issues or security concerns before it ever reaches a developer for review.”

The open-source paradox: why Kilo believes giving away its code won’t kill the business

Kilo Code sits in an increasingly common but still tricky position: the open-source company charging for hosted services. The complete IDE extension is open-source under an Apache 2.0 license, but Kilo for Slack is a paid, hosted product.

The obvious question: What stops a well-funded competitor — or even a customer — from forking the code and building their own version?

“Forking the code isn’t what worries us, because the code itself isn’t the hardest part,” Breitenother said. “A competitor could fork the repository tomorrow. What they wouldn’t get is the infrastructure that safely executes agentic workflows across Slack, GitHub, IDEs, and cloud agents. The experience we’ve built operating this at scale across many teams and repositories. The trust, integrations, and enterprise-ready controls customers expect out of the box.”

He drew parallels to other successful open-source companies: “Open core drives adoption and trust, while the hosted product delivers convenience, reliability, and ongoing innovation. Customers aren’t paying for access to code. They’re paying for a system that works every day, securely, at scale.”

Inside the $29 billion “vibe coding” market that Kilo wants to disrupt

Kilo enters a market that has attracted extraordinary attention and capital over the past year. The practice of using large language models to write and modify code — popularly known as “vibe coding,” a term coined by OpenAI co-founder Andrej Karpathy in February 2025 — has become a central focus of enterprise AI investment.

Microsoft CEO Satya Nadella disclosed in April that AI-generated code now accounts for 30 percent of Microsoft’s codebase. Google acquired senior employees from AI coding startup Windsurf in a $2.4 billion transaction in July. Cursor’s November funding round valued the company at $29.3 billion.

Kilo raised $8 million in seed funding in December 2025 from Breakers, Cota Capital, General Catalyst, Quiet Capital, and Tokyo Black. Sijbrandij, who stepped down as GitLab CEO in 2024 to focus on cancer treatment but remains board chair, contributed early capital and remains involved in day-to-day strategy.

Asked about non-compete considerations given GitLab’s own AI investments, Breitenother was brief: “There are no non-compete issues. Kilo is building a fundamentally different approach to AI coding.”

Notably, GitLab disclosed in a recent SEC filing that it paid Kilo $1,000 in exchange for a right of first refusal for 10 business days should the startup receive an acquisition proposal before August 2026.

When asked to name an enterprise customer using the Slack integration in production, Breitenother declined: “That’s not something we can disclose.”

How a 34-person startup plans to outmaneuver OpenAI and Anthropic in AI coding

The most significant threat to Kilo’s position may come not from other startups but from the frontier AI labs themselves. OpenAI and Anthropic are both building deeper integrations for coding workflows, and both have vastly greater resources.

Breitenother argued that Kilo’s advantage lies in its architecture, not its model performance.

“We don’t think the long-term moat in AI coding is raw compute or who ships a Slack agent first,” he said. “OpenAI and Anthropic are world-class model companies, and they’ll continue to build impressive capabilities. But Kilo is built around a different thesis: the hard problem isn’t generating code, it’s integrating AI into real engineering workflows across tools, repos, and environments.”

He outlined three areas where he believes Kilo can differentiate:

“Workflow depth: Kilo is designed to operate across Slack, IDEs, cloud agents, GitHub, and the CLI, with persistent context and execution. Even with OpenAI or Anthropic Slack-native agents, those agents are still fundamentally model-centric. Kilo is workflow-centric.”

“Model flexibility: We’re model-agnostic by design. Teams don’t have to bet on one frontier model or vendor roadmap. That’s difficult for companies like OpenAI or Anthropic, whose incentives are naturally aligned with driving usage toward their own models first.”

“Platform neutrality: Kilo isn’t trying to pull developers into a closed ecosystem. It fits into the tools teams already use.”

The future of AI-assisted software development may belong to whoever solves the integration problem first

Kilo’s launch reflects a maturing phase in the AI coding market. The initial wave of tools focused on proving that large language models could generate useful code. The current wave is about integration — fitting AI capabilities into the messy reality of how software actually gets built.

That reality involves context fragmented across Slack threads, GitHub issues, IDE windows, and command-line sessions. It involves teams that use different models for different tasks and organizations with complex compliance requirements around data residency and model providers.

Kilo is betting that the winners in this market will not be the companies with the best models, but those that best solve the integration problem — meeting developers in the tools they already use rather than forcing them into new ones.

Kilo for Slack is available now for teams with Kilo Code accounts. Users connect their GitHub repositories through Kilo’s integrations dashboard, add the Slack integration, and can then mention @Kilo in any channel where the bot has been added. Usage-based pricing matches the rates of whatever model the team selects.

Whether a 34-person startup can execute on that vision against competitors with billions in capital remains an open question. But if Breitenother is right that the hard problem in AI coding isn’t generating code but integrating into workflows, Kilo may have picked the right fight. After all, the best AI in the world doesn’t matter much if developers have to leave the conversation to use it.

Mission Announces First Network Music Player Designed To Match Mission’s 778X Amplifier

British audio brand Mission has released its first network music player. The compact 778S was designed in partnership with audio streaming specialist Silent Angel.

The AI That Built Itself In 10 Days Now Wants Access To Your Desktop

Anthropic launched Cowork, bringing the autonomous capabilities of its developer-focused Claude Code tool to non-technical users through a desktop application.

What Are The Real Questions Leaders Will Be Asking At Davos 2026?

As Davos 2026 convenes, the most important debates focus on deeper tensions around AI trust, geopolitics, growth, jobs and planetary limits.

Apple Picks Gemini For Siri, More Meta VR Cuts, Higgsfield AI Snares $130 Million, Xreal Grabs $100 Million

Apple selects Google’s Gemini for Siri as Meta cuts Reality Labs, Higgsfield raises $130 million, and Xreal secures $100 million ahead of its Android XR Aura smartglasses launch.