Three former Google X scientists aim to give you a virtual second brain — not in the sci-fi, chip-in-your-head sense, but through an AI-powered app that gains context by listening in the background to everything you say. Their startup, TwinMind, has raised $5.7 million in seed funding and released an Android version of its previously iPhone-only app, along with a new AI speech model.
Co-founded in March 2024 by Daniel George (CEO) and his former Google X colleagues Sunny Tang and Mahi Karim (both CTOs), TwinMind runs in the background, capturing ambient speech (with user permission) to build a personal knowledge graph.
By turning spoken thoughts, meetings, lectures, and conversations into structured memory, the app can generate AI-powered notes, to-dos, and answers. It works offline, transcribing audio on-device in real time, and can capture audio continuously for 16–17 hours without draining the device’s battery, the founders say. The app can also back up user data so conversations can be recovered if the device is lost, though users can opt out of that. It also supports real-time translation in over 100 languages.
TwinMind differentiates itself from AI meeting note-takers like Otter, Granola, and Fireflies by capturing audio passively in the background all day. To make this possible, the team built a low-level service in pure Swift that runs natively on the iPhone. In contrast, many competitors are built with React Native and rely on cloud-based processing, approaches that Apple restricts from running in the background for extended periods, George said in an exclusive interview.
“We spent about six to seven months last year just perfecting this audio capture continuously and getting there to find a lot of hacks around Apple’s walled garden,” he told TechCrunch.
George left Google X in 2020 and got the idea for TwinMind in 2023, when he was working at JPMorgan as Vice President and Applied AI Lead, attending back-to-back meetings each day. To save time, he built a script that captured audio, transcribed it on his iPad, and fed it into ChatGPT — which began to understand his projects and even generate usable code. Impressed by the results, he shared it with friends and posted about it on Blind, where others showed interest but did not want something running on their work laptops. That led him to build an app that could run on a personal phone, quietly listening during meetings to gather useful context.
In addition to the mobile app, TwinMind offers a Chrome extension that gathers additional context through browser activity. Using vision AI, it can visually scan open tabs and interpret content from various platforms, including email, Slack, and Notion.
The startup even used the extension itself to shortlist interns from the more than 850 applications it received this summer.
“We opened all the LinkedIn profiles and CVs of the 854 applicants in browser tabs, then asked the Chrome extension to rank the best candidates,” George said. “It did a fantastic job — that’s how we hired our final four interns.”

He noted that current AI chatbots — including OpenAI’s ChatGPT and Anthropic’s Claude — cannot easily process hundreds of documents or pull contextual information from signed-in tools like LinkedIn or Gmail. Similarly, AI-powered browsers such as those from Perplexity and The Browser Company cannot build knowledge from your offline conversations and in-person meetings.
The startup currently has over 30,000 users, with about 15,000 of them active each month. As much as 20–30% of TwinMind users also use the Chrome extension, George said.
While the U.S. is the largest base for TwinMind so far, the startup is also seeing traction from India, Brazil, the Philippines, Ethiopia, Kenya, and Europe.
TwinMind targets a general audience, although 50–60% of its users are currently professionals, about 25% are students, and the remaining 20–25% use it for personal purposes.
George told TechCrunch that his own father is using TwinMind to write his autobiography.
One of AI’s significant drawbacks is its potential to compromise user privacy. But George asserted that TwinMind does not train its models on user data and is designed to work without sending recordings to the cloud. Unlike many other AI note-taking apps, TwinMind does not let users access audio recordings later: the audio is deleted on the fly, and only the transcribed text is stored locally in the app, he noted.
Google X experience helped speed things up
The TwinMind co-founders spent a few years working on various projects at Google X. George told TechCrunch that he alone worked on six projects, including iyO, the team behind AI-powered earbuds that recently made headlines for suing OpenAI and Jony Ive. That experience helped the TwinMind team move quickly from concept to product.
“Google X was actually the perfect place to prepare for starting your own company,” said George. “There are around 30 to 40 startup-like projects happening at any given time. Nobody else gets to work at six early-stage startups over two or three years before launching their own — at least not in such a short span.”

Before joining Google, George worked on applying deep learning to gravitational wave astrophysics as part of the Nobel Prize–winning LIGO group at the University of Illinois’ National Center for Supercomputing Applications. He had completed his PhD in AI for astrophysics in just one year — at the age of 24 — a feat that led him to join Stephen Wolfram’s research lab in 2017 as a deep learning and AI researcher.
That early connection with Wolfram came full circle years later, when Wolfram wrote the first check for TwinMind, marking his first-ever investment in a startup. The recent seed round was led by Streamlined Ventures, with participation from Sequoia Capital and other investors, including Wolfram. The round values TwinMind at $60 million post-money.
TwinMind Ear-3 model
In addition to its apps and browser extension, TwinMind has also introduced the TwinMind Ear-3 model, a successor to its existing Ear-2. Ear-3 supports over 140 languages and has a word error rate of 5.26%, the startup said. The new model can also recognize different speakers in a conversation, with a speaker diarization error rate of 3.8%.
The new AI model is a fine-tuned blend of several open-source models, trained on a curated set of human-annotated internet data — including podcasts, videos, and movies.
“We found that the more languages you support, the better the model gets at understanding accents and regional dialects because it’s training on a broader range of speakers,” George said.
The model costs $0.23 per hour of audio and will be made available to developers and enterprises through an API over the next few weeks.
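At that rate, all-day transcription stays fairly cheap. A back-of-the-envelope estimate (my own arithmetic based on the quoted price, not TwinMind’s figures):

```python
RATE_USD_PER_HOUR = 0.23  # Ear-3 API price quoted above

def monthly_cost(hours_per_day: float, workdays: int = 22) -> float:
    """Estimate monthly API spend for transcribing audio at the quoted rate."""
    return round(hours_per_day * workdays * RATE_USD_PER_HOUR, 2)

# Transcribing 8 hours of meetings on every workday of a month:
print(monthly_cost(8))  # → 40.48
```

So a heavy user transcribing a full workday of audio would run roughly $40 a month in API costs.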

Unlike Ear-2, Ear-3 does not support a fully offline experience, as it is a larger model that runs in the cloud. However, the app automatically falls back to Ear-2 when the internet connection drops and switches back to Ear-3 once connectivity is restored, George said.
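The fallback behavior described above can be sketched in a few lines. This is a hypothetical illustration of the general pattern — the model names are from the article, but the connectivity probe and function names are my own, not TwinMind’s actual code:

```python
import socket

CLOUD_MODEL = "ear-3"      # larger, more accurate, cloud-hosted
ON_DEVICE_MODEL = "ear-2"  # smaller, runs fully offline

def is_online(host: str = "8.8.8.8", port: int = 53, timeout: float = 1.5) -> bool:
    """Cheap connectivity probe: try opening a TCP socket to a public DNS server."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_model(online: bool) -> str:
    """Prefer the cloud model when online; degrade gracefully when offline."""
    return CLOUD_MODEL if online else ON_DEVICE_MODEL
```

This kind of graceful degradation keeps transcription uninterrupted, trading some accuracy for availability while the device is offline.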
With the introduction of Ear-3, TwinMind now offers a Pro subscription at $15 per month, which includes a larger context window of up to 2 million tokens and email support within 24 hours. The free tier remains, with all existing features, including unlimited hours of transcription and on-device speech recognition.
The startup currently has a team of 11. It plans to hire a few designers to improve the user experience and to build a business development team to sell its API, and it also intends to spend on acquiring new users.