Private beta since Feb 13, 2026 · 10+ active users

Not just what you said.
Why you said it.

MindFish teaches AI how you actually think — your values, contradictions, and decision patterns. Not another memory layer. A cognitive one.

Request Access · See how it works

Every AI has amnesia about who you are.

Memory APIs store facts — "likes dark mode, lives in Shanghai." But when your values collide, when speed fights quality, when what you say contradicts what you do — they have nothing.

MindFish builds a living model of how each person thinks. Not what they prefer — why they prefer it, and when they'd choose differently.

Memory layer

user.preferences = {
  dark_mode: true,
  city: "shanghai",
  diet: "no peanuts"
}

Cognitive layer

user.cognition = {
  belief: "speed > quality",
  confidence: 0.84,
  exception: "quality wins when
    reputation at stake",
  boundary: "never ship without tests"
}

Five operations. One cognitive engine.

01

Ingest

Feed it conversations. MindFish extracts beliefs, values, and reasoning patterns — not keywords. It builds hypotheses about why your user does what they do.
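To make the idea concrete, here is a minimal sketch of the kind of record ingestion might produce. The field names (`belief`, `evidence`, `counterEvidence`, `confidence`) are illustrative assumptions, not MindFish's documented schema — the point is that raw turns become a testable hypothesis with evidence attached, not keyword tags:

```javascript
// Two raw conversation turns from the same user.
const turns = [
  { role: "user", text: "Just ship it, we can clean up later." },
  { role: "user", text: "I don't want this going out without tests though." },
];

// A hypothesis record that ingestion could derive from the turns above.
// Note the second turn doesn't just contradict the first — it hints at a
// hard boundary, so it's kept as counter-evidence rather than discarded.
const hypothesis = {
  belief: "speed > quality",
  evidence: [turns[0].text],        // supporting utterance
  counterEvidence: [turns[1].text], // hints at "never ship without tests"
  confidence: 0.6,                  // low until verified against behavior
};
```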

02

Probe

Don't wait for data. MindFish identifies cognitive gaps and generates precise questions to fill them. One well-placed question reveals more than a hundred conversations.
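A gap-driven probe can be sketched in a few lines. This assumes hypotheses carry a confidence score, as in the cognitive-layer example above; the helper names (`findGap`, `toProbe`) are made up for illustration:

```javascript
const hypotheses = [
  { belief: "speed > quality", confidence: 0.84 },
  { belief: "values work-life balance", confidence: 0.35 },
];

// The least-confident belief is the biggest cognitive gap.
function findGap(hs) {
  return hs.reduce((worst, h) => (h.confidence < worst.confidence ? h : worst));
}

// Turn the gap into one targeted question instead of waiting for data.
function toProbe(h) {
  return `You said you value this: "${h.belief}". When did you last act against it?`;
}

const question = toProbe(findGap(hypotheses));
```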

03

Predict

Given a scenario, predict what this specific person would choose — with calibrated confidence and explainable reasoning. Not "users like this tend to…" but "this person would…, because…"

04

Decide

People contradict themselves. MindFish maps which values override which, under what conditions. When speed and quality collide, it resolves the conflict — not with averages, but with context.
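Context-based resolution can be sketched as ordered priority rules of the form "A overrides B when <condition>." The rule shape and `decide` helper are assumptions for illustration, not the product's internals:

```javascript
// Earlier rules take precedence; conditions are checked against the scenario.
const rules = [
  { winner: "quality", loser: "speed", when: (ctx) => ctx.reputationAtStake },
  { winner: "speed", loser: "quality", when: (ctx) => ctx.deadlineDays < 14 },
];

// Resolve by the first rule whose condition matches the context —
// no averaging of the two competing values.
function decide(ctx) {
  const rule = rules.find((r) => r.when(ctx));
  return rule ? rule.winner : "no-rule";
}

decide({ reputationAtStake: false, deadlineDays: 10 }); // → "speed"
decide({ reputationAtStake: true, deadlineDays: 10 });  // → "quality"
```

The same two values produce opposite winners depending on context, which is exactly what a flat preference score cannot express.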

05

Calibrate

Every prediction is verified against reality. Every hypothesis carries an empirical hit rate. Wrong? The model corrects itself. MindFish doesn't just learn — it knows how well it's learning.
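The verification loop above reduces to simple bookkeeping: each checked prediction updates the hypothesis's empirical hit rate. Field names are illustrative assumptions:

```javascript
const hyp = { belief: "speed > quality", hits: 0, trials: 0 };

// Compare a prediction against what the user actually did, and return
// the running hit rate that backs the confidence score.
function verify(h, predicted, actual) {
  h.trials += 1;
  if (predicted === actual) h.hits += 1;
  return h.hits / h.trials;
}

verify(hyp, "ship", "ship");
verify(hyp, "ship", "ship");
verify(hyp, "ship", "hold"); // a miss — hit rate is now 2/3
```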

predict.js
// Ask MindFish: what would this user actually do?
const result = await mindfish.predict({
  userId: "user_28f3k",
  scenario: "Team wants to ship with known tech debt."
});

// → {
//   choice: "ship",
//   confidence: 0.84,
//   reasoning: "velocity > perfection (hit_rate: 0.87)",
//   conflict: {
//     with: "no biased iteration",
//     resolved: "velocity wins when deadline < 2 weeks"
//   }
// }

Six things no one else does.

Active Probing

Others wait for data. MindFish asks the right questions to fill cognitive gaps — before they cause wrong predictions.

Conflict Mapping

"I value balance" + "I reply at 2am." MindFish models which value wins in which context — with documented resolution logic.

Calibrated Confidence

When MindFish says 84%, it was right 84% of the time. Confidence scores backed by empirical hit rates.

Hard Boundaries

What someone would never do matters as much as what they prefer. MindFish models absolute rules that don't bend.

Six-Layer Architecture

Facts → Hypotheses → Values → Boundaries → Reasoning → Calibration. Structured cognition, not flat key-value pairs.
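One way the six layers could nest as a structured profile rather than a flat key-value store. The layer names come from the text above; the fields inside each layer are illustrative assumptions drawn from the earlier examples:

```javascript
const profile = {
  facts:       { city: "shanghai", diet: "no peanuts" },
  hypotheses:  [{ belief: "speed > quality", confidence: 0.84 }],
  values:      ["velocity", "craftsmanship"],
  boundaries:  ["never ship without tests"],
  reasoning:   ["velocity > perfection when deadline < 2 weeks"],
  calibration: { "speed > quality": { hitRate: 0.87, trials: 31 } },
};

const layers = Object.keys(profile); // the six layers, in order
```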

Self-Correction

Every prediction gets verified. Stale beliefs flagged. Contradictions surfaced. The model stays honest about what it doesn't know.

"Everyone's building smarter models.
Nobody's building models smarter about you."

Get early access.

MindFish has been live since Feb 2026 with 10+ real users. Leave your email — we'll reach out with API access when we open up.

You're on the list. We'll be in touch.