How Iris Works: Checking for Mutual Attraction Before an Intro
Iris predicts mutual visual attraction using faces-only models and your like/pass feedback, then delivers a small, high-signal shortlist instead of an endless feed.
TL;DR
- Iris learns your visual type from quick taps (likes/passes).
- Matching is faces-only—no bios, chat logs, or behavioral data.
- “Mutual” means both personal models cross “likely-yes” cutoffs.
- Intros arrive in small batches (shortlists), not a firehose.
- Feedback keeps your model current; guardrails prevent overfitting (too-narrow focus).
- If mutual isn’t confirmed, the intro stays private and the model updates quietly.
- Results vary by photo quality, activity level, and city size.
How does Iris check for mutual attraction?
Answer: We build a lightweight personal model of your visual preferences from your explicit choices.
Faces are transformed into embeddings (compact number-based summaries of a face). Your model estimates how attractive each face is to you. We do the same for the other person. An intro is mutual-likely when both sides’ estimates clear internal confidence thresholds (our “high-chance” cutoffs).
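As a minimal sketch (every name, weight, and cutoff here is illustrative, not Iris's actual implementation), the mutual check can be pictured as two per-person scoring passes over face embeddings, joined by one threshold test:

```python
import math

def fit_score(model_weights, face_embedding):
    """Illustrative preference score: dot product of one person's learned
    weights with a face embedding, squashed to [0, 1] with a sigmoid."""
    z = sum(w * x for w, x in zip(model_weights, face_embedding))
    return 1.0 / (1.0 + math.exp(-z))

def mutual_likely(model_a, face_b, model_b, face_a, cutoff=0.7):
    """An intro is mutual-likely only when BOTH sides clear the cutoff."""
    return (fit_score(model_a, face_b) >= cutoff
            and fit_score(model_b, face_a) >= cutoff)
```

Real embedding models and calibrated cutoffs are far richer than this, but the shape of the decision is the same: two independent estimates, one joint threshold.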
Important boundaries
- Inputs for matching: face images + your explicit taps.
- Not used for matching: bios, prompts, messages, “dwell time,” or broader engagement data.
- Safety is separate: selfie verification and moderation improve trust, but those signals do not influence matching.
What signals indicate likely mutual interest?

Answer: Two fit scores must exceed calibrated cutoffs, and we sanity-check stability.
- Personal fit (you → them): your model’s score for their face.
- Reciprocal fit (them → you): their model’s score for your face.
- Agreement band: both scores must sit above the calibrated “likely-yes” cutoff.
- Stability checks: we prefer faces your model rates consistently over one-off spikes.
- We do not use personality text, messaging style, or popularity to guess attraction.
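The checks above can be sketched together as one gate (a hypothetical sketch; the cutoff, spread limit, and function names are assumptions, and "recent ratings" stands in for whatever history the real stability check uses):

```python
from statistics import mean, pstdev

def passes_signal_checks(personal_fit, reciprocal_fit, recent_ratings,
                         cutoff=0.7, max_spread=0.1):
    """Three conditions, mirroring the bullets above:
    - personal fit (you -> them) clears the cutoff,
    - reciprocal fit (them -> you) clears the cutoff,
    - stability: your model's recent ratings of similar faces are
      consistently high, not a one-off spike."""
    stable = mean(recent_ratings) >= cutoff and pstdev(recent_ratings) <= max_spread
    return personal_fit >= cutoff and reciprocal_fit >= cutoff and stable
```

Note how a single enthusiastic outlier among lukewarm ratings fails the spread check even when the average looks acceptable.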
How are intros timed and delivered?
Answer: As a shortlist—a small, curated batch, not an endless feed.
That protects attention and keeps each intro high-signal, and it leaves room for the model to learn between your choices. You control basics like distance and discovery windows (e.g., evenings/weekends). Iris respects those when delivering intros, but matching itself remains faces-only.
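A discovery window acts as a delivery-time filter only. As a sketch (the window format below is an assumption, not Iris's actual settings schema):

```python
from datetime import datetime

def in_discovery_window(now, windows):
    """windows: list of (weekday_set, (start_hour, end_hour)), Mon=0..Sun=6.
    This only gates WHEN intros are delivered; it never changes WHO is
    matched -- matching stays faces-only."""
    return any(now.weekday() in days and start <= now.hour < end
               for days, (start, end) in windows)
```

For example, weekday evenings plus weekend daytimes would be `[({0, 1, 2, 3, 4}, (18, 22)), ({5, 6}, (9, 22))]`.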
What feedback loops update my model?
Answer: Your taps update the model immediately; brief sessions refine it.
- Your taps: every like or pass updates the model right away.
- Session summaries: small calibration after short rating sessions (“taste training”).
- Soft decay: older signals fade out (count a bit less over time) so the model reflects your current taste.
- Edge-case learning: if you like a face the model predicted you’d pass (or vice versa), we adjust most around that boundary.
We do not mine chats or outside behavior to “nudge” results.
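Put together, the loop resembles simple online learning with time decay. This is a hedged sketch under assumed numbers (the learning rate, 90-day half-life, and linear update are all illustrative):

```python
def decayed_weight(age_days, half_life_days=90.0):
    """Soft decay: a signal's influence halves every half_life_days,
    so the model keeps reflecting your current taste."""
    return 0.5 ** (age_days / half_life_days)

def tap_update(model, face_embedding, liked, predicted, lr=0.1):
    """One immediate update per tap. The error term is largest when the
    model was most wrong (edge-case learning), so a surprising like or
    pass moves the weights the most."""
    target = 1.0 if liked else 0.0
    error = target - predicted  # near zero when the model already agreed
    return [w + lr * error * x for w, x in zip(model, face_embedding)]
```

A like the model predicted at 0.1 ("you'll pass") shifts the weights almost twice as much as one it predicted at 0.5.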

How does Iris avoid overfitting my type?
Answer: We keep exploration small but steady and vary examples inside your taste cluster.
- Exploration rate: a small, controlled portion probes nearby looks.
- Diversity constraints: even inside your cluster, we vary hair/angles/lighting to avoid “clones only.”
- Confidence margins: the model must be meaningfully confident, not barely above a threshold.
- Time-weighted learning: recency matters, but older, consistent preferences still anchor the model.
These checks keep results reliable without getting stale.
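One way to picture the exploration budget in a shortlist (pool names, sizes, and the 15% rate are illustrative assumptions):

```python
import random

def build_shortlist(comfort_pool, explore_pool, size=6, explore_rate=0.15,
                    rng=None):
    """Most slots come from the established taste cluster; a small,
    controlled slice probes nearby looks so the model never narrows
    to 'clones only'."""
    rng = rng or random.Random()
    n_explore = max(1, round(size * explore_rate))  # always at least one probe
    picks = (rng.sample(comfort_pool, size - n_explore)
             + rng.sample(explore_pool, n_explore))
    rng.shuffle(picks)
    return picks
```

The `max(1, ...)` floor is the "small but steady" property: exploration shrinks, but never to zero.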
How do you balance novelty vs. familiarity?
Answer: Most of the shortlist sits in your comfort zone; a smaller slice explores nearby novelty.
If you respond well to exploratory items, the dial nudges toward more variety. If not, it tilts back toward familiarity.
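The dial itself can be sketched as a clamped adjustment (the 0.5 threshold, step size, and bounds are assumptions for illustration):

```python
def adjust_explore_rate(rate, explore_like_rate, lo=0.05, hi=0.30, step=0.02):
    """Nudge exploration up when exploratory intros land well, down when
    they don't; clamp so exploration stays small but never disappears."""
    rate += step if explore_like_rate > 0.5 else -step
    return min(hi, max(lo, rate))
```

The clamp is the key design choice: familiarity can dominate, but the floor guarantees the model keeps testing its own boundaries.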

What happens if mutual interest isn’t confirmed?
Answer: One-sided interest stays private; models update quietly.
If the other side doesn’t clear their threshold, there’s no exposed “half-match.” We update based on your choice and deprioritize that direction for now. As models evolve, some pairs may be reconsidered—without adding noise to your feed.
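The privacy property is simply that a failed threshold produces no visible event. In sketch form (the outcome labels are illustrative):

```python
def intro_outcome(your_score, their_score, cutoff=0.7):
    """Either both sides clear the cutoff and an intro is delivered, or
    the pair is quietly deprioritized -- nothing surfaces to either
    person, and no 'half-match' is ever exposed."""
    if your_score >= cutoff and their_score >= cutoff:
        return "deliver_intro"
    return "deprioritize_quietly"
```

Both one-sided cases return the same quiet outcome, so neither person can infer the other's score from what they see.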
Privacy & safety notes (plain language)
- Matching uses faces-only plus your explicit taps—no bios or behavioral surveillance.
- Trust ≠ matching: selfie verification and moderation improve the pool but don’t decide who you see.
- Predictions ≠ guarantees: mutual-likely reduces obvious mismatches; chemistry still unfolds in conversation and real life.

Key takeaways
- Iris’s core decision is mutual visual attraction predicted from faces-only models trained on your explicit feedback.
- You get shortlists, not feeds—better signal, less burnout.
- Continuous learning + controlled exploration keeps results accurate without overfitting.
- One-sided interest is never exposed; the system learns quietly and moves on.
FAQs
What if my photos change a lot?
Retake photos and do a brief “taste training” session. The model will adapt quickly to your updated presentation.
I’m in a small city—will this feel slow?
Probably a bit. Shortlists favor quality over volume. Consider widening distance windows when you’re open to meeting.
Can I opt out of exploration?
Exploration stays small by default. You can nudge it down by passing on exploratory items; the system will respond.
