Watch document rankings shift in real time as user feedback trains the Bayesian probability model.
As relevance clicks accumulate, the model learns a steeper sigmoid and a higher threshold: the BM25 score signal overtakes the prior and documents swap ranks. Watch pairs D02/D09 and D03/D10.
Click a document row to boost it. Each click adds a diminishing increment to that document's log-odds: $\Delta\!\operatorname{logit} = 1.5 \cdot \ln(1 + n_{\text{clicks}})$, with probability mass conserved across all documents. The Step/Run buttons train the global sigmoid $(\alpha, \beta)$ via SGD with a hidden user model.
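The click boost above can be sketched in a few lines. This is an illustrative implementation, not the demo's source: the function names are invented, and the simple rescaling shown here is one straightforward way to conserve probability mass after a boost.

```python
import math

BOOST = 1.5  # gain from the formula above

def logit(p):
    return math.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def apply_click(probs, idx, n_clicks):
    """Boost one document's log-odds by 1.5 * ln(1 + n_clicks),
    then rescale all probabilities so the total mass is unchanged."""
    delta = BOOST * math.log(1 + n_clicks)  # diminishing: ln grows slowly
    boosted = list(probs)
    boosted[idx] = sigmoid(logit(probs[idx]) + delta)
    # conserve probability mass: rescale back to the original total
    total_before, total_after = sum(probs), sum(boosted)
    return [p * total_before / total_after for p in boosted]

probs = [0.2, 0.5, 0.3]
new = apply_click(probs, idx=0, n_clicks=1)
```

Because mass is conserved, boosting one document necessarily pulls probability away from the others, which is what makes rank swaps possible.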
Temporal weighting (sidebar toggle): click boosts decay exponentially — $\Delta\!\operatorname{logit} = 1.5 \cdot \sum_i e^{-\lambda(t - t_i)}$ where $\lambda = \ln 2 / h$ and $h$ is the half-life in steps. Old clicks gradually lose influence unless reinforced. The global model also uses time-weighted Polyak averaging so recent SGD updates carry more weight.
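The decay formula can be checked directly. A minimal sketch, assuming clicks are logged as step timestamps (the function name is illustrative):

```python
import math

BOOST = 1.5  # same gain as the click-boost formula

def decayed_boost(click_steps, now, half_life):
    """Sum exponentially decayed contributions of past clicks:
    1.5 * sum_i exp(-lambda * (now - t_i)), lambda = ln 2 / half_life."""
    lam = math.log(2) / half_life
    return BOOST * sum(math.exp(-lam * (now - t)) for t in click_steps)

# a single click exactly one half-life ago contributes half its weight:
b = decayed_boost(click_steps=[0], now=4, half_life=4)  # 1.5 * 0.5 = 0.75
```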
Attention fusion (sidebar toggle): decomposes each document's score into three signals — BM25 likelihood, TF prior, and doc-length prior — and learns softmax weights to combine them via log-odds conjunction. Watch the weights shift as the model discovers which signals matter most. Gating (relu/swish/gelu) filters weak signals in logit space before aggregation.
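The fusion step can be sketched as a gated, softmax-weighted sum in logit space. This is a hypothetical reading of the description, not the demo's code: signal order, the ReLU gate choice, and all names are assumptions.

```python
import math

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def relu(x):
    return max(0.0, x)

def fuse(signal_logits, weight_logits, gate=relu):
    """Gate each signal's logit (weak/negative signals are filtered),
    then combine with softmax weights: a weighted sum in logit space."""
    w = softmax(weight_logits)
    gated = [gate(s) for s in signal_logits]
    combined = sum(wi * gi for wi, gi in zip(w, gated))
    return 1 / (1 + math.exp(-combined))  # back to a probability

# three signals: BM25 likelihood, TF prior, doc-length prior (logit space)
p = fuse([2.0, -0.5, 0.3], [1.0, 0.0, 0.0])
```

Note how the ReLU gate zeroes the negative TF logit entirely, so only the BM25 and doc-length signals reach the weighted sum; swish or gelu would instead attenuate it smoothly.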
Training mode: balanced (C1) trains on sigmoid likelihood; prior-aware (C2) trains on the full Bayesian posterior; prior-free (C3) trains on likelihood but uses prior=0.5 at inference, ignoring TF and doc-length priors entirely.
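The inference-time difference between prior-aware (C2) and prior-free (C3) comes down to which prior enters the log-odds sum. A small sketch under that assumption (function names are invented):

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def logit(p):
    return math.log(p / (1 - p))

def posterior(likelihood_logit, prior, mode):
    """Combine likelihood and prior in log-odds space."""
    if mode == "prior-free":  # C3: ignore learned priors at inference
        prior = 0.5           # logit(0.5) == 0, so the prior contributes nothing
    return sigmoid(likelihood_logit + logit(prior))

p_aware = posterior(1.0, prior=0.2, mode="prior-aware")  # prior pulls score down
p_free  = posterior(1.0, prior=0.2, mode="prior-free")   # pure likelihood
```

With a low prior, C2 produces a noticeably smaller probability than C3 for the same likelihood evidence.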
Base rate (Model Parameters slider): corpus-level fraction of relevant documents. Lower values produce more conservative probabilities; higher values are more generous.
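The base rate's effect is easiest to see as a constant shift of every document's log-odds by logit(base rate). An illustrative sketch (not the slider's actual implementation):

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def logit(p):
    return math.log(p / (1 - p))

def calibrated_prob(score_logit, base_rate):
    """Shift a document's log-odds by the corpus-level base rate."""
    return sigmoid(score_logit + logit(base_rate))

low  = calibrated_prob(0.8, base_rate=0.1)  # conservative: few docs relevant
high = calibrated_prob(0.8, base_rate=0.5)  # neutral prior, more generous
```

The same evidence yields a much lower probability when only 10% of the corpus is assumed relevant, which is the conservative behavior described above.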
| # | ID | Title | BM25 | TF | DLR | P(R) | Step | Total |
|---|----|-------|------|----|-----|------|------|-------|
Precision@5: 0/5 · Total reviews: 0