ProBERT v1.0 vs DistilBERT Base
Compare fine-tuned ProBERT against base DistilBERT.
- 🟢 process_clarity: Step-by-step reasoning you can verify
- 🟠 rhetorical_confidence: Assertive claims without supporting process
- 🔴 scope_blur: Vague generalizations with ambiguous boundaries
ProBERT is fine-tuned on just 450 examples (150 per class) to detect these rhetorical patterns. DistilBERT Base keeps its pretrained encoder but uses a randomly initialized classification head (no fine-tuning). Notice how the base model gives roughly uniform ~33% noise across the three labels, while ProBERT shows sharp separation. That's what fine-tuning adds!
Model: collapseindex/ProBERT-1.0
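A minimal sketch of the comparison using the Hugging Face `transformers` pipeline. It assumes ProBERT loads as a standard text-classification checkpoint; the example sentence is made up, and `distilbert-base-uncased` with a fresh 3-way head stands in for the untrained baseline.

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    pipeline,
)

# Fine-tuned classifier; top_k=None returns scores for all three labels.
probert = pipeline(
    "text-classification",
    model="collapseindex/ProBERT-1.0",
    top_k=None,
)

# Baseline: pretrained DistilBERT encoder with a randomly initialized
# 3-way classification head (transformers warns that the head is untrained).
base_model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=3
)
base_tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
baseline = pipeline(
    "text-classification",
    model=base_model,
    tokenizer=base_tokenizer,
    top_k=None,
)

# Hypothetical input, chosen to read like step-by-step reasoning.
text = "First we measure X, then derive Y from it, so the result follows."

print(probert(text))   # sharp separation across the three labels
print(baseline(text))  # scores hover near the uniform 1/3 baseline
```

Running both pipelines on the same input makes the contrast concrete: the untrained head can only echo its random initialization, so its scores cluster near 33% regardless of input.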
Examples