Whitebox Lab
AI you can explain. Models you can trust.
Our Mission
At Whitebox Lab, transparency is a first-class citizen, not an afterthought. We advance the research frontier in tabular Machine Learning so that practitioners in high-stakes domains no longer have to choose between accuracy and interpretability.
What Makes Us Unique
The Lab is led by Albert Dorador, who holds a PhD in Statistics (UW-Madison; la Caixa Fellow) and is a professor with experience in high-stakes regulated industries. Our approach to accurate predictive modeling is grounded in science and built around two non-negotiable pillars:
Trustworthy by design
Meet our flagship model: TRUST (Transparent, Robust, Ultra-Sparse Trees). By combining the strengths of the two workhorses of Interpretable Machine Learning – Decision Trees and Generalized Linear Models – our peer-reviewed TRUST model captures complex global relationships while providing localized linear models that illuminate the simplest plausible explanation the data supports.
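To make the general idea concrete – a shallow tree whose leaves each hold a sparse linear model – here is a minimal sketch using scikit-learn. This is an illustration of the tree-plus-linear-leaves concept, not the actual TRUST implementation; the class name, tree depth, and penalty choice are ours for demonstration only.

```python
# Illustrative sketch: a shallow regression tree with a sparse linear model in
# each leaf. This approximates the tree-plus-linear-leaves idea in general,
# not the actual TRUST algorithm.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import ElasticNet

class TreeWithLinearLeaves:
    def __init__(self, max_depth=2, alpha=0.1):
        self.tree = DecisionTreeRegressor(max_depth=max_depth, random_state=0)
        self.alpha = alpha
        self.leaf_models = {}

    def fit(self, X, y):
        self.tree.fit(X, y)
        leaf_ids = self.tree.apply(X)  # leaf index of each training row
        for leaf in np.unique(leaf_ids):
            mask = leaf_ids == leaf
            # Fit a sparse linear model only on the rows in this leaf.
            self.leaf_models[leaf] = ElasticNet(alpha=self.alpha).fit(X[mask], y[mask])
        return self

    def predict(self, X):
        leaf_ids = self.tree.apply(X)
        preds = np.empty(len(X))
        for leaf, model in self.leaf_models.items():
            mask = leaf_ids == leaf
            if mask.any():
                preds[mask] = model.predict(X[mask])
        return preds
```

Each prediction can then be explained by the handful of coefficients in its own leaf model, which is what makes localized linear explanations possible.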
Radical Interpretability
We generate reports on the fly that explain and compare predictions in plain language, complemented by state-of-the-art visualization tools such as Accumulated Local Effects plots and SHAP Waterfall plots, among others.
Our explainability tools are designed to be accessible to all stakeholders.
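To show what an Accumulated Local Effects computation looks like under the hood, here is a minimal first-order ALE sketch in NumPy that works with any fitted `predict` function. This is an illustrative sketch of the ALE method itself, not our plotting toolkit; the function name, quantile binning, and bin count are our choices.

```python
# Minimal first-order Accumulated Local Effects (ALE) sketch for one feature.
# Illustrates the method, not Whitebox Lab's visualization tools.
import numpy as np

def ale_1d(predict, X, feature, n_bins=10):
    """ALE curve of `feature` for a model exposed via `predict`."""
    x = X[:, feature]
    # Bin edges at empirical quantiles, so each bin holds roughly equal data.
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, n_bins - 1)
    effects = np.zeros(n_bins)
    for b in range(n_bins):
        mask = idx == b
        if not mask.any():
            continue
        lo, hi = X[mask].copy(), X[mask].copy()
        lo[:, feature] = edges[b]
        hi[:, feature] = edges[b + 1]
        # Average local effect: prediction change across this bin.
        effects[b] = np.mean(predict(hi) - predict(lo))
    ale = np.concatenate([[0.0], np.cumsum(effects)])
    # Center the curve so it averages to zero over the data distribution.
    counts = np.bincount(idx, minlength=n_bins)
    mids = 0.5 * (ale[:-1] + ale[1:])
    ale -= np.average(mids, weights=np.maximum(counts, 1))
    return edges, ale
```

For a purely linear model, the resulting curve is a straight line whose slope equals the feature's coefficient, which is a handy sanity check.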

Solutions
While our architecture is currently optimized for regression, we are actively extending it to multi-class classification (expected by April 2026). Below is our current algorithm lineup.
| Algorithm | Description |
|---|---|
| TRUST | Tree architecture with Renet or AEN models in its leaves |
| Renet | State-of-the-art Relaxed Elastic Net |
| AEN | Adaptive Elastic Net |
| TurboSolve | Fast OLS solver |
| FeatureImportance | Model-agnostic direct & systemic feature importance |
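To give a flavor of what an adaptive elastic net does, here is a hedged sketch of the classic recipe: a pilot ridge fit sets per-feature penalty weights, which are then applied through the standard column-rescaling trick. This is not Whitebox Lab's AEN or Renet implementation; the function name, pilot estimator, and defaults are ours for illustration.

```python
# Hedged sketch of an adaptive elastic net: per-feature penalty weights from a
# pilot ridge fit, applied by rescaling columns before a standard elastic net.
# Not Whitebox Lab's AEN implementation.
import numpy as np
from sklearn.linear_model import Ridge, ElasticNet

def adaptive_elastic_net(X, y, alpha=0.1, l1_ratio=0.5, gamma=1.0):
    pilot = Ridge(alpha=1.0).fit(X, y)
    # Large weight => heavy shrinkage for features the pilot found unimportant.
    w = 1.0 / (np.abs(pilot.coef_) ** gamma + 1e-8)
    Xs = X / w  # rescaled columns make the uniform penalty act adaptively
    en = ElasticNet(alpha=alpha, l1_ratio=l1_ratio).fit(Xs, y)
    coef = en.coef_ / w  # map coefficients back to the original scale
    return coef, en.intercept_
```

The adaptive weights let the model shrink noise features aggressively while leaving strong signals nearly unpenalized, improving variable selection over a plain elastic net.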
TRUST in Action
We partner with the Mundo Orenda NGO to apply TRUST to predict time-to-recovery of malnourished children in Angola, leveraging model transparency to directly inform humanitarian decision-making.
Our models deliver the accuracy of ensemble methods with the transparency of a simple decision tree in other high-stakes domains as well, such as insurance.