Cheat sheet for likelihoods, loss functions, gradients, and Hessians.
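As a taste of the kind of entry such a cheat sheet collects, here is the standard logistic-regression case (my notation, not necessarily the post's): negative log-likelihood, its gradient, and its Hessian.

```latex
\ell(\theta) = -\sum_{i=1}^{n}\Bigl[y_i \log p_i + (1 - y_i)\log(1 - p_i)\Bigr],
\qquad p_i = \sigma(x_i^\top \theta),
```
```latex
\nabla_\theta \ell = \sum_{i=1}^{n} (p_i - y_i)\, x_i = X^\top (p - y),
\qquad
\nabla^2_\theta \ell = X^\top \operatorname{diag}\bigl(p_i (1 - p_i)\bigr)\, X .
```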
How I tricked AWS into serving my local custom R Shiny applications using rocker and Elastic Beanstalk.
The combination of an IDE, a Jupyter notebook, and some best practices can radically shorten the Metaflow development and debugging cycle.
Some of these are specific to Metaflow, some are more general to Python and ML.
Configurable, repeatable, parallel model selection using Metaflow, including randomized hyperparameter tuning, cross-validation, and early stopping.
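To give a flavor of the ideas involved (this is a plain-Python sketch of randomized search with early stopping, not the Metaflow pipeline from the post; all names are mine):

```python
import random

def random_search(score_fn, param_grid, n_iter=30, patience=10, seed=0):
    """Minimal randomized hyperparameter search with early stopping:
    sample random configs from param_grid, score each with a
    user-supplied callable, and stop after `patience` draws in a row
    without improvement."""
    rng = random.Random(seed)
    best_score, best_params, stale = float("-inf"), None, 0
    for _ in range(n_iter):
        params = {name: rng.choice(values) for name, values in param_grid.items()}
        score = score_fn(params)
        if score > best_score:
            best_score, best_params, stale = score, params, 0
        else:
            stale += 1
            if stale >= patience:
                break  # early stopping: no improvement for `patience` draws
    return best_params, best_score
```

In the real pipeline the `score_fn` would be a cross-validated model fit, and each draw could run as its own parallel task.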
Here’s where that $n$ and that $2$ come from in the square-loss objective function, in gory detail.
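For reference, the convention in question (assuming the usual mean-squared-error setup): the $\frac{1}{n}$ averages over the training set, and the extra $\frac{1}{2}$ is there so the $2$ from differentiating the square cancels:

```latex
J(\theta) = \frac{1}{2n} \sum_{i=1}^{n} \bigl(h_\theta(x_i) - y_i\bigr)^2
\qquad\Longrightarrow\qquad
\frac{\partial J}{\partial \theta_j}
  = \frac{1}{n} \sum_{i=1}^{n} \bigl(h_\theta(x_i) - y_i\bigr)\, x_{ij}.
```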
I get a nearly 6x speedup over standard grep by using GNU parallel.
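The same fan-out-and-merge idea can be sketched in Python with a thread pool (this stands in for the actual GNU parallel invocation; the function names are my own):

```python
from concurrent.futures import ThreadPoolExecutor

def grep_file(pattern, path):
    """Return (path, line_number, line) for each matching line in one file."""
    with open(path) as f:
        return [(path, lineno, line.rstrip("\n"))
                for lineno, line in enumerate(f, start=1)
                if pattern in line]

def parallel_grep(pattern, paths, workers=8):
    """Fan per-file searches out over a thread pool and merge the results,
    roughly what `parallel grep` does across a list of files."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        per_file = pool.map(lambda p: grep_file(pattern, p), paths)
    return [match for matches in per_file for match in matches]
```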
In this post I’ll attempt to hack together a scikit-learn model-prediction microservice on AWS Lambda.
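A bare-bones sketch of what such a handler might look like (the stub model and JSON field names are mine, not the post's; a real service would unpickle a trained estimator at cold start):

```python
import json

class StubModel:
    """Stand-in for the unpickled scikit-learn estimator the real
    handler would load once at module import (cold start)."""
    def predict(self, X):
        return [sum(row) for row in X]

MODEL = StubModel()  # in the real service: joblib.load(...) here

def lambda_handler(event, context):
    """AWS Lambda entry point: parse a JSON body, predict, return JSON."""
    features = json.loads(event["body"])["features"]
    prediction = MODEL.predict([features])[0]
    return {"statusCode": 200,
            "body": json.dumps({"prediction": prediction})}
```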
If you need $k$ samples out of $N$ in Hive or Pig, typically you’d naively choose $p = k/N$, but this only gives you $k$ on average.
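To see why: keeping each row independently with $p = k/N$ makes the sample size Binomial($N$, $k/N$), which is $k$ in expectation but has real spread around it. A quick toy simulation (my own code, not Hive or Pig):

```python
import random

def naive_sample(population, k, rng):
    """Keep each element independently with p = k / N; the resulting
    sample size is Binomial(N, k/N), so it equals k only on average."""
    p = k / len(population)
    return [x for x in population if rng.random() < p]

rng = random.Random(0)
sizes = [len(naive_sample(range(1000), 10, rng)) for _ in range(500)]
mean_size = sum(sizes) / len(sizes)   # close to k = 10
spread = max(sizes) - min(sizes)      # individual draws vary widely
```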
The streaming distributed bootstrap is a really fun solution, and I’ve mocked up a Python package to test it out.
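For context, the core trick as I understand the streaming (Poisson) bootstrap — this is my own toy sketch, not the package from the post: each bootstrap replicate weights every incoming observation by an independent Poisson(1) draw, which approximates with-replacement resampling in a single pass with no stored data.

```python
import math
import random

def poisson1(rng):
    """Draw from Poisson(lambda = 1) via Knuth's multiplication method."""
    threshold = math.exp(-1.0)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def streaming_bootstrap_means(stream, n_replicates=100, seed=0):
    """One pass over the stream: replicate r sees each observation x with
    an independent Poisson(1) weight, accumulating weighted sums and
    counts, so no resampling or storage of the stream is needed."""
    rng = random.Random(seed)
    sums = [0.0] * n_replicates
    counts = [0] * n_replicates
    for x in stream:
        for r in range(n_replicates):
            w = poisson1(rng)
            if w:
                sums[r] += w * x
                counts[r] += w
    return [s / c for s, c in zip(sums, counts) if c > 0]
```

The returned replicate means then estimate the sampling distribution of the mean, just as in the classic bootstrap.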