Newsletter of Rishit Dagli - Issue #9

Hey there, this is the ninth edition of this newsletter where I share new things I find interesting in technology and AI (paper summaries, open-source, more) straight to your inbox! Feel free to submit any links for the next issue.

My updates on Open source 🧑‍💻

An implementation of the Nyströmformer, which uses the Nyström method to approximate standard self-attention, letting attention scale to much longer sequences.
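The core trick can be sketched in a few lines of NumPy: approximate the full n×n attention matrix through a small set of landmark queries and keys. This is a simplified illustration using segment-mean landmarks, not the repository's actual code:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def nystrom_attention(Q, K, V, m=8):
    """Nystrom-style approximation of softmax(Q K^T / sqrt(d)) V.

    Instead of the full n x n attention matrix, use m << n landmark
    queries/keys (here: segment means) and compose three small matrices.
    Assumes n is divisible by m.
    """
    n, d = Q.shape
    Ql = Q.reshape(m, n // m, d).mean(axis=1)  # m landmark queries
    Kl = K.reshape(m, n // m, d).mean(axis=1)  # m landmark keys
    scale = np.sqrt(d)
    F = softmax(Q @ Kl.T / scale)   # n x m: queries vs. landmark keys
    A = softmax(Ql @ Kl.T / scale)  # m x m: landmarks vs. landmarks
    B = softmax(Ql @ K.T / scale)   # m x n: landmark queries vs. keys
    # F pinv(A) B approximates the full attention matrix at O(n*m) cost.
    return F @ np.linalg.pinv(A) @ (B @ V)
```

Because only m×n and m×m matrices are formed, memory and compute grow roughly linearly in sequence length rather than quadratically.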

Great reads from the community📖

Increasingly secret, complex, and inscrutable computational systems are being used to intensify existing power relations, and to create new ones. This paper sheds some more light on why explainability is important.

Regularization seems fairly insignificant at first glance, but it has a huge impact on deep models. I’ll use a one-layer neural network trained on the MNIST dataset to give an intuition for how common regularization techniques affect learning.
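To give a flavor of what the post explores, here is a minimal sketch of L2 regularization ("weight decay") on a one-layer softmax classifier, with random data standing in for MNIST:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for MNIST: 32 random 784-dim "images" with random labels.
X = rng.normal(size=(32, 784))
y = rng.integers(0, 10, size=32)
W = rng.normal(scale=0.01, size=(784, 10))
lam = 1e-3  # L2 strength; lam = 0 recovers plain gradient descent
lr = 0.01

for _ in range(300):
    logits = X @ W
    logits = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    p = p / p.sum(axis=1, keepdims=True)        # softmax probabilities
    grad = X.T @ (p - np.eye(10)[y]) / len(X)   # cross-entropy gradient
    # L2 regularization simply adds lam * W to the gradient, pulling
    # every weight toward zero on each step ("weight decay").
    W = W - lr * (grad + lam * W)
```

Sweeping `lam` is an easy way to see the trade-off the post illustrates: too small and the weights grow freely and overfit, too large and the model underfits.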

An awesome talk by Stephen Augustus at KubeCon EU on spending enough time building the communities that support technical systems.

This paper introduces a method for policy improvement that interpolates between the greedy approach of value-based reinforcement learning (RL) and the full planning approach typical of model-based RL.

This blog talks about how Google Meet achieves high-definition real-time background segmentation with a novel real-time video segmentation model and a new, highly efficient approach to in-browser ML acceleration using WebGL.

COptiDICE is a new algorithm that directly estimates the stationary distribution corrections of the optimal policy with respect to returns, while constraining the cost upper bound, with the goal of yielding a cost-conservative policy for actual constraint satisfaction.

A novel instance of a kernel Bayes’ rule, based on a nonparametric approach to Bayesian computation via feature means: the expectation of prior features is updated to yield expected posterior features, via regression from the kernel or neural-net features of the observations.

Open-source from community 👏

The weights for Stable Diffusion are finally open source!!

apricot implements submodular optimization for the purpose of selecting subsets of massive data sets to train machine learning models quickly.
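To give a feel for the idea, here is a plain-NumPy sketch of greedy facility-location selection, the kind of submodular objective apricot optimizes efficiently (this is an illustration, not apricot's API):

```python
import numpy as np

def greedy_facility_location(X, k):
    """Greedily pick k points maximizing the facility-location objective:
    the sum, over all points, of their max similarity to the selected set.
    Submodularity gives the greedy choice a (1 - 1/e) guarantee."""
    # Similarity: negative squared Euclidean distance, shifted to be >= 0.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    sim = d2.max() - d2
    selected = []
    best = np.zeros(len(X))  # each point's max similarity to the selection
    for _ in range(k):
        # gains[j]: marginal increase in the objective from adding point j.
        gains = np.maximum(sim, best[:, None]).sum(axis=0) - best.sum()
        j = int(np.argmax(gains))
        selected.append(j)
        best = np.maximum(best, sim[:, j])
    return selected
```

The selected points "cover" the dataset well, which is why a model trained on such a subset can approach full-data performance much faster.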

I also want to give a shout-out to Shivay Lamba for being my first GitHub Sponsor. That’s all, I hope you liked this issue. Stay tuned for more updates, and feel free to submit any links for the next one.

Regards,

Rishit Dagli
