Newsletter of Rishit Dagli - Issue #3

Hey there! This is the third edition of this newsletter, where I share new things I find interesting in technology and AI (paper summaries, open source, and more), straight to your inbox!

My updates on Open source 🧑‍💻

Released v0.2.0 with end-to-end training examples, a pre-trained model, reproduced paper results, and tests.

Contributed more than 20 new models (RegNet, TridentNet, FCOS, …), some of them new to the TensorFlow community as well, which we created while experimenting for our most recent paper.

My writings 📝

In this article, I help you get started with and fundamentally understand how graph neural networks work, while also trying to address the question of "why" at each stage. Finally, we take a look at implementing some of the methods discussed in the article in code.
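
To give a taste of what that looks like, here is a minimal sketch (not from the article itself) of one round of GCN-style message passing; the adjacency matrix, features, and weights below are purely illustrative:

```python
import numpy as np

# One message-passing layer, GCN-style: each node averages its
# neighbours' features, then applies a learned linear map + ReLU.

def gcn_layer(A, H, W):
    """A: (n, n) adjacency, H: (n, d) node features, W: (d, d_out)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)  # node degrees
    H_agg = (A_hat @ H) / deg               # mean over neighbourhood
    return np.maximum(H_agg @ W, 0)         # linear map + ReLU

# Toy graph: 3 nodes in a chain, 2-dim features, 4-dim output.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.random.randn(3, 2)
W = np.random.randn(2, 4)
print(gcn_layer(A, H, W).shape)  # (3, 4)
```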

Great reads from the community 📖

Denoising diffusion probabilistic models are effective for a variety of speech synthesis problems, but their high sampling cost makes them difficult to use in real-time speech processing applications. This work addresses that with denoising diffusion GANs.
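
For intuition on where the sampling cost comes from, here is a hedged sketch (the `denoise` model and step counts are placeholders, not the paper's code): a standard DDPM runs its network once per denoising step, for hundreds or thousands of steps, which is exactly what diffusion-GAN hybrids aim to cut down.

```python
import numpy as np

def denoise(x, t):
    # Placeholder for a trained noise-prediction network.
    return 0.99 * x

def ddpm_sample(shape, num_steps=1000):
    x = np.random.randn(*shape)  # start from pure noise
    for t in reversed(range(num_steps)):
        x = denoise(x, t)        # one network forward pass per step
        if t > 0:
            x += 0.01 * np.random.randn(*shape)  # stochastic step
    return x

# 1000 sequential network calls per sample is far too slow for
# real-time speech; diffusion-GAN approaches use only a few steps.
audio = ddpm_sample((16000,))
```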

An informative article about Neural Architecture Search, the process of automating the design of neural networks' topology. This article gives an overview of NAS methods and then goes into implementing NAS in code.
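
As a flavour of what "implementing NAS" can mean in its simplest form, here is a hedged sketch of random-search NAS over a tiny search space; the search space and the `train_and_evaluate` stub are illustrative, not the article's code:

```python
import random

# Simplest possible NAS: randomly sample architectures from a
# discrete search space and keep the best-scoring one.

SEARCH_SPACE = {
    "num_layers": [2, 4, 8],
    "width": [64, 128, 256],
    "activation": ["relu", "gelu"],
}

def sample_architecture():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def train_and_evaluate(arch):
    # Placeholder: in practice, build the model, train briefly,
    # and return a validation metric.
    return random.random()

best_arch, best_score = None, float("-inf")
for _ in range(20):  # search budget
    arch = sample_architecture()
    score = train_and_evaluate(arch)
    if score > best_score:
        best_arch, best_score = arch, score

print(best_arch, best_score)
```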

Video restoration generally requires utilizing temporal information from multiple adjacent, but usually misaligned, video frames. Existing methods do so with a sliding window or a recurrent architecture. VRT, in contrast, uses parallel frame prediction and has long-range temporal dependency modeling abilities.
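
To illustrate the sliding-window approach the summary mentions (a generic sketch, not VRT's code; `restore_window` stands in for a restoration model): each output frame is reconstructed from a small window of neighbouring frames.

```python
import numpy as np

def restore_window(frames):
    # Placeholder restoration model: here, just average the window.
    return frames.mean(axis=0)

def restore_video(video, radius=2):
    """video: (T, H, W) array of frames; restore each frame from
    itself plus `radius` neighbours on either side."""
    T = video.shape[0]
    out = np.empty_like(video)
    for t in range(T):
        lo, hi = max(0, t - radius), min(T, t + radius + 1)
        out[t] = restore_window(video[lo:hi])
    return out

video = np.random.rand(10, 64, 64)
print(restore_video(video).shape)  # (10, 64, 64)
```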

I recently came across this short, super interesting interview with Roger Penrose. Is math invented by humans, just like chisels and hammers? Or is mathematics discovered, always out there somewhere, like mysterious islands waiting to be found?

This does remind one of something Einstein said: "The eternal mystery of the world is its comprehensibility…. The fact that it is comprehensible is a miracle." Paul Dirac expressed similar thoughts. It is a mystery why math works so well, and this video goes into some of these questions.

Methods combining local and global features show great performance, but local features become redundant and are trained with a global loss. This method works on mid-level features: from all local descriptors, you create a new, ordered set of descriptors for image retrieval, starting with the most discriminative (e.g., 1: "Eiffel Tower", 2: dalmatian dog, and so on).
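
Schematically, the reordering might look like the following hedged sketch, where `discriminativeness` is a stand-in for the method's learned scoring of descriptors:

```python
import numpy as np

def discriminativeness(desc):
    # Placeholder score; the real method learns this.
    return np.linalg.norm(desc)

def ordered_descriptors(local_descs, k=10):
    """Rank local descriptors by score and keep the top-k as the
    image's ordered retrieval representation."""
    scores = [discriminativeness(d) for d in local_descs]
    order = np.argsort(scores)[::-1]  # most discriminative first
    return [local_descs[i] for i in order[:k]]

descs = [np.random.randn(128) for _ in range(100)]
top = ordered_descriptors(descs)
print(len(top))  # 10
```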

Open-source from the community 👏

The MATH Dataset (NeurIPS 2021): a new machine learning dataset containing 12,500 challenging competition mathematics problems, each with a full step-by-step solution, which can be used to teach models to generate answer derivations and explanations.
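
A quick sketch of loading it, assuming the released layout of one JSON file per problem with "problem" and "solution" keys (adjust the path and keys to the actual download):

```python
import json
from pathlib import Path

def load_math_problems(root):
    """Collect (problem, solution) pairs from per-problem JSON files."""
    problems = []
    for path in Path(root).rglob("*.json"):
        with open(path) as f:
            record = json.load(f)
        problems.append((record["problem"], record["solution"]))
    return problems

pairs = load_math_problems("MATH/train")
print(len(pairs))
```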

A Python package for simulating Active Inference agents in Markov Decision Process environments.

That’s all, hope you liked this. Stay tuned for more updates. Feel free to submit any links for the next issue.

Regards,

Rishit Dagli
