How to Set the Number of Trees in Random Forest

Scientific publication: T. M. Lange, M. Gültas, A. O. Schmitt & F. Heinrich (2025). optRF: Optimising random forest stability by determining the optimal number of trees. BMC Bioinformatics, 26(1), 95. Follow this LINK to the original publication. Contents: Random Forest — A Powerful Tool for Anyone Working With Data · What is Random Forest? · Making Predictions with Random Forests · Variable Selection …
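
The excerpt stops before the recipe itself, but the basic idea is simple: grow forests of increasing size and stop adding trees once the out-of-bag (OOB) error settles. The optRF package from the paper lives in R; purely as an illustration of the underlying idea, here is a minimal Python sketch on synthetic data, with every dataset and parameter chosen for illustration.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Synthetic regression data stands in for a real dataset.
X, y = make_regression(n_samples=500, n_features=20, noise=10.0, random_state=0)

for n_trees in [50, 100, 250, 500, 1000]:
    rf = RandomForestRegressor(
        n_estimators=n_trees,
        oob_score=True,   # out-of-bag samples act as a built-in validation set
        random_state=0,
        n_jobs=-1,
    )
    rf.fit(X, y)
    print(f"{n_trees:>5} trees -> OOB R^2 = {rf.oob_score_:.4f}")

# Once the OOB score stops changing between successive sizes, extra trees
# mainly buy stability (reproducible predictions and rankings), not accuracy.
```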

Google’s AlphaEvolve Is Evolving New Algorithms — And It Could Be a Game Changer

AlphaEvolve imagined as a genetic algorithm coupled to a large language model. Picture created by the author using various tools including DALL·E 3 via ChatGPT. AI models have undeniably revolutionized how many of us approach coding, but they’re often more like a super-powered intern than a seasoned architect. Errors, bugs and hallucinations happen all the time, and …
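
To make the caption's framing concrete, here is a deliberately toy loop: a language model (stubbed out below) plays the mutation operator of a genetic algorithm, and an automated evaluator plays the fitness function. Both llm_mutate and fitness are hypothetical placeholders, nothing like AlphaEvolve's actual components.

```python
import random

def llm_mutate(program: str) -> str:
    """Hypothetical stand-in for an LLM call that proposes an edited program."""
    return program + f"  # variant {random.randint(0, 999)}"

def fitness(program: str) -> float:
    """Hypothetical stand-in for an automated evaluator (tests, benchmarks)."""
    return random.random()

population = ["def solve(x):\n    return x"] * 4
for generation in range(5):
    # The LLM proposes children; the evaluator scores parents and children.
    children = [llm_mutate(p) for p in population]
    scored = sorted(((fitness(p), p) for p in population + children), reverse=True)
    # Selection: only the best-scoring programs survive to the next round.
    population = [p for _, p in scored[: len(population)]]
    print(f"generation {generation}: best score = {scored[0][0]:.3f}")
```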

Boost 2-Bit LLM Accuracy with EoRA

Quantization is one of the key techniques for reducing the memory footprint of large language models (LLMs). It works by converting the data type of model parameters from higher-precision formats such as 32-bit floating point (FP32) or 16-bit floating point (FP16/BF16) to lower-precision integer formats, typically INT8 or INT4. For example, quantizing a model to 4-bit …
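
The conversion described above fits in a few lines. The sketch below is a generic symmetric, per-tensor quantizer in NumPy, not the EoRA method from the article; the function names and the 4-bit setting are illustrative assumptions.

```python
import numpy as np

def quantize(w: np.ndarray, n_bits: int = 4):
    """Map float weights onto signed integers with one per-tensor scale."""
    qmax = 2 ** (n_bits - 1) - 1          # 7 for INT4, 127 for INT8
    scale = np.abs(w).max() / qmax        # single scale for the whole tensor
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Map the integers back to floats; the gap to the original is the error."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)   # stand-in for FP32 weights
q, scale = quantize(w, n_bits=4)
print("max quantization error:", np.abs(w - dequantize(q, scale)).max())
```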

Empowering LLMs to Think Deeper by Erasing Thoughts

Recent large language models (LLMs) — such as OpenAI’s o1/o3, DeepSeek’s R1 and Anthropic’s Claude 3.7 — demonstrate that allowing the model to think deeper and longer at test time can significantly enhance the model’s reasoning capability. The core approach underlying their deep thinking capability is called chain-of-thought (CoT), where the model iteratively generates intermediate reasoning …
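
As a minimal illustration of CoT at inference time, the sketch below builds a step-by-step prompt and splits the reply into a reasoning trace and a final answer; call_model is a hypothetical placeholder, not any particular vendor's API.

```python
def call_model(prompt: str) -> str:
    """Hypothetical placeholder for any LLM API call."""
    return "Step 1: distance is 120 km.\nStep 2: time is 1.5 h.\nAnswer: 80 km/h"

question = "A train travels 120 km in 1.5 hours. What is its average speed?"
cot_prompt = (
    question
    + "\nThink step by step, writing out each intermediate reasoning step,"
    + " then give the final answer on a line starting with 'Answer:'."
)

response = call_model(cot_prompt)
reasoning, _, answer = response.rpartition("Answer:")
print("reasoning trace:\n" + reasoning.strip())
print("final answer:", answer.strip())
```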

A Review of AccentFold: One of the Most Important Papers on African ASR

I enjoyed reading this paper, not because I’ve met some of the authors before🫣, but because it felt necessary. Most of the papers I’ve written about so far have made waves in the broader ML community, which is great. This one, though, is unapologetically African (i.e. it solves a very African problem), and I think …

Log Link vs Log Transformation in R — The Difference that Misleads Your Entire Data Analysis

Normal distributions are the most commonly used, but a lot of real-world data unfortunately is not normal. When faced with extremely skewed data, it’s tempting to use log transformations to normalize the distribution and stabilize the variance. I recently worked on a project analyzing the energy consumption of training AI models, using data from Epoch …
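
The distinction the title points at is worth stating up front: OLS on log(y) models E[log y], while a GLM with a log link models log E[y], and the two disagree precisely when the data are skewed. The article works in R; below is a rough Python/statsmodels analogue on synthetic Gamma-distributed data, with all variable names invented for the sketch.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Skewed, strictly positive response with mean exp(0.5 + 0.8 * x).
rng = np.random.default_rng(0)
x = rng.uniform(0, 2, 300)
y = rng.gamma(shape=2.0, scale=np.exp(0.5 + 0.8 * x) / 2.0)
df = pd.DataFrame({"x": x, "y": y})

# Option A: log-transform the response, then fit OLS.
# This models E[log y] and changes the error structure.
ols_log = smf.ols("np.log(y) ~ x", data=df).fit()

# Option B: keep y on its original scale and use a GLM with a log link.
# This models log E[y]; note E[log y] != log E[y] for skewed data.
glm_log_link = smf.glm(
    "y ~ x", data=df,
    family=sm.families.Gamma(link=sm.families.links.Log()),
).fit()

print(ols_log.params)        # coefficients on the log(y) scale
print(glm_log_link.params)   # coefficients on the log-mean scale
```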

What My GPT Stylist Taught Me About Prompting Better

When I built my GPT-powered fashion assistant, I expected runway looks—not memory loss, hallucinations, or semantic déjà vu. But what unfolded became a lesson in how prompting really works—and why LLMs are more like wild animals than tools. This article builds on my previous article on TDS, where I introduced Glitter as a proof-of-concept GPT stylist. Here, I explore …
