
Updates to Gemini 2.5 from Google DeepMind

Today, the Live API is introducing a preview version of audio-visual input and native audio out dialogue, so you can directly build conversational experiences with a more natural and expressive Gemini. It …


Agentic AI 102: Guardrails and Agent Evaluation

In the first post of this series (Agentic AI 101: Starting Your Journey Building AI Agents), we talked about the fundamentals of creating AI Agents and introduced concepts like reasoning, memory, and tools. Of course, that first post touched only the surface of this new area of the data industry. There is so much more …


The Automation Trap: Why Low-Code AI Models Fail When You Scale

In the past, building Machine Learning models was a skill only data scientists with knowledge of Python could master. However, low-code AI platforms have made things much easier. Anyone can now build a model, link it to data, and publish it as a web service with just a few clicks. Marketers can now …


How to Set the Number of Trees in Random Forest

Scientific publication: T. M. Lange, M. Gültas, A. O. Schmitt & F. Heinrich (2025). optRF: Optimising random forest stability by determining the optimal number of trees. BMC Bioinformatics, 26(1), 95. Follow this LINK to the original publication. …
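A rough illustration of the stability idea behind optRF (this sketch uses scikit-learn and synthetic data, not the optRF package itself): grow two forests that differ only in their random seed and measure how often their held-out predictions agree; agreement typically rises as trees are added.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; optRF itself targets real (e.g. genomic) datasets.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def agreement(n_trees):
    """Fraction of identical test-set predictions from two forests that
    differ only in their random seed -- a simple proxy for stability."""
    p1, p2 = (
        RandomForestClassifier(n_estimators=n_trees, random_state=seed)
        .fit(X_tr, y_tr)
        .predict(X_te)
        for seed in (1, 2)
    )
    return float(np.mean(p1 == p2))

# Stability as a function of forest size.
scores = {n: agreement(n) for n in (5, 50, 200)}
```

With too few trees, two runs of the same pipeline can disagree on many samples; past some forest size the agreement plateaus, which is the point where adding trees stops paying off.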


Google’s AlphaEvolve Is Evolving New Algorithms — And It Could Be a Game Changer

AlphaEvolve imagined as a genetic algorithm coupled to a large language model. Picture created by the author using various tools including Dall-E3 via ChatGPT. Models have undeniably revolutionized how many of us approach coding, but they’re often more like a super-powered intern than a seasoned architect. Errors, bugs and hallucinations happen all the time, and …


Boost 2-Bit LLM Accuracy with EoRA

Quantization is one of the key techniques for reducing the memory footprint of large language models (LLMs). It works by converting the data type of model parameters from higher-precision formats such as 32-bit floating point (FP32) or 16-bit floating point (FP16/BF16) to lower-precision integer formats, typically INT8 or INT4. For example, quantizing a model to 4-bit …
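The conversion described above can be sketched with a toy symmetric per-tensor quantizer (a simplified NumPy illustration, not the EoRA method or any production quantizer):

```python
import numpy as np

def quantize_symmetric(w, bits=4):
    """Map FP32 weights onto a signed integer grid with 2**bits levels."""
    qmax = 2 ** (bits - 1) - 1                      # 7 for INT4
    scale = np.abs(w).max() / qmax                  # one scale per tensor
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an FP32 approximation of the original weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=4096).astype(np.float32)        # stand-in weight tensor
q, scale = quantize_symmetric(w, bits=4)
err = float(np.abs(w - dequantize(q, scale)).mean())  # quantization error
```

At 2 bits the grid shrinks to just four levels, so this rounding error grows sharply; that accuracy gap is what compensation methods such as EoRA aim to close.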


Empowering LLMs to Think Deeper by Erasing Thoughts

Recent large language models (LLMs) — such as OpenAI’s o1/o3, DeepSeek’s R1 and Anthropic’s Claude 3.7 — demonstrate that allowing the model to think deeper and longer at test time can significantly enhance the model’s reasoning capability. The core approach underlying their deep thinking capability is called chain-of-thought (CoT), where the model iteratively generates intermediate reasoning …
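A minimal sketch of what a chain-of-thought prompt looks like (a hypothetical example, not the article's method; the question and wording are made up for illustration):

```python
# The model is instructed to emit intermediate reasoning before the final
# answer -- these intermediate lines are the "thoughts" that a trimming
# technique could later erase to keep the context short.
question = "If a train covers 180 km in 2 hours, what is its average speed?"
prompt = (
    f"Q: {question}\n"
    "Think step by step, writing each intermediate deduction on its own "
    "line, then give the final answer prefixed with 'Answer:'."
)
```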


A Review of AccentFold: One of the Most Important Papers on African ASR

I enjoyed reading this paper, not because I’ve met some of the authors before🫣, but because it felt necessary. Most of the papers I’ve written about so far have made waves in the broader ML community, which is great. This one, though, is unapologetically African (i.e. it solves a very African problem), and I think …
