Machine Learning
Welcome to the Machine Learning Hub, your one-stop destination for all things related to machine learning!
Get ready to embark on an exciting journey into the realm of AI and discover how machines can learn and make intelligent decisions. Our blog articles are crafted with simplicity and clarity in mind, making complex machine learning concepts easy to understand for everyone. Whether you’re a beginner or an experienced practitioner, we’ve got you covered with informative and insightful content. Explore the fascinating world of algorithms, models, and data as we delve into supervised and unsupervised learning, reinforcement learning, and more. Discover practical applications in various domains like healthcare, finance, and autonomous vehicles. From introductory guides to advanced techniques, we’re here to help you demystify machine learning and unlock its potential. Join us on this journey as we unravel the secrets of machine learning and empower you to build intelligent systems that can analyze data, make predictions, and drive innovation.
Let’s shape the future together with the power of machine learning!
New Gemini 2.5 capabilities: native audio output and improvements to the Live API. Today, the Live API is introducing a preview version of audio-visual input and native audio output dialogue, so you can directly build conversational experiences with a more natural and expressive Gemini. It also allows the user to steer its tone, accent and style …
Updates to Gemini 2.5 from Google DeepMind
In the first post of this series (Agentic AI 101: Starting Your Journey Building AI Agents), we talked about the fundamentals of creating AI Agents and introduced concepts like reasoning, memory, and tools. Of course, that first post touched only the surface of this new area of the data industry. There is so much more …
Agentic AI 102: Guardrails and Agent Evaluation
In the past, building Machine Learning models was a skill only data scientists with knowledge of Python could master. However, low-code AI platforms have made things much easier: anyone can now build a model, connect it to data, and publish it as a web service with just a few clicks. Marketers can now …
The Automation Trap: Why Low-Code AI Models Fail When You Scale
This article will share how to build an AI journal with LlamaIndex. We will cover one essential function of this AI journal: asking for advice. We will start with the most basic implementation and iterate from there. We can see significant improvements for this function when we apply design patterns like Agentic RAG and a multi-agent workflow. …
How to Build an AI Journal with LlamaIndex
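As a taste of the starting point, here is a minimal sketch of the basic retrieval-augmented "ask for advice" step, before any Agentic RAG or multi-agent patterns are layered on. It assumes llama-index ≥ 0.10 with its default OpenAI models (so OPENAI_API_KEY must be set), and the journal entries are made up for illustration.

```python
# pip install llama-index
from llama_index.core import VectorStoreIndex, Document

# Hypothetical journal entries stand in for real data.
entries = [
    Document(text="2025-05-01: Struggled to stay focused; long meetings all day."),
    Document(text="2025-05-02: Morning run helped; finished the report early."),
]

# Index the entries and ask for advice grounded in them (plain RAG, no agents yet).
# By default this uses OpenAI for embeddings and generation, so OPENAI_API_KEY must be set.
index = VectorStoreIndex.from_documents(entries)
query_engine = index.as_query_engine()
response = query_engine.query("Based on my journal, how can I protect my focus time?")
print(response)
```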
Scientific publication: T. M. Lange, M. Gültas, A. O. Schmitt & F. Heinrich (2025). optRF: Optimising random forest stability by determining the optimal number of trees. BMC Bioinformatics, 26(1), 95. Follow this LINK to the original publication. Random Forest — A Powerful Tool for Anyone Working With Data. What is Random Forest? Have you ever wished …
How to Set the Number of Trees in Random Forest
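The optRF package itself is covered in the article; as a rough, hypothetical proxy for the same idea, the sketch below checks how random forest predictions stabilise as the number of trees grows, using scikit-learn rather than optRF.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Synthetic data stands in for a real dataset.
X, y = make_regression(n_samples=300, n_features=10, noise=0.5, random_state=0)

previous = None
for n_trees in [50, 100, 250, 500, 1000]:
    preds = RandomForestRegressor(n_estimators=n_trees, random_state=0).fit(X, y).predict(X)
    if previous is not None:
        # Correlation with the previous, smaller forest:
        # values close to 1 suggest the predictions have stabilised.
        corr = np.corrcoef(previous, preds)[0, 1]
        print(f"{n_trees:>5} trees: correlation with previous forest = {corr:.4f}")
    previous = preds
```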
Quantization is one of the key techniques for reducing the memory footprint of large language models (LLMs). It works by converting the data type of model parameters from higher-precision formats such as 32-bit floating point (FP32) or 16-bit floating point (FP16/BF16) to lower-precision integer formats, typically INT8 or INT4. For example, quantizing a model to 4-bit …
Boost 2-Bit LLM Accuracy with EoRA
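To make the FP32 → INT8 conversion concrete, here is a minimal NumPy sketch of symmetric per-tensor quantization. It illustrates the general idea only and is not the EoRA method discussed in the article.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor INT8 quantization: map FP32 weights to [-127, 127]."""
    scale = np.max(np.abs(weights)) / 127.0          # one FP32 scale for the whole tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an FP32 approximation of the original weights."""
    return q.astype(np.float32) * scale

# Example: a small random matrix stands in for one layer's weights.
w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize_int8(q, scale)
print("max abs reconstruction error:", np.max(np.abs(w - w_hat)))
```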
New AI agent evolves algorithms for math and practical applications in computing by combining the creativity of large language models with automated evaluators
Recent large language models (LLMs) — such as OpenAI’s o1/o3, DeepSeek’s R1 and Anthropic’s Claude 3.7 — demonstrate that allowing the model to think deeper and longer at test time can significantly enhance the model’s reasoning capability. The core approach underlying their deep thinking capability is called chain-of-thought (CoT), where the model iteratively generates intermediate reasoning …
Empowering LLMs to Think Deeper by Erasing Thoughts
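As a toy illustration of the chain-of-thought idea (not the thought-erasing technique the article introduces), the sketch below accumulates intermediate reasoning steps in the prompt until a final answer appears. The `generate` function is a hypothetical stand-in for a real model call and is mocked with canned steps so the example runs on its own.

```python
# Canned reasoning steps so the sketch is self-contained and deterministic.
CANNED_STEPS = iter([
    "Step 1: 17 * 3 = 51.",
    "Step 2: 51 + 8 = 59.",
    "Final answer: 59",
])

def generate(prompt: str) -> str:
    # Replace with a real LLM completion call (OpenAI, Anthropic, a local model, ...).
    return next(CANNED_STEPS)

def chain_of_thought(question: str, max_steps: int = 8) -> str:
    """Accumulate intermediate reasoning in the prompt until a final answer appears."""
    prompt = f"Question: {question}\nLet's think step by step.\n"
    for _ in range(max_steps):
        step = generate(prompt)      # model produces the next reasoning step
        prompt += step + "\n"
        if step.startswith("Final answer:"):
            break
    return prompt

print(chain_of_thought("What is 17 * 3 + 8?"))
```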