Deploy Your AI Assistant to Monitor and Debug n8n Workflows Using Claude and MCP


If you run n8n workflows in production, you know the stress of hearing that a process failed and needing to dig through logs to find the root cause.

User: Samir, your automation does not work anymore, I did not receive my notification!

The first step is to open your n8n interface and review the last executions to identify the issues.

Example of key workflows that failed during the night – (Image by Samir Saci)

After a few minutes, you find yourself jumping between executions, comparing timestamps and reading JSON errors to understand where things broke.

Example of debugging a failed execution – (Image by Samir Saci)

What if an agent could tell you why your workflow failed at 3 AM without you having to dig through the logs?

It is possible!

As an experiment, I connected the n8n API, which provides access to execution logs of my instance, to an MCP server powered by Claude.

n8n workflow with a webhook to collect information from my instance – (Image by Samir Saci)

The result is an AI assistant that can monitor workflows, analyse failures, and explain what went wrong in natural language.

Example of root cause analysis performed by the agent – (Image by Samir Saci)

In this article, I will walk you through the step-by-step process of building this system.

The first section will show a real example from my own n8n instance, where several workflows failed during the night.

Failed executions listed by hour – (Image by Samir Saci)

We’ll use this case to see how the agent identifies issues and explains their root causes.

Then, I’ll detail how I connected my n8n instance’s API to the MCP server using a webhook to enable Claude Desktop to fetch execution data for natural-language debugging.

Workflow with webhook to connect to my instance – (Image by Author)

The webhook includes three functions (see the protocol sketch after this list):

  • Get Active Workflows: provides the list of all active workflows
  • Get Last Executions: includes information about the last n executions
  • Get Executions Details (Status = Error): details of failed executions, formatted to support root cause analyses
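
Under the hood, each function maps to a single POST request to the webhook with an action parameter. Here is a minimal sketch of the protocol (the URL is a placeholder, and the action name for the error endpoint is inferred from the function name):

import requests

WEBHOOK_URL = "https://your-n8n-host/webhook/monitor"  # placeholder URL

def call_webhook(action: str, **params) -> dict:
    """Send an action (plus optional parameters) to the n8n monitoring webhook."""
    response = requests.post(WEBHOOK_URL, json={"action": action, **params}, timeout=30)
    response.raise_for_status()
    return response.json()

# The three functions exposed by the webhook
workflows = call_webhook("get_active_workflows")
executions = call_webhook("get_workflow_executions", limit=25)
errors = call_webhook("get_error_executions", workflow_id="<workflow-id>")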

You can find the complete tutorial, along with the n8n workflow template and the MCP server source code, linked in this article.

Demonstration: Using AI to Analyse Failed n8n Executions

Let us look together at one of my n8n instances, which runs several workflows that fetch event information from different cities around the world.

These workflows help business and networking communities discover interesting events to attend and learn from.

Example of Automated Notifications received on Telegram using these workflows – (Image by Samir Saci)

To test the solution, I will start by asking the agent to list the active workflows.

Step 1: How many workflows are active?

Initial Question – (Image by Samir Saci)

Based on the question alone, Claude understood that it needed to interact with the n8n-monitor tool, which was built using an MCP server.

Here is the n8n-monitor tool that is available for Claude – (Image by Samir Saci)

From there, it automatically selected the corresponding function, Get Active Workflows, to retrieve the list of active automations from my n8n instance.

All the active workflows – (Image by Samir Saci)

This is where you start to sense the power of the model.

It automatically categorised the workflows based on their names:

  • 8 workflows that fetch events from APIs and process them
  • 3 work-in-progress workflows, including the one used to fetch the logs
Short unrequested analysis of the agent based on the data extracted – (Image by Samir Saci)

This marks the beginning of the analysis; these insights will be reused later in the root cause analysis.

Step 2: Analyse the last n executions

At this stage, we can begin asking Claude to retrieve the latest executions for analysis.

Request to analyse the last 25 executions – (Image by Samir Saci)

Thanks to the context provided in the docstrings, which I will explain in the next section, Claude understood that it needed to call the get_workflow_executions tool.

It will receive a summary of the executions, with the percentage of failures and the number of workflows impacted by these failures.

{
  "summary": {
    "totalExecutions": 25,
    "successfulExecutions": 22,
    "failedExecutions": 3,
    "failureRate": "12.00%",
    "successRate": "88.00%",
    "totalWorkflowsExecuted": 7,
    "workflowsWithFailures": 1
  },
  "executionModes": {
    "webhook": 7,
    "trigger": 18
  },
  "timing": {
    "averageExecutionTime": "15.75 seconds",
    "maxExecutionTime": "107.18 seconds",
    "minExecutionTime": "0.08 seconds",
    "timeRange": {
      "from": "2025-10-24T06:14:23.127Z",
      "to": "2025-10-24T11:11:49.890Z"
    }
  },
[...]

This is the first thing it will share with you; it provides a clear overview of the situation.

Part I – Overall Analysis and Alerting (Image by Samir Saci)

In the second part of the outputs, you can find a detailed breakdown of the failures for each workflow impacted.

  "failureAnalysis": {
    "workflowsImpactedByFailures": [
      "7uvA2XQPMB5l4kI5"
    ],
    "failedExecutionsByWorkflow": {
      "7uvA2XQPMB5l4kI5": {
        "workflowId": "7uvA2XQPMB5l4kI5",
        "failures": [
          {
            "id": "13691",
            "startedAt": "2025-10-24T11:00:15.072Z",
            "stoppedAt": "2025-10-24T11:00:15.508Z",
            "mode": "trigger"
          },
          {
            "id": "13683",
            "startedAt": "2025-10-24T09:00:57.274Z",
            "stoppedAt": "2025-10-24T09:00:57.979Z",
            "mode": "trigger"
          },
          {
            "id": "13677",
            "startedAt": "2025-10-24T07:00:57.167Z",
            "stoppedAt": "2025-10-24T07:00:57.685Z",
            "mode": "trigger"
          }
        ],
        "failureCount": 3
      }
    },
    "recentFailures": [
      {
        "id": "13691",
        "workflowId": "7uvA2XQPMB5l4kI5",
        "startedAt": "2025-10-24T11:00:15.072Z",
        "mode": "trigger"
      },
      {
        "id": "13683",
        "workflowId": "7uvA2XQPMB5l4kI5",
        "startedAt": "2025-10-24T09:00:57.274Z",
        "mode": "trigger"
      },
      {
        "id": "13677",
        "workflowId": "7uvA2XQPMB5l4kI5",
        "startedAt": "2025-10-24T07:00:57.167Z",
        "mode": "trigger"
      }
    ]
  },

As a user, you now have visibility into the impacted workflows, along with details of the failure occurrences.

Part II – Failure Analysis & Alerting – (Image by Samir Saci)

For this specific case, the workflow “Bangkok Meetup” is triggered every hour.

We can see that it failed three times out of five runs during the last five hours.

Note: We can ignore the last sentence as the agent does not yet have access to the execution details.

The last section of the outputs includes an analysis of the overall performance of the workflows.

 "workflowPerformance": {
    "allWorkflowMetrics": {
      "CGvCrnUyGHgB7fi8": {
        "workflowId": "CGvCrnUyGHgB7fi8",
        "totalExecutions": 7,
        "successfulExecutions": 7,
        "failedExecutions": 0,
        "successRate": "100.00%",
        "failureRate": "0.00%",
        "lastExecution": "2025-10-24T11:11:49.890Z",
        "executionModes": {
          "webhook": 7
        }
      },
[... other workflows ...]
,
    "topProblematicWorkflows": [
      {
        "workflowId": "7uvA2XQPMB5l4kI5",
        "totalExecutions": 5,
        "successfulExecutions": 2,
        "failedExecutions": 3,
        "successRate": "40.00%",
        "failureRate": "60.00%",
        "lastExecution": "2025-10-24T11:00:15.072Z",
        "executionModes": {
          "trigger": 5
        }
      },
      {
        "workflowId": "CGvCrnUyGHgB7fi8",
        "totalExecutions": 7,
        "successfulExecutions": 7,
        "failedExecutions": 0,
        "successRate": "100.00%",
        "failureRate": "0.00%",
        "lastExecution": "2025-10-24T11:11:49.890Z",
        "executionModes": {
          "webhook": 7
        }
      },
[... other workflows ...]
      }
    ]
  }

This detailed breakdown can help you prioritise the maintenance in case you have multiple workflows failing.

Part III – Performance Ranking – (Image by Samir Saci)

In this specific example, I have only a single failing workflow, which is the Ⓜ️ Bangkok Meetup.

What if I want to know when issues started?

Don’t worry, I’ve added a section that breaks down the executions hour by hour.

  "timeSeriesData": {
    "2025-10-24T11:00": {
      "total": 5,
      "success": 4,
      "error": 1
    },
    "2025-10-24T10:00": {
      "total": 6,
      "success": 6,
      "error": 0
    },
    "2025-10-24T09:00": {
      "total": 3,
      "success": 2,
      "error": 1
    },
    "2025-10-24T08:00": {
      "total": 3,
      "success": 3,
      "error": 0
    },
    "2025-10-24T07:00": {
      "total": 3,
      "success": 2,
      "error": 1
    },
    "2025-10-24T06:00": {
      "total": 5,
      "success": 5,
      "error": 0
    }
  }

You just have to let Claude turn it into a clear visual like the one below.

Analysis by Hour – (Image by Samir Saci)
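
If you want to reproduce this chart outside of Claude, a minimal matplotlib sketch over the same timeSeriesData payload (field names taken from the JSON above) could look like this:

import matplotlib.pyplot as plt

# timeSeriesData as returned by the webhook (see the JSON above)
time_series = {
    "2025-10-24T11:00": {"total": 5, "success": 4, "error": 1},
    "2025-10-24T10:00": {"total": 6, "success": 6, "error": 0},
    "2025-10-24T09:00": {"total": 3, "success": 2, "error": 1},
    "2025-10-24T08:00": {"total": 3, "success": 3, "error": 0},
    "2025-10-24T07:00": {"total": 3, "success": 2, "error": 1},
    "2025-10-24T06:00": {"total": 5, "success": 5, "error": 0},
}

hours = sorted(time_series)  # chronological order
success = [time_series[h]["success"] for h in hours]
errors = [time_series[h]["error"] for h in hours]

# Stacked bars: successes in green, failures in red on top
plt.bar(hours, success, label="Success", color="tab:green")
plt.bar(hours, errors, bottom=success, label="Error", color="tab:red")
plt.xticks(rotation=45, ha="right")
plt.ylabel("Executions")
plt.title("Executions by hour")
plt.legend()
plt.tight_layout()
plt.show()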

Let me remind you that I gave Claude no instructions on how to present the results; this was entirely its own initiative!

Impressive, no?

Step 3: Root Cause Analysis

Now that we know which workflows have issues, we should search for the root cause(s).


As expected, Claude calls the Get Error Executions function to retrieve the details of the failed executions.

For your information, the failure of this workflow is due to an error in the node JSON Tech that processes the output of the API call.

  • Meetup Tech sends an HTTP request to the Meetup API
  • Result Tech processes the response
  • JSON Tech is supposed to transform this output into a clean JSON structure
Workflow with the failing node JSON Tech – (Image by Samir Saci)

Here is what happens when everything goes well.

Example of good inputs for the node JSON Tech – (Image by Samir Saci)

However, the API call sometimes fails, and the JavaScript node then receives an error instead of input in the expected format.

Note: This issue has been corrected in production since then (the code node is now more robust), but I kept it here for the demo.

Let us see if Claude can locate the root cause.

Here is the output of the Get Error Executions function.

{
  "workflow_id": "7uvA2XQPMB5l4kI5",
  "workflow_name": "Ⓜ️ Bangkok Meetup",
  "error_count": 5,
  "errors": [
    {
      "id": "13691",
      "workflow_name": "Ⓜ️ Bangkok Meetup",
      "status": "error",
      "mode": "trigger",
      "started_at": "2025-10-24T11:00:15.072Z",
      "stopped_at": "2025-10-24T11:00:15.508Z",
      "duration_seconds": 0.436,
      "finished": false,
      "retry_of": null,
      "retry_success_id": null,
      "error": {
        "message": "A 'json' property isn't an object [item 0]",
        "description": "In the returned data, every key named 'json' must point to an object.",
        "http_code": null,
        "level": "error",
        "timestamp": null
      },
      "failed_node": {
        "name": "JSON Tech",
        "type": "n8n-nodes-base.code",
        "id": "dc46a767-55c8-48a1-a078-3d401ea6f43e",
        "position": [
          -768,
          -1232
        ]
      },
      "trigger": {}
    },
[... 4 other errors ...]
  ],
  "summary": {
    "total_errors": 5,
    "error_patterns": {
      "A 'json' property isn't an object [item 0]": {
        "count": 5,
        "executions": [
          "13691",
          "13683",
          "13677",
          "13660",
          "13654"
        ]
      }
    },
    "failed_nodes": {
      "JSON Tech": 5
    },
    "time_range": {
      "oldest": "2025-10-24T05:00:57.105Z",
      "newest": "2025-10-24T11:00:15.072Z"
    }
  }
}

Claude now has access to the details of the executions with the error message and the impacted nodes.

Analysis of the errors on the last five executions – (Image by Samir Saci)

In the response above, you can see that Claude summarised the outputs of multiple executions in a single analysis.

We now know that:

  • Errors occurred every hour except at 08:00 am
  • Each time, the same node, “JSON Tech”, is impacted
  • The error occurs almost immediately after the workflow is triggered

This descriptive analysis is complemented by an initial diagnosis.

Diagnosis – (Image by Samir Saci)

This assertion is accurate, as confirmed by the error message in the n8n UI.

Wrong Inputs for JSON Tech node – (Image by Samir Saci)

However, with such limited context, Claude starts proposing fixes for the workflow that are not correct.

Proposed fix in the JSON Tech Node – (Image by Samir Saci)

In addition to the code correction, it provides an action plan.

Action Items prepared by Claude – (Image by Samir Saci)

As I know the issue is not (only) in the code node, I wanted to guide Claude in the root cause analysis.

Challenge its conclusion – (Image by Samir Saci)

It then revisited its initial fix and began to share hypotheses about the root cause(s).

Corrected Analysis – (Image by Samir Saci)

This gets closer to the actual root cause and provides enough insight for us to start exploring the workflow.

Fix proposed – (Image by Samir Saci)

The revised fix is now better as it considers the possibility that the issue comes from the node input data.

For me, this is the best I could expect from Claude, considering the limited information it has on hand.

Conclusion: Value Proposition of This Tool

This simple experiment demonstrates how an AI agent powered by Claude can extend beyond basic monitoring to deliver genuine operational value.

Instead of manually checking executions and logs, you can converse with your automation system, ask what failed and why, and receive context-aware explanations within seconds.

This will not replace you entirely, but it can accelerate the root cause analysis process.

In the next section, I will briefly introduce how I set up the MCP Server to connect Claude Desktop to my instance.

Building a Local MCP Server to Connect Claude Desktop to the n8n Webhook

To equip Claude with the three functions available in the webhook (Get Active Workflows, Get Workflow Executions and Get Error Executions), I have implemented an MCP Server.

MCP Server Connecting Claude Desktop UI to our workflow – (Image by Samir Saci)

In this section, I will briefly introduce the implementation, focusing only on Get Active Workflows and Get Workflow Executions, to demonstrate how I explain the usage of these tools to Claude.


For a comprehensive and detailed introduction to the solution, including instructions on how to deploy it on your machine, I invite you to watch this tutorial on my YouTube Channel.

You will also find the MCP Server source code and the n8n workflow of the webhook.

Create a Class to Query the Workflow

Before examining how to set up the three different tools, let me introduce the utility class that defines all the functions needed to interact with the webhook.

You can find it in the Python file: ./utils/n8n_monitor_sync.py

import logging
import os
from datetime import datetime, timedelta
from typing import Any, Dict, Optional
import requests
import traceback

logger = logging.getLogger(__name__)


class N8nMonitor:
    """Handler for n8n monitoring operations - synchronous version"""
    
    def __init__(self):
        self.webhook_url = os.getenv("N8N_WEBHOOK_URL", "")
        self.timeout = 30

Essentially, we retrieve the webhook URL from an environment variable and set a query timeout of 30 seconds.

The first function, get_active_workflows, queries the webhook with the parameter "action": "get_active_workflows".

def get_active_workflows(self) -> Dict[str, Any]:
    """Fetch all active workflows from n8n"""
    if not self.webhook_url:
        logger.error("Environment variable N8N_WEBHOOK_URL not configured")
        return {"error": "N8N_WEBHOOK_URL environment variable not set"}
    
    try:
        logger.info("Fetching active workflows from n8n")
        response = requests.post(
            self.webhook_url,
            json={"action": "get_active_workflows"},
            timeout=self.timeout
        )
        response.raise_for_status()
        
        data = response.json()
        
        logger.debug(f"Response type: {type(data)}")
        
        # List of all workflows
        workflows = []
        if isinstance(data, list):
            workflows = [item for item in data if isinstance(item, dict)]
            if not workflows and data:
                logger.error(f"Expected list of dictionaries, got list of {type(data[0]).__name__}")
                return {"error": "Webhook returned invalid data format"}
        elif isinstance(data, dict):
            if "data" in data:
                workflows = data["data"]
            else:
                logger.error(f"Unexpected dict response with keys: {list(data.keys())} n {traceback.format_exc()}")
                return {"error": "Unexpected response format"}
        else:
            logger.error(f"Unexpected response type: {type(data)} n {traceback.format_exc()}")
            return {"error": f"Unexpected response type: {type(data).__name__}"}
        
        logger.info(f"Successfully fetched {len(workflows)} active workflows")
        
        return {
            "total_active": len(workflows),
            "workflows": [
                {
                    "id": wf.get("id", "unknown"),
                    "name": wf.get("name", "Unnamed"),
                    "created": wf.get("createdAt", ""),
                    "updated": wf.get("updatedAt", ""),
                    "archived": wf.get("isArchived", "false") == "true"
                }
                for wf in workflows
            ],
            "summary": {
                "total": len(workflows),
                "names": [wf.get("name", "Unnamed") for wf in workflows]
            }
        }
        
    except requests.exceptions.RequestException as e:
        logger.error(f"Error fetching workflows: {e} n {traceback.format_exc()}")
        return {"error": f"Failed to fetch workflows: {str(e)} n {traceback.format_exc()}"}
    except Exception as e:
        logger.error(f"Unexpected error fetching workflows: {e} n {traceback.format_exc()}")
        return {"error": f"Unexpected error: {str(e)} n {traceback.format_exc()}"}

I have added many checks, as the API sometimes fails to return the expected data format.

This solution is more robust, providing Claude with all the information to understand why a query failed.
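
A quick sanity check of this function from a Python shell could look like this (a hypothetical usage example, assuming N8N_WEBHOOK_URL is set in your environment):

from utils.n8n_monitor_sync import N8nMonitor

monitor = N8nMonitor()
result = monitor.get_active_workflows()

if "error" in result:
    print(f"Query failed: {result['error']}")
else:
    # e.g. "11 active workflows: ['Ⓜ️ Bangkok Meetup', ...]"
    print(f"{result['total_active']} active workflows: {result['summary']['names']}")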

Now that the first function is covered, we can focus on getting all the last n executions with get_workflow_executions.

def get_workflow_executions(
    self, 
    limit: int = 50,
    includes_kpis: bool = False,
) -> Dict[str, Any]:
    """Fetch workflow executions of the last 'limit' executions with or without KPIs """
    if not self.webhook_url:
        logger.error("Environment variable N8N_WEBHOOK_URL not set")
        return {"error": "N8N_WEBHOOK_URL environment variable not set"}
    
    try:
        logger.info(f"Fetching the last {limit} executions")
        
        payload = {
            "action": "get_workflow_executions",
            "limit": limit
        }
        
        response = requests.post(
            self.webhook_url,
            json=payload,
            timeout=self.timeout
        )
        response.raise_for_status()
        
        data = response.json()
        
        if isinstance(data, list) and len(data) > 0:
            data = data[0]
        
        logger.info("Successfully fetched execution data")
        
        if includes_kpis and isinstance(data, dict):
            logger.info("Including KPIs in the execution data")

            if "summary" in data:
                summary = data["summary"]
                failure_rate = float(summary.get("failureRate", "0").rstrip("%"))
                data["insights"] = {
                    "health_status": "🟢 Healthy" if failure_rate < 10 else 
                                "🟡 Warning" if failure_rate < 25 else 
                                "🔴 Critical",
                    "message": f"{summary.get('totalExecutions', 0)} executions with {summary.get('failureRate', '0%')} failure rate"
                }
        
        return data
        
    except requests.exceptions.RequestException as e:
        logger.error(f"HTTP error fetching executions: {e} n {traceback.format_exc()}")
        return {"error": f"Failed to fetch executions: {str(e)}"}
    except Exception as e:
        logger.error(f"Unexpected error fetching executions: {e} n {traceback.format_exc()}")
        return {"error": f"Unexpected error: {str(e)}"}

The only parameter here is the number n of executions you want to retrieve: "limit": n.
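
For example, the call behind the Step 2 request (25 executions with KPIs) would look roughly like this; a sketch reusing the class above:

from utils.n8n_monitor_sync import N8nMonitor

monitor = N8nMonitor()
data = monitor.get_workflow_executions(limit=25, includes_kpis=True)

summary = data.get("summary", {})
print(summary.get("failureRate"))                     # e.g. "12.00%"
print(data.get("insights", {}).get("health_status"))  # e.g. "🟡 Warning" for a 12% failure rate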

The outputs include a summary with a health status generated by the code node Processing Audit (more details in the tutorial).

n8n workflow with a webhook to collect information from my instance – (Image by Samir Saci)

The function get_workflow_executions simply retrieves these outputs and formats them before sending them to the agent.

Now that we have defined our core functions, we can create the tools to equip Claude via the MCP server.

Set up an MCP Server with Tools

Now it is time to create our MCP server, with tools and resources to equip (and teach) Claude.

from mcp.server.fastmcp import FastMCP
import logging
from typing import Optional, Dict, Any
from utils.n8n_monitor_sync import N8nMonitor

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler("n8n_monitor.log"),
        logging.StreamHandler()
    ]
)

logger = logging.getLogger(__name__)

mcp = FastMCP("n8n-monitor")

monitor = N8nMonitor()

It is a basic implementation using FastMCP and importing n8n_monitor_sync.py with the functions defined in the previous section.
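
One detail not shown above: the script also needs an entry point, typically at the bottom of the file, to start the server. With FastMCP, this is usually just:

if __name__ == "__main__":
    # Start the MCP server over stdio, the transport Claude Desktop uses to spawn it
    mcp.run()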

# Resource for the agent (Samir: update it each time you add a tool)
@mcp.resource("n8n://help")
def get_help() -> str:
    """Get help documentation for the n8n monitoring tools"""
    return """
    📊 N8N MONITORING TOOLS
    =======================
    
    WORKFLOW MONITORING:
    • get_active_workflows()
      List all active workflows with names and IDs
    
    EXECUTION TRACKING:
    • get_workflow_executions(limit=50, include_kpis=True)
      Get execution logs with detailed KPIs
      - limit: Number of recent executions to retrieve (1-100)
      - include_kpis: Calculate performance metrics
    
    ERROR DEBUGGING:
    • get_error_executions(workflow_id)
      Retrieve detailed error information for a specific workflow
      - Returns last 5 errors with comprehensive debugging data
      - Shows error messages, failed nodes, trigger data
      - Identifies error patterns and problematic nodes
      - Includes HTTP codes, error levels, and timing info
    
    HEALTH REPORTING:
    • get_workflow_health_report(limit=50)
      Generate comprehensive health analysis based on recent executions
      - Identifies problematic workflows
      - Shows success/failure rates
      - Provides execution timing metrics
    
    KEY METRICS PROVIDED:
    • Total executions
    • Success/failure rates
    • Execution times (avg, min, max)
    • Workflows with failures
    • Execution modes (manual, trigger, integrated)
    • Error patterns and frequencies
    • Failed node identification
    
    HEALTH STATUS INDICATORS:
    • 🟢 Healthy: <10% failure rate
    • 🟡 Warning: 10-25% failure rate
    • 🔴 Critical: >25% failure rate
    
    USAGE EXAMPLES:
    - "Show me all active workflows"
    - "What workflows have been failing?"
    - "Generate a health report for my n8n instance"
    - "Show execution metrics for the last 48 hours"
    - "Debug errors in workflow CGvCrnUyGHgB7fi8"
    - "What's causing failures in my data processing workflow?"
    
    DEBUGGING WORKFLOW:
    1. Use get_workflow_executions() to identify problematic workflows
    2. Use get_error_executions() for detailed error analysis
    3. Check error patterns to identify recurring issues
    4. Review failed node details and trigger data
    5. Use workflow_id and execution_id for targeted fixes
    """

As the tool can be hard to grasp, we include a prompt, in the form of an MCP resource, to summarise the objective and features of the n8n workflow connected via the webhook.


Now we can define the first tool to get all the active workflows.

@mcp.tool()
def get_active_workflows() -> Dict[str, Any]:
    """
    Get all active workflows in the n8n instance.
    
    Returns:
        Dictionary with list of active workflows and their details
    """
    try:
        logger.info("Fetching active workflows")
        result = monitor.get_active_workflows()
        
        if "error" in result:
            logger.error(f"Failed to get workflows: {result['error']}")
        else:
            logger.info(f"Found {result.get('total_active', 0)} active workflows")
        
        return result
        
    except Exception as e:
        logger.error(f"Unexpected error: {str(e)}")
        return {"error": str(e)}

The docstring, exposed via the MCP server to explain how to use the tool, is relatively brief, as get_active_workflows() takes no input parameters.

Let us do the same for the second tool to retrieve the last n executions.

@mcp.tool()
def get_workflow_executions(
    limit: int = 50,
    include_kpis: bool = True
) -> Dict[str, Any]:
    """
    Get workflow execution logs and KPIs for the last N executions.
    
    Args:
        limit: Number of executions to retrieve (default: 50)
        include_kpis: Include calculated KPIs (default: true)
    
    Returns:
        Dictionary with execution data and KPIs
    """
    try:
        logger.info(f"Fetching the last {limit} executions")
        
        result = monitor.get_workflow_executions(
            limit=limit,
            includes_kpis=include_kpis
        )
        
        if "error" in result:
            logger.error(f"Failed to get executions: {result['error']}")
        else:
            if "summary" in result:
                summary = result["summary"]
                logger.info(f"Executions: {summary.get('totalExecutions', 0)}, "
                          f"Failure rate: {summary.get('failureRate', 'N/A')}")
        
        return result
        
    except Exception as e:
        logger.error(f"Unexpected error: {str(e)}")
        return {"error": str(e)}

Unlike the previous tool, this one requires us to document the input parameters and their default values in the docstring.

We have now equipped Claude with these two tools that can be used as in the example presented in the previous section.
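
The last step is to register the server in Claude Desktop's claude_desktop_config.json so that it can spawn the process. A typical entry would look like this (the script path and webhook URL are placeholders for your own setup):

{
  "mcpServers": {
    "n8n-monitor": {
      "command": "python",
      "args": ["/path/to/your/mcp_server.py"],
      "env": {
        "N8N_WEBHOOK_URL": "https://your-n8n-host/webhook/monitor"
      }
    }
  }
}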

What’s next? Deploy it on your machine!

As I wanted to keep this article short, I will only introduce these two tools.

For the rest of the functionalities, I invite you to watch this complete tutorial on my YouTube channel.

I include step-by-step explanations on how to deploy this on your machine with a detailed review of the source code shared on my GitHub (MCP Server) and n8n profile (workflow).

Conclusion

This is just the beginning!

We can consider this as version 1.0 of what can become a super agent to manage your n8n workflows.

What do I mean by this?

There is a massive potential for improving this solution, especially for the root cause analysis by:

  • Providing more context to the agent using the sticky notes inside the workflows
  • Showing how good inputs and outputs look with evaluation nodes to help Claude perform gap analyses
  • Exploiting the other endpoints of the n8n API for more accurate analyses

However, I don’t think I can, as a full-time startup founder and CEO, develop such a comprehensive tool on my own.

Therefore, I wanted to share it with the Towards Data Science and n8n communities as an open-source solution, available on my GitHub profile.

Need inspiration to start automating with n8n?

In this blog, I have published multiple articles to share examples of workflow automations we have implemented for small, medium and large operations.

Articles published on Towards Data Science – (Image by Samir Saci)

The focus was mainly on logistics and supply chain operations, with real case studies.

I also have a complete playlist on my YouTube Channel, Supply Science, with more than 15 tutorials.

Playlist with 15+ tutorials with ready-to-deploy workflows shared – (Image by Samir Saci)

You can follow these tutorials to deploy the workflows I share on my n8n creator profile (linked in the descriptions) that cover:

  • Process Automation for Logistics and Supply Chain
  • AI-Powered Workflows for Content Creation
  • Productivity and Language Learning

Feel free to share your questions in the comment sections of the videos.

Other examples of MCP Server Implementation

This is not my first implementation of MCP servers.

In another experiment, I connected Claude Desktop with a Supply-Chain Network Optimisation tool.

How to Connect an MCP Server for an AI-Powered, Supply-Chain Network Optimisation Agent – (Image by Samir Saci)

In this example, the n8n workflow is replaced by a FastAPI microservice hosting a linear programming algorithm.

Supply Chain Network Optimisation – (Image by Samir Saci)

The objective is to determine the optimal set of factories to produce and deliver products to market at the lowest cost and with the smallest environmental footprint.

Comparative Analysis of multiple Scenarios – (Image by Samir Saci)

In this type of exercise, Claude does a great job of synthesising and presenting results.

For more information, have a look at this Towards Data Science Article.

About Me

Let’s connect on LinkedIn and Twitter. I am a Supply Chain Engineer who uses data analytics to improve logistics operations and reduce costs.

For consulting or advice on analytics and sustainable supply chain transformation, feel free to contact me via Logigreen Consulting.

If you are interested in Data Analytics and Supply Chain, look at my website.

Samir Saci | Data Science & Productivity


