Thursday, October 9, 2025

Comparison between n8n, OpenAI Agent Platform, Google Opal and Make.com as automation and AI agent platforms

Comparison of tools


Introduction

Organizations today depend on automation systems and AI agent platforms to optimize their operations at scale. These platforms differ in their design approach, ranging from visual workflow construction to the development of conversational AI applications. This page assesses n8n, OpenAI Agent Platform, Google Opal and Make.com in terms of pricing models, core functionality, hosting flexibility and LLM options.

 

Pricing💰

The n8n platform provides unlimited workflows, steps and users on all plans, charging customers based on the number of workflow executions. Plans start at $20 per month with 2,500 executions, rising to $50 per month with higher execution limits. The Community Edition of n8n is a free version for users who want to host it themselves. A 50% discounted plan is available for startups with fewer than 20 employees and less than $5M in revenue.

 

The OpenAI Agent Platform uses API-based token pricing instead of subscriptions. Standard pricing applies to GPT-5 models, which cost between $2.50 and $10 per million input tokens depending on the model version.
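Token-based pricing can be estimated with simple arithmetic; the rates below reuse the figures quoted above purely as an illustration, and the output rate is a hypothetical placeholder:

```python
def token_cost(input_tokens: int, output_tokens: int,
               input_rate: float, output_rate: float) -> float:
    """Estimate API cost in USD; rates are quoted per million tokens."""
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# Example: 500K input tokens at $2.50/M plus 100K output tokens at a
# hypothetical $10/M output rate (output rates are typically higher).
print(token_cost(500_000, 100_000, 2.50, 10.00))  # 2.25
```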

 

Google Opal is currently free during its experimental, US-only public beta phase as a Google Labs project. Its future pricing is difficult to predict because no commercial rates have been disclosed.

 

Make.com uses an operations-based pricing system, which includes a free plan allowing 1,000 operations per month. Paid plans range from hobby use to enterprise solutions, with pricing based on operations rather than workflows.

The n8n UI.


Hosting Options 🏠

n8n offers complete hosting flexibility, supporting both self-hosted deployments and cloud-based options. The Community Edition lets users deploy n8n on their own infrastructure via Docker, a VPS or Kubernetes, for full control over their operations. All n8n plans support both cloud hosting and self-hosting, the latter for users who want to keep their data more private.

 

The OpenAI Agent Platform operates exclusively in the cloud, through API requests that send data to OpenAI's processing infrastructure. Organizations must transmit their data to OpenAI's servers, since no self-hosting option is available.

 

Google Opal runs entirely on Google Cloud infrastructure. Users cannot self-host this web-based Google Labs experiment: it operates only through opal.withgoogle.com and requires Google account authentication.

 

Make.com is a cloud-based SaaS platform with no self-hosting capabilities. Users access Make.com through its web interface, while the service runs all workflows on Make's infrastructure.

The OpenAI Agent Platform UI.


Core Features ⚙

n8n stands out with its 400+ pre-built integrations and a visual node-based editor that supports complex logic, conditional branching and custom JavaScript/Python code execution. The platform features built-in database functionality and headless browser automation, making many external services optional.

 

The OpenAI Agent Platform enables developers to access OpenAI models through its API endpoints. Developers can create custom agents through function calling, the Assistants API and retrieval-augmented generation (RAG). The platform requires developers to write code rather than use visual interfaces.
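For instance, function calling is driven by JSON tool definitions that developers pass along with their API requests. A minimal sketch of such a definition follows; the `get_order_status` function and its parameters are hypothetical examples for illustration, not part of any real API:

```python
import json

# A tool (function-calling) definition in the JSON shape used by the
# Chat Completions API. "get_order_status" is a hypothetical function.
tool = {
    "type": "function",
    "function": {
        "name": "get_order_status",
        "description": "Look up the status of a customer order by its ID.",
        "parameters": {
            "type": "object",
            "properties": {
                "order_id": {
                    "type": "string",
                    "description": "The order identifier.",
                }
            },
            "required": ["order_id"],
        },
    },
}

# A list of such definitions would be passed as the `tools` argument
# of a chat completion request.
print(json.dumps(tool, indent=2))
```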

 

In Google Opal, users describe workflow requirements in natural language, and the platform creates visual workflow designs automatically. Users can work in both conversational and visual modes, with support for file uploads, Google Drive connections and YouTube URL processing. Pre-built templates help users complete common tasks, and finished applications can be distributed through Google account sharing.

 

Make.com provides 1,500+ integrated applications through its visual scenario-building interface. The platform is simple to use for people without technical skills, yet offers advanced features for data transformation, error management and scheduling. Recent updates have added AI functionality.

The Google Opal UI.


Flexibility (LLM Choices) 💪

n8n stands out for LLM flexibility, giving users access to a wide range of integration options: OpenAI, Anthropic Claude, Google PaLM, Hugging Face models and custom API endpoints. Organizations can select their preferred LLM for each use case, since the platform supports multiple models and allows seamless provider transitions without rebuilding workflows. The OpenAI Agent Platform, by contrast, operates exclusively with OpenAI models from the GPT family.

 

Google Opal operates exclusively with Google's AI infrastructure and models. The platform leverages Google's multimodal features, but users cannot add third-party LLM providers or select different models.

 

Make.com's AI modules now support multiple providers, but the level of LLM flexibility depends on the specific integration. The platform initially focused on app-to-app automation, with AI features added after its initial launch.

The Make.com UI.


Conclusions ❗

These platforms serve different functional requirements. 

n8n provides technical teams with flexibility, self-hosting capabilities (with increased privacy and security) and LLM provider independence, which makes it suitable for organizations needing complex automation and compliance solutions.

The OpenAI Agent Platform provides developers with access to advanced models through programmatic interfaces while requiring them to work within OpenAI's system framework. 

The experimental phase of Google Opal provides non-technical users with easy AI application development for prototyping, yet its production readiness remains uncertain. 

Make.com provides businesses with user-friendly automation features and broad integration options, while delivering only moderate AI functionality.

The critical differentiator is control versus convenience. 

Self-hosting and LLM choice make n8n most adaptable, while managed platforms (Make.com, Opal) lower technical barriers. 

Organizations should weigh infrastructure preferences, required integrations, LLM strategy, and team technical depth when selecting their automation platform.


Book a call to learn more about how to choose between these platforms.


Watch videos of these platforms below.


n8n

OpenAI Agent Platform


Google Opal


Make.com





massimobensi.com


Frequently Asked Questions (FAQ)

Q: What platforms are compared in this article?

A: The post compares four major platforms: n8n, OpenAI Agent Platform, Google Opal and Make.com. The comparison covers pricing models, core functionality, hosting/flexibility and support for large language models (LLMs).

Q: Why compare n8n vs OpenAI Agent Platform vs Google Opal vs Make.com?

A: Because businesses and automation practitioners need to choose the right automation/AI-agent platform based on their specific needs: whether visual workflow building, multi-LLM support, self-hosting, or enterprise scale. The post helps highlight where each tool excels or lags.

Q: What are key differentiators between these platforms?

A: Key differentiators include:

  • Hosting/flexibility: whether you can self-host or require platform-managed environment.
  • LLM options: support for models beyond one provider (e.g., OpenAI, Google, custom).
  • Workflow automation vs true agentic behaviour (tool selection, memory, decision making).
  • Visual builder and integration ecosystem: how easy it is for non-developers to build.
  • Pricing and execution limits (e.g., workflow runs, agents, steps) as noted in the article. 

Q: When is n8n the best choice?

A: n8n is ideal if you want: self-hosting or open-source flexibility, the ability to integrate many systems/tools, multi-LLM support, and you don’t mind a bit more technical setup or logic wiring. The article highlights n8n’s strength in integration and agentic work.

Q: When might OpenAI Agent Platform or Google Opal be better choices?

A: If you prefer a more managed, simplified interface, or you are already embedded in the provider’s ecosystem (OpenAI or Google) and value rapid deployment rather than maximum flexibility, these options may be more suitable. The article touches on how the platforms differ in ease of use and hosting model.

Q: What should I check before choosing an AI-agent or workflow platform?

A: You should consider:

  • Do you need self-hosting or managed hosting?
  • What LLMs and tools do you need to integrate?
  • How many executions / workflows / agents will you run (pricing/execution limits)?
  • How technical your team is (visual no-code vs low-code vs code).
  • What integrations and automation logic you need (e.g., conditional logic, memory, tool chaining).

The article lays out these criteria for evaluation.

Q: Are there trade-offs between “agentic AI” platforms and standard workflow automation tools?

A: Yes. Agentic AI platforms (those labelled “agent” rather than “just workflow”) tend to focus on tool selection, memory/context management, reasoning and decision-making, whereas traditional workflow automation tools focus on triggers/actions/integrations. The article points out that if you need only “agent” behaviour (autonomous decision making) you may pick a different platform than if you just need to “automate tasks”. However, the flexibility of automation platforms allows you to connect any agent to specific workflow tasks.

Q: How does the article address pricing among these platforms?

A: It provides an overview of the pricing model for n8n (e.g., unlimited workflows/steps in plans, self-hosted community edition free) and discusses how pricing may differ across the platforms.

Q: Will these platforms lock me into a single AI model or provider?

A: It depends. Some platforms are tightly coupled to a provider’s model (e.g., OpenAI) whereas others (like n8n) emphasise model-agnostic flexibility and integrations with many LLMs. The article mentions the importance of avoiding vendor lock-in if flexibility is critical.

Q: Can non-technical users adopt these platforms easily?

A: The ease of adoption varies: platforms offering no-code visual builders are easier but may sacrifice flexibility; platforms like n8n offer more power but require deeper logic/flow setup. The article assesses these tradeoffs.


Monday, October 6, 2025

Docker Magic Explained: Simplifying App Deployment for Business

Containers Delivery

What is Docker?

Docker is software that, since 2013, has enabled software developers to create self-contained application packages which include all dependencies, through its container technology. 

I blogged back in 2017 about Kubernetes (container orchestrator, check it out), and often benefitted from this technology.

Docker allows your applications to run identically on all platforms (Windows, Linux, Mac) thanks to its lightweight portable containers, which include all necessary components to run the app. 

This means a very broad choice of hosting providers and methods are available, making it extremely easy to deploy Docker containers on production servers (on-premise or cloud based).

Docker Diagram

A Simple Concept

Imagine that your business has products that need to be shipped worldwide. 

Without standardized packages and containers, each single shipment would require custom packaging, creating delays and errors. 

With standard shipping containers, everything fits neatly and securely, no matter the destination: operations are smoother. 

Docker works the same way for applications—universal containers that guarantee smooth, predictable delivery (deployment) to and running on nearly any environment. 


No, Really, It’s That Easy!

The main advantage of Docker is its simplicity. In just a few steps, a business can go from an idea to a live running app in the cloud—without complex CI/CD pipelines. 

I will demonstrate it with this basic example.


TL;DR example

I started with a basic Node.js Express Hello World application as the fundamental web server example for this process.

NodeJs Express App

I built the Node app as usual, making sure that all its dependencies were pulled in, and the app was able to run locally.

The Docker setup process is then straightforward: simply run this shell command:

docker init

This command will create the necessary Docker files around your app source code.

Docker Files

The compose.yaml file is where you can configure your container settings, such as the web server port and environment (the latter also specified in the Dockerfile).
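For reference, a compose.yaml generated by `docker init` for a Node app looks roughly like this (the service name, port and environment values are illustrative, not taken from the actual project):

```yaml
services:
  server:
    build:
      context: .
    environment:
      NODE_ENV: production
    ports:
      - "3000:3000"
```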

Docker Compose file content


Now, by running this command:

docker compose up --build

Your Docker container will be up and running, and your app immediately accessible from your browser.


You can check the container and image settings from within the Docker Desktop software.



One argument that I have often heard about Docker usage from developers approaching it for the first time is that it adds a layer of complexity to your application, by "hiding" your files in the container.

Actually I must say this is not really the case, as we can easily run our app from within the container, and even see changes in real-time.

This is achieved with a simple configuration addition in the Docker Compose file.

Docker Watch
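A sketch of what that addition can look like, using Compose's `develop.watch` section (the service name and paths are illustrative):

```yaml
services:
  server:
    build: .
    ports:
      - "3000:3000"
    develop:
      watch:
        # Sync source changes straight into the running container
        - action: sync
          path: ./src
          target: /usr/src/app/src
        # Rebuild the image when dependencies change
        - action: rebuild
          path: package.json
```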


I also installed the Node.js NPM package nodemon as a dev dependency, and then ran the command:

docker compose watch

Now I can immediately see my code changes in the browser, despite having the app running inside a Docker container.

This feature is only available in Development and not in Production for security reasons. 

However, it is possible to mount entire local directories into the container as container directories, allowing full troubleshooting on live environments as well.

 

Why Does This Matter?

Docker brings immediate business value to your organization:

  • The deployment of new features and applications becomes faster and safer.
  • Docker provides uniform operation across all deployment environments, by eliminating the difference between local and server environments.
  • The container technology allows deployment on-premises and in the cloud as well as hybrid environments, providing flexible scalability.
  • The system decreases operational expenses through reduced bugs, shorter system downtime and less engineering time spent on troubleshooting deployments.
  • Docker enables businesses to deliver software more efficiently which results in accelerated innovation and decreased operational risks and affordable IT expenses.
  • Wrap your complex AI systems together in Docker images and containers to speed up the go-live of innovative AI business services.

 

Company Repository: Secure & Professional

Developers can store their container definitions in GitHub repositories for version control and publish images on Docker Hub (to promote your public open-source apps) or on a private company Docker registry (for security and compliance).

This provides:

  • Centralized control over business applications.
  • Versioning and rollback capabilities for safer updates.
  • Security and compliance by restricting who can access or deploy company containers.

By adopting Docker, businesses gain a professional, enterprise-ready way of shipping software, with full transparency and reliability.



Final Thoughts

Businesses benefit from Docker beyond its value to developers. 

The implementation of containerization technology enables businesses to achieve faster operations while minimizing risks and expanding their operations across the world.

The adoption of Docker container technology for business operations represents a strategic move toward digital economy readiness rather than an IT trend. 

Docker magic operates with both simplicity and remarkable power.

And even in these days of AI Vibe Coding, when many are wildly letting AI systems build and deploy their apps “transparently” (sometimes blindly), having instead full control of your deployed applications and systems, still represents a business advantage, especially when the Vibe Coding fails, and AI deletes the production database, and then lies about it... ;) 


Talking about AI, Docker has introduced a Model Runner tool to manage LLMs from the Docker Hub.

Docker Hub AI Models


Have questions about Docker and how it can benefit your business?

Book a call now!


Learn Docker in 1 hour in this video:






Frequently Asked Questions (FAQ)

Q: What is “Docker magic” as discussed in the article?

A: In the blog post, “Docker magic” refers to how the container-based platform Docker streamlines application deployment by packaging code, dependencies, and environment into a standardized container. This removes many deployment headaches like “it works on my machine” and makes apps portable across development, test and production environments.

Q: Why is Docker important for business app deployment?

A: Docker is important because it helps ensure consistency, reliability and speed when deploying applications. It reduces environment-specific issues, improves scalability and simplifies operations—so businesses can deploy faster, maintain stability and lower risk of drift between development and production. The article emphasises these business benefits.

Q: What are the key concepts of Docker that the post covers?

A: The post covers major Docker concepts such as:
  • Container vs image: packaging of the application + dependencies.
  • Dockerfile: how to define the build instructions.
  • Docker Compose or multi-container orchestration: how to define services.
  • Portability and isolation: how containers run the same everywhere.

Q: How does Docker simplify the deployment process?

A: Docker simplifies deployment by encapsulating everything an application needs into a container image, removing dependency mis-alignment, providing consistent runtime across machines, allowing easy versioning and rollback, and enabling faster starts of services. The blog post explains this through business-friendly language and illustrations.

Q: When should a business consider adopting Docker containers?

A: A business should consider Docker when:
  • They are facing environment-inconsistency issues (“works on my machine”).
  • They need to deploy the same application across development, staging, production.
  • They want faster, repeatable deployments and easier rollbacks.
  • They have multiple services or microservices that benefit from containerization.
The article advises assessing whether Docker solves a real problem rather than adopting it as hype.

Q: Are there trade-offs or challenges with using Docker?

A: Yes. While Docker provides many advantages, the blog post also highlights that there are trade-offs: teams may need new skills (containers, networking, volumes), orchestration (if multiple containers) adds complexity, infrastructure costs might shift, and containerization isn’t always the right tool for every scenario. It emphasises making informed choices.

Q: How do I get started with Docker as per the blog post?

A: The post suggests starting with a simple application, writing a Dockerfile, building an image, running a container locally, and then moving to multi-container setups or Docker Compose. It emphasises consistency, testing and gradually increasing complexity rather than jumping into heavy orchestration immediately.

Q: Will using Docker guarantee faster time-to-market for apps?

A: While Docker alone doesn’t guarantee faster time-to-market, by reducing environment friction, standardising deployment and enabling repeatable builds, it can significantly speed up deployment cycles and reduce incidents. The blog post places Docker as a facilitator rather than a silver bullet.

Q: What business value does Docker provide beyond technical benefits?

A: From a business perspective, Docker helps by lowering deployment risk, improving application uptime, enabling easier scaling, reducing “dev-ops” friction, making releases more predictable, and supporting modern architectures (microservices, cloud). The article connects these technical advantages to business outcomes.

Q: How does this blog post help non-technical readers or decision-makers?

A: The post is written in accessible language aimed at business leaders, product managers or technical leads who need to understand why Docker matters—not just how. It frames containerization in terms of cost, risk, speed and business agility rather than just developer tools.


Wednesday, October 1, 2025

Web Product Catalog in 48 Hours with AirTable Extension!


I recently used AirTable while creating a new n8n workflow, and I was impressed by the intuitive UI and powerful features, including the AI Assistant chat.

So I decided to test it a bit further with a fun project, using it as the main database of a Web Product Catalog.


Start: Product Information

The first step in creating any web catalog is gathering the product information.

As often is the case, the product information coming from businesses is, let's say "less than ideally structured".

Many companies still do not use PIM, DAM or standard tools to structure and manage their products' life-cycle.

So for the sake of this exercise, I gathered hundreds of internet images of dirt bikes, and used them as "products".


Define and Implement the Workflow

Every product catalog is unique, and so are all the steps required to bring it live to its best.

The first challenge in this scenario, was that each image was containing two different products (in this case bikes).

This was easily tackled with a small python script, that takes the image height and cuts it in half, and then saves two separate files.


AI to the Rescue

I then used the Google Cloud Vision API in another Python script to do some bulk OCR on those images, extract all text representing the product information (brand, model, engine displacement, etc.), and format it as a structured JSON file.
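The formatting step can be sketched as follows; the field layout and the parsing rule are hypothetical, since the real OCR text depends entirely on the source images:

```python
import json
import re


def ocr_text_to_record(raw: str) -> dict:
    """Turn raw OCR text like 'KTM 450 SX-F' into a structured record.
    The 'brand displacement model' layout assumed here is a hypothetical
    illustration; real OCR output needs rules tuned to the actual images."""
    match = re.match(
        r"(?P<brand>[A-Za-z]+)\s+(?P<displacement>\d+)\s+(?P<model>.+)",
        raw.strip(),
    )
    if not match:
        # Keep unparseable lines for manual cleanup later
        return {"raw": raw.strip()}
    rec = match.groupdict()
    rec["displacement"] = int(rec["displacement"])
    return rec


print(json.dumps(ocr_text_to_record("KTM 450 SX-F")))
```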

This file was easily imported into MongoDB, my favorite database for most applications, where I was able to do the first round of data cleaning (removing duplicate data, adding missing information, etc.).

I then exported the entire MongoDB collection to a CSV file, which was then imported into AirTable, creating the DirtBikes table.

Here I did a second round of data cleanup, thanks to the intuitive UI of AirTable.

I also used the AirTable AI Assistant a bit, to fill in data automatically in related columns, and even to create a new table automatically: Brands, containing the unique motorcycle brands from the hundreds of images.


Image Optimization

As in most new web product catalogs, the product images used by businesses are not "web-friendly": since they are used on all sorts of marketing and branding material, they are generally very high quality, with fancy backgrounds, or in some cases (as in this one) they contain text overlays covering parts of the product.

So before they can be used in a web catalog (or an Ecommerce website), they need further editing steps, such as removing all text and the background, as well as optimizing the image file size for the web (all done with some free online tools).


UI: Web Catalog

The final step was to provide a nice-looking UI, as the default Gallery interface offered out-of-the-box by AirTable is a bit limited when it comes to styling.

So I ventured into AirTable extensions (React apps embedded in the AirTable UI), and quite quickly I was able to create a fancy web catalog that could easily be the start of a new Ecommerce shop, or a PIM/DAM system! 



Conclusions

AirTable is a very powerful tool to quickly import/export and analyze data, making it a sort of Database-as-a-Service.

I was able to implement the AirTable extension over the weekend using only the free tier; despite not being able to publish it live, I could run it on my machine while it displayed embedded in the AirTable UI, making it very easy to implement and troubleshoot.


Find out more about AirTable Extensions, or how to have a web product catalog/Ecommerce in no time! 

Book a call now!


Learn why AirTable beats Excel in this video:




Frequently Asked Questions (FAQ)

Q: What is the “48-Hours” product catalog solution described in the article?

A: The blog post outlines how you can set up a web-based product catalog in just 48 hours — by leveraging pre-built templates, streamlined import processes, automation of product data and images, and an efficient deployment workflow.

Q: Who should consider using a rapid “web product catalog in 48 hours” approach?

A: Businesses preparing for an upcoming launch, seasonal campaign, or trade show, and who need a functional online product catalog quickly — such as small to medium ecommerce brands, distributors, or B2B suppliers — will benefit most.

Q: What are the key steps to launch a web product catalog that fast?

A: Important steps typically include:
  • Gathering product data and images in advance.
  • Using a catalog platform or template that supports quick deployment.
  • Configuring navigation, search/filter, and category structure.
  • Testing and publishing within the 48-hour window.

Q: What technical or organizational prerequisites are needed?

A: You’ll need:
  • Clean and consistent product data (names, SKUs, descriptions, images).
  • Access to a web host or catalog platform.
  • A clear category/structure plan and user flow.
  • Someone responsible for rapid iterations and deployment (either internal or via an agency).

Q: What are the benefits of having an online product catalog quickly?

A: Some benefits include:
  • Faster time-to-market for product launches.
  • Improved accessibility of your product line online.
  • Better marketing opportunities (shareable link, embed on website).
  • Reduced dependency on PDFs/static-lists; interactive catalog elements.

Q: Are there any risks or trade-offs with a rapid catalog build?

A: Yes — a “quick launch” means potentially less refinement: fewer custom integrations, limited advanced features (e.g., complex filtering, search optimization), minimal polish of UI/UX, and possibly less thorough QA or data enrichment. But it gets you live fast and you can iterate afterwards.

Q: How can I ensure the catalog remains maintainable after the rapid launch?

A: You should:
  • Use a system or platform that allows ongoing updates of product data/images.
  • Define roles/process for catalog maintenance.
  • Plan for future improvements (e.g., deeper search filters, analytics).
  • Ensure the architecture is scalable (so you don’t need to rebuild later).

Q: What features should I consider adding once the immediate 48-hour launch is complete?

A: After the initial rollout, you may want to add:
  • Advanced search, filters and facets (by price, attributes, availability)
  • Analytics on product engagement and performance
  • Integration with inventory/ERP systems for up-to-date availability
  • Multi-channel syndication (to marketplaces, PDF catalogs, mobile)
  • Rich media (videos, interactive 360° views)

Q: How can a web product catalog support my marketing or sales efforts?

A: A web catalog becomes a central hub for product discovery — you can link from emails, social posts, embed in website or share internally. It helps sales teams reference current product data, and supports marketing by showcasing full range, high-quality images and structured descriptions.

