How Theories Group Uses Portkey + n8n to run 800+ Websites

Powering a fully automated AI content pipeline across 70+ markets, Theories Group transformed how they deliver localized content at scale, without compromising on control, visibility, or speed.

About

Theories Group is a Sweden-based digital media company that creates and runs digital channels for brands in 100+ languages.

Industry

Digital media

Company Size

11-50 employees

Headquarters

Sweden

Founded

Early 2010s

Why Portkey

Prompt library, full observability, easy integrations

800+ Websites · 100+ Prompts · Billions of AI Requests
Managing complex AI workflows

Theories Group is a Sweden-based digital media company operating two major brands, Clicker and Deployr. Together, they run over 800 niche content websites in 70+ languages.

As demand for localized, intent-based content grew, so did the complexity of their operations. The team needed a way to automatically generate, review, and publish tailored content on the websites.

That’s when they turned to automation, using LLMs to run the entire content pipeline.

See what Portkey can do for your AI stack

A system that manages websites, end-to-end

To efficiently handle this vast operation, Theories Group has developed an autonomous ecosystem that automates content creation, localization, and publication. 

Their team built a fully automated AI content pipeline using n8n to orchestrate the entire workflow and Portkey to manage all LLM interactions. Every website element is generated and processed through this system.
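One common way to wire n8n into Portkey (a sketch, not Theories Group’s exact setup) is an HTTP Request node pointed at Portkey’s OpenAI-compatible gateway. The endpoint and `x-portkey-*` header names below follow Portkey’s public API; the virtual key name and prompt content are placeholders:

```json
{
  "method": "POST",
  "url": "https://api.portkey.ai/v1/chat/completions",
  "headers": {
    "x-portkey-api-key": "{{ $env.PORTKEY_API_KEY }}",
    "x-portkey-virtual-key": "openai-prod"
  },
  "body": {
    "model": "gpt-4o",
    "messages": [
      { "role": "user", "content": "Generate blog topic ideas for the Swedish market" }
    ]
  }
}
```

Because Portkey sits behind a single gateway URL, the same node configuration works regardless of which provider ultimately serves the request.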

Here’s an overview of the content workflow:

  1. Research and blog idea generation
    n8n kicks off the flow, often triggered by new trends, market signals, or performance data. An LLM prompt (via Portkey) analyzes the data and generates blog topic ideas tailored to specific markets and intent types.

  2. Cover image creation
    Using Replicate, the system creates a relevant visual asset to go with the article.

  3. Outline generation
    A second Portkey-managed LLM call takes the selected topic and generates a structured blog outline. This ensures consistency and SEO-aligned formatting.

  4. LLM-based review
    The outline is then passed through a review stage, another LLM prompt that checks for missing sections, inconsistent tone, or localization issues.

  5. Full article writing
    Once reviewed, another LLM generates the complete blog post. Portkey routes this call through the preferred model (GPT-4, Claude, etc.), maintaining all prompt history and observability data.

  6. Content pushed to CMS
    Finally, n8n sends the article, cover image, and metadata directly into Deployr, their custom-built CMS, where it's scheduled for publishing.
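The six steps above form a linear pipeline, which can be pictured in plain Python. The sketch below uses stubbed functions standing in for the real Portkey-routed LLM calls and the Replicate image call; every function name and field here is illustrative, not Theories Group’s actual code:

```python
from dataclasses import dataclass, field

@dataclass
class Article:
    """Accumulates the artifacts produced at each pipeline stage."""
    market: str
    topic: str = ""
    cover_image: str = ""
    outline: list = field(default_factory=list)
    review_notes: list = field(default_factory=list)
    body: str = ""

# Each stub stands in for an LLM call (via Portkey) or an image call (via Replicate).
def generate_topic(article, signals):
    article.topic = f"Top {signals['trend']} tips for {article.market}"

def create_cover_image(article):
    article.cover_image = f"https://cdn.example.com/{article.topic.replace(' ', '-')}.png"

def generate_outline(article):
    article.outline = ["Introduction", f"Why {article.topic}?", "Key takeaways"]

def review_outline(article):
    # LLM-based review: flag structural problems before writing the full post.
    if "Introduction" not in article.outline:
        article.review_notes.append("missing intro section")

def write_article(article):
    article.body = "\n\n".join(f"## {heading}" for heading in article.outline)

def publish_to_cms(article):
    # In the real flow, n8n pushes the article into Deployr for scheduling.
    return {"status": "scheduled", "market": article.market, "title": article.topic}

def run_pipeline(market, signals):
    article = Article(market=market)
    generate_topic(article, signals)
    for step in (create_cover_image, generate_outline, review_outline, write_article):
        step(article)
    return publish_to_cms(article), article
```

In the production system each of these functions corresponds to an n8n node, so failures, retries, and branching are handled by the orchestrator rather than in code.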

Each of these steps is a node in their n8n flow, and every LLM call goes through Portkey. They are currently using Portkey’s prompt library to house 100+ prompts that are used in the whole workflow. 
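The value of a central prompt library is that prompts live as versioned templates rendered with per-call variables, so adjusting tone or format for a market means editing one template rather than touching the workflow. A minimal stand-in for that idea (all names and the toy store are hypothetical, not Portkey’s API):

```python
# A toy versioned prompt store mimicking the idea of a central prompt library.
PROMPT_LIBRARY = {
    "blog-outline": {
        "version": 3,
        "template": (
            "Write a blog outline about {topic} for readers in {market}. "
            "Tone: {tone}."
        ),
    },
}

def render_prompt(prompt_id, **variables):
    """Fetch a template by id and fill in per-call variables."""
    entry = PROMPT_LIBRARY[prompt_id]
    return entry["template"].format(**variables)

prompt = render_prompt(
    "blog-outline", topic="home batteries", market="Sweden", tone="practical"
)
```

Bumping `version` and editing `template` changes every downstream call at once, which is the property that lets a small team maintain 100+ prompts across 70+ markets.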

“We use it every single day. We couldn’t be without Portkey.”

— Stefan, Co-founder, Theories Group

Managing prompt libraries, model performance, and observability at scale

Running AI workflows at this scale demands stability, governance, and room to experiment.

That’s where Portkey plays a central role.

  • Prompt management: 100+ prompts are versioned in Portkey’s library. Changing tone or format for a region is as simple as updating one prompt, no need to alter n8n logic.

  • Full observability: Every call is logged with latency, token usage, model metadata, and output quality. The team can track underperforming calls and compare across providers.

  • Provider flexibility: With Portkey’s forward-compatible architecture, switching or trying out a new model takes minutes, no flow changes needed.
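The observability and provider-flexibility points above boil down to wrapping every model call in a layer that records metrics and resolves the target model from config. A rough sketch of such a layer, with stubbed provider functions and invented log field names standing in for what a gateway like Portkey records:

```python
import time

CALL_LOG = []  # every call's metadata lands here for later inspection

# Stub providers; in production these would be real API calls behind the gateway.
PROVIDERS = {
    "gpt-4": lambda prompt: f"[gpt-4] {prompt[:20]}",
    "claude": lambda prompt: f"[claude] {prompt[:20]}",
}

ACTIVE_MODEL = "gpt-4"  # switching providers is a one-line config change

def call_llm(prompt, model=None):
    """Route a prompt to the configured model and log latency and token counts."""
    model = model or ACTIVE_MODEL
    start = time.perf_counter()
    output = PROVIDERS[model](prompt)
    CALL_LOG.append({
        "model": model,
        "latency_ms": (time.perf_counter() - start) * 1000,
        "prompt_tokens": len(prompt.split()),   # crude whitespace token estimate
        "output_tokens": len(output.split()),
    })
    return output
```

Because every call funnels through one function, comparing providers or spotting underperforming calls is a query over `CALL_LOG` rather than a change to any workflow node.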

The impact: rapid experimentation, global scale, zero friction


“The only reason why I think we move so fast is that… we just try new things every single day. That’s the only way we work. It’s neck-break speed right now… O3 Mini came out two, three days ago, and we need to be up there.”

— Stefan, Co-founder

Theories Group has created an LLM-powered content engine that is:

  • Modular with reusable prompts and node-based orchestration

  • Easily scalable to 800+ websites across 70+ languages

  • Forward-compatible with any new LLM that enters the market

  • Fully observable, so performance and cost are always in check

They went from managing content manually to running one of the most ambitious AI publishing setups — with complete control and visibility at every stage.

Lessons for AI teams scaling their operations

As AI workflows become more complex, developers need tools that help them move fast without sacrificing visibility, control, or flexibility. Theories Group is a clear example of how combining n8n for orchestration with Portkey for LLMOps lets teams ship faster, localize better, and keep pace with new models.

“Portkey is pushing the forefront of technology here.”

— Stefan, Co-founder

By pairing n8n’s workflow power with Portkey’s governance and observability, they’ve built a setup that’s:

  • Modular and repeatable

  • Scalable across markets

  • Forward-compatible with the latest models

  • Transparent and easy to monitor

If you’re using n8n to run AI workflows and want more control over your LLM usage, prompts, and model behavior, Portkey plugs right in. If you'd like to explore Portkey, book a demo with us today.

Build your AI app's control panel now

Manage models, monitor usage, and fine-tune settings—all in one place.
