In the last three weeks, the two biggest names in PIM both bet their 2026 roadmap on agents.

Pimcore shipped Platform 2026.1 and the Agent SDK at Inspire on April 14, then dropped a 28-minute technical deep dive on April 27 that finally explained the architecture in plain language: NodeJS agent server, MCP tool groups, a proxy MCP server, a proposal workflow that keeps the LLM out of the database. Akeneo shipped its Spring 2026 Release a few days earlier with Ask Ziggy, Responsive Catalog Modeling, an AI Discoverability Bridge, and BYO LLM support. Two completely different vendors, two completely different design philosophies, two completely different go-to-market stories.

And the same blind spot.

Look, I run a Pimcore Platinum partner. After 70+ PIM implementations, I have an obvious bias here. But this isn’t about which release is better. It’s about a question neither vendor is answering: where does the product data come from in the first place?

Two Releases, Two Architectures, One Identical Choice

Let me start with what each vendor actually shipped, because the press releases blur it.

Pimcore 2026.1 + Agent SDK. The platform release is genuinely a turning point. Studio is now GA, the legacy UI is gone, versioning is unified across bundles, the installer is rebuilt. On top of that, the Agent SDK runs in a separate NodeJS container with no direct database access. The agent server talks to the Pimcore backend through three REST endpoints (sessions, proposals, agent configuration) plus MCP tools. Pimcore grouped MCP tools into “MCP server groups” so the LLM context window doesn’t drown in 47 tool descriptions. They added a proxy MCP server (Pimcore Metaproxy) with three meta-tools: discover, describe, execute. The LLM can ask the proxy what tools exist and only load descriptions for the one it picks. And then there’s the proposal workflow: when the agent wants to update data, it doesn’t write directly. It creates a proposal stored on the Pimcore side. A human approves. The conversation lifecycle layer applies the change. No LLM in the loop at apply time.

That is, honestly, well-thought-out enterprise engineering. It treats agents as governed actors with identity, permissions, context, and accountability. It keeps the data layer trustworthy.

Akeneo Spring 2026. Different story, different audience. Ask Ziggy is an in-PIM AI assistant for guidance and Q&A. Responsive Catalog Modeling watches marketplace rejections and search trends and suggests changes to the underlying product information model. The AI Discoverability Bridge connects external AI search signals back into internal product attributes. BYO LLM lets enterprises plug their own approved models in. There’s “vibe coding” support for natural-language custom logic. And the framing in the Akeneo Spring 2026 release coverage is explicit: a continuous feedback loop, where product data adapts to how products perform in the market.

That is also a real architectural shift. The PIM stops being a static record. It becomes a learning system fed by downstream signals from search, AI discovery, and marketplace compliance.

Two completely different bets. Pimcore is building agents as first-class platform participants. Akeneo is wiring its product record into the post-launch market signal loop. Different problems, different solutions.

But circle the boundary on each architecture. Where does the data enter the system on day one? In both cases, the architecture assumes it's already inside.

What Pimcore Actually Shipped (And Where the Boundary Sits)

I want to be specific here because the Pimcore Agent SDK announcement and the 2026.1 platform post are full of phrases like “Agentic PXM” and “Data Spine” that sound enormous and end up being abstract.

The technical deep dive at Inspire spelled out the architecture. From the Pimcore Agent technical session:

  • The agent runs in its own container, separate from the Pimcore backend.
  • The agent has no direct database access. Every read or write goes through MCP tools.
  • MCP tools are grouped into smaller MCP servers per agent purpose. You configure which tool groups each agent can see.
  • A proxy MCP server (Metaproxy) loads tool descriptions only on demand, so the context window stays manageable.
  • Updates flow through the proposal workflow. The LLM creates a proposal; the conversation lifecycle layer applies it after human approval. The LLM is not in the apply path.
  • The agent loop is built on the GitHub Agent SDK with OpenAI-compatible endpoints. No vendor lock at the model level.
  • Authentication is enforced at the MCP server boundary so agents only touch what their permissions allow.
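The discover/describe/execute pattern in that list is simple enough to sketch. What follows is an illustration of the idea only, not the real Metaproxy interface; the registry, tool names, and signatures are all invented:

```python
# Toy sketch of the proxy meta-tool pattern: the LLM's context holds only
# three meta-tools; full tool descriptions are fetched one at a time,
# on demand, instead of all sitting in the prompt.

TOOL_REGISTRY = {
    "product.search": {"description": "Search products by text query.",
                       "handler": lambda q: [f"hit for {q!r}"]},
    "product.update": {"description": "Stage an attribute change as a proposal.",
                       "handler": lambda sku: f"proposal created for {sku}"},
    # ...dozens more tools that would otherwise drown the context window
}


def discover() -> list[str]:
    """Return tool names only -- a few tokens instead of 47 descriptions."""
    return sorted(TOOL_REGISTRY)


def describe(name: str) -> str:
    """Load the full description for exactly one tool, on demand."""
    return TOOL_REGISTRY[name]["description"]


def execute(name: str, *args):
    """Dispatch to the chosen tool after the LLM has picked it."""
    return TOOL_REGISTRY[name]["handler"](*args)


# Typical agent turn: discover -> describe the one relevant tool -> execute.
print(discover())
print(describe("product.search"))
print(execute("product.search", "red chair"))
```

The point of the indirection is token economics: discovery costs a handful of tokens per tool name, and only the selected tool's full schema ever enters the context.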

That architecture is a serious answer to the question “how do I run AI agents inside my PIM without breaking governance?” It addresses the things enterprise architects actually lose sleep over: who can read what, who can change what, what the audit trail looks like, how the model can be swapped.
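The proposal mechanic is the part most worth making concrete. Here is a minimal sketch of the pattern under loose assumptions: hypothetical names, an in-memory dict standing in for Pimcore's data layer, not the actual SDK API.

```python
from dataclasses import dataclass
from enum import Enum


class ProposalStatus(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    APPLIED = "applied"


@dataclass
class Proposal:
    """A change the agent wants to make, stored for human review."""
    object_id: str
    changes: dict  # attribute -> proposed value
    status: ProposalStatus = ProposalStatus.PENDING


class ProposalWorkflow:
    """The LLM only creates proposals; a human approves; plain code applies.
    No model call sits in the apply path."""

    def __init__(self, database: dict):
        self.database = database
        self.proposals: list[Proposal] = []

    def propose(self, object_id: str, changes: dict) -> Proposal:
        # Called from the agent side, via an MCP tool, never direct DB access.
        p = Proposal(object_id, changes)
        self.proposals.append(p)
        return p

    def approve(self, proposal: Proposal) -> None:
        # Human reviewer action.
        proposal.status = ProposalStatus.APPROVED

    def apply(self, proposal: Proposal) -> None:
        # Conversation-lifecycle layer: deterministic code, no LLM in the loop.
        if proposal.status is not ProposalStatus.APPROVED:
            raise PermissionError("only approved proposals can be applied")
        self.database[proposal.object_id].update(proposal.changes)
        proposal.status = ProposalStatus.APPLIED


db = {"SKU-1001": {"weight_kg": None}}
wf = ProposalWorkflow(db)
p = wf.propose("SKU-1001", {"weight_kg": 2.4})
wf.approve(p)
wf.apply(p)
print(db["SKU-1001"]["weight_kg"])  # 2.4
```

The design choice worth noting: because apply is pure code gated on an approved status, the audit trail and permission check live outside anything the model can influence.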

Now read that list one more time and ask yourself: which of those tools, servers, or proposals deals with a PDF supplier catalog landing in a shared mailbox at 3 a.m. on a Tuesday?

None of them. The architecture starts the moment the data is in Pimcore. Everything before that, the part where a category manager opens a vendor’s spreadsheet and starts copy-pasting attributes into a template, is outside the boundary. That’s the entire point of the boundary. It’s a clean enterprise design choice, and it leaves the most expensive part of product data work outside the platform’s responsibility.

What Akeneo Actually Shipped (And the Same Boundary)

Akeneo’s release is positioned more for the business user. Ask Ziggy lives in the PIM and helps with guidance. Responsive Catalog Modeling watches market signals and recommends changes. AI Discoverability Bridge mirrors that pattern from the discovery side. BYO LLM lets you bring your own model. Vibe coding lets a marketing ops person describe logic in natural language and have it generated.

Look at what those features all share. They assume the catalog already exists. Ask Ziggy answers questions about products that are in the system. Responsive Catalog Modeling tunes attributes for products that are in the system. AI Discoverability Bridge optimizes how products that are in the system show up in external AI search. BYO LLM enriches data that is in the system.

The Akeneo bet, in one sentence, is: make existing catalog data more performant in the market.

That’s a perfectly good bet. There are real ROI numbers behind it. If marketplace rejections drop and search rankings improve because attributes adapt to actual buyer behaviour, that’s money on the table.

And again, the boundary. Where does that catalog come from? Akeneo Supplier Data Manager (SDM) is meant to handle supplier intake on the Akeneo side, but it’s a separate product, has its own pricing and governance story, and even when you have it, the heavy lifting of mapping a vendor’s freeform Excel into Akeneo’s structured attributes still falls on a human or a custom integration. Spring 2026 didn’t fix that. It made the post-intake life of a product better.

The Intake Layer Neither Vendor Will Own

I want to put numbers behind this because it isn’t an aesthetic complaint. It’s a budget complaint.

In our 70+ implementations, the cost pattern is consistent enough to plan against. Onboarding 1,000 products through manual workflows lands between EUR 14,000 and EUR 22,000. That’s specialist time, supplier follow-ups, mapping, validation, deduplication, image preparation, and the inevitable rework when somebody discovers half the values are wrong. We see roughly 95% time savings when AI handles the parts that are pattern-matchable: format conversion, attribute mapping, image classification, gap detection.
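For a back-of-envelope model using the ranges just quoted: note that applying the 95% to the whole onboarding cost, as this sketch does, is optimistic, since the figure really covers only the pattern-matchable share of the work.

```python
# Rough cost model from the quoted ranges. All figures EUR.
COST_PER_1000_EUR = (14_000, 22_000)  # manual onboarding, low/high estimate
AI_SAVINGS_RATE = 0.95                # on pattern-matchable work (optimistic here)


def manual_cost(products: int, per_1000: float) -> float:
    """Linear extrapolation of the per-1,000-products cost."""
    return products / 1000 * per_1000


def residual_cost(products: int, per_1000: float) -> float:
    """What is left after AI absorbs the pattern-matchable share."""
    return manual_cost(products, per_1000) * (1 - AI_SAVINGS_RATE)


low, high = COST_PER_1000_EUR
print(manual_cost(10_000, low), manual_cost(10_000, high))      # 140000.0 220000.0
print(residual_cost(10_000, low), residual_cost(10_000, high))  # ~7000 ~11000
```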

That 95% does not happen inside Pimcore Studio. It does not happen inside Akeneo’s Ask Ziggy. It happens upstream: at the moment a supplier file lands and someone has to decide which column maps to which attribute, which units convert to which standard, which images go with which SKU, which values are valid versus garbage.
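Concretely, that upstream work is mundane transformation code wrapped around judgment calls. A toy sketch of two of the pattern-matchable steps, column mapping and unit normalization; the supplier column names and the mapping table are invented:

```python
import re

# Hypothetical mapping a human (or a model) confirms once per supplier:
COLUMN_MAP = {"Artikelnummer": "sku", "Gewicht": "weight_kg", "Farbe": "color"}

UNIT_TO_KG = {"kg": 1.0, "g": 0.001, "lb": 0.453592, "lbs": 0.453592}


def normalize_weight(raw: str) -> float:
    """Split '12 lbs' / '3,5kg' into value + unit and convert to kilograms."""
    m = re.fullmatch(r"\s*([\d.,]+)\s*([a-zA-Z]+)\s*", raw)
    if not m:
        raise ValueError(f"unparseable weight: {raw!r}")
    value = float(m.group(1).replace(",", "."))  # tolerate decimal commas
    return round(value * UNIT_TO_KG[m.group(2).lower()], 3)


def map_row(row: dict) -> dict:
    """Rename supplier columns to PIM attributes and normalize units."""
    out = {COLUMN_MAP[k]: v for k, v in row.items() if k in COLUMN_MAP}
    if "weight_kg" in out:
        out["weight_kg"] = normalize_weight(out["weight_kg"])
    return out


print(map_row({"Artikelnummer": "A-100", "Gewicht": "12 lbs", "Farbe": "rot"}))
# {'sku': 'A-100', 'weight_kg': 5.443, 'color': 'rot'}
```

Trivial in isolation; the expensive part is doing this reliably across hundreds of suppliers who each format the file differently, which is exactly why it is pattern-matchable for AI.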

| Capability | Pimcore Agent SDK 2026.1 | Akeneo Spring 2026 | Where openProd fits |
| --- | --- | --- | --- |
| In-PIM agent / chat | Pimcore Agent + MCP tool groups | Ask Ziggy | Out of scope; openProd does not replace this |
| Data quality on existing records | Proposal workflow + MCP tools | Responsive Catalog + AI Discoverability | Complementary; openProd hands clean records to either |
| Supplier intake (PDF, XLSX, images) | Not addressed | Not addressed in Spring 2026 | Core focus |
| PIM-agnostic operation | Pimcore-bound by design | Akeneo-bound by design | Works with both, plus Ergonode and others |
| Pre-run cost estimate before mapping starts | None | None | Standard |

The real kicker is that the cleaner Pimcore and Akeneo make the inside of their platforms, the more obvious the “before” state becomes. An immaculately governed agent layer cannot save you from a CSV that has units mixed in with the values. A continuous feedback loop on market signals cannot save you from a supplier who labels every weight in pounds when your system expects kilograms. The architectures are getting more sophisticated; the front door has not changed.

So who owns the front door?

What “PIM-Agnostic Onboarding” Actually Means In Practice

This is where I have to be honest about positioning. openProd.io is the layer that gets supplier data ready before it ever reaches your PIM. Pimcore, Akeneo, Ergonode, doesn’t matter which. Format normalization, attribute mapping, image processing, validation, and a pre-run cost estimate so finance knows what they’re approving.

The term “PIM-agnostic” gets used a lot. In practice it means three things:

  1. Your AI investment doesn’t get stranded if your PIM changes. A bank we worked with replatformed twice in five years (cost reasons, then compliance reasons). The mapping logic survived both moves because it lived above the PIM, not inside it. If their AI agent had been bound to Pimcore Agent SDK or Akeneo Ask Ziggy, every replatform would have been a rebuild.
  2. Multi-PIM enterprises stop paying twice. Manufacturers with brands on different PIMs (acquisitions, regional autonomy, legacy systems) currently run separate onboarding teams per platform. PIM-agnostic intake collapses that into one workflow regardless of which PIM the cleaned record is destined for.
  3. You can switch your “in-PIM” AI later without losing your onboarding velocity. If Pimcore Agent SDK matures faster than Akeneo’s stack, or vice versa, you swap the in-PIM layer without touching how products enter your business.
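Architecturally, “PIM-agnostic” just means the intake pipeline terminates in a thin adapter per PIM. A sketch of that boundary, with hypothetical classes; real connectors would call each vendor's API:

```python
from abc import ABC, abstractmethod


class PimAdapter(ABC):
    """Thin, swappable boundary: all mapping and validation logic lives above it."""

    @abstractmethod
    def push_product(self, record: dict) -> str:
        """Write one cleaned record into the target PIM; return its ID."""


class PimcoreAdapter(PimAdapter):
    def push_product(self, record: dict) -> str:
        # Would call the Pimcore REST/GraphQL API here.
        return f"pimcore:{record['sku']}"


class AkeneoAdapter(PimAdapter):
    def push_product(self, record: dict) -> str:
        # Would call the Akeneo Product API here.
        return f"akeneo:{record['sku']}"


def onboard(records: list[dict], target: PimAdapter) -> list[str]:
    # Mapping, validation, and image handling all run before this point
    # and never need to change when `target` changes.
    return [target.push_product(r) for r in records]


clean = [{"sku": "A-100", "weight_kg": 5.443}]
print(onboard(clean, PimcoreAdapter()))  # ['pimcore:A-100']
print(onboard(clean, AkeneoAdapter()))   # ['akeneo:A-100']
```

Replatforming then means writing one new adapter, not rebuilding the mapping logic, which is the bank anecdote above in code form.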

This is not a knock on either Pimcore or Akeneo. Honestly, the Pimcore Agent SDK design is the most disciplined enterprise agent architecture I’ve seen ship from any PIM vendor. The Akeneo continuous feedback loop is a smart bet on where the value of “good product data” is actually proven (in market behaviour, not in completeness scores). Both are doing their job.

The job they are not doing, and have signalled they will not do, is building a vendor-neutral intake layer for the supplier file that arrives before the PIM ever sees it.

What This Means For Your 2026 Plan

If you’re a CTO or Head of E-commerce running Pimcore: budget for 2026.1 migration and Agent SDK pilots. Treat the agent layer as the in-platform productivity move. Separately, decide who owns the intake step. If it’s a custom Symfony pipeline plus a couple of category managers, you already know what your annual onboarding cost looks like.

If you’re on Akeneo: Spring 2026 is going to make your existing catalog perform better in the market. That’s real money. Separately, ask the same intake question. Akeneo SDM is the official answer; the practical answer in most teams is still spreadsheets and overtime.

If you’re a CFO trying to model this: the “AI in PIM” line item is now its own budget category. It’s also separate from the “supplier onboarding” line item. They look like the same problem in slide decks. They are not the same line in your P&L. Run the PIM ROI calculator on both before signing anything.

A defensible business case for 2026 has two columns: in-PIM agent productivity, and intake automation. One vendor will not give you both. Pretending one will is how you end up with EUR 14k per 1,000 products buried inside your “AI strategy” that nobody costed.

When “Best in Class” Means Two Vendors, Not One

The PIM market is consolidating around two architectural visions: governed agents inside the platform (Pimcore), and adaptive product data wired to market signals (Akeneo). Both are legitimate. Both will have customers in 2030.

Neither vendor is built to answer the question of how supplier data becomes structured product data in the first place. That layer is a different problem with a different architecture and a different total cost. It happens to be the one where 95% time savings are still on the table because nobody at the platform level is reaching for them.

If you want to own your product data narrative, you have to own the upstream first. That’s where the Excel hell lives, where the EUR 14k per 1,000 SKUs hides, and where the next twelve months of competitive advantage is sitting unclaimed by either vendor.

Compare openProd to Pimcore and to Akeneo on the specific intake dimension. Or go deeper on what each release left out: the Pimcore Agent SDK intake gap and the Akeneo Spring 2026 operational gap.

Sources and Further Reading