Why “Best in class” PIMs still fail

Choosing software with awards to its name can feel like risk reduction in itself. Analyst rankings, shiny badges, and recognisable logos create a comforting narrative: “Pick the best on the market, and you’ve done the hard part!” Then, eighteen months later, adoption has stalled, data quality hasn’t noticeably improved, and the most common user feedback is that the system is “too complex.” Yet there’s nothing wrong with the software itself. The underlying problem is the assumption that reputation can substitute for context.

“Best in class” is a category, not a guarantee. It applauds a platform’s capability set in the abstract. Your business’s outcome from PIM is determined by how closely those capabilities match the way your organisation actually produces and publishes product information.

The red flag: your PIM goes live… and work happens somewhere else

Weak user adoption in a new, technically-live PIM is characterised by surreptitious re-routing of workflow:

  • Product teams update attributes “later” because it takes too long to do “properly”
  • eCommerce teams keep a parallel spreadsheet “just as backup for launches” (but use it as the first port of call) because the workflow won’t hit the deadline otherwise
  • Marketing enriches content in a CMS because the PIM UI isn’t where their work actually happens
  • Merchandising makes its trading calls based on whatever dataset is freshest, not what the PIM says is ‘the single source of truth.’
  • IT maintains connectors that look right on paper but fail in practice: dropped attributes, stale assets, and broken category mappings

The PIM becomes a compliance veneer rather than a true production system. Usage is performative: enough updates to say it’s in play, but not enough to make it what it should be – the operational centre.

Awards measure features. Adoption measures friction

PIM quality rankings tend to reward breadth of features – elements like:

  • Flexible modelling
  • Sophisticated workflow engines
  • Syndication options
  • Governance controls
  • Enterprise-grade approval tools

The trouble is, none of this impressive array of features answers the only question that matters once you’re live:

What’s the marginal cost of getting a product to publishable quality inside this tool?

If ‘best in class’ capability increases the effort routine tasks demand of your teams, you don’t get higher quality; you get avoidance. If a PIM needs specialist configuration for everyday changes, it creates a queue. Queues create workarounds. Workarounds signal a lack of confidence in the tool. And that loss of trust kills adoption.

This is why, when all’s said and done, calling a PIM “too complex” is not a complaint about software but a commercial observation: the cost of doing it properly in the PIM exceeds the perceived value of doing it properly at all.

Failure mode 1: the complexity tax becomes your operating model

Top-tier platforms are built to serve many contexts, which means abstraction. You see it when simple requests (add an attribute, adjust a validation rule, tweak a workflow step, to name three) turn into multi-week cycles involving configuration layers, permissions, testing environments, and sometimes even third-party consultancy support. The business spirals into a scenario of “permanent development, zero deployment.”

It’s entirely possible for a ‘best in class’ PIM system to be technically elegant but commercially unusable if each small change consumes a disproportionate amount of time, attention, and budget.

Failure mode 2: capability outruns usability, so enrichment becomes unfunded work

It’s not a question of merchants failing to grasp “data governance” as a concept. Where they fall down is in funding the hours of labour it implies.

A powerful PIM can model everything: variants, bundles, technical specs, multi-language, channel-specific fields. However, that modelling is only valuable if someone can consistently populate and maintain it under the real-life pressure of a launch date. When its usability is low, every data field becomes a negotiation among teams, and dealing with every SKU turns into a mini project.

You can recognise this mismatch when:

  • The ‘single source of truth’ exists, but decisions still happen in meetings because no one fundamentally trusts the record
  • Teams argue about data ownership because enrichment tasks are time-consuming and nobody’s model for capacity includes them
  • Suppliers are blamed for poor-quality data, while the internal standard for a ‘complete’ product record keeps expanding
  • The system amplifies background disagreement: not just about what the product is, but about who is accountable for saying so

In other words: the platform assumes that a mature operating model is in place, with:

  • Clear role assignment
  • Decision rights
  • Time allocated to enrichment

However, what happens when the organisation is still running on informal negotiation and perpetual deadline-driven exceptions?

Failure mode 3: integration exists, but the depth isn’t real

Demos claim that the PIM “integrates with ERP/DAM/commerce,” but in production, integration is judged along three dimensions:

  • Depth
  • Reliability
  • Maintainability

A connector that technically syncs data is not the same as an integration you can trust during peak trading. And when the integration can’t accommodate the realities of your stack (typically a slightly ageing ERP, a bespoke taxonomy, and some idiosyncratic asset naming conventions), the data flows you rely on degrade. Attributes drop. Assets drift. Categories mis-map. Workflow steps break down.

That creates a vicious cycle: unreliable integration pushes more teams to fix issues locally; the more they apply local fixes, the less the PIM reflects reality; and the less it reflects reality, the less anyone wants to use it. And round and round you go.

The procurement trap: buying ‘the compelling story’ instead of the right fit

As we’ve seen many times, the most revelatory moment in a PIM programme isn’t go-live; it’s the demo. Not because demos prove capability beyond reasonable doubt, but because they reveal whether the vendor (and the buyer, for that matter) can articulate an end-to-end data journey in your context.

When a PIM demo becomes a guided tour of features, or relies on jargon to gloss over disconnected portals and brittle workflows, it signals something important: no one is anchoring the system in the actual work the client’s people must do to ship products with confidence.

Awards can persuade you to shortlist a PIM, and deservedly so. But they won’t tell you whether your organisation can afford the operating model the platform assumes.

The bottom-line mismatch

Even the ‘best in class’ PIM will fail in its purpose when an organisation buys enterprise capability to solve what is really a throughput problem… and then discovers the real constraint isn’t the feature set, but the cost of enrichment and change at the pace the business needs to stay competitive.

That’s the structural mismatch driving perpetual operational drag: a high-governance, high-configuration platform imposed on an operating model that can’t consistently fund, staff, or wait for the work it demands. Forcing users to adopt the tool cannot work when it runs counter to the aims of the business, and those aims will always prioritise speed and certainty over compliance if the PIM makes “doing it right” slower than the legacy practices.

PIM readiness session

If your PIM is live but everyone’s working around it, get in touch with us at Start with Data. Our PIM readiness session pinpoints your specific context mismatches, whether that’s a complexity tax, an unfunded enrichment load, or shallow integration. You’ll then be equipped to stop treating adoption as a behavioural problem and start treating it as a constraint on your operational effectiveness.