Artificial stupidity


In the Netherlands, we have our pensions reasonably well organised. The only catch is that they are invested. And speaking of investing, there is a certain type of investor that you only see on three occasions:

  1. a wine tasting,
  2. an innovative networking event with bitterballen,
  3. and the moment a financial bubble emerges.

You can recognise them by their shining eyes and phrases such as ‘This is different from dot-com’ and ‘The fundamentals are real.’ The latter is usually said just before someone uses ‘fundamentals’ to refer to a PowerPoint presentation, and ‘real’ to mean: there are investors who don't yet realise that this is nonsense.

And now AI is the disco. Everywhere you look there are data centres, chips, mega-deals, money circulating, and companies financing each other as if they were each other's customers. Even serious parties are now warning loudly about bubble behaviour, from IMF signals to large investors saying that the financing is starting to look dangerous.

This bubble won't burst with a single bang, but with the sound of a thousand spreadsheets crashing at once.

  • Data centres are proving to be much less profitable than promised.
  • AI start-ups collapse as soon as ‘growth’ is no longer enough and ‘profit’ suddenly becomes a thing again.
  • Big tech players are pulling the handbrake, suppliers are taking a hit, and banks and pension funds are turning out to have more exposure than anyone felt ‘comfortable’ with.
  • Governments are waking up with the classic hangover line: ‘We must limit the damage to protect jobs and stability.’

And here it comes again: the old trick that we as humanity have elevated to an art form.

Private profits. Public losses.

Or as the ministry calls it: ‘temporary support measures’.

Because panic always falls in the same direction: towards the system that can pay. And that is usually the state.

But after 2008, we in Europe tried to break that reflex. With, among other things, the Bank Recovery and Resolution Directive (BRRD) and the bail-in principle: losses fall first on shareholders and creditors, and public money is used only as a last resort. The European Commission explicitly states that bail-in exists to spare the taxpayer.

However, with hype surrounding AI, there is a great temptation to say, ‘Yes, but this is strategic technology.’ And strategic is often a polite way of saying: too big, too interconnected, too embarrassing to let fail.

Europe (including your government) should be able to do two things at once (which is difficult, so perfectly European):

  1. protect the real economy and ordinary people (jobs, savings, public services),
  2. without automatically passing the bill on to everyone who wasn't at the party.

That doesn't require magic, but design. Here is a three-layer line of defence.

Layer I, a simple suggestion:

  • Transparency requirements for AI exposure: banks, insurers, pension funds and large companies must clearly report how much they have in AI-driven assets/loans (including data centre financing). What you don't see, you'll later ‘accidentally’ bail out with public money.
  • Macroprudential brakes: higher capital buffers or stricter risk weighting for extremely cyclical AI loans (such as in real estate hypes, but with GPUs). Basel-like logic exists precisely to curb excessive leverage.
  • Reality checks in annual accounts: limit ‘mark-to-myth’ (valuations based on future miracles).
  • Energy and grid congestion as a financial stress factor: AI infrastructure relies on power, cooling and space. Let regulators include this in stress tests; no fairy-tale returns without physical preconditions.

Layer II: European solidarity yes, blank cheques no.

If the government intervenes (to prevent chain reactions), then:

  • Bail-in first: shareholders and certain creditors pay before the taxpayer—exactly what BRRD is intended for.
  • The state receives equity and control: support = shares/warrants, so that public money can also receive public returns when it recovers.
  • Clawbacks and bonus bans: those who sold the hype should not be allowed to disappear with severance pay and confetti.
  • Public conditions: job retention where realistic, retraining where necessary, and no dividends/share buybacks while aid is in place.
  • Strict exit strategy: aid is temporary, with clear goals and deadlines (otherwise it becomes a permanent subsidy for failed business models).

In short: if you socialise losses, you also socialise profits and control.

Layer III: Europe does not have to be anti-tech. On the contrary: it can pull tech out of the hype corner and turn it into public value.

  • Public digital infrastructure: European (or national) cloud and computing facilities for government, healthcare, education and research—so you don't have to beg the same few players every time there's a new wave.
  • Targeted innovation policy: less ‘subsidies because it's called AI’, more ‘subsidies because it demonstrably improves productivity, health, safety or sustainability’.
  • Open standards and interoperability: so you don't buy lock-in with taxpayers' money.
  • Labour market shock absorbers: a solid safety net, active retraining, sector funds—because when the bubble bursts, it's not just investors who fall, but employees too.

And yes: regulation is part of that. With the EU AI Act, Europe has opted for a framework that addresses risks and responsibility. That is not the enemy of innovation; it is an attempt to prevent innovation from ending up as a bill for people who never received stock options.

With one simple, but politically difficult rule:

‘We protect people, not valuations.’

  • Protecting people: jobs, savings, public services, dignified transitions.
  • Not protecting valuations: no bailouts purely to keep speculation alive.

And if someone then says, ‘But that will scare off investors!’, you say:

‘Fine. Then we'll be left with investors who understand that an economy also has to deliver something.’

Because we've lost sight of that concept a little.