46% of Europe's Workforce Is About to Have a Very Different Job
According to the OECD's Employment Outlook 2019, 14% of existing jobs across member states face a high risk of disappearing due to automation within 15-20 years, and another 32% are likely to change radically as individual tasks are automated. That report came out before GPT-3, before Copilot, before the current generation of AI tools that can write code, summarise legal documents, and generate marketing copy. The timeline has compressed since then, but the proportions are probably conservative if anything.
The World Economic Forum's Future of Jobs Report 2025 projects 170 million new jobs globally by 2030, with 92 million displaced, for a net gain of 78 million. That net figure gets cited a lot because it sounds manageable. But averages are misleading when the people losing roles and the people gaining them aren't the same people, and often aren't in the same country, industry, or skill bracket.
For Europe specifically, the pressure sits in a strange place. There are millions of professionals across the continent who work with data as a core part of their jobs: analysts in financial services, engineers managing supply chains, marketers running campaign attribution. AI is automating chunks of what they do right now. Meanwhile, the specialist AI and data roles that organisations need to fill are going begging. 2024 Eurostat data shows that 57.5% of EU enterprises that tried to recruit ICT specialists had difficulty filling those vacancies (68% for large businesses). With the US Bureau of Labor Statistics projecting 36% growth in data scientist employment between 2023 and 2033 and European demand on a similar curve, you can't solve this by posting more job ads.
The people you need are already on your payroll
So you have roles being hollowed out by automation on one side, and unfillable specialist positions on the other. The usual response is to compete harder for scarce talent: raise salaries, poach from competitors, sponsor visa applications. That works for a handful of companies, not for an entire continent.
There's a more obvious answer, and it's the one that keeps getting overlooked because it sounds too simple: reskill the data-adjacent people you already employ.
A financial analyst who's spent eight years learning how credit risk actually behaves in your portfolio understands things a newly hired data scientist will take months to absorb. A supply chain manager who knows which suppliers are unreliable in Q4, or a marketing analyst who can spot a broken segmentation model before they've opened the data, brings domain knowledge that makes AI applications useful rather than just technically impressive. What these people lack is the practical ability to build AI workflows themselves.
Existing tools don't help as much as you'd hope.
Traditional training platforms like DataCamp or Coursera teach concepts and measure completion. You finish a course on random forests and get a certificate. But there's a big gap between understanding a concept in a tutorial and applying it to your company's messy data where the edge cases are your actual business problems.
AI coding assistants like Cursor or GitHub Copilot sit at the opposite end. They'll generate code for you in seconds, which is great for software development where you can usually see whether the output works. But data and AI pipelines are different. A data join that silently drops 15% of your records won't throw an error. A feature engineering step that introduces target leakage will produce a model that looks excellent in testing and falls apart in production. These errors are invisible in the code itself. They only show up in the relationship between the code and the data it operates on, and spotting them takes experience that someone in the middle of reskilling doesn't yet have.
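To make the "silent join" failure concrete, here's a minimal sketch using pandas and made-up data. The tables and column names are hypothetical; the point is that the join below runs without any error while quietly discarding records, and only a deliberate check against the data surfaces the loss.

```python
import pandas as pd

# Hypothetical order and customer tables. Some customer IDs are missing
# from the customer table (e.g. new customers not yet synced).
orders = pd.DataFrame({
    "order_id": range(1, 11),
    "customer_id": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
    "amount": [20, 35, 50, 10, 80, 25, 60, 45, 30, 15],
})
customers = pd.DataFrame({
    "customer_id": [1, 2, 3, 5, 6, 8, 9],
    "segment": ["A", "B", "A", "C", "B", "A", "C"],
})

# An inner join executes cleanly but silently drops the unmatched orders.
joined = orders.merge(customers, on="customer_id", how="inner")
dropped = len(orders) - len(joined)
print(f"Rows lost in join: {dropped} of {len(orders)}")  # 3 of 10, no warning

# One cheap guardrail: a left join with indicator=True exposes the mismatch.
checked = orders.merge(customers, on="customer_id", how="left", indicator=True)
unmatched = int((checked["_merge"] == "left_only").sum())
assert unmatched == dropped
```

The error lives entirely in the relationship between the code and the data: the same `merge` call would be perfectly correct if the customer table were complete.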
So the person trying to learn gets stuck in a loop: the AI assistant produces something, they can't tell if it's correct, they lose confidence, and the reskilling effort fizzles out. That's the specific failure mode, and neither training platforms nor coding assistants address it well.
What we're building at Etiq, and what we're still figuring out
We've been working on this problem, starting with Etiq's Integrity Layer. The core idea is to pair structured, progressive learning with a verification layer that checks the actual logic of what code does to data. A financial analyst moving into AI development can pick up new concepts and apply them to their own role knowing that the code they write is checked and validated, catching the kind of issues an experienced data scientist would spot but a learner wouldn't. With the right learning paths and the right tools, they can move into this new world with confidence.
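To give a flavour of what "checking the logic of what code does to data" means in practice, here's an illustrative sketch. This is not Etiq's actual API; the function names, thresholds, and data are invented. It shows two of the checks an experienced data scientist would run by instinct, wrapped so a learner gets them automatically: silent row loss and the crudest form of target leakage.

```python
import pandas as pd

def check_no_silent_row_loss(before: pd.DataFrame, after: pd.DataFrame,
                             max_loss_frac: float = 0.0) -> list:
    """Flag a transformation that drops more rows than expected."""
    issues = []
    lost = len(before) - len(after)
    if lost > max_loss_frac * len(before):
        issues.append(f"{lost} rows ({lost / len(before):.0%}) lost in transformation")
    return issues

def check_no_target_in_features(features: pd.DataFrame, target_name: str) -> list:
    """Flag the label sneaking into the feature set (basic target leakage)."""
    if target_name in features.columns:
        return [f"target column '{target_name}' present in feature set"]
    return []

# Hypothetical learner pipeline: two classic silent mistakes in a row.
df = pd.DataFrame({"x": [1, 2, None, 4], "defaulted": [0, 1, 0, 1]})
cleaned = df.dropna()   # silently drops a row
features = cleaned      # forgot to drop the label column

issues = (check_no_silent_row_loss(df, cleaned)
          + check_no_target_in_features(features, "defaulted"))
for issue in issues:
    print("WARNING:", issue)
```

Neither mistake raises an exception on its own, which is exactly why a learner can't self-diagnose them; a verification layer turns them into explicit warnings instead of silent model degradation.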
We've validated this in production deployments, including with a regulated financial institution and through the Digital Catapult Supply Chain Testbeds programme, where Etiq reduced debugging time by 45% (published by Digital Catapult). The early feedback from users has been that it's especially useful for junior and less experienced team members, which supports the reskilling case directly.
And it doesn't stop there. Applying the same principles of work-specific, domain-knowledge-led learning and verification opens up more than a route into data and AI coding: it also teaches learners how to use AI tools properly in the roles they already have, improving productivity with the confidence that hallucinations are being caught and driven out.
We'd be lying if we said we've solved the whole problem. Reskilling millions of people is a coordination challenge as much as a tooling one. But the verification piece, giving non-specialists a trustworthy way to check their own work as they learn, feels like a necessary first step. Without that, you're asking people to build things they can't verify, which is a recipe for abandoned training programmes and wasted budget.
The talent Europe needs to compete in AI is, for the most part, already employed in European organisations. They just need a credible way to make the transition, and right now, credible options are thin on the ground.