State of the cloud 2026: AI’s hype train continues, agents go primetime, and sovereignty shifts left

January 21, 2026
Sanjeev Mohan

In this first episode of 2026…

Vinay Joosery sits down once again with Sanjeev Mohan (Principal Analyst at SanjMo, former Gartner VP) for their annual “state of the cloud” conversation—this time through the lens of AI infrastructure economics, sovereignty becoming mainstream, and cloud’s evolution into an operating model that increasingly extends on-prem.

Together they grade last year’s predictions, unpack what really changed in 2025, and make practical calls for what to watch in 2026: reinforcement learning for reliability, sovereignty shifting from contracts to architecture, zero-copy/compute-to-data patterns, and energy as the new constraint shaping regions and capacity.

Key insights

AI Infrastructure Goes Circular (and Competitive)
Investment in AI infrastructure accelerated, but the conversation is getting more complicated: hardware competition is rising (TPUs, Trainium, Graviton, AMD) and the market is increasingly shaped by circular mega-deals and capital dynamics rather than straightforward demand curves.

Regulation Gains Teeth—Sovereign Cloud Goes Mainstream
Sovereignty is no longer niche. EU AI Act obligations continue to roll out, enforcement is tightening, and hyperscalers are now packaging “sovereign” offerings—either directly or via local partners—to meet regional expectations.

Cloud Cost Talk Misses the Real Question: Value
Yes, cloud spend continues to rise, but cost-only narratives miss the value equation. The bigger uncertainty sits on the AI side—where burn rates, massive commitments, and profitability timelines remain unpredictable.

On-Prem Becomes a First-Class Citizen Again (Not Just Repatriation)
2025 didn’t produce many new blockbuster repatriation stories—but on-prem regained relevance because organizations must unlock the huge volume of unstructured and legacy data that still lives outside public cloud.

Sovereignty “Shifts Left” Into Architecture
The big change for 2026: sovereignty stops being a residency checkbox and becomes a design requirement—keys, control planes, operational boundaries, and assurance are engineered from the ground up.

2026 = Reliable AI + Governance + Reinforcement Learning
After the “bigger model” era, the focus shifts to post-training and reinforcement learning to improve reliability. The goal: AI that’s stable enough to deploy without sleepless nights—unlocking a more real, scaled agent era.

Compute-to-Data, Zero-Copy, and the Slow Death of ETL
Data movement remains expensive and risky. The momentum is toward context reasoning, Iceberg-style table formats, and unbundled compute + storage across vendors—more flexibility, more control, and fewer forced choices.

Power Becomes the Limiter
Energy capacity, transmission constraints, and regional grid realities increasingly shape where data centers get built, who gets capacity, and how pricing evolves—especially as hyperscalers prioritize reserved commitments.

Episode highlights

💡 Did AI infrastructure investment accelerate? [1:24 – 3:06]
A clear “yes”—with more competition across chips and cloud hardware stacks.

💡 Regulation + sovereign cloud momentum in 2025 [3:31 – 6:40]
EU AI Act rollout continues, transparency requirements increase, and hyperscalers adapt.

💡 Cloud cost isn’t the point—value is [8:44 – 11:14]
A more useful lens on cloud spend: what value is being created, and where.

💡 Cloud repatriation vs. on-prem resurgence [14:23 – 17:44]
Fewer headline repatriation stories, but on-prem becomes strategic again.

💡 Why “95% of pilots fail” is the wrong takeaway [18:08 – 21:27]
AI is delivering real value—often quietly, in proprietary workflows.

💡 2026 prediction: sovereignty shifts left [26:58 – 29:58]
Sovereignty becomes architecture, not a contract clause.

💡 Kubernetes and AI workloads [33:00 – 34:43]
Kubernetes gets a second wind as AI orchestration needs explode.

💡 What is 2026 in AI? Reliability and reinforcement learning [35:12 – 37:58]
Post-training and governance drive more dependable deployments.

💡 The data layer is undergoing a revolution [38:05 – 39:03]
2025’s acquisitions suggest the market wants complete solutions rather than single features masquerading as products.

💡 Compute-to-data, zero-copy, and unbundled stacks [44:23 – 47:08]
Less ETL, less movement, more choice and control.

💡 Energy constraints shape cloud strategy [50:46 – 52:02]
Power and transmission become strategic limits on capacity and placement.

💡 DBaaS outlook: efficiency upgrades, not category shocks [52:30 – 53:39]
Vector indexing improves; ops gets smarter; fewer “new categories,” more refinement.

Transcript

Vinay: So, are we exiting peak hype?

Sanjeev: No, I don’t think so. I think peak hype will continue. But it’s not a bad thing! It sounds horrible that it’s all hype, but it is delivering value.

Intro: This is Sovereign DBaaS Decoded, a podcast for IT leaders and implementers looking to reliably scale database ops while maintaining control of their data stack. In each episode, we join industry experts to discuss the what and why of sovereignty, and how you can implement the sovereign DBaaS concept on your own using open source databases, deployment models, and tooling. Let’s get started.

Vinay: Hello and welcome to the 2026 first episode of Sovereign DBaaS Decoded. I’m Vinay Joosery, and this episode is brought to you by Severalnines. Our guest today is Sanjeev Mohan, principal analyst at SanjMo and former Gartner VP. Thanks for joining us today, Sanjeev.

Sanjeev: Thank you so much, Vinay. This is our annual tradition. This is how we start the year.

Vinay: It is indeed. This is the fourth time, I believe, you’re on the show. And you’ve been on the road quite a bit last year, right? So we look forward to having you here, sharing your insights. Let’s see what we can expect for today. We’ll get an update from Sanjeev on the state of the cloud. But first, we’ll recap the cloud and the database space in ’25. The major influencers, events, surprises. We’ll see what’s going on today and look forward to ’26. But before we do, let’s start with some of the 2025 predictions we made last year. Let’s grade ourselves. So, number one: Did hyperscaler and enterprise investment accelerate, especially for AI infrastructure? What do you think?

Sanjeev: Oh, absolutely. For AI infrastructure, there is no doubt. We saw not only a lot of investment increase, but it also involved a bit of trickery, to be very honest, because there was this whole circular motion of investments. The fact is that Nvidia has a lot of money that it’s generating, and it needs to protect itself because it’s no longer just GPUs from Nvidia. AMD was at last week’s CES conference in Las Vegas. They have the chips. Nvidia also bought Run:ai, which was a competitor on the inference side. But Google TPUs finally, I think in 2025, came of age.

Sanjeev: And why I say that is because TPUs were already there for many years, but they were only meant for Google’s own consumption. But now even Anthropic is using TPUs. And then of course AWS, the Graviton, and then Trainium 3 was a big thing for them last year. So because of this increased competition on the hardware infrastructure, on the model side from model companies in China, US companies, it’s led to a lot of investment.

Vinay: Yeah. So I guess that was a good prediction. And it came in with the caveat that there are these circular transactions between some large players. So yeah, we’ll see how that pans out. Let’s see what happens in 2026, because I don’t know if we can say the same for 2026. But anyway, we’ll come to that. So the second one, we talked about the regulatory environment. That it will expand. And we see that the EU AI Act phased rollout continued during ’25. From training your staff to having responsible AI, limiting harmful AI practices, hidden manipulation, exploiting vulnerable groups, and real-time biometric surveillance.

Vinay: So there is a surge in these sovereign clouds in the EU. And this is since the new administration in the US, and the EU is making a concerted effort to become more competitive and reduce its dependence on hyperscalers. So, we at Severalnines, we are part of this initiative from the private sector, Eurostack, which actually was created sometime last summer, to ensure there’s European-made hardware and software alternatives. And by the way, this is not just in the EU that we see this sovereignty movement, right?

Sanjeev: Correct. Asia, Middle East. But again, coming back to the prediction, regulation is driving certain changes.

Vinay: What do you see?

Sanjeev: So I see both sides. In the US, for example, the new administration is trying to ban individual states—so we have 50 states in the country—from having their own regulations. And without getting into politics, I think having one standard is so much better than trying to adhere to 50 different standards. But the states are fighting back because they feel it’s their prerogative how they want to regulate AI. But the rest of the world, I think, regulation has literally taken off much faster than what I had expected at the beginning of last year.

Sanjeev: Even the US hyperscalers, they all now have European sovereign cloud, either their own, like Microsoft and AWS, or through a partner, like in the case of Google. And then the more these AI models are getting advanced and they have the capability of producing inappropriate content, do all kinds of things that are not right… The rollout of regulations is actually going up even in 2026. In the first week, we have seen the EU AI Act come up with a new set of policies: transparency on how the model generated its results.

Vinay: So I think this came through last year and it’ll continue even more this year. And actually, about a month ago, the EU Commission fined X, Elon Musk’s X, €120 million. For not being transparent on the platform, not allowing researchers access to see what’s going on. So I think that there’s definitely more and more… let’s say these regulations, they’re getting more teeth. They’re getting longer, sharper teeth.

Vinay: So prediction number three. Sanjeev, we have been talking every year. There’s always talk about, “Oh, enterprises’ top concern, CIOs’ top concern, is cloud costs.” This has been a concern for ages. And that spending party, has it ended? We talked about how people would probably try to do something. I was looking up some statistics. According to Gartner, worldwide spending on public cloud services was around $723 billion in 2025, up roughly 20% from ’24. Now the question is what portion of that is AI-accelerated. And I saw something else from Goldman Sachs saying 10 to 15% is directly due to AI workloads.

Vinay: So companies are spending more but not necessarily for traditional workloads. And I guess cost is still an issue for enterprises, but there is a case for optimizing, right-sizing, and managing usage more tightly. But we haven’t seen a reduction in cost. I mean, if ever, enterprises are spending more than ever on the cloud. What’s your take on this?

Sanjeev: So, Vinay, you know, we talk about cost quite a bit, but I think a focus on cost hides the value creation. So if companies are spending a lot of money, why is that the problem? Because they’re getting value out of it. On the data side, I would say not any different from last year. Last few years cost has been going up. But, you know, we also see there’s a lot of improvement in utilization. I see a lot of performance enhancement capabilities. The big cost factor is actually AI because, on a daily basis, OpenAI probably loses hundreds of millions of dollars every day. And so they talk about losses in billions of dollars.

Sanjeev: But then they expect that AI is going to get so entrenched and there will be agents running around doing so much work, and there’s going to be so much consumption of tokens that they will break out and be profitable. Actually, Anthropic is pretty gung-ho about it. OpenAI doesn’t seem to be that concerned because, to our first point, the investments have been so high that they can ride this wave. But I don’t see any concern for the data side. For AI, all bets are off. We don’t know which company will survive. So our assessment would be yellow.

Vinay: Yellow. Okay. First one was green, then this would be yellow. So yeah, I mean for sure, looking at the numbers, 20% growth. People spend more and then, you know, how much of that is AI and actually traditional workloads? And you would probably argue that AI is where people are doing the innovation. That’s a lot of the new value being created. But for stable, static workloads that don’t change that much…

Sanjeev: I think open source also is marching on strong and steady. For example, Vinay, I saw you some time ago at KubeCon, and you saw how many observability companies there were. Each observability company would have its own engine, its own way to ingest data and store data, process it. But now, what I’m seeing in the observability space, for example, and I’m moving away from my core data but into infrastructure observability… Now there’s a standard called OpenTelemetry, which means it’s the same format and same way of processing from ingestion, from any kind of source in any language, whether it’s a C program or Rust. It doesn’t matter. It’s just there’s a standard for logs, metrics, traces.

Sanjeev: And then when you write it, you write it on an object store in a Parquet file with Iceberg, as your table format. And that means that any compute engine, open source or not, can analyze that data. So there are these subtle changes happening, not just on the data side, but pretty pervasive. And I think some of these will reduce cost because they’re removing that vendor lock-in.

Vinay: Yeah. And even if you look at, for example, the protocol to speak to MongoDB. Right, you have DocumentDB. And where they actually do that, they’re trying to standardize like SQL. Use SQL to talk to relational databases; well, you need to have a document protocol to talk to document databases. And that also helps when it comes to maybe vendor lock-in, be able to move from a proprietary solution to an open source solution and save money.

Sanjeev: Yeah. So in relational we’ve always had Postgres. In document databases, JSON, we now have DocumentDB. In key-value, we have something called Valkey, which is a spinoff of Redis. And in search, under the Linux Foundation, we have OpenSearch, an offshoot of Elasticsearch. So you see there are these open source versions available. And then of course, we’ve always had Cassandra for wide-column. So in every stack, every type of data storage, there is an open source version available.

Vinay: Now, prediction number four, Sanjeev. Cloud repatriation becomes legit. And the question is, we were kind of expecting there would be more repatriation. And this is a tough one because we don’t see much evidence on it publicly. But we had Dave Linthicum on this show last year. And according to him, there’s quite a bit of repatriation going on. But they are not publicized. So enterprises treat repatriation as an internal infrastructure strategy rather than something to announce publicly. But if we look at, like, Flexera’s 2025 cloud survey, it shows that roughly 21% of workloads have been repatriated back to private data centers. What’s your take?

Sanjeev: Use cases like 37signals get circulated over and over. I think there’s another one, Dropbox. Which tells you that there are not many new mega repatriation use cases. Something strange happened in 2025: on-premises got, I think, on an equal footing with public cloud, in my opinion. But it wasn’t because of repatriation. It is because there was this urgent need for companies to leverage all of the data they have on-premises in SharePoint and OneDrive, all these PDF documents, even paper documents that needed to be OCR’d, because now multimodal models have become really good at extracting these entities.

Sanjeev: So I think a lot of workload has now moved to on-prem, to a point where even AWS has finally accepted that the world will be hybrid and multi-cloud. That was a big finding from AWS re:Invent, because for years they said, “No, everything’s moving to the cloud.” So, it may not be repatriation itself that is happening, but the fact that 70% of data is still on-prem and is now being unlocked by “cloud,” except that cloud is on-prem. And I know we’ll be talking quite a bit about sovereignty. That is the other use case. So there are all these use cases that are making on-prem a first-class citizen alongside cloud. It used to be the second-class citizen. It used to be a swear word.

Vinay: Yeah, I see it even with companies like Cloudera, whose big vision is private cloud.

Sanjeev: Yeah. Because I think they realized that cloud is crowded and there’s just too much going on. People forgot on-prem and now they’re like waking up and going, “Oh, wait, all this unstructured data sitting on-prem.” So I need to come back and that’s where 70% of all data is… and I’m making up 70%. I’ve seen even 80, 90% as volume of data on-prem versus in the cloud.

Vinay: Talking about unstructured data, that takes us to the next one, which is AI projects: AI use cases go beyond copilots. We thought maybe there would be much more in production. Well, I guess you probably saw that MIT report from last August: 95% of generative AI pilots at companies are failing. So there’s a rush to integrate all these AI models, but 95% of these programs stall; they deliver little to no measurable impact on profit and loss. Why is that? Is it an issue of data quality, of being able to make something out of it? What’s your take on that?

Sanjeev: I’m not a big fan of that report, to be honest, because it has MIT Lab stamped on it so it got a lot of coverage, but I think the sample size wasn’t as big, and there are just so many nuances. I was talking to a pharma company that was telling me how much AI is saving them. When you have a new drug development, it takes 10 to 20 years and costs $4 billion. 2,000 documents need to be submitted to the FDA in the US for approval. Different languages, slight modifications of formulas. It is such a massive effort that AI… and you know, AI is really good at document generation. So for AI, this is an easy task to take care of. So the savings are actually in millions, sometimes hundreds of millions of dollars.

Sanjeev: But because this is very proprietary, people don’t talk about it. So when I talk to end users, I find that people are using AI quite a bit. Like somebody asked me, is there a killer use case of AI? You know what my response is? What is the killer use case of the worldwide web, the internet? Just… It’s very hard to pin it down. So it changes everything. Sometimes it may not be a thing. It may be a process. You’ve made the process so automated that you’re saving money everywhere.

Vinay: In small amounts. It adds up, but it’s all proprietary knowledge.

Sanjeev: And no company wants to tell the world what’s going on there because of all the legal issues. So my point is that we moved… actually, a prediction last year should have been “will agents take off?” And agents not only took off, they just flew off the shelf. But we still have not reached a stage where we can confidently say, “I can put a chatbot, an assistant, or an agent in charge of complex business logic.” We’re doing a lot of very specific, task-oriented jobs: document generation, customer service. And I think for that, AI has proved to be very successful. When it comes to automating the entire supply chain process…

Vinay: Through a multi-agent team? We are not there. And without the human in the loop. So we didn’t do that badly last year, actually. So looking back at some of the significant influences and events in ’25. Geopolitical: we have the AI supply chain. Chips, export controls, access paths. You know, governments putting on these sanctions: “these should not be exported,” or “if you’re in the EU and part of these 12 countries, you can get the latest chips; if not, you can’t.” Which kind of splits the EU because, for example, Sweden would get the latest chips but Poland wouldn’t. Like these Tier 1 and Tier 2 countries. So that’s kind of a weird one.

Sanjeev: Yeah, I wasn’t aware.

Vinay: In terms of economics, capital probably gets more selective, right? Show me the revenue, not just the GPUs. I mean, we’ve seen the financial scrutiny on these giant data center bets. You’re an ex-Oracle guy, right? So you probably read about the $300 billion deal that Oracle did with OpenAI over five years to provide compute. I mean, you know Oracle; how do they fund that investment in these data centers? And how does OpenAI find $300 billion to pay for this? So I think there’s a little bit of scrutiny, because the markets took off when they announced these deals. Between chipmakers, between AI model providers, between hyperscalers. But it’s all circular. And now I think everybody’s saying, “Hey, wait a minute. That’s kind of cheating.”

Sanjeev: I mean, OpenAI has committed to a spend of $1.4 trillion. And what is the revenue? $14 billion? So there are just orders of magnitude more money they want to spend than they’re making. We are literally living in the craziest times. I’ll give you an example. Last week at CES in Las Vegas, as I said earlier, Jensen Huang did his own event, not the CES keynote (that was given by AMD’s CEO), so he sort of sucked the oxygen out of the room. He starts talking about how there is this new Vera Rubin architecture coming out in 2026, and it’s so advanced there are two miles of copper wiring inside it. Within minutes, the stocks of copper companies went up. Then he goes on to say that this is such an advanced architecture that it doesn’t even need much cooling. All of a sudden, the stocks of companies that provide cooling equipment to data centers went down. So we are living in these unprecedented times where the expectations from AI are just so high that it’s literally driving the entire world’s economy.

Vinay: Yeah. It’s that was the financial… there’s a lot of financial events going on along with geopolitics. And then if we look at the regulatory aspects. You mentioned the AI Act. These phased obligations. They are in effect. And that kind of… that’s driving these compliance workstreams. And I guess since last year, I mean, my LinkedIn literally is exploding with all these sovereignty talks, right? From all these companies and entities in Europe. And actually not just Europe. I mean, you have Canada, you have Southeast Asia. So everywhere is kind of like… you know, people are looking at procurement rules, sector rules, mandates to actually purchase local instead of maybe the default going to hyperscalers.

Vinay: If you look at Europe, I mean, I think 4 or 5 years ago, the hyperscalers, they had like 30 or 40% of the cloud market. The cloud market has grown very fast. And the hyperscalers, from last year, they had in Europe, I think they had 80% of the market. And it’s kind of like… it’s gone up and local providers are nowhere to be seen, pretty much. But we’ve seen a bunch of announcements last year, people investing, governments investing. You know, you have these gigafactories, AI factories that they’re building in Europe. So let’s see where that takes us.

Sanjeev: So if I were to jump to 2026 predictions, I would say one of my predictions is that sovereignty shifts left. And let me explain what I mean by that. To be honest, sovereignty was not a top-of-mind topic for me, because I get very deep into all kinds of data and AI stuff from a technology point of view. Actually, I would say Severalnines pulled me into this space more than anybody else. But now I can’t avoid it, because at every cloud provider event I go to and do a briefing with them, sovereignty is really high on their list. Every one of them. I mean, AWS announced their first data center would go live end of December 2025 in Brandenburg, part of the ESC, the European Sovereign Cloud, an initiative that is owned, operated, and managed by Europeans.

Sanjeev: Yeah, not by Americans. So AWS had launched that. Before that I was at Microsoft Ignite, where Microsoft Sovereign Cloud was a very big topic. Then I did a briefing with Google Cloud, and it was all about sovereign cloud. But they’re taking a very different approach. They’re saying that, let’s say, a hospital wants to have their data in a cloud in Europe. They don’t necessarily want to go to a partner there. So what Google is doing is partnering with local providers, because these providers have certifications like SecNumCloud, and they’re keeping up with everything new in the EU AI Act. Just last week we saw something new come out. And then India introduced its own… China has its own, like you said.

Sanjeev: So basically, my prediction is that sovereignty shifts left. What I mean by that is, up to this point, sovereignty has been considered a very narrowly focused data residency issue: where is your data residing? But by shifting left, it becomes an architectural thing. It’s not just, “Does your contract, Mr. Cloud Provider, have sovereignty?” It is: “Did I design it to be sovereign from the ground up?” Are my keys in my own control, or are my encryption keys sitting with my cloud provider even though my data is in-house? So, not just data: operations, technology, assurance. All of that gets designed into my architecture. Sovereignty becomes a design criterion rather than a contract clause.
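
One way to picture "sovereignty as a design criterion" is a check that runs against the architecture itself rather than the contract. The sketch below is a toy illustration; all field names, region names, and rules are invented for this example, not drawn from any provider's API.

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    data_region: str           # where the data physically resides
    key_custodian: str         # "customer" or "provider"
    control_plane_region: str  # where day-2 operations are run from

# Hypothetical list of approved regions for this example.
EU_REGIONS = {"eu-central", "eu-west", "eu-north"}

def sovereignty_findings(d: Deployment) -> list[str]:
    """Return design-level sovereignty violations (empty list = pass)."""
    findings = []
    if d.data_region not in EU_REGIONS:
        findings.append("data residency outside approved regions")
    if d.key_custodian != "customer":
        findings.append("encryption keys not under customer control")
    if d.control_plane_region not in EU_REGIONS:
        findings.append("control plane operated from outside the jurisdiction")
    return findings

# A deployment whose data is in-region but whose keys sit with the
# provider fails the architectural check, even though a residency-only
# checkbox review would have passed it.
d = Deployment("eu-central", "provider", "eu-west")
assert sovereignty_findings(d) == ["encryption keys not under customer control"]
```

The residency checkbox is just the first condition; keys, control plane, and assurance are the parts that only show up when sovereignty is evaluated as architecture.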

Vinay: Yeah, yeah. And you just mentioned one of the main events that happened: AWS and Microsoft pushing concrete sovereign offerings. That helps keep this podcast’s sovereignty thesis front and center. We’ve been doing this for three and a half years now. We thought it might become a thing, and as you said, it’s now growing and it’s mainstream. Everybody’s talking about sovereignty now, even some of my neighbors who are not in IT.

Vinay: So I had some other events on my list. Power becomes the new rate limiter. Grid rules, regulatory attention, interconnection, maybe co-locating next to generation. I mean, we see it in Sweden: in some areas people want to build power plants, but there’s no capacity. And then somebody else comes in and says, “Hey, I want to build a new data center, and I need that much power.” Well, sorry, you can’t, because now we need to wait until we have either more transmission or new power generation. So it’s interesting how power is suddenly becoming part of cloud strategy. With AI, the scale of spending is enormous, right? The pace is unprecedented.

Vinay: And then, you know, VMware and Broadcom… I think that’s still reshaping private cloud. That trend is still there. I was at OpenStack again last year, at the OpenInfra conference, and there are so many new projects moving stuff off VMware into OpenStack. You have Nutanix growing. And obviously Kubernetes, that’s another huge trend. But what do you make of this? Looking at the latest Kubernetes developments, it seems there’s more and more AI taking over the KubeCons.

Sanjeev: I think Kubernetes started for stateless applications, and it took almost ten years to finally figure out how to do stateful applications, databases. And Kubernetes is only 11 years old, so this is the reality. But with AI, I think Kubernetes gets a shot in the arm, because we need to provision so much compute. And when you provision that much compute, things go wrong. So you need workloads to self-start, self-heal, be managed, start up and shut down quickly. This compute is also very expensive because it’s GPUs. Nvidia has done a lot of work in this space. One school of thought is that Kubernetes and AI are made for each other. I think Kubernetes benefits a lot from helping to orchestrate AI workloads.
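
The self-healing behavior described here comes from Kubernetes' control-loop pattern: compare desired state with observed state and converge. The sketch below is a toy illustration of that reconcile pattern, not the Kubernetes API; the worker names and the shape of the plan are invented for this example.

```python
def reconcile(desired: int, running: set[str], failed: set[str]) -> dict:
    """One reconcile pass: replace failed workers, then scale up or down."""
    actions = {"delete": sorted(failed), "create": [], "stop": []}
    healthy = running - failed
    shortfall = desired - len(healthy)
    if shortfall > 0:
        # Not enough healthy workers: schedule replacements.
        actions["create"] = [f"gpu-worker-{i}" for i in range(shortfall)]
    elif shortfall < 0:
        # Too many (expensive GPU) workers: stop the surplus.
        actions["stop"] = sorted(healthy)[: -shortfall]
    return actions

# Three workers desired, one crashed: the loop deletes the failed one and
# schedules a replacement, with no human intervention.
plan = reconcile(desired=3, running={"a", "b", "c"}, failed={"b"})
assert plan["delete"] == ["b"]
assert len(plan["create"]) == 1
```

Running a loop like this continuously is what lets expensive GPU capacity restart after failures and shut down quickly when it is no longer needed.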

Vinay: So that was a little bit of a review. Let’s shift gears to today. And AI is still the topic. So let’s start there, and I will quote you, Sanjeev, right from your 2026 New Year blog. If 2024 was AI everywhere and ’25 was AI build-out, what is 2026?

Sanjeev: Yeah. Good question. I would say 2025 didn’t really pan out as we had expected. We started the year thinking there would be agents for everything. I made a prediction that by this time next year, when we talk, we’d all have a personal agent. Well, it’s my goal for 2026, but it didn’t happen last year. And the hottest topic in AI, by the way, is reinforcement learning. There is so much emphasis on how you do post-training, because we came to the conclusion that the models have grown bigger and bigger and bigger, and that’s pretty much it. They’re trained on everything on the internet they could get their hands on. Not private data. But now, to improve reliability, we need to do post-training.

Sanjeev: And, you know, teacher-student training, RLHF… there are just so many techniques for optimizing models. So I think what we’ve seen in 2025 is that models are going to get even better. They are already very good, I have to say. But they will never be deterministic. They were never supposed to be deterministic, but they will get to a level that’s good enough for people to deploy them without having sleepless nights worrying, “What if the AI says something stupid and makes me lose my business?” So I think we are in the era of building reliable AI. AI governance was a very big topic for me in 2024. It didn’t pan out, because in 2024 we didn’t even know what we were doing; we were experimenting. AI governance will get established, and hence we’ll see an acceleration of agents.

Sanjeev: Plus we’ve got a lot of pretty solid use cases already. Coding is a solid use case, no question about it. Customer service, to some extent. But now look at commerce. We see the new commerce protocols that have come out, agent-to-agent, MCP. All of these are standardizing a lot of missing pieces. So I think 2026 is when it will all come together.

Vinay: So are we exiting peak hype?

Sanjeev: No, I don’t think so. I think peak hype will continue. But it’s not a bad thing! It sounds horrible that it’s all hype, but it is delivering value. In fact, if you look at some of the new benchmark surveys that have been done… the podcast guy… he says 70, 75% of people are saying they’re getting value out of AI.

Vinay: Yeah, I can say that. So I guess, yeah. Governance, reliability. These are things that are needed to actually make sure that this area flourishes, so to speak. What does it mean for the database? Stack consolidation…

Sanjeev: Yeah. So the data layer is actually undergoing quite a revolution. 2025… I don’t think we saw it coming, but for years and years we’ve been saying there are just way too many small companies. There needs to be some sort of consolidation. And it hardly happened. One acquisition here or there doesn’t count. 2025 saw some big moves and I think 2026 will see even bigger moves. What the businesses are telling the industry is that we are done buying small features that are masquerading as a category. These all need to be like… for example, observability. Is observability a category? Even data catalog. For years and years we’ve been pushing data catalogs. People say, well, these are so fundamental to what we do that they should really be part of the infrastructure rather than a standalone piece. Lineage. A lot of governance pieces.

Sanjeev: So the power right now is with the mega vendors: hyperscalers, companies like Snowflake, Databricks, even Palantir. The bigger companies are seeing the advantage. Salesforce, for example: they all want to be your end-to-end data and AI company, full stop. They originate the data. They transform it and store it in Iceberg. You bring your analytical engines, but the data does not leave their premises. Their perimeter.

Vinay: What does this mean for, you know, the meat and potatoes spend? You know we talk about… because you have the AI projects but then you have all the infrastructure, the old boring static stuff that’s there that probably people haven’t maintained for a while. I mean, are these like table stakes for this AI era?

Sanjeev: So, as you can see, I'm an AI optimist. There are people who think they'll wake up one day and everyone will come to the realization that AI was overhyped and it's gone. That's like saying you're going back to pre-internet days. It's impossible. This is how we do business now, from Spotify to taxes to email to LinkedIn. So I don't think we are ever going back. Why I'm an optimist is because we have a lot of junk that we've collected over the years, and we didn't know what to do with it, because the people who knew COBOL have retired. And it works, and if it works, then "why fix something that ain't broken" takes over. Well, AI can now literally unlock it; it can help me migrate off an old version of a dead programming language. By the way, nothing ever goes away; it lingers around.

Sanjeev: But now what I need is somebody to train a model on an older version of, let's say, Java, have it read enough programs and enough documents, and then say, "Okay, I'm going to go from this old version of Enterprise JavaBeans to Rust," or something new, or a new version of Java. And I think we're going to start seeing AI put to use there. And it'll help us clean up… even data quality. Data quality is a huge problem; we've never been able to fix it for as long as data has existed, because the way we try to fix it is so difficult and onerous. But AI can say, "You know, step back, humans. I can go analyze this data, find correlations, find similarities, find patterns that would take human beings years of data modeling, and a lot of expense, to find. And by the way, you're still in charge."

Vinay: You know, I think we talked about bringing compute to data versus moving your data to compute. We mentioned earlier that there's a lot more on-prem activity because of all the data that's there, so it means people need to build. At the same time, cloud spend grew 20% last year over the previous year, and 10 or 15% of that was allocated to AI. So what's winning?

Sanjeev: Compute to data, absolutely. Moving data was never a good idea. In fact, I would even say that ETL will finally become a thing of the past, though I don't think any time soon. When you extract, transform, and load data into a different place, you've got overhead: all that movement, the skills that are needed. A lot of those technologies are getting old, so they have to be modernized too; again, a task for AI, to modernize my transformations. Then there's the latency of moving data, and governance is needed on both source and target. So we are now moving into an era where we are building context graphs. That is the hottest topic in my space for 2026.
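To make the idea concrete, here is a minimal sketch of a "context graph": a metadata layer recording where datasets live and how they relate, so an agent can route compute to the data instead of copying it out. All dataset names and locations here are hypothetical, and real implementations would sit on top of a catalog or lineage service rather than an in-memory dict.

```python
# Toy context graph: datasets as nodes, lineage as edges.
# The point is that an agent can answer "where is this data and
# where did it come from?" without ever moving the data itself.

class ContextGraph:
    def __init__(self):
        self.nodes = {}   # dataset name -> metadata (location, format, ...)
        self.edges = []   # (upstream, downstream) lineage pairs

    def register(self, name, location, fmt):
        self.nodes[name] = {"location": location, "format": fmt}

    def add_lineage(self, upstream, downstream):
        self.edges.append((upstream, downstream))

    def locate(self, name):
        """Where does this dataset live? Send the engine there."""
        return self.nodes[name]["location"]

    def upstream_of(self, name):
        """Walk lineage edges to collect every source feeding a dataset."""
        sources, frontier = set(), [name]
        while frontier:
            current = frontier.pop()
            for up, down in self.edges:
                if down == current and up not in sources:
                    sources.add(up)
                    frontier.append(up)
        return sources

graph = ContextGraph()
graph.register("orders_raw", "s3://prod-landing/orders", "parquet")
graph.register("orders_clean", "s3://lake/orders", "iceberg")
graph.add_lineage("orders_raw", "orders_clean")

print(graph.locate("orders_clean"))       # where to point a compute engine
print(graph.upstream_of("orders_clean"))  # provenance, for governance
```

An LLM-driven agent would consult a graph like this to decide which engine to dispatch and which governance rules apply, rather than triggering an ETL copy.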

Sanjeev: And we are only a few weeks into 2026, and it's already blowing up on social media. If I can create a context graph and have an LLM figure out where my data is, I don't necessarily need to move that data. So this is going to be a very big thing, where ETL or ELT gets replaced. Actually, it's already been partly replaced, in 2025, by zero ETL, but zero ETL is still extract and load; it's just been taken away from the end user and made part of the system. Now zero copy is the big thing: putting data in Iceberg, like I said, so I can bring any compute engine, whether it's Spark, Pandas, DuckDB, anything that's Iceberg REST compatible. If I'm in AWS, I can use Athena, I can use Redshift.
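For readers who want to see what "bring any engine to the same tables" looks like in practice, this is roughly how a Spark session is pointed at an Iceberg REST catalog; the catalog name, endpoint, and warehouse path below are made up for illustration. Any other Iceberg-REST-compatible engine can attach to the same endpoint and query the same tables in place, with no copy.

```properties
# Hypothetical Spark config: register an Iceberg REST catalog named "lake".
spark.sql.catalog.lake=org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.lake.type=rest
spark.sql.catalog.lake.uri=https://catalog.example.com
spark.sql.catalog.lake.warehouse=s3://lake/warehouse
```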

Sanjeev: So that's the trend: not just disaggregation of compute and storage, but the unbundling of it. Compute and storage are not just disaggregated; they come from different vendors. And that gives the end user more control and flexibility, which is what they've been asking for all this time. Even sovereignty and all of these topics are really about control: how does an end user control their data and bring the right compute engine, not the one mandated by the cloud provider?

Vinay: Now the next thing: cloud as an operating model. Has that taken hold? I mean, the right workload in the right place as a policy, right? Whether it's in a public cloud, hosted, private, colo, on-prem Kubernetes.

Sanjeev: I would say so. I think cloud went, many years ago, from being a destination, i.e. AWS, Google Cloud Storage, Azure Blob Storage, ADLS, to being an operating model that can be brought on-prem. So I get the benefits of serverless and cloud-native patterns, but I know I can only scale up to the hardware I have, so I know there's a ceiling on cost. Of course, there are options where you can burst into the public cloud, so you can configure it in different ways. But cloud is definitely an operating model. And it's easier if you have Kubernetes as the underlying engine, because it's much more modern and understands all of that.
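The "elasticity with a known ceiling" point maps cleanly onto Kubernetes primitives. A sketch, with made-up workload names: a HorizontalPodAutoscaler gives cloud-style autoscaling on-prem, while `maxReplicas` encodes the hard ceiling set by the hardware you own, so the cost cap is explicit rather than open-ended.

```yaml
# Hypothetical HPA: cloud operating model on-prem, with an explicit ceiling.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: analytics-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: analytics-api
  minReplicas: 2
  maxReplicas: 12        # bounded by the racks you have; cost ceiling is known
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```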

Vinay: So looking forward to 2026. The road ahead. What do we watch specifically in infra workload placement? So hybrid becomes the default architectural pattern for AI and data, right? Not multi-cloud.

Sanjeev: Hybrid multi-cloud. I think cloud providers are all signing fast-interconnect deals with each other, so it matters less where you are. If there are no egress costs from Oracle to Microsoft Azure, then I can leave my data in Oracle and use Power BI or Azure ML or AI on it. The same thing just happened between AWS and Google, and it will happen with Microsoft in 2026. So that's one of the predictions: hybrid multi-cloud is going to be big. I would state that where your data resides is going to become less and less important.

Vinay: The other one, which you actually predicted earlier: sovereign AI and sovereign cloud offerings will mature from announcements to enterprise rollouts. And we expect buyers to demand operational sovereignty, not just residency, because these are two different concepts that people have been confusing for a long time. Residency versus sovereignty.

Sanjeev: Sovereignty. Yes, I love that. And then number three is that AI will see a massive investment in reinforcement learning to improve its reliability and accuracy.

Vinay: What about… I mean, there are a lot of announcements for data centers, but the number actually being built is like 17% of what's been announced; that's what I read. So how much do you think power and energy constraints will shape cloud regions, pricing, and workload placement?

Sanjeev: So, you know, it's a tough question to answer. Canada, which sits to the north of where I live, has plenty of space and power, or at least natural power. But do we have transmission? And you mentioned transmission earlier. You see, American companies are going to Saudi Arabia, which has a lot of oil, money, land; it's all there. But how do you get that power out? So I think power is going to be a constraining factor in the US, and it will shape where data centers come up. In fact, you see, AWS, or really all the cloud providers, have publicly said: we will not have capacity for you, consumers; we are going to sign up with these companies.

Vinay: In Australia, there's a very famous company there. These are, by the way, Bitcoin and crypto mining companies, and now they're turning their crypto mining operations into data centers and leasing the capacity to hyperscalers. What should we watch specifically in the database-as-a-service, DBOps stack? We talked about vector: it's a feature, not a product category. Even MariaDB introduced vector last year, and we know pgvector is very popular. How do you see this moving?

Sanjeev: I don't see any major moves in DBaaS, database as a service, but I do see them getting a lot more efficient. For example, having vectors is one thing, but how good your vector index is remains a work in progress. So DiskANN from Microsoft, ScaNN from Google: they are all making it better and better. So fewer tokens are needed, there's more caching of prompts, so there are fewer round trips to an LLM. All of that we will see. AI being used to optimize my database, like the query optimizer and planner, for example. Backups and recovery, intelligent storage: based on usage patterns, it can move data to cheaper storage. So we will see DBaaS become a lot more sophisticated.
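To see why index quality matters, here is a toy brute-force vector search in pure Python with made-up three-dimensional embeddings. Exact search like this scans every vector, O(n) per query; indexes such as DiskANN or ScaNN exist precisely to replace this scan with an approximate lookup that stays fast at millions of vectors.

```python
# Brute-force nearest-neighbor search by cosine similarity.
# This is the baseline that ANN indexes (DiskANN, ScaNN, pgvector's
# HNSW/IVFFlat) approximate much faster at scale.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical document embeddings (real ones have hundreds of dimensions).
corpus = {
    "doc_pricing":  [0.9, 0.1, 0.0],
    "doc_security": [0.1, 0.8, 0.2],
    "doc_billing":  [0.8, 0.2, 0.1],
}

def search(query, k=2):
    """Rank every document against the query vector; keep the top k."""
    ranked = sorted(corpus, key=lambda d: cosine(query, corpus[d]), reverse=True)
    return ranked[:k]

print(search([1.0, 0.0, 0.0]))  # documents closest to a "pricing-like" query
```

Swapping this loop for an approximate index trades a little recall for orders of magnitude less work per query, which is exactly the efficiency curve the DBaaS vendors are competing on.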

Vinay: Yeah. But no major new feature comes to my mind right now. Another thing you already mentioned is the sovereignty architectural pattern: customer-managed keys, isolated control planes and operators. Although the fact is, if you are still on a US hyperscaler and you get sanctioned, then even if you have the key, they can shut down access…

Sanjeev: Correct. You are in trouble, right? Yeah. Exactly. So there’s customer managed key, but in the cloud provider, may not be such a good idea. In such cases, the companies that are hugely regulated… I don’t want to make it into a scare thing, but if you are hugely regulated, then you should probably manage your own keys.

Vinay: And finally, you also mentioned that we will see agent-friendly operations: write and policy approvals, audit trails, reliability, human-in-the-loop. Well, let's wrap up. Sanjeev, thanks again, this is great. We've looked at 2025. It's been a year of crazy investments between the AI model builders, hyperscalers, and chipmakers, plus massive geopolitical changes, which are fueling the sovereignty debate. Hybrid seems to be the way to go for cloud and AI. I'll leave you the last words.

Sanjeev: Buckle up is all I can say. This is going to be the year when shit hits the fan.

Vinay: I’m not sure if I’m supposed to say that, but I did, so you can edit it if you want. Well, thank you. You know, thank you so much, Sanjeev. And thank you all for listening and have a great 2026.

Sanjeev: Thank you, everyone. Thanks for joining once again. Bye.

Guest-at-a-Glance

Name: Sanjeev Mohan
What he does: Sanjeev is the principal analyst at SanjMo.
Website: SanjMo
Noteworthy: Sanjeev has been in the data management space since the beginning of his career. He has worked at Oracle, and before SanjMo, he was a vice president at Gartner Research.
You can find Sanjeev Mohan on LinkedIn