Cloud reinterpreted – from destination to philosophy and OpenStack

May 16, 2024
Todd Robinson

In this episode, we explore the shifting paradigms of cloud with Todd Robinson, President and Co-founder at OpenMetal, an award-winning open-source cloud provider. He joins Vinay to unpack how traditional views of the cloud as a destination or location are evolving and, as a result, lighting a fire under businesses to adapt and rethink their cloud strategies.

They discuss cloud as a philosophy, the ongoing open-source debate, recent market developments that have motivated organizations to reconsider their strategies, how OpenStack has matured and how that maturity continues to improve accessibility, and much, much more. Let’s dive into some standout moments:

Key insights

⚡Cloud as a philosophy, not a destination

Historically, ‘cloud’ has been synonymous with public cloud services like those provided by the big hyperscalers, e.g., AWS and Google Cloud. However, there’s growing advocacy for a broader definition that views the cloud as a versatile operating model, a philosophy rather than a place. This approach is increasingly relevant today as the localization of services and data sovereignty become central to enterprise strategies.

⚡Evolving business context drives a rethink 

Changing business contexts and technological advancements have created demand for a more flexible and financially viable approach to cloud. The dramatic increase in server capabilities and the decreasing cost of hardware are prompting companies to rethink traditional IaaS implementations and consider more controlled, versatile alternatives.

⚡ Open-source solutions are enabling independence

The increasing accessibility and capabilities of open-source solutions are now enabling more companies to break away from a hyperscaler-only implementation strategy. Tech like OpenStack and Kubernetes provide the foundation for many large companies and are now more accessible to smaller organizations. These tools make it easier to operate cloud environments independently – giving all businesses the chance to achieve high availability, scalability, and cost-efficiency while maintaining control over their infrastructure.

Episode highlights

💡Cloud as a philosophy [00:03:36]

Vinay frames cloud as an operational model; Todd, as a philosophy. Whatever your noun, one thing is clear: cloud is no longer a destination. This framing breaks away from the traditional notion of the cloud as a place controlled by a few hyperscalers. Reframed this way, the false cloud-versus-no-cloud dichotomy disappears; instead, organizations can pursue a more nuanced, sophisticated implementation.

💡Which way the wind is blowing for hyperscaler pricing [00:16:28]

Historically stable, cloud service prices from major hyperscalers have seen slight increases over the years. In today’s economic climate, with rising inflation and high interest rates, should we expect price hikes? And with significant pressure on egress fees from the market and governments, will this be where the hyperscalers bend?

💡The open-source viability debate continues [00:30:41]

The shift to the cloud has left open-source vendors, who weren’t prepared for the SaaSification of database ops, scrambling. Add massive injections of venture funding to the mix, forcing a re-evaluation of their business models, and the pressure is on!

Licenses like MongoDB’s SSPL raise tough questions about balancing open-source ideals with the need for revenue. More recently, the Linux Foundation’s backing of the Redis fork, Valkey, has only turned up the heat. Todd and Vinay discuss the need for a more nuanced approach, with both big and small actions, to balance open-source accessibility with the support necessary to ensure long-term viability.

💡 So you want an OpenStack cloud? [00:44:23]

Challenges and complexities abound in designing and deploying a large-scale, production-ready OpenStack cloud from scratch. But Todd contrasts this with the ease of running a well-built system, which requires minimal resources. He advocates for leveraging automated deployment tools and professional services up front to get you where you need to be, and highlights what that looks like.

💡The future of open-source cloud [01:12:35] 

The talk then looks to the future, with a key focus on the expanding diversity of cloud models available to organizations of all sizes, especially given recent market and technological developments. It underscores the importance of a) continuous development of expected native features in software like OpenStack, b) adding value-added services, and c) the community’s role in both.

💡Recommendations on devising cloud strategy [01:23:13] 

We wrap up with a request for Todd to offer a recommendation for those devising cloud strategies. Fundamentally, he thinks everyone should be aware of, explore, and seriously consider the alternative cloud deployment models now available, noting that open-source solutions have matured right alongside the hyperscalers and make those models easier to adopt than ever before.

Here’s the full transcript:

Vinay: Hello and welcome to another episode of Sovereign DBaaS Decoded. Brought to you by Severalnines. I’m Vinay Joosery, co-founder and CEO of Severalnines. Our guest today is Todd Robinson. 

Todd is president and co-founder of InMotion and OpenMetal. Thanks for joining us today. So, can you tell us a little bit about yourself and what you do?

Todd: Thank you for having me. I’m excited to be on with Severalnines, because we’re both, joint kind of customers, you could say, of each other. I am very excited about how Severalnines has helped out the OpenStack world by providing both inspiration, I guess you could say, for a good database as a service, and of course, actually running databases as a service for customers. 

So, I’m excited to be here, but my background actually, yeah, I come from 20-some years in hosting, and at that time we were always automating infrastructure and trying to automate it as best we could. 

And so we actually created a company outside of that, based upon the automation that we were using inside of InMotion, as you mentioned, and so, yeah, I’m kind of here now, today, talking about OpenMetal, and my views, I think you could say, are really educated by the years in hosting and in automated infrastructure and thus, coming in from OpenMetal.

Vinay: So, cloud, I mean, you know, you’ve been in the business for, you know, two decades, I don’t know if you agree, but cloud is kind of thought of as a destination. Right? Because we often think public cloud, we often think, you know, hyperscalers, but does that still hold true today? Right. 2024.

So, that’s kind of what we want to discuss in this episode: how conceptions of cloud need to go beyond narrow definitions based on location and, you know, expand to be thought of as an operating model that can be applied across multiple infrastructures.

And, you know, that’s the thing, right? How can we apply it across multiple environments and how is open-source helping to make that happen? So, let’s jump right in. I mean, is it changing, right? This kind of historical idea that cloud has been thought of as a destination. Is it changing?

Todd: I think you’re hitting that right on the head there, Vinay. And it is a historical reason that people are thinking of it as a destination. And, you know, cloud, back in the day when the big, mega clouds, were first coming out, they really had a superior process. They had a superior product, you know, so they had great automation. 

And so it kind of became synonymous with, like, oh, if I need to go to the cloud, I’ve got to move out. And AWS, of course, is the original. And that has been pervasive even though, at very similar times, the cloud was being built elsewhere, OpenStack being a great example, by some original founders, Rackspace and of course, the government, NASA.

But the hyperscalers they had a great product, admittedly so, and so over the years, people said, okay, I’ve got to move to the cloud to get this great automation. Well, of course, that’s been a long time since that occurred.

And cloud systems, cloud software, OpenStack, CloudStack, various systems like that have dramatically matured. And so these systems are available in a mature fashion today. And so now it’s a lot of times educating people to go like, hey, look, you got to remember, underneath any of those hyperscalers, it’s all the open-source stuff. 

I mean, it really honestly is. And many of them have obviously taken a lot of effort to hide that it’s actually the open-source systems that are underneath it that are providing their automation and providing this great user experience. But in reality, it is the open-source systems underneath. And so there are many providers that you can go to nowadays that can provide you very, very similar experiences. You know, OpenMetal just being one of them.

And so it is, I think, now up to us, who are not in that field of AWS and GCP, to help people understand that they framed it that way intentionally. 

It’s really like a marketing ploy, almost – but you don’t have to move to them. You can bring the cloud to you. The cloud is a philosophy. It’s like, yes, automate things. Make them easy for your developers to get a hold of resources trivially, give them APIs, give them all these things that allow them to very quickly have the resources that they need.

This is very common in any of these automated systems. You know, OpenStack again, just being one of them. But OpenStack provides the foundation for many large companies. And now it’s much more available for smaller organizations to get a hold of.

So, yeah, Vinay, I’ve been, I think, hitting on the spots that people typically do and say like, hey, it’s a philosophy, you know. And then you hear people say, well, I’m going to do cloud repatriation. Well, with cloud repatriation, you’re not really going to leave the cloud, because it’s a philosophy. It’s not a location.

Absolutely, your systems teams, your engineering teams, your development teams, they should all have easily available automated infrastructure accessible to them. That’s the cloud. It’s not where you are. It’s the philosophy that you’re using.

Vinay: I mean, in a way it feels a little bit, I don’t know, strange that the hyperscalers kind of, you know, they took the whole narrative, the whole cloud narrative. They owned that whole narrative for, for so long. But I think, you know, we can agree that, you know, this notion of cloud as a destination is kind of changing.

So, we can agree to that. Now, why is that? I mean, I think, you know, you did mention there are open-source components. There’s a lot of automation software available today that maybe wasn’t available, you know, 15 years ago.

But today, I mean, is it a changing business environment, where cost is being more maybe, you know, scrutinized, or the risks like, increased regulation in data privacy?

Todd: You know, I think, privacy, of course, and regulation, I think in particular, too, sometimes in the United States, we’re a little bit distant from that because we, I guess, are just less sensitive, partly because the hyperscalers are kind of born out of the United States. But if you were to go around the world, I think you’re going to find a very different take on it. 

The European Union and its member states, they’re really saying, like, no, we take our privacy more seriously than those in the United States do. And they’ve created a lot of laws and moves and even funding for local organizations to help, you know, kind of raise up their infrastructure providers.

And the skills and the companies coming from those regions. So, I would say, yes, of course, that is some of it. And it’s just different regions in the world putting down their foot and saying like, “no, you’re going to store our information here.” Stop putting it over here in the United States.

But then I would also say, I mean, cost is always a pain point. And I think the big clouds, the original narrative was very much like, move to the cloud. You’re going to have great agility and you’re going to save money. Well, as it turned out, one of those was true. You had great agility, and the public cloud for many things is wonderful, and we have customers that are public cloud providers that run on top of OpenMetal. 

And it’s great for certain use cases, it’s literally one of the best things for getting things really quickly up and running, doing your POCs, understanding how things are going to work. Public cloud is brilliant, and it does absolutely save money in those cases.

But once you hit a tipping point, when you get to a scale where you’re now paying for automation, every single time you touch something, that becomes very, very expensive and again, at scale, typically where now you’re not getting the benefit of that additional scale, where normally your engineering costs drop dramatically, right? 

Like, oh, I developed, I built this thing and now I get to use it or sell it many, many times. I get the advantage of the big scale. Well, when you’re on the public clouds, they get the advantage of your big scale, and I think it’s now a lot of the CFOs are starting to get angry at the CTOs, going, like, how come this thing just keeps climbing, you know? Can’t we do something about this? I’m used to it as I get greater scale, I get greater efficiency. 

And in many cases, that’s not the case with the big public clouds. And so I would say it’s mostly the pain that is really causing this is that, as it turns out, the mega cloud’s narrative about cost savings, you know, it’s not true once you hit a tipping point.

And so that’s I think what’s really pulling people back to go like, “wait a minute!” I think I need to look at the new cloud deployment models once they’ve kind of realized like, no, no, I can move from the public clouds and still be cloud native. 

Now, what are these other cloud deployment models that are available to me? You know, and then that’s where people start down that path of, like, okay, now I have to go and look and really understand how modern the systems are, and the providers that are using those systems. And what are my options nowadays?

Vinay: So, what do you make of the, you know, I mean, you talked about the cloud costing a lot. I guess 10, 15 years ago, there was a difference between what kind of hardware you would get then and what you could get now.

But you know, there is a discrepancy now between the hardware costs, right, and the cost of cloud. I mean, you know, I don’t know, 10 years ago, maybe average servers had, you know, four cores, eight gigs of RAM. Nowadays the servers that you can buy easily have, you know, 30 to 64 cores and hundreds of gigs of RAM, and, you know, server and storage density means we can pack quite a lot of processing power into smaller spaces.

And in the last decade or so, you know, the cost of solid state, right, has gone down by like 10x. So effectively you’re getting 10x more processing power for less, right? If you buy hardware. 

At the same time, if you look at the IaaS prices, they’ve kind of gone up over time. And that discrepancy, right, between the hardware cost and IaaS pricing, that makes enterprises wonder if they should do their own thing. I mean, you know, can you talk a bit about that?

Todd: Yeah, there’s always a couple things that I bring up when we’re looking at those things. So absolutely, the hardware has dramatically improved, and from a cost standpoint, I would almost say hardware has in some ways stayed at a relatively similar cost. 

But the size and the capabilities, like you said, the cores, we’re even now looking at 256-core Ampere chips, literal cores, not just threads, but actual cores. But adoption of that type of technology does require a pretty decent amount of engineering work. 

And so there is always going to be this, you have to be at a certain scale in order to get the benefits and realize the cost savings in that, you know, just negotiating, getting the right stuff, figuring out that it’s going to work properly with KVM or your virtualization system. There is a certain level of complexity inside of that.

And so adopting these new technologies, there is this cost. That’s why I typically go back to: there’s always a tipping point for workloads, you know, and with public cloud there are definitely things there. Again, we’re like a mixture company, offering on-demand private cloud or on-demand bare metal that people turn into essentially a public cloud of their own. 

So, they’re like, I’m going to have basically my own AWS when I come to OpenMetal. And so for us, we’re very driven now because we have essentially a fixed cost model. So, we’re trying to take that hardware, sell that hardware at the hardware level to a customer, include all the engineering that I’m talking about, and having the cloud and having it be cloud-native, already on top of that.

But our model is a fixed-cost model.

We want companies, and we believe that it’s super important inside of our, cloud field to have this fixed cost option so that these larger organizations, once they hit that tipping point, they can go, oh, wait a minute, I can get a hold of a cloud-native system at a much more predictable cost. 

For me, it is, yes, the big cloud companies. I’ll pick on AWS just because it’s an easy one. AWS provides a large portion of, and in some years the only, profit that Amazon has as an overall organization. 

And so that’s partly the reason why the public cloud costs don’t go down: it’s a cash cow for AWS, or for Amazon. And Amazon actually, as a business, they use that to fuel their competitive Amazon.com shop against other companies, which I still find bizarre when I see sometimes these big retailers, they’re like, “we’re super excited”.

We just went on the AWS cloud and I’m like, you know, they’re taking your money and funding the Amazon shop with the profit margin of AWS. I’m like “uhhh, wait a second, guys.”

This is why I say like, I get pretty fired up to say that there needs to be a competitor like this, and we’re not the only ones there. Luckily, there are other competitors like this that are coming up to say like, no, once you hit a certain scale, this is the right place to be. And honestly, stop buying from your competitor. That’s never a good idea, right? 

Like, this is one of those things where you’re like, you probably should know better. And so yeah, I think I’ve segued a little bit, I don’t know if I exactly answered your question, so Vinay, please ask that again if I have gone off into left field on you here.

I would just say yes, there’s tons more power, tons more resources. AI is now in, you know, you need to think about that. You need to understand what’s going to happen there. But the same thing can get very, very expensive. You need to understand that there’s a great time to be in the public cloud providers for it. 

But you also need to know that we as the private, infrastructure people, we’re also going after it. We’ve got a private AI cluster coming out shortly. We know that there needs to be a balance within the ecosystem for everybody to have the benefits of the cloud, but also at the reasonable price once you hit that tipping point.

Vinay: Now, something a little bit more high level, you know. I mean, if you look at the current economic environment, right, there’s been increased inflation, high interest rates. I looked at the historical prices from the hyperscalers, and they’ve kind of gone up, you know, slightly over the years.

I think they were high like 12 years ago and then it came down and then it’s been kind of flat, and then it’s kind of gone up, you know, slightly.

Will we see sharp increases in cloud service prices? Do you think?

Todd: That’s an interesting one. I don’t know if I can look that far forward, but this is one of the things that I would expect to happen. The three big ones here in the United States, right, they clearly don’t, like, actually collude or anything like that. 

But when you look at their pricing structure, it’s very clear that they’ve just kind of said, like, well, don’t be too much different from that one or this one, because they want those margins in the long run. What I think, from the hyperscaler side, is they’re probably most concerned with the pressure on egress, the money that they make on egress. 

And it’s simply because, yeah, for a lot of competitors like us, egress is a very inexpensive thing nowadays. It has actually been inexpensive for a long time. And so for them, the margin is very, very large, but also they’re used to it. But there’s a lot of pressure.

We put the pressure as much as we possibly can. We want people to go like, wait a minute, why am I paying this crazy number for egress when egress should be basically nothing? So, my guess is they’re going to, that’s the place that they’re going to have to react. 

And so that you will see that egress pricing come down from them. But I don’t think you’re going to see it in the other tax, you know, like AI stuff and things like that. I think you’re going to see they’re going to stick with their model. And underlying, of course, they’re making their own things less and less expensive. And that’s what’s helping them with their margins.

But I, that would be the only place I might, maybe like maybe two years from now, you might expect them to start to react to the market and start to push their egress pricing to be somewhat more competitive. 

I kind of hope they don’t, honestly, because that’s one of the things where people go, you know what? “I’m out of here!” I’ve got to go to somebody that I have a reasonable egress cost from. Yeah, that would be my guess. I probably don’t have enough info to predict too much for that. 

Vinay: And actually, you know, there is the Data Act, the European Data Act, right. And since the beginning of this year, I think I’ve seen Microsoft and Google remove egress costs when you’re leaving them. I think the EU Data Act probably prompted that. Right.

Because they want to nurture competition, right. You know, reduce lock-in effects from a large provider, and there’s a lot of scrutiny there.

So, I think we have seen one small aspect, but, you know, it’s not like somebody can just say, hey, I don’t like the egress, I’ll just move everything, and yeah, it’s free for us to move the data out. Right. Because it’s still…

Todd: This is honestly on our side, sometimes we basically have to give credit to the customer that’s incoming, because we know they’re going to have this problem on the way out, and so sometimes you just have to… But they know the deal; a company has always known there’s going to be a certain amount of vendor lock-in and move-out cost associated with it.

And so usually you just got to talk to the company that’s helping you out when they’re trying to solve that.

I mean, here’s the thing, and maybe I’ll go too far, but in the big cloud world, all the big clouds have cross connects with each other. That egress, in most cases, it’s not even going onto the internet. It’s like hopping down a couple of switches and routers and going into the other location. It’s not egress. 

This is because they have cross connections with each other. And they don’t pay; they just take the money as it moves back and forth, you know? So anyways, I hope I don’t get yelled at for that one, because I think everybody knows that, but it’s kind of one of those things that, until you’re really faced with it, you go, oh wow, really? They’re charging me egress, but it’s actually not going on the internet. It’s just going into a cross connect in one of these meet-me rooms.

Vinay: So, now a cloud is not just hardware, right? I mean, you have the infrastructure automation tooling, right? Which is available today that makes it, you know, feasible. Do you see the increasing cost discrepancy and the available tooling, right, making, let’s say, repatriation more attractive now?

I will take your point, right. You mentioned, it’s not really repatriation, because it’s from cloud to cloud. Right. But let’s say, what I’m saying, you know, what I mean by repatriation is moving it from a hyperscaler to somewhere else.

So, you know, do you see that kind of discrepancy in the tooling making those cases more attractive? And actually, do you see an increase in your business in those type of cases?

Todd: Depending on the use cases of the customers, you know, different providers, like OpenStack… And actually, Vinay, if you wouldn’t mind, just because people that I’d like to share this video with don’t know as much about Severalnines, I’m going to ask you a question just to kind of help fill this in. 

But for example, in the OpenStack world, we don’t have a strong database as a service, right? And there’s a system called Trove that’s kind of born out of the open-source side of it. 

But expertise, in database as a service, is both the automation of creation of it and having the resources be in the right place. But you must also have really solid knowledge of the databases that you’re automating.

And so the OpenStack world actually, too, doesn’t have as strong a database as a service as some of these other providers.

Now, those other providers are typically just automating on top of an open-source version, or they’re now, you know, like 100% PostgreSQL compatible, even though it’s been built a little bit differently. So if you wouldn’t mind, maybe I’ll kick that back to you, because I think you guys in particular are very important for the OpenStack world because of the work you do.

But yeah, if you don’t mind, run through that a little bit just so that viewers of mine are familiar then with what you all are doing.

Vinay: So, I mean, you can get Database as a service, right, from AWS, Google Cloud… And the problem is when you move it to your OpenStack or even, you know, to a local location where you’re using some of the software, right, you don’t have that. You don’t get that kind of automation. So yes, you can do your, you can do it yourself, right? You can do Ansible, you know, write your own scripts. If you’re in Kubernetes, some people are using these operators there. 

Although, all of the operators, they don’t cover the full range of Day 2 operations. They do a couple of things, but they don’t help you do everything you need. And, you know, in a previous podcast, we had a Kubernetes expert as a guest, and basically we found out that you’d probably have to fork some kind of operator out there and then just, you know, build on it. And then, when there are updates, you need to keep that going.

So yeah, so what we do is, you know, we help, you can take our software and do DBaaS. You can do it in the hyperscalers, you can do it on your OpenStack, you can do it on bare metal as well, right. On-prem, traditional systems… 

So, we’re kind of an automation layer for databases, which is agnostic of the actual underlying environment. And it helps, because the problem is, if you’re in, for example, AWS and you’re stuck in Aurora, then you can’t go, you know, out of AWS and expect the same Aurora. It doesn’t exist, right. Then you’ve got to pick up some existing tools.

OpenStack has been very good, you know, I mean, it’s very popular. It’s been a very good, let’s say, you know, infrastructure play for us to have the, you know, the database as a service on top of, and it gives you the same kind of, you know, database services that you can get from the hyperscalers.

Todd: No, and so, for familiarity with that, if you would, speak to maybe some of the databases themselves, and then the ones where you would say, like, yeah, we do this great, and you’re going to love this part of what we do with this database.

Vinay: So, we have, you know, the common ones, traditional ones: MySQL, MariaDB, PostgreSQL. We have the NoSQL ones: MongoDB, you know, Redis, you know, Elasticsearch. And we also actually do, you know, SQL Server, Microsoft SQL Server, right. And then there’s a bunch of others; like, you know, PostgreSQL has some extensions, like Timescale, a popular one for time series… pgvector.

For, you know, AI kind of applications, so we do those as well. And the other thing is, you know, we can also see some of the providers changing their licenses. Like MongoDB changed their license, right, and it’s not open-source, it’s, you know, SSPL. Redis has also just changed their license, you know, OpenSearch… 

Sorry, you know, Elasticsearch also, you know, has a different license. But as an enterprise, you can actually use our software to automate these databases inside your own data centers or inside your own, you know, kind of clouds and private clouds.

So, from that perspective, even though it’s not open-source, it’s source available, you as an enterprise can still use it for internal use. So, that’s another advantage of, you know, having your own automation, because I think with MongoDB, Redis, Elastic…

I mean, these companies, they are concerned about the hyperscalers, you know, making huge amounts of money, right. And that’s why they’ve kind of changed their license.

I mean, if you look at, you know, I read, I think it was a Gartner report from last year. RDS does about, I don’t know, $7 billion for AWS. Aurora does $4 billion. Now, if the database market is $100 billion, that’s a tenth, a tenth of the database industry running on, you know, Aurora and/or RDS, right? That’s crazy.

So, you know, I can understand that all these other database vendors, you know, they’ve kind of, I mean, they’ve spent a lot of time developing the software. They want to, you know, make money out of it. But, you know, if the hyperscalers are the ones doing all the monetization, then it’s a problem. So, yes, they’ve changed the licensing, and that’s been that. 

But anyway, coming back to us: so you can, you know, automate pretty much most of the most common open-source databases, and also SQL Server. We don’t do Oracle, I will say, yet. Because Oracle customers, we’ve seen, have their own bubble in a way. They have all the tools. But, you know, we are not against adding that in the future.

Todd: Now and again, I would say to you, you all fill a gap for us, you know, in the OpenStack community. But I think you’re dead on, being able to help people run those databases at scale and have options, compared to, no, I’ve just got to go to a hyperscaler, or mega cloud, as I like calling them. 

And honestly, like, I know we probably don’t have the time to get into the business source license there, which, honestly, I’m still learning myself, and I’m trying to use some of the things that have been happening lately to form my own position on it. But, to be honest, the Redis change, I thought, and the Linux Foundation’s involvement, with GCP and AWS essentially immediately forking and spawning this Valkey.

That, to me, doesn’t feel right, to have the Linux Foundation, what feels like, working for GCP and AWS. I’m kind of like, wait a minute, what’s going on?

I’ll also put out there that I don’t think I’m familiar enough with it – I think you used a slightly different phrase, so maybe you could explain it to me – the Business Source License, just so I make sure I even understand the changes versus open-source.

But I would also say: if you don’t have viable business models for these great software companies and the engineers and passionate people behind them, you’re not going to have good products, right?

There has to be a place for solid business models that encourage this type of stuff to be built, to be open-source, and to be available for people to build upon, grab, learn from, and make a life out of because it was something easy they could pick up and learn.

I don’t know – I don’t have a settled view of that yet, and maybe someday we could chat again on that type of thing, because I think you’re probably very familiar with it, especially coming from the database world.

Vinay: I think the main change has been with the Business Source License and the SSPL from MongoDB. The problem is you’re forbidding a category of users from basically using it, and you can’t do that in open-source – you can’t say, “this category, you don’t get these rights.”

Under open-source, you have rights, and those rights are universal. That’s kind of the problem. But on the other side, the database companies are typically venture-funded – well, they’ve got to make money, they’ve got to monetize.

So, from that perspective, it’s like: okay, if you’re going to offer this as a service, then according to the SSPL you’ve got to open-source pretty much your whole backend. So Amazon would have to basically open-source their whole cloud automation magic. Which they ain’t going to do.

Todd: Open-source the skin that they’ve put on all the other open-source that they’re using.

And again, in many cases these companies are also responsible for great open-source work, of course. There’s a good, rich history in many of these. I mean, Red Hat took a lot of heat lately for switching out CentOS and making it more complicated for people to use CentOS. But at their roots, many of them still do wonderful things in the open-source world, so there’s so much nuance in there.

But I would also say it does feel like there should be some changes. Because if, like you said, the open-source fundamental is that anybody can use it for anything, what happened in a practical sense is that one organization – or three – that is massive and dominant has carved out and started making its own rules around it, and then used the Linux Foundation as part of that approach.

Then you’ve got to go, well, uh oh – is that working the right way? So maybe there does need to be some change in how open-source views this. Again, I try to use these things to challenge myself, because I’m a pretty open-source person where I’m like, yep, put it out there.

If somebody takes it, somebody takes it, and hopefully they do good work with it. A lot of times the open-source people understand that you have to give back – at least, if you’re not going to work on it and contribute back, maybe figure out how to buy a support contract from them.

Even if your team is already super good at it, that’s fine – buy something from them, because it supports their business model. But that’s easy to say as a relatively small organization compared to these big hyperscalers, where that kind of culture can be lost.

Am I understanding right that Microsoft and Azure actually licensed Redis? I think they got on board with it.

So maybe the philosophy is that someone at the top said, you know what – maybe they got infected, I guess you could say, by the GitHub world, where they’re like, no, you need to be more open-source. Maybe Microsoft takes that more seriously and it’s more in their culture – which is a bizarre thing for me to say out loud, I realize, as I’m saying it. But maybe they have gotten that into their culture and said, you know what, let’s just do business with them.

And yes, I get that we’ll have to give, say, $100 million of the $7 billion or whatever you said we’re making off some of those databases to another company. Like, oh no.

Vinay: So, I want to talk about a mind shift. All this to say that cloud is no longer a destination but an operating model, right? But what are the implications

of that shift in mindset? For me, a cloud operating model implies that I would want to use the same automation in my data center as well as in other environments,

whether that’s public cloud or a managed private cloud – for instance, using technologies like OpenStack and Kubernetes, because these technologies support that shift from cloud as a destination to cloud as an operating model. What’s your take on that mindset shift among enterprises?

Todd: I think it’s here, and I think it’s here because years ago some of the developers said, we want to be API-first – our applications are going to be API-first. I’m 100% in that camp: behind the scenes, these systems can be many things, running their own approach or philosophy.

But in front there’s an API, and people have followed standards so well that the API is there, which allows things like Terraform, for example, to automate infrastructure on pretty much any provider out there, including OpenStack. So with a few basic changes, you can take an infrastructure that’s been built on cloud A and replicate it over on OpenStack.

That’s really powerful, and it’s born out of the fact that Terraform knows how to talk to APIs and knows how to automate things. The infrastructure as code on locations A, B, and C presents itself to the developer – or whoever’s designing the infrastructure – in a very familiar fashion with very few changes.
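As a rough sketch of the portability Todd describes: the resource names below follow the real community `terraform-provider-openstack`, but the image, flavor, and network values are purely illustrative, not from the episode.

```hcl
terraform {
  required_providers {
    openstack = {
      source = "terraform-provider-openstack/openstack"
    }
  }
}

# Network and instance roughly mirroring what a VPC-plus-VM setup
# on "cloud A" would look like, re-expressed against OpenStack APIs.
resource "openstack_networking_network_v2" "app" {
  name = "app-net"
}

resource "openstack_compute_instance_v2" "web" {
  name        = "web-1"
  image_name  = "ubuntu-22.04" # stands in for an AMI-style image id
  flavor_name = "m1.medium"    # stands in for an instance type
  network {
    name = openstack_networking_network_v2.app.name
  }
}
```

The provider block is the main thing that changes between clouds; the shape of the infrastructure-as-code stays familiar, which is the point Todd is making.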

Behind the scenes, maybe OpenStack is great with Open vSwitch, a highly automated virtual networking system – and that’s in use all over the place. I honestly don’t know exactly what’s behind some of AWS’s stuff, but I would suspect it’s Open vSwitch, or OVN, the other one. Of course, they’ve put a layer on top of it, but then they’ve presented it with an API.

So as long as the APIs are consistent and people continue to publish and follow that pattern – and I think everybody’s in that boat – it’ll become easier and easier to move between different providers, and it will also let smaller providers avoid reinventing the wheel; they can just rely on something like Terraform or OpenTofu. Boy, it’s hard to keep track of everything that’s happening right now, honestly.

As long as you take a little time yourself, for the modifications you’ve made that are specific to you, to submit a pull request so your stuff can be automated by the common tools, like Ansible. Another one I mentioned before is Red Hat – I think Red Hat’s done a brilliant job with Ansible, and having it out there as a core open-source infrastructure tool is really valuable for everybody. Same thing: as a company, you can release a little guidance on how people can automate against your stuff.

Ansible is not necessarily API-first, whereas Terraform is, but I would fall in the camp that says we’re headed in a good direction. It does mean providers really have to work out what’s behind their API – but also remember, it’s going to have to be accessed through an API. Sometimes hardware automation people forget that: no, you’re actually going to have to provide an API for that.

Vinay: I think that requires a bit of engineering thinking. I remember back in the day, people would say use ODBC or JDBC when you’re talking to a database, because it’s the lingua franca of databases: you use one API to talk to multiple databases, and then it’s easy to switch, right?
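A small sketch of the “one API, many databases” idea Vinay recalls: Python’s DB-API 2.0 plays the role ODBC/JDBC played. The `sqlite3` module is used here only because it ships with Python; a different DB-API driver (e.g. `psycopg2` for PostgreSQL) keeps the same connect/cursor/execute/fetch pattern, with mostly the connection line and parameter style changing.

```python
import sqlite3

# The driver-specific part: which module you import and how you connect.
conn = sqlite3.connect(":memory:")

# The portable part: cursors, execute, parameters, fetch.
cur = conn.cursor()
cur.execute("CREATE TABLE t (x INTEGER)")
cur.execute("INSERT INTO t VALUES (?)", (42,))
row = cur.execute("SELECT x FROM t").fetchone()
print(row[0])  # → 42
conn.close()
```

That shared surface is what made switching databases feasible, which is the property Vinay is contrasting with provider-specific cloud APIs.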

I guess maybe the younger generation doesn’t have that memory, because some of them were born in the cloud and don’t know what came before. When you’re new in the industry and you see Amazon and AWS, that’s it – that’s the world, right?

I think I was reading somewhere that, you know, a network engineer who knows Amazon’s networking maybe will earn more than a network engineer who knows networking and can build you a network in any data center.

So Todd, coming back to OpenStack a little bit: as a provider based on that technology, can you enumerate the benefits an organization would get from basing their infrastructure on OpenStack?

Todd: For us, we’re of course an automated OpenStack-as-a-service provider, but also a bare metal provider, and we also do large storage clusters based on a technology called Ceph. What we’ve found is that companies are getting a blend from us, because sometimes big data doesn’t really need to live inside a cloud – it loves the hardware itself.

Most of the big data systems were born prior to, or alongside, the transition to the cloud. So they have great hardware management systems themselves, they have their own redundancy, and they know how to use the hardware directly. In some cases you don’t want a cloud getting in the way of that.

Sometimes the customer’s like, cool. We work with a great company that’s public on our site, hunt.io. They have these huge ClickHouse clusters.

Let ClickHouse do its thing directly on the hardware. For automation, it becomes: okay, ClickHouse, for example, likes to use object storage as a backend for its less-than-hot data. Oh yeah – guess what, we have giant object storage clusters; you can just spin those up and start using them.

For us, it’s both the philosophy we’ve carried – cloud is not a destination – and the expectations users bring, particularly when they’re coming from a mega cloud. My advice for providers out there is: you need to conform to those expectations.

You can’t make it hard for people to buy from you, for example.

If they’re like, hey, I need some object storage – they’re used to spinning up some code, calling a bucket, and putting information up there. So you need to be very familiar to them in that way.

Now, you don’t have to do it instantly, but you need to be nearly so, because a developer or engineer is like: I’m on a mission right now. I want to get this piece done. I’ve got to expand this cluster, or I need these additional resources, or something’s too slow and I need to scale it horizontally.

You do need to respect that cloud is now a mental expectation for how your system is going to work. That’s something we ourselves are honestly learning every day: how can we conform most predictably to that expectation?

Part of the cloud is – let’s say I’m a CTO at a company. I want to speed up my team, right? I want to empower them, give them access to this stuff. But I’m also a good CTO and I want to be friends with the CFO, which means you can’t blow your budget. You can’t hand out so much authority that accidents happen in a really negative way.

So for us, we’re constantly balancing: we need fixed-cost products so companies can be innovative without accidentally hurting themselves, but you also have to meet the expectations of a cloud-native user, who wants to hit some buttons and have resources available, or grab an API key and just start using things.

That’s very much what we’re doing, and I can’t say we’re perfect at it by any stretch, but it’s a philosophy we subscribe to. Bare metal: how quickly is the bare metal going to come online?

Cloud-init needs to run to load up whatever somebody expects to have on there. We also need to provide a suite of the most commonly needed things, so somebody doesn’t have to invent – or reinvent – that themselves. A lot of our engineers spend time making the standard use cases meet a cloud-native user’s expectations. That’s a pretty hard thing.

In a good way, cloud is not a destination – but when you subscribe to that, you then have to make the most common use cases very straightforward, with great documentation if users are going to be following something.

These users may be following something for the first time. The documentation needs to be very good, and the time between the idea and actually executing it needs to be relatively short. So yeah, it’s a heck of a challenge for us. We face it every day, because someone comes in with something new and we’re like, oh boy, we hadn’t thought of that. Okay, let’s get on that – automate it and make it easier for them.

Vinay: In a way, that’s the promise of OpenStack, right? As an enterprise, you want to be able to create your own cloud and have access to all this cloud-native functionality, as you mentioned. But that flexibility also makes it complex.

You mentioned you’re really trying to automate all these things to give your customers the perfect experience, but it’s tough, right? It’s complex to manage, complex to upgrade. I don’t know if it’s historical, but are these common issues that users still run into when building and running their own cloud with OpenStack?

Todd: Yeah, I would say so. If you’re going to engineer, design, and create your first OpenStack for large-scale production use, that is by far the hardest part – much harder than running it. Running a well-built OpenStack takes very few resources; you don’t spend a lot of your system administration and operations people’s time on a well-designed OpenStack.

But building a well-designed OpenStack for the first time is very difficult. That’s what we took out of the mix. Ours is an automated deployment: in 45 seconds, you can hit the button and have a three-server OpenStack backed by Ceph, with object storage, block storage, and even file system storage ready to go. You can immediately log into it.

Spin up a project, which is analogous to a virtual private cloud – you can spin up your virtual private cloud.

You can make your routers, make your networks, put your VMs on there. By the way, I would of course recommend doing this with Terraform: hit your Terraform button and it’ll spin up all those networks, the security groups, the firewall – it’ll do all of that for you.

Once it’s built and running well, OpenStack is brilliant. Building it for the first time – I always tell people, don’t do that. Go to a professional, somebody like us. There are some great providers out there that can get you from zero to a properly engineered OpenStack in a relatively short time. We’re still, I think, the only OpenStack on-demand company out there, but there are great providers, and the ecosystem is really strong.

But don’t try to go from zero to a well-designed OpenStack yourself – it’s actually quite difficult. Running a well-built one is not difficult at all. Then you asked about upgrades. We use something called Kolla Ansible, which gives you a containerized control plane. When you’re running a VM, that runs in the traditional way under KVM, straight down to the hardware virtualization.

There’s nothing in between. But things like the routers and the switches run in containers on various nodes inside your OpenStack cluster. Upgrades have actually become much easier now, because we’ve taken advantage of the OpenStack ecosystem.

The Kolla Ansible team is brilliant; they’ve taken advantage of the whole containerization move to make upgrades much, much easier. I’d say upgrades used to be an eight out of ten on the difficulty scale, ten being bad. It’s now down to the three or even two range.

For us, before we release upgrades, we run and validate them first on our own test systems, then on our own production systems – we run our own production on our platform. Then we use a set of very close customers to gauge how it will go before we release it to the general customer base. So the general customer base gets a vetted upgrade process. It’s just nowhere near as hard as it used to be.

But if you’re running an old version, you don’t have the luxury of that kind of move – it’s quite difficult to get up to date.

We’re running Bobcat, which is quite a recent version. It’s great. But if you have to come from, say, Rocky – a lot of people got stuck on Rocky. To be honest, I’m not sure how long ago that version is from. Quite a long time ago.

It’s a heavy lift to get off that older style onto the newer one, but once you’re on the newer style, it’s really nicely done. I definitely think the Open Infrastructure Foundation has done a great job with where they’ve landed on releases and how it’s done. So, compliments to them – it’s a much easier process today.

Vinay: Well, I remember being at the early OpenStack conferences, and upgrades were a common complaint – you would have them very often. And it’s a big project, a lot of components. What do I use? Which ones will stick around in two or three years? Those were the questions.

The industry has had time to mature. Now we have OpenStack, which is maturing and used in production by a lot of businesses and telcos. What about Kubernetes – is it a viable alternative to OpenStack?

Todd: For Kubernetes, we support the popular ways of doing it – the ones you might expect – but inside OpenStack, Kubernetes is also supported through something called Magnum. Magnum is an OpenStack-specific Kubernetes orchestration system.

You use it the same way: if you were a public cloud provider, you could have Kubernetes clusters spun up on-demand, based on OpenStack. That’s one route customers can go – they can use the native Magnum – but if they have their own flavors they love, you can do it either way.

With OpenStack, you can of course also just spin up containers inside VMs. So it’s common to just say, give me a server – and “server” in this case is just a VM. Then any of the orchestration systems can simply talk to the API – API-first is a core tenet of this whole system – and automatically place their various resources, from containers all the way up to pods and so on. You can also use stateful storage, available through the API, from OpenStack.

That’s inside OpenStack – but interestingly, I’ll follow this to the other extreme. OpenShift, as an example, is one of the Kubernetes automation flavors – I’m not sure if that’s the right term for it. OpenShift can natively use OpenStack at the VM level, but also natively use OpenStack at the hardware level.

I was just watching a recent OpenShift discussion, and the team was talking about how they actually run a lot of their Kubernetes on bare metal. You go, oh, interesting – so you’re automating the bare metal. And of course, about halfway through the discussion, the term Ironic came up.

Ironic is part of OpenStack – the more fundamental part that’s typically used to automate at the data center level, the bare metal automation. So underneath OpenShift’s bare metal installation supporting Kubernetes is actually OpenStack Ironic.

So OpenStack is mature, and it’s here to stay. It’s also one of those things you build on. When I first started watching that discussion, I thought, I’m pretty sure OpenStack is going to be underneath this.

I was just waiting to be sure, because what you don’t want is people reinventing the wheel. This has been solved at the hardware level.

Ironic is a brilliant, mature, stable system that the hardware vendors are very familiar with. They know they need to make sure their systems follow the open-source standards we’ve set at the hardware level, because other software depends on those standards – and Ironic is a big one – so that it can consume the hardware properly: IPMI, Redfish, and all these things are in the mix.

I’m not as deep at the most fundamental hardware level myself – I definitely trust my team on the Ironic side, and Ironic, of course, is what we use to solve these problems. But I would say it’s a great system.

It’s been out there a long time, and it’s in use for everything, all the way up to “I need to run my Kubernetes on bare metal” – well, guess what, you’re actually using OpenStack underneath. So yeah, I’m very excited about how mature OpenStack is. There are more and more additions to it, but a lot of times, just remember to go and look at OpenStack.

For the general population, OpenStack wouldn’t be a household name. But if you’re in the infrastructure business, it’s a vibrant, growing, successful software system backed by a great team at OpenInfra, with a lot there. Now, maybe the documentation could be a little better – I’ll always throw that out there – but the capabilities are really, really strong nowadays.

Vinay: Switching gears a little bit: prioritizing open-source through SaaSification comes with trade-offs. For databases, for example, there’s workload access. If I have an RDS service, I don’t have root access to the database server – I can only use the database APIs to run queries and so on.

But it means I can’t do much if the thing is down. I can’t log into the machine and check what’s going on; I have to call AWS support. So that’s one trade-off. The other trade-off is portability: RDS runs only on AWS.

If you want to move it over to some OpenStack-based cloud provider, you can’t. So, for those on the fence about implementing, say, a private cloud model versus the hyperscaler route, what considerations should they make for IaaS?

Todd: Yeah, for a lot of the common things you’re using – and again, a lot of this is driven by cost – they’re going to be very, very similar. Using a VM on any solid cloud system, OpenStack being one of them, is not too different.

I think I’d have to defer to you all when it comes to the databases, because many times RDS or something like it is advertised as compatible with such and such – and they do that, of course, for ease of moving in.

But you can also say that as long as you don’t get into their specific features that are no longer PostgreSQL-compatible or whatever, you can be portable out as well. But I’d turn to you, Vinay, and your team and companies like yours to say how you would actually navigate that.

I would also say, as I mentioned before, that for OpenStack database-as-a-service, the orchestration system is fine, but expertise at the database level is not as strong inside OpenStack. So, in a good way, I think you all help solve that for companies.

Vinay: My guess is it’s a bit harder at the database layer, but the VM or IaaS layer is maybe more portable. So let’s say an organization decides to go with a pure IaaS model. Would the everyday operational experience differ between an OpenStack-based cloud and a hyperscaler – or is there even a difference?

Todd: I’ll use a very obvious one: development workloads that, let’s say – just for ease I’ll say 99.9% uptime – need to be available and fast when the developers are using them and running things through their pipeline, but don’t need to be production grade.

When you buy something on the public cloud, you’re basically buying one type, right? It has to be production grade, because that’s the highest level it’s going to be used for. One of the easy examples: companies that come to us first with development workloads.

They’re like, hey, I’m trying to get away from this really big expense. One of the first things we ask is: your data – does it need to be super highly available, totally redundant, and spread across everywhere? The basic answer is no, because it’s actually a copy of the production data, and if we ever had a terrible disaster, we would just pull the production data back down again.

We have a copy already. So don’t make another one.

With our block storage, for example, you can run it in triplicate if it’s super important, in duplicate if it’s less so, or with erasure coding if you need to be super efficient and don’t mind a bit of a speed penalty. There are very distinct cost differences, because with triplicate, one usable drive’s worth of capacity requires three drives; duplicate only requires two. Guess what – you just saved 33%.
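The overhead arithmetic Todd describes can be sketched as follows for Ceph-style pools. The numbers are illustrative, and the 4+2 erasure-coding profile is an assumption, not something named in the episode.

```python
def raw_per_usable(scheme: str) -> float:
    """Raw drive capacity consumed per unit of usable capacity."""
    if scheme == "triplicate":   # 3x replication
        return 3.0
    if scheme == "duplicate":    # 2x replication
        return 2.0
    if scheme == "erasure-4+2":  # 4 data chunks + 2 coding chunks
        return 6 / 4
    raise ValueError(scheme)

# Moving a dev workload from triplicate to duplicate:
saving = 1 - raw_per_usable("duplicate") / raw_per_usable("triplicate")
print(f"raw capacity saved: {saving:.0%}")  # → 33%
```

Erasure coding at 4+2 is even leaner (1.5× raw per usable) at the cost of some speed, which matches the trade-off described above.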

So sometimes companies say, hey, this is going to be a big development system – please give me the lowest-cost storage that we trust at this level. Cool.

We’ll turn the system down from triplicate to duplicate, and there you go – you’ve got a bunch more storage space. Go for it. And at the most fundamental level, I’ll add one other point about public cloud versus private cloud.

Most typically, workloads inside a VM are going to average somewhere around 30% utilization, measured over a full day or a longer period.

So you’ll look at it and say: what I’m actually using is 30%. But I still need a 16GB, eight-vCPU VM, because at certain times of day, or for certain operations – backups at night or whatever – it’s going to spike and use all 16GB and eight vCPUs.

I’m kind of a green-thinking person: you owe it to yourself to use that other 70%, instead of buying another box to run a bunch of VMs that only run at 30%. People use the terminology, “oh, it’s oversubscribed.”

No – this is an efficiency metric. You can use that other 70% yourself for your other VMs. So go ahead and spin up a bunch of those 16GB, eight-vCPU VMs – in fact, you can spin up three of them, and at any given time each is only going to be using that 30%.

Now, you do need a certain scale to make that true, and you have to be sensitive to your workloads. You might say: actually, I’m going to run a bunch of math models all at exactly the same time, because that’s when the staff comes in to run their simulations.

That’s one of those cases where, no, for four hours it’s going to want 100% of its resources, so you need to be wise to that. And that’s why, for private clouds, the tipping point really does matter: you need a large enough set of workloads to take advantage of this.

But again, we talk to the customers when they’re coming in to tell them, like, okay, if this is your type of workload, we can tell you, like very quickly to go, like, you know what you can actually put 15 of those on there when you thought you could only put five.
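The consolidation math behind that claim can be sketched like this. The host size, headroom, and utilization figures here are illustrative assumptions, not OpenMetal’s actual sizing rules; the point is just that average utilization, not peak allocation, drives capacity when peaks don’t coincide.

```python
HOST_VCPUS = 64    # hypothetical host
VM_VCPUS = 8       # the "eight vCPU VM" from the example
AVG_UTIL = 0.30    # ~30% average utilization over a day
HEADROOM = 0.80    # keep 20% of the host free for spikes

# Naive sizing: allocate by peak reservation.
by_allocation = HOST_VCPUS // VM_VCPUS

# Sizing by observed average, with headroom for spikes.
by_average = int(HOST_VCPUS * HEADROOM / (VM_VCPUS * AVG_UTIL))

print(by_allocation, by_average)  # → 8 21
```

As Todd notes, this only holds if the spikes are uncorrelated; workloads that peak together (the simulation example) must be sized by peak, not average.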

And you can just grab the stats.

Most companies will have some kind of monitoring system telling them what’s actually happening inside their servers and VMs. Then it’s pretty easy to say: yeah, you’re way overpaying, because you’re only using 30% of it on average – which is a pretty typical average.

So that’s one thing companies definitely find: instead of that other 70% going back to your mega cloud to be sold to somebody else, it comes back to you, to use as you see fit.

Vinay: So, let’s move on to environments, Todd. We know about public cloud, and we know about on-prem data centers that an enterprise may own. What are the other models in between?

There’s colocation, there are managed service providers. Can you walk us through the different environments people have today?

Todd: So, cloud deployment models, I think, is what we’re talking about: the different ways you can get cloud, accepting that everybody really does need to be cloud native – we’ve gone there, and you want to stay there.

At its most fundamental level, you can build your own data center: figure out your power, your floor weight factors, and all the things necessary to construct a data center safely; get your connectivity, get your redundant A and B feeds from your local utilities. That’s the most fundamental level.

Then you go up to colocation, where you’re working with somebody who’s already done all the data center work. You lease a cage from them and put your 16 racks or so in there – or smaller arrangements, which you don’t see quite as much anymore, where you buy a half cab or a full cab.

Colocation, of course, is one step up from the on-premise, do-it-yourself data center. Then you get into the automated bare metal world, and I think if you were to look out there and talk to the analysts, automated bare metal is really climbing.

It's essentially: I can get my hardware quickly and easily, with really robust networking and really solid automation. They call it 'bare metal cloud'.

You'll hear people use this terminology. I get my bare metal, I get access to the box as if I were standing in front of it, and off I go. In the colocation space, by contrast, after you sign your colo agreement with your data center provider, you've got to get your gear in, get your network in, figure out who your connectivity providers are going to be, get your DDoS protection, etc. That's a lot of work, right?

So the bare metal providers, and we're one of those, have stepped in there to say: hey, you can now get this hardware on-demand, and then you do what you want with it.

In that bare metal world, you're typically not providing much at the application level; in many cases you may not even know what the customer is doing with it.

They're just using it directly. And then one level above that is this on-demand private cloud. Because many times what people are actually doing with the bare metal is virtualizing it, turning it into a system they can talk to through the normal APIs.

So they're figuring that out with everything from Proxmox on up. You hear people say: I've got to figure out how to get a cloud on Proxmox. Proxmox is a great system born from the home lab, and people are starting to use it in the data center now, maybe because they were familiar with it from their home labs.

But yeah, so private cloud is now a more common place to go.

And it's getting more and more automated. Now, with the VMware shake-up, because they were one of the players in this 'get my own cloud' space, it's a little hard to tell what they're doing, because they shook it up and then they took it back.

They said: okay, all the small providers, you're out of here. And now they seem to be stepping back from that a little to try to help smaller providers, though 'small' in their case is quite large, I think. But there is this whole world of hosted private cloud that fits in between bare metal and the public cloud providers. And I tend to put the mega cloud providers on a different scale than regular public cloud, than some of the other, smaller public cloud providers.


So the models now, with the introduction of the bare metal cloud providers, this automated hardware with really, really solid networking, and the hosted private cloud, those are really the two new ones that people need to be aware of.

Now, I think a lot of people are getting the idea that: I don't have to do colo, I can just get this hosted private cloud or this bare metal system. And you'll also see some of the colocation providers now offering this themselves. They have clearly read the writing on the wall: hey, cloud native, I need to make my colocation product more accessible. So they're stepping into the bare metal cloud world, too.

Vinay: Yeah, yeah. So, looking at it from an enterprise perspective: typically, enterprises use multiple environments. There's no enterprise that just does on-prem, or just does it one way.

So there might be a mix of owned data centers and one or more public clouds. And if I want to extend my operations across environments, say hybrid bare metal plus public IaaS, what are the challenges and opportunities of creating this, let's say, super cloud?

Todd: All right, I like it, the super cloud. You hear people talk about hybrid or multi-cloud when they're often trying to decide where workloads fit best.

I think, in a nice way, most cloud systems have this situation in mind. Even at the virtual private cloud level, OpenStack is like this. Depending on how you roll your OpenStack, and we do roll OpenStack this way, you can go into your virtual private cloud and there's VPN as a service, a point-to-point VPN that lets groups at the virtual private cloud level decide what they're going to be connected to.

So that's part of it: you can make sure that the environments that need to talk to each other do so trivially, over a safe connection. That's one of the fundamental building blocks.

All these cross connects, like I mentioned before, are in place. So if you're actually moving data back and forth, in many cases, if you choose a location on the East Coast, it's often literally in Ashburn, Virginia, which is a giant suite of data centers.

There must be hundreds of them, all within about a ten-square-mile spot. So sometimes the location question has been partly solved for you, based on the regions you're choosing.

When introducing different workloads, you really want to ask: what kind of performance do I need? What cost am I able to afford for this? And then, how am I going to take advantage of the scale as it grows?

And I would say a lot of companies are there. We actually got a customer recently who came to us because they were already a hybrid, enterprise-type business: they had data center space, they had public cloud space, and they had a lot of providers in the mix.

But one of their heads of sales came to us because, as a company, the policy had already been set from the CEO on down: you cannot put it on public cloud anymore. We're not making enough money there; in some cases we're losing money.

So the director of sales couldn't close the deal, because their data center was also full. Their colo space had filled up and they didn't have the resources to expand. And the company mantra had already become: you can't put it on non-profitable infrastructure, and you cannot put it on the public cloud.

So they came to us and just said: we don't have colo space, and you guys do the same stuff we do. They're big into automated infrastructure and running their own cloud systems in the colo.

So the business came to us simply because they were already familiar with this multi-cloud concept, but they needed somebody closer to being cloud native. Like: I need to make the deal today, which means I need to spin up the hardware tomorrow, and I can't in my colo. So I think there are going to be all these kinds of challenges.

But the CTOs and CFOs have definitely recognized that you need to have options. You need to understand what your options are. So for us, a big part of this is being out there explaining these different cloud deployment models that are available. And for me, I'm excited, because I think OpenStack is seeing a public-facing resurgence because of what VMware did and because of what the public cloud pricing and cost models are creating.

People are recognizing that there is an open-source solution to this, and that you can go back to your roots. A lot of the buyers we see want to go back to their roots on this.

They grew up being open-source people, and they would like to be closer to the technology again. So for us, it's exciting to see people recognizing: oh yeah, there are great open-source solutions for this, and I get to go back to that. I get to feel good about that.

Vinay: That's interesting. I mean, open-source cloud, what does the future of that look like? Because one key thing you mentioned earlier was that if you're running the old OpenStack, that's going to be tough.

It's harder. The new one has more packaging, more automation; it kind of runs at speed if you know how to put it together, maybe working with some experts in the field like yourselves. Will we see more people shifting over?

Diversifying their operational model, thanks to open-source? How do you see this going, especially in light of Broadcom? You talked about that as well; I think the whole VCPP program was cancelled, right?

Quite a few of our prospects have been mentioning that they're starting to search for alternatives, because VMware was in pretty much every single data center out there; they've been so big. So what are the implications? Will we see more consideration for open-source cloud software like OpenStack?

Todd: This is a live situation now, and the answer is "absolutely". It's happening already. OpenStack has always been backed by some of the greatest efficiency companies out there. Walmart, for example: a big OpenStack fan that runs huge OpenStack infrastructure. So you have somebody like that, who cares deeply about cost, and of course they do it themselves, because they have the scale.

And fundamentally, of course, it's a better way to do it when you have a commitment to excellence, cost-wise excellence, like they do. So yes, the systems are absolutely stable and have been growing behind the scenes. From a household consumer's viewpoint, it's invisible, but OpenStack has always been growing: more and more cores, more and more infrastructure being run with it.

But now OpenStack is coming back into the view of small and medium-sized businesses, because they need an alternative to something like VMware. And yes, absolutely, we're getting those exits onto OpenStack. There are companies like Hystax, Storware, and Trilio, who is actually a new partner of ours, that facilitate easy virtual-to-virtual transfers between cloud environments, to help people move out of VMware now.

And in many cases it's simply because they're required to: you lost your ability to run our license, so you need to get out. Which, again, I don't think was quite executed the way you would think it would be.

I get what they're doing: they need to raise revenue, so they're going to do that, and 100% they will, Broadcom that is. But I would say it's a live situation, and OpenStack is ready to help VMware users move on and keep running successful MSP companies and SaaS on top of it.

And, like I've mentioned, Trilio, Hystax, Storware, and companies like these are ready to help people move into clouds like ours.

Vinay: Now, is there a next unlock for these open-source-based clouds, to increase adoption even more? Do we need to somehow provide more than just IaaS, maybe platform as a service on top of that, to make it easier for organizations, or maybe some kind of workload specialization?

I think you mentioned that in some cases you don't want anything in between the application and the actual bare metal; you want to go more direct. So where do you see the next unlock when it comes to these open-source clouds?

Todd: Interesting. I mean, you do see, in our space, more of the OpenStack, and typically Ceph-based, clouds. They're getting easier and faster to deploy.

So I think more and more of that will continue as they get richer with features. Like today, I picked on database as a service: that's something OpenStack needs more of, and it needs folks like you inside of it helping, to bring it closer to parity with some of the other providers.

There are a few other things in there too, like native security applications. For us, we have to leverage some external ones, some of them open-source, to help our customers, because OpenStack itself doesn't have a security center the way some of these other providers do.

It doesn't natively have a transfer and migration center either. If you look at some of the big providers, you'll see these systems really natively integrated.

So I think the next step is for the community to recognize that this is happening, and for the community providers that do those things to get in there. They've got to get in with us and with other providers like us and say: hey, you guys are selling this stuff, how do we help you fill these gaps?

For me, we don't want to do everything. And honestly, we can't go head-to-head with AWS alone; come on, that's just not a thing. We have to do it as a community, as an open-source community. As open source, 100%, we have a dominant share of the world.

But we do need to come together as a group and get behind some of these platforms. We're hopeful; we have our community system, and we're hopeful people do that with us, because we can automate. We've been in data centers fundamentally for a long time. We know how to get hardware where it's supposed to be, at the time it's supposed to be there.

So that it's invisible to users of the cloud systems. But we're not experts in database as a service, or migration as a service, or all these areas. We really want a group to come together to help us reach the parity that you have to have.

So I think that would be one of the big shifts. Maybe it's led by OpenInfra, maybe it's led by an organization like us, or as a community we just decide we're going to do this. Because if you want to be at parity with some of these trillion-dollar companies, you're going to have to do it as a group, hopefully as an open-source group that accomplishes that.

Vinay: Yeah, yeah. So, one kind of crazy question. It all roots back to hardware at the end of the day. Are there any risks to this, let's say, ecosystem proliferation? Will there be enough hardware for, I don't know, 10x more providers?

Todd: You've actually kind of poked the bear with me, because even today we're struggling a bit there. In our case, we run a rolling purchase process.

We're essentially guessing that six months from now we're going to have this much demand in this area, based on historicals. It's still a bit of a guess, which we don't like doing, but it's necessary because it takes so long to get the gear in; somewhere, some particular component is not available.

And it's something you have to wait on. The box can't ship or function properly unless you have that one proper part. So you have to fight to get in line right now, and it's really complicated.
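For readers curious what "guessing six months out based on historicals" can look like, here is a minimal sketch of a trailing-growth demand forecast. All figures and the forecasting rule are hypothetical illustrations, not OpenMetal's actual process:

```python
# Naive rolling-purchase forecast: order today for projected demand at the
# end of a long hardware lead time, extrapolating the trailing average
# month-over-month growth of deployments. Purely illustrative numbers.

def forecast_demand(history: list[int], lead_time_months: int) -> int:
    """Project demand lead_time_months out via average historical growth."""
    growth = [b - a for a, b in zip(history, history[1:])]
    avg_growth = sum(growth) / len(growth)
    return round(history[-1] + avg_growth * lead_time_months)

# Servers deployed per month over the last six months (hypothetical)
deployed = [40, 44, 47, 52, 55, 60]

# With a six-month lead time, order for roughly this many servers now
print(forecast_demand(deployed, lead_time_months=6))  # projects 84
```

Real capacity planning layers in seasonality, supplier queues, and safety stock, but the shape of the problem is the same: commit money now against an estimate you won't verify for months.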

And we have preferences: we actually use this really high-end Micron 7450 MAX SSD, and it's awesome, like the best thing you can buy. But other people know that too, so we're in a queue all the time with our suppliers to make sure we can get hold of the parts we believe in.

That's been a pain point for us. We deal with it, and we have a great backer in this bank called Truist that helps us make sure we can have the flow of hardware we need to accomplish this.

But yeah, sometimes we're just now getting stuff we ordered in December, and boy, it was close. So that's a bit of a trick in there. The logistics associated with hardware are definitely a constant battle.

And will there be enough? I think you've probably heard there's a lot of investment now, both government-mandated and government-funded, here in the United States, to bring more chip manufacturing back to the United States. We even have TSMC coming over here to build, in order to support that.

So I think everybody's aware that this is occurring. We feel the pinch. But I would also say: it doesn't matter, you have to be cloud native. And as a provider, you've got to enable your cloud-native user's expectation of hitting the button when they need it. We do that through all kinds of predictive ordering, honestly.

It keeps me up at night, though. You happened to poke on the one where I'm like, yeah, that is an issue for us. And sometimes it's the littlest thing, like one little chip on the network card: you couldn't get it, and you're like, hold on, please, come on.

Can we get a different card? No, all the cards use that exact one. Well, Broadcom, honestly, they use that little chip, and there's something wrong with the factory, so it's not in the pipeline, and your box is sitting. And the manufacturers, of course, will build the box, then have to pull it off the line and stick it somewhere for a while until that part comes in, so they can put it back in the line, finish it, and send it out.

So they've got all kinds of logistical problems too: warehouses filled with 99%-built boxes that are missing some little component. All right, well, I think we covered a whole bunch of stuff today.

Vinay: This has been great. Todd, to summarize, what would be your recommendation to enterprises when it comes to devising a cloud strategy?

Todd: Understand the new cloud deployment model options. And you can absolutely turn to open-source; open-source has matured right along with the big mega clouds.

So your systems are available there. Look to good companies that have specialization in areas on top of the open-source stuff, yourselves being a great example. And then you can have cloud-native infrastructure very easily from other cloud deployment models or methods.

The big takeaway for CTOs, I think, is that there are options for you now, and they are fast, easy, reliable, cost-predictable, all of that. So yeah, that's what I would say in summary.

Vinay: Okay, excellent. Well, thank you, Todd. It's been great talking cloud and OpenStack with you. That's it for today, folks. See you all for the next episode. Thanks, Todd.

Todd: Thank you for having me, Vinay. And thank you for bringing my watchers up to speed on Severalnines too. You guys have a lot of good stuff going on over there. All right, take care.

Guest-at-a-Glance

Name: Todd Robinson
What he does: President and Co-founder of OpenMetal and InMotion
Website: OpenMetal
Noteworthy: Todd sets the strategic vision of the company, drives the product development of OpenMetal cloud services, and focuses on ensuring explosive growth.
You can find Todd Robinson on LinkedIn