Fabricated Knowledge
An Interview with Wes Cummins, CEO of Applied Digital

How Crypto Datacenters are becoming AI Factories.

Think of this as a companion piece to my recent piece on Crypto Datacenter NAV, where I walked through the in-depth changes happening at APLD in particular.

The transcript is edited lightly for increased readability.

Doug O’Laughlin: Today, on Fabricated Knowledge, I have the privilege of having Wes Cummins of Applied Digital on to talk about the change in his business model. The company is experiencing many changes, and many new things are in the pipeline, so I wanted to sit him down and have an opportunity to chat about that. First, Wes, how about you introduce yourself? Then we'll talk about Applied.

Wes Cummins: Yep. Thanks, Doug, and thanks for having me. I'm Wes Cummins, the founder and CEO of Applied Digital. The company is three and a half years old now. Before Applied Digital, I had a long career in tech investing: I had my own hedge fund, and before that I worked at a hedge fund as a TMT specialist. I mostly worked in small-cap technology companies, mainly tech hardware.

So, I have a lot of expertise in tech hardware. I started the company three and a half years ago, and there have been many changes from the start until now. The company was started in a bit of a unique way. It began as a public company. So there's a lot of learning and bumps and things that happen along the way for companies that typically are private at this stage, but we're public the entire way. So, everyone's gotta watch the entire evolution.

Doug: Yeah. I mean, it's very different. Right? People usually don't launch into the public markets to begin with. But, yeah, let's talk about that because I don't even think Applied Digital was called Applied Digital when you first started. You guys were primarily focused on blockchain, right?

It's not a coincidence that the high-power data center stuff used for blockchain, like your previous business model, can be used for what you're shifting to. But I would like to talk about maybe the beginning of blockchain. Why do crypto mining? Because you guys have been focused on the pivot a little earlier than other companies. And I guess why pivot? What are your thoughts on that?

Wes: Sure. Let me give you the origin. I controlled a public shell left over from a past investment that had failed and been mothballed. I had put a lot of money into a company, and then it went dormant. That public shell is part of the story.

However, the fundamental part of the story is that we were looking around the crypto blockchain ecosystem in 2020, and there were a few publicly traded Bitcoin miners.

I had also done a deep dive into the industry. What I realized then was that the people who knew what they were doing in the sector, the legitimate people working on quality projects, were not in the public companies or the ones about to become public. And there was a significant number of Bitcoin miners about to come public. I worked out an idea of being a large-scale GPU host, and it was going after Ethereum.

So, it was going after GPU-based proof-of-work networks. Ethereum was by far the largest, but there were many others. The idea was to act as an asset allocator, an investor that could allocate compute instead of dollars. You could aim compute at those networks and extract the highest value from each network, with a software stack to move the compute quickly, within fifteen or twenty seconds, between networks.

We partnered with the largest Ethereum pool in the world at the time; they had 25% of the hash rate for Ethereum. We had a clear path to doing an industrial-scale GPU deployment for what I would call altcoins at the time. And so we started down that path, and we raised money.

We went out in April of ‘21. I'd made a deal with a company named Sparkpool, a Chinese firm that has since been put out of business by the Chinese government, and went to raise money. We were looking to raise $5 million. I did two days of investor calls, and this is where my background and the shell come into play: it was much easier for the people I know to invest in something that was effectively private but had a clear road map to registered, liquid shares, because they're primarily public-market investors.

After two days of a roadshow, we had $45 million in demand against the $5 million we were looking for, and we took $16 million. We were buying GPUs that would be deployed with our partner in China, and that would be the business model. It didn't last very long, because at the end of May, China cracked down on crypto completely.

But that opened up a big opportunity for us, and I was already looking at sites in the US. I had a power guy I was working with; the guy who finds power for us was one of our initial employees. We had an ops guy who had brought online one of the first large-scale Bitcoin mines here in the US, I believe the first large-scale Bitcoin mine here.

So, we assembled a team, and there was this mass exodus from China; that capacity had to go elsewhere, and a lot of it landed in the US. We had the opportunity to build out Bitcoin data centers. We never mined Bitcoin ourselves, but we had the chance to build out data centers because of the big migration that had to happen.

We created this opportunity, put the right people together, and executed it. We signed our first ESA for 100 megawatts in July ‘21. We broke ground on our facility in September, operational by January ‘22. So, we were one of the few in the space who could execute on the timeline we thought we could. This opened up a significant opportunity for us to do more.

So we built 500 megawatts of Bitcoin data centers between September of ‘21 and September of ‘23, and 500 megawatts of anything is a big number. I was proud of the team for doing that. We had some hiccups along the way. The biggest was that, because of the uniqueness of where one of the sites sat in Texas (behind the meter, with two utilities in the area that both had to sign off), it didn't come online as expected.

That was the primary issue we ran into. But then, in the first half of ‘22, I started wondering what else we could do with these significant power assets besides Bitcoin data centers. What we landed on was high-performance computing. Now, high-performance computing was a much smaller industry at the time than it is today. We hired some guys who had built data centers for Meta in the Dallas area.

Our company is based in Dallas, which is where we found them. We designed a very low-cost, highly efficient HPC facility; HPC here is just hosting GPUs. It's GPU-based computing at high power density, similar to what we were doing with Bitcoin, but a very different design. We started down that path and were building a facility in ‘22. In October of ‘22, we built a software layer to run a cloud service.

We did that because I thought we'd have to be our own first customer inside our facilities, to show these facilities work, so that we could get customers to lease space just like we did on the Bitcoin data centers. We worked with a company called Foundry; we helped the founder start the company. He was getting his PhD at Stanford and was either still at, or had previously been at, DeepMind at Google, so he had a lot of experience. He developed a software layer that initially tried to capture GPU hash rate from the Ethereum network as it went from proof of work to proof of stake.

The thinking was there'd be a lot of excess GPU capacity in the market. That software layer worked well inside our data center, where you weren't trying to grab GPUs from kids' gaming computers and aggregate them, and we launched a cloud service in December of ‘22.

Mostly universities were using it at the time; research departments were doing machine learning and deep learning on the GPU clusters in Jamestown. Then, in December, ChatGPT hit, and even that didn't change our world right away. The big change was the NVIDIA H100, introduced in March of ‘23.

What happened is that the HPC facility we were building was a perfect fit for the H100. We were building to 50-kilowatt power densities in a rack, and those servers take about ten kilowatts, so you could put four or maybe five servers in a single rack and keep them all close together. So we were out marketing this data center capacity. In late April, we were introduced to one of the AI labs, or what people call AI startups: Character AI.

And we signed a contract with them, but they wanted a cloud service. So we would buy the GPUs; we ended up putting them in a third-party colo, which opened our eyes to this market. Everyone saw what was going on with OpenAI after ChatGPT, but it opened my eyes to the computing side of this market and the demand for compute. And given that the compute was going to require much, much higher power density than 99% of the data center capacity the world could support, we saw a huge opportunity.

So we did two things. We leaned into the cloud service business, contracting third-party colo in Tier III data centers to build out our cloud business. And then I went back to our real estate development team and said, guys, I know we were talking about 5, 10, 15 megawatt builds on our Bitcoin facilities; we need to start thinking about hundreds of megawatts again and start building those out. And so we did, and then we redesigned the facility because we had more experience with the NVIDIA gear.

We started hiring people with experience, and we worked directly with NVIDIA's data center design team to design a new large-scale facility. So, when Jensen talks about AI factories, we did that last year. Then, in October, we broke ground on a new 100-megawatt critical IT load build in Ellendale, North Dakota, a fully enclosed building. We announced in April that we had signed an LOI and are exclusive with a hyperscaler for not just that building but the entire campus, which we believe is at least 400 megawatts of critical IT load.

And, Doug, there's a big distinction in the data center world: we talk about utility power and then data center power. The data center power is the critical IT load, and utility power is how much you need to deliver that critical IT load. In Bitcoin land, we just talked about what data center guys would call utility power, because you didn't do any cooling or mechanical work; 100% of the power went to the Bitcoin miners.

For Bitcoin data centers, you don't think about PUE or any of these metrics, so there's a little confusion in the market as Bitcoin miners have started looking at this HPC side. But that's where we are. Our company has gone through an evolution: when we started, we talked about retrofitting our Bitcoin facilities for HPC.

We went down that path for a while and then realized we would not retrofit Bitcoin facilities for HPC; we were going to build low-cost, purpose-built HPC facilities. So we went down that path and built that 9 to 10 megawatt building in Jamestown. It works, and for specific loads it's going to be just fine. But then we evolved even more: we're now building Tier III-type redundancy inside the data center and different building styles, and the Ellendale building is designed uniquely.

It’s designed so that all of the GPUs inside the facility, which totals 100 megawatts, can be interconnected on the same InfiniBand network, as one cluster, one supercomputer. I think it'll be unlike anything else when it comes online late this year. So that's where the company stands right now. It's been a long process moving toward the HPC market and then changing the scale we were operating at, but we're in a perfect position there.

Doug: Yeah. That's awesome. I have a few follow-up questions on the difference between critical IT load and utility power, so let me clarify here. Previously, it was, say, 100 megawatts from the utility. Right? But if you need 100 megawatts of critical IT load, that means you need more-

Wes: Yep.

Doug: in including the PUE. Okay. Cool.

Wes: What it means is that PUE is power utilization

Doug: Efficiency, I think?

Wes: Efficiency. And so, depending on where you are in the country, mainly the type of climate you're in, it's the amount of cooling you need, the amount of mechanical, the amount of networking: everything being used besides the IT load itself, to support the IT load. A standard PUE is 1.35 to 1.5.

Doug: For the colos. Right? But I think the hyperscaler self-builds are closer to 1.25, is my understanding. Then Google will say, like, 1.15.

Wes: It depends again on where you're locating these. As you can imagine, your PUE is going to be better in Minneapolis, Minnesota, than it is gonna be in Phoenix, Arizona, because the cooling is the most significant portion of this. With ours, at the scale we're building in North Dakota, the first facility is at about 1.17, but we can get pretty close to 1.12 or 1.1 on the new one.

Doug: That’s good.

Wes: Yes, but the free ambient cooling in North Dakota helps that.
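To make the PUE arithmetic concrete, here's a minimal sketch; the PUE figures are the ones quoted in the conversation, not company guidance:

```python
# Utility power vs. critical IT load, tied together by PUE.
# PUE figures below are the ones mentioned in the conversation.

def utility_power_mw(critical_it_load_mw: float, pue: float) -> float:
    """Total power drawn from the utility to support a given critical IT load."""
    return critical_it_load_mw * pue

for label, pue in [("typical colo", 1.40),
                   ("first Ellendale build", 1.17),
                   ("new-build target", 1.12)]:
    print(f"{label}: {utility_power_mw(100, pue):.0f} MW utility for 100 MW of IT load")
```

At a 1.17 PUE, a 100 MW critical IT load needs 117 MW from the utility; the gap is the cooling, mechanical, and networking overhead Wes describes.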

Doug: I was also going to ask about the power density side, because we briefly mentioned the difference: 50 kilowatts per rack, which already looks high coming from the data center world. But we're talking right now about going even beyond 50 kilowatts per rack. Right? I think 50 only works for the NVL72 or NVL36 next year; it probably works well for H100s this year. That is the power density needed. What do you see in the long run here?

Wes: We're designing up to 150-kilowatt racks right now, but I could see this going to 200. I've heard people talk about 300; I think we're a long way from that.

Doug: Yeah.

Wes: But I could see it going up to that over the next, say, five years possibly. What we work toward with our facilities is to future-proof the build so we can go to high and extremely high power density. Think of it just on the H100: as we know, you have 10 kilowatts for the server with 8 GPUs in it. You want to stack as many of those in a rack as possible, and then you want to put the next rack as close to it as possible.

You know the magic number on InfiniBand is 30 meters from the network core. So, how many GPUs can you get within 30 meters of the network core? Especially when you're looking at training, you want these massive supercomputers. And for us, in the first Ellendale build, you'll get, depending on utilization, 70 to 75,000 H100-class GPUs in a single cluster. I don't believe that exists in the world right now.
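A quick sanity check on that cluster size, using only the round numbers from the conversation (10 kW per 8-GPU H100-class server, 100 MW of critical IT load):

```python
# Back-of-the-envelope cluster sizing for the first Ellendale building.
SERVER_KW = 10          # one 8-GPU H100-class server, per the conversation
GPUS_PER_SERVER = 8
CRITICAL_IT_MW = 100

servers = CRITICAL_IT_MW * 1_000 // SERVER_KW   # kW of IT load / kW per server
gpus_max = servers * GPUS_PER_SERVER            # if every watt went to GPU servers

print(f"{servers:,} servers -> {gpus_max:,} GPUs at 100% of IT load")
# Some of the IT load goes to networking, storage, and headroom, which is
# roughly how you land at the 70-75k figure Wes quotes.
```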

Doug: There are a few people pursuing the 100k H100 scale, but that scale of cluster is as big as it gets; you're starting to get to maybe five people doing clusters that dense. So that's awesome. A cluster like that would otherwise be done by one of the hyperscalers themselves. So, yeah, that's huge. 400 megawatts for, you know, 75,000 H100s.

Wes: 100 MWs will do this.

Doug: 100 megawatts. Sorry, yeah, I knew my math was off there. 100 megawatts for the 75,000. But then you're talking about future-proofing even further down the line for much higher-density racks. I wanted to have this conversation because I don't think people understand the difference; understanding and thinking about future power density is half the battle.

You're not a Bitcoin miner, which I think the market maybe still perceives you as, at least from how you guys get talked about. I want to transition to the financial model. You hope to do a revenue crossover, probably pretty soon, sometime next year, with the cloud services side taking over. Those hyperscaler contracts will generate much higher revenue per megawatt and profitability.

Is that correct? My numbers here are something like: you're making a little under $1 million a megawatt doing Bitcoin mining, but I understand you can make something like $10 million if you get an excellent rental for multiple years.

Wes: You should think of it this way: you get to $10 million a megawatt if you own the GPUs inside the facility. Let's walk through it in three steps to make sure we're right on the Bitcoin portion first. For Bitcoin hosting, where we run the Bitcoin data center, we have 280 megawatts in North Dakota that we still operate. The expectation is that generates roughly $160 million of revenue annually. What is that? Like, $700,000 a megawatt?

Doug: Yeah. My calculation here is 0.62, or 0.6, per MW.

Wes: Let's go with $600,000 a megawatt. Then, step up to what we're building for hyperscale: if you're just in on the rental, just the landlord, revenue generation is about $2 million a megawatt. So it's a big step up; not quite 4x, but significantly higher. The vast difference, though, is the duration versus Bitcoin mining: you know who the counterparty is, a hyperscaler, and you're signing 15-plus-year contracts with renewals. That's the big difference. Then, if you go to where you own the GPUs and provide the complete service, you're getting to the $10 million.

Doug: It’s north. I think it’s north.

Wes: 1.5 megawatts of GPUs, just as bare metal, will generate about $18 to $20 million annually.

Doug: Wow, that's insane.

Wes: Bare metal. So, it's a significant step up. We do have a cloud service, but we've also focused on being the landlord, the developer and operator of the data center, for the hyperscaler. That's the Ellendale facility we're working on. That business has long-term contracts and significant margins for us, and the stability of that business is super attractive. And then we have the GPU-as-a-service business, where we're providing bare metal and expecting to step up to more services in the future. But there you take a lot more technology risk: shorter life cycles on those products.
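The revenue-per-megawatt ladder Wes walks through can be summarized in a few lines; this is a sketch of the figures quoted in the conversation, not company guidance:

```python
# Annual revenue per megawatt, in $M, under the three business models discussed.
revenue_per_mw = {
    "bitcoin_hosting":      160 / 280,  # $160M/yr on 280 MW, roughly $0.57M
    "hyperscale_lease":     2.0,        # landlord-only hyperscale build
    "gpu_cloud_bare_metal": 19 / 1.5,   # $18-20M/yr on 1.5 MW, roughly $12.7M
}

base = revenue_per_mw["bitcoin_hosting"]
for model, rev in revenue_per_mw.items():
    print(f"{model}: ${rev:.2f}M per MW ({rev / base:.1f}x Bitcoin hosting)")
```

The hyperscale lease comes out around 3.5x Bitcoin hosting ("not quite 4x"), and owning the GPUs takes it past 20x, with correspondingly more technology risk.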

Doug: I think the single most significant risk, and I'm sure you've thought a lot about this, is that with the H100, you'd better have a multiyear contract.

Wes: Yep.

Doug: It makes a lot of sense if you're sitting here with the energy and power: you should be doing colo until you can get a B100 order, and then you get to the front of the line on that cost curve. That cost curve will decrease as the market grows. And toward the end of the B100, you want to be at the front of the line for the R100, because if you're sitting at the back of the line, a new product comes in and ruins your cost curve. So if you have that power today, you want to do colo until you can get the new orders. Is that how you're thinking about it?

Wes: Yeah. That's generally the thought process there. I don't know if you know this, but we were one of ten companies mentioned as having early access to Blackwell. So we're focused on that as we go into the year's second half.

Doug: Mhmm. Yeah. That's huge, because that's like a gold rush, right? You have a cost advantage compared to everyone else if you're one of ten. So that's helpful. And just for the context of sizing on my side: most of the business coming down the pipeline today is on the colo side, and after that you'll start to layer in the services side. But all of it is coming in at multiples of the revenue per megawatt, which is an uptick for your company.

Wes: Yes, it's a significant uptick. And I'll tell you, it's interesting for public markets, because there are only a few hyperscale data center developers in the public market, and that's where the big boom is happening. When you look at Equinix and DLR, they're smaller in that business. There are a lot of private companies that do it: QTS is owned by Blackstone, plus Aligned Data Centers and Vantage, and Centersquare is the new mash-up from Brookfield.

You have many of these that are all in the private markets, so we'll be the only public platform doing this on the hyperscale side. But this is where the massive growth is: the AI-focused data centers.

Doug: Yeah. That's cool. It makes a lot of sense, and I think it's interesting because, on the public side, I still don't think people are thinking about this at all. I believe the market is starting to wake up to it, but we're talking about a meaningful inflection in dollars per megawatt here, and I think that's helpful for sizing how to model this business, frankly.

Wes: Yeah. What we've said publicly is that we're looking at that $2 million a megawatt number and 50 to 55% EBITDA margins. The power is a pass-through, but generally it's recognized as revenue. It's zero-margin revenue, and it's the most significant part of the cost. If we didn't recognize it as revenue, the revenue number would be lower, but the EBITDA percentage would be significantly higher.

And let me go back once more on that. Take the $625k or so of revenue on Bitcoin facilities. That's just the data center; it doesn't compare directly to the Bitcoin miners themselves, because for them you have to go through the full stack. The EBITDA number out of that for us would be $220k to $250k per megawatt. So you're getting 4 to 5 times on the EBITDA number per megawatt, which is the most significant uplift.
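The 4 to 5 times EBITDA uplift checks out on the quoted figures; again, a sketch from the numbers in the conversation:

```python
# EBITDA per megawatt, $M/yr: Bitcoin hosting vs. hyperscale lease.
btc_ebitda = (0.220 + 0.250) / 2   # midpoint of the $220k-$250k range
hyper_ebitda = 2.0 * 0.525         # $2M revenue at the 50-55% margin midpoint

uplift = hyper_ebitda / btc_ebitda
print(f"Bitcoin: ${btc_ebitda:.3f}M  Hyperscale: ${hyper_ebitda:.2f}M  uplift: {uplift:.1f}x")
```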

Doug: Okay. That's helpful. And it comes back to what we said before: the sustainability of this is a lot higher, right? Because you're signing the leases with these guys. Maybe at the end of the day things get messy, say in ‘26 or ‘27, but it's still a lease, so your Sharpe ratio is even better on the downside. Right? Because you're just landlords. You're high-powered landlords.

Wes: As our company goes through the next few years, it will be a complete shift to HPC for us. It's really about what people are willing to pay, in both public and private markets, for Bitcoin data center revenue streams versus data center revenue streams. When your counterparty has long contracts and is highly credit-rated, the multiple people are willing to pay, and the value creation for our shareholders, will be significantly higher versus the Bitcoin data centers.

The Bitcoin data centers are fantastic on a cash-on-cash return basis. It's just that you'll never get paid a multiple on that, like almost any other low-multiple business, because no one knows what the duration is. If you knew the duration was 30 years, you would get paid a huge multiple, but I think people put a duration of two years, three years at best, on those.
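The duration point is really just annuity math. As a sketch, treat the multiple a buyer will pay as the present value of $1 a year of cash flow over the assumed duration; the 10% discount rate here is my assumption, not from the conversation:

```python
def pv_multiple(years: int, r: float = 0.10) -> float:
    """Present value of $1/year for `years` years at discount rate r,
    a crude proxy for the multiple paid on a cash stream of that duration."""
    return sum(1 / (1 + r) ** t for t in range(1, years + 1))

print(f"3-year assumed duration (Bitcoin): {pv_multiple(3):.1f}x")
print(f"30-year assumed duration (credit-rated lease): {pv_multiple(30):.1f}x")
```

Same cash flow, but a 30-year assumed duration supports nearly a 4x higher multiple than a 3-year one, which is the repricing Wes is describing.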

Doug: I wanted to have you answer the transformer question, because I think it's publicly become a little bit of a black eye.

You guys had 47 transformers blow up. What exactly happened there? And I guess this is an opportunity to address it, because I know an investor is going to ask this question. Right? What happened, and is this impairing your ability to ramp North Dakota faster? Because I know the transformer backlog is just absolutely backed up.

Wes: You're right about the transformer backlog. We didn't technically have 47 blow up, but we took all of them out of service. We had transformers that we believe were not built to spec. We haven't said where they're from, but they're not US-made.

Doug: OK, that helps.

Wes: We didn't use those transformers at our Jamestown site. We chose them in consultation with other stakeholders, and that company ships many transformers to the US.

But they ran fine for nearly a year, and then we started having problems with them. They began to fail; we thought we had fixed it and brought the site back up, then had some more failures. So we took it down and just replaced all of those transformers. Our team on that side did a fantastic job procuring replacements.

We were able to get a batch of Prolec GE transformers that were unused but previously owned. The issue we had to deal with on-site, though, was that the original transformers were 3.5-megawatt units and the replacements were 2.5, so we had to change some of the electrical architecture around the site to make that work. It took some time, but given the lead times on transformers, I was thrilled with how it went. We announced yesterday that the site is fully back up as of the end of June, and it's running well. It didn't impact what we're doing on our Ellendale 2 build.

So, we're using a different class of transformer for that, from a different supplier. Think of it as: we used some Toyota Tercels at the Bitcoin site and ordered some Lexus transformers for the data center site. It's just a different class, so it won't have an impact. It hasn't affected any of our negotiations or diligence, because for anyone who knows the space, it's straightforward to understand.

Bitcoin sites are built with effectively zero redundancy on power. The data center site we're building now has a significant amount of power redundancy built to a hyperscaler spec: backup generation, UPS, all of those things. And on the transformer side, it's a different supply chain from what we used for the Bitcoin site. So there's no concern there.

Doug: That's a very candid answer that explains what happened. I'm curious about your ability to ramp the 600 megawatts, though. What are the biggest, longest-lead-time items? Everyone has publicly talked about transformers, but where are the bottlenecks in your business that prevent you from getting the megawatts up faster? I'd love to hear about lead times and so on; I think it would be very educational for everyone listening.

Wes: This is another part where we're in a good place because we started this a while ago compared to a lot of other companies and even some of the data center operators. But we had an entire supply chain team built out for this. We've had orders in for a long time, so we're in a good place for the construction that we're doing.

But let me answer the informational part of the question. You need to think about transformers in two different ways. There are the big utility transformers with long lead times, and then there are the step-down transformers. These sites need redundancy, so you have two large utility transformers feeding the site.

Those feed into step-down transformers; for 100 megawatts, you'll have somewhere in the neighborhood of 20 to 30 of them. The utility transformers generally step down from whatever the transmission voltage is to 30 to 45 kV, and then the step-down transformers take that 30 to 45 kV down to the voltage you actually use inside the data center. Those are long lead times as well, in the two-plus-year range right now.

Doug: Jesus.

Wes: Then you have high-voltage switchgear, probably a year and a half to close to two years of lead time at this point. You have chillers for the cooling, which are in that same type of lead time, and UPS.

Backup generators also have a stretched lead time. There are smaller things inside, like the busway and other components, but those are the big-ticket, high-level items that all have significant lead times. Then we're moving to liquid cooling for this facility, and that's where the chillers come in: you get a chilled-water loop and all of those things.
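Put together, the build schedule is governed by the longest-lead item. A sketch, with lead times approximated in months from what Wes describes; the exact figures are my reading of the conversation, not stated numbers:

```python
# Approximate equipment lead times in months, per the conversation.
lead_times = {
    "step_down_transformers": 26,  # "two-plus years"
    "hv_switchgear": 21,           # "a year and a half to close to two years"
    "chillers": 21,                # "same type of lead time"
    "backup_generators": 20,       # "stretched"
}

bottleneck = max(lead_times, key=lead_times.get)
print(f"Critical path item: {bottleneck} ({lead_times[bottleneck]} months)")
# Everything else can be ordered later and still arrive in time, which is why
# placing early orders on the longest-lead gear is the whole game.
```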

Doug: Can I ask one question on that? There are different levels of liquid cooling, right? I've been learning about this a little. Is it liquid-to-liquid, full-stack liquid, or I know there's liquid-to-air, and even air-to-liquid: the inside loop versus the outside loop. Are you guys shooting for liquid-to-liquid? Is that how you get to 150 kilowatts per rack?

Wes: Yeah, it does that, but it still eventually goes to chillers. We don't choose the actual methodology at the rack, so I'm curious to see how that goes and what the lead times are as liquid cooling starts to be rolled out in a big way with Blackwell.

Doug: Now I'm much more familiar with this debate: CDU in the door versus dumping it into the aisle. But you're not choosing that. That makes a lot of sense.

Wes: We don't choose that.

Doug: Yeah. You're just trying to give the operators a shell that provides them the opportunity to do that, and they decide whether they're going to do liquid-to-air or air-to-liquid. Okay.

Thank you. I have learned a lot in a relatively short amount of time. I'll ask one set of forward-looking questions, more big-picture Applied Digital stuff, because I want to make sure you can tell your story. Right now, it seems like the model is shifting to colo in the near term: high-end colo with these long contracts, and you're going to have a lot of capital to reinvest, frankly, at 50% EBITDA margins.

Then, from there, more on the services side. Is that the evolution of the financial model, the opportunity? And in this process, my understanding is you're putting down a lot of capital. I think you've talked about this on public calls, but I'd like to hear how you're thinking about financing it, because that's one of the other complex problems to solve.

Wes: Yeah. It has been a complicated problem, and I think it's now becoming much easier for us. We stretched ourselves in every way possible to keep our Ellendale building on schedule before having a contract. With the hyperscale customer there now, we've had a lot of discussions about financing, and the financing options have opened up to us in a big way.

And the way these typically get financed, and it's kind of a well-oiled machine in the industry, is: you have the contract, you get construction financing, and the construction financing we're expecting is somewhere in the SOFR plus 200 to 250 range.

So, think about 7.5% or so. Once the asset is stabilized, we see up to 90% loan-to-cost, which is a massive help for us. And then those will flip to an ABS, which is generally how the industry works: you take the equity capital back out and reinvest it in another project.
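The financing math can be sketched out in a few lines. The spread and the 90% loan-to-cost are from Wes's comments; the SOFR level and the $1B project cost here are my illustrative assumptions:

```python
# Project-finance sketch: construction debt at SOFR + 200-250 bps,
# up to 90% loan-to-cost once the asset is stabilized.
sofr = 0.053                 # assumed SOFR level (illustrative)
spread = 0.0225              # midpoint of the 200-250 bps range
rate = sofr + spread         # lands near the "think about 7.5%" comment

project_cost = 1_000         # hypothetical $1B build, in $M
debt = 0.90 * project_cost   # 90% loan-to-cost
equity = project_cost - debt # only ~$100M of equity stays tied up

print(f"rate: {rate:.2%}, debt: ${debt:,.0f}M, equity: ${equity:,.0f}M")
```

Flipping the stabilized asset into an ABS then releases that equity for the next project, the "do it again and again" loop Wes describes.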

But in the meantime, we had to keep it on schedule, because this is what's been on my mind for the last 6 to 9 months, even years: we pulled the trigger on building in October. We're a small company. How are we going to break into building data centers for hyperscalers? The way we do that is speed to market and having capacity available. You can find public interviews or comments from me going back well over a year where I talked about how data center capacity was going to be the big issue for AI, because initially the issue was how you get your hands on GPUs and equipment.

But fixing wafer throughput, or advanced packaging, or, say, optical transceiver manufacturing can happen much faster than the power, permitting, supply chain, and construction you need for a data center. And since I saw the big difference in the workloads and the need for different data centers, we pushed, and we did what were, candidly, some uglier financings earlier this year that I would have preferred not to do.

But it kept us on schedule for that build, we're now in the phase of finalizing that contract, and I think it's going to pay off for us in a big way. That contract will change our ability to secure financing at the site level, and it'll change how the company can operate. So while those financings were painful for me and for some of our shareholders at the time, they were a means to an end.

And as long as we get to the right end, the means will be blessed by everyone. So when you think about the financing in the future, think about very high loan-to-cost, site-level construction debt: project-level debt that then flips to an ABS after the asset is stabilized, and then you do it again and again and again.
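The cycle Wes describes, construction debt priced off SOFR, up to 90% loan-to-cost, then an ABS refinancing that returns the equity, can be sketched with some quick arithmetic. The SOFR level, project cost, and exact spread below are illustrative assumptions chosen so the numbers land near the ~7.5% he quotes; only the spread range and the 90% loan-to-cost come from the interview.

```python
# A minimal sketch of the project-finance cycle described above.
# Assumed inputs (not company figures): SOFR at 5.25%, a $1B project cost.

def construction_rate(sofr: float, spread_bps: int) -> float:
    """Construction-loan rate: SOFR plus a spread quoted in basis points."""
    return sofr + spread_bps / 10_000

def equity_required(project_cost: float, loan_to_cost: float) -> float:
    """Equity check the sponsor must fund at a given loan-to-cost ratio."""
    return project_cost * (1 - loan_to_cost)

# SOFR + 225 bps (midpoint of the 200-250 range) lands near the ~7.5% quoted.
rate = construction_rate(sofr=0.0525, spread_bps=225)
print(f"construction rate: {rate:.2%}")

# At 90% loan-to-cost on a hypothetical $1B build, the sponsor funds only
# $100M of equity; once the asset stabilizes and the debt flips to an ABS,
# that equity comes back out to seed the next project.
equity = equity_required(1_000_000_000, loan_to_cost=0.90)
print(f"equity check: ${equity:,.0f}")
```

The point of the recycling step is capital efficiency: the same equity check can be redeployed across successive builds instead of staying locked in one asset.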

Doug: Okay. Yeah. I know that, in the beginning, the first CoreWeave debt offerings were very controversial. And now everyone in the marketplace is looking to do this. It's not controversial anymore. Everyone's like, oh, actually, the economics make this work well.

What are you focused on going forward? Speed to market seems like the thing. We talk about ramping the physical supply chain for making chips, and we could probably do an order of magnitude more on the chip side, but the infrastructure side is a giant bottleneck. Is that the focus? Just speed, speed, speed: get these shells going, sign more up, have more power. What are you thinking about for the future of the company?

Wes: One thing we didn't discuss is that we have a significant power pipeline. We changed how we looked for power in '22. There's one way of looking for power for Bitcoin sites, and a different way for HPC. For HPC, we need firm power.

We need power located in a place with at least two diverse fiber routes from the site, or where fiber can easily be brought to the site; three routes are preferable. So there are boxes we check when we look for that, versus when we're looking for Bitcoin power. And we have an immense power portfolio. We've talked about this publicly: we have a little over 2 gigawatts in the power portfolio.

And so we're hyper-focused on, a) getting this contract across the finish line, and then, b) executing that build and getting that asset stabilized. Once we get it across the finish line, we'll go out and start marketing these other sites. I'm focused on what we as a company can build and bring online between now and 2026, and even into 2027, because the demand is such that we can sell anything we can get online.

As long as it fits in those boxes I was talking about, the fiber connectivity and the power. Sites need to be a certain size for hyperscalers to even look at them; we focus on 200 megawatts and above, but the bigger the better. You need that type of scale. But we're very focused on getting this one across the finish line, and then on what we can contract and build between here and the end of 2026, mid '27.
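The "boxes we check" above reduce to a short screen: firm power, at least two (ideally three) diverse fiber routes, and 200 MW or more of capacity. A minimal sketch of that checklist, where the field names and the exact screening function are illustrative assumptions rather than anything Applied Digital has published:

```python
# A hedged sketch of the HPC site screen described in the interview.
# Thresholds come from the interview (2+ fiber routes, 200 MW+);
# the structure itself is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class Site:
    firm_power: bool      # firm power, as opposed to curtailable Bitcoin power
    fiber_routes: int     # diverse fiber routes at or near the location
    capacity_mw: float    # available power capacity in megawatts

def qualifies_for_hpc(site: Site) -> bool:
    """Minimum screen for a hyperscaler-ready HPC site."""
    return site.firm_power and site.fiber_routes >= 2 and site.capacity_mw >= 200

print(qualifies_for_hpc(Site(firm_power=True, fiber_routes=3, capacity_mw=400)))
print(qualifies_for_hpc(Site(firm_power=True, fiber_routes=1, capacity_mw=400)))
```

The second site fails only on fiber diversity, which is the point of the checklist: a site that misses any one box is not marketable to a hyperscaler, however strong the others are.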

Doug: By '27. Best of luck getting this contract across the line. But the thing that makes this a little different is that I don't know how many people have done a 100-megawatt cluster the way you guys have. That's the differentiator.

I think most hyperscalers are probably not interested in anything under 40 megawatts. Right? They're just like, sure, whatever, that's unimportant to us. And the place where we're going, frankly, sounds like gigawatt clusters in the very long run. So being able to stand up a massive cluster out of the gate, have the proof that you can do it, and then turn around and do more with your power portfolio sounds pretty compelling to me.

So, thanks. This was an excellent interview; I learned quite a bit. Any parting words before we head out? Anything you want to say to everyone?

Wes: No, we covered almost everything, Doug. Thanks for the time.

Doug: Yeah, thank you so much for the time. I learned a lot. Best of luck.
