Astera Labs, Inc. Common Stock (NASDAQ:ALAB) Q1 2024 Earnings Call Transcript May 8, 2024


Operator: Thank you for standing by. My name is Regina, and I will be your conference operator today. At this time, I would like to welcome everyone to the Astera Labs First Quarter 2024 Earnings Conference Call. All lines have been placed on mute to prevent any background noise. After management remarks, there will be a question-and-answer session. [Operator Instructions] I will now turn the call over to Leslie Green, Investor Relations for Astera Labs. Leslie, you may begin.

Leslie Green: Thank you, Regina. Good afternoon, everyone, and welcome to the Astera Labs first quarter 2024 earnings call. Joining us today on the call are Jitendra Mohan, Chief Executive Officer and Co-Founder; Sanjay Gajendra, President, Chief Operating Officer and Co-Founder; and Mike Tate, Chief Financial Officer. Before we get started, I would like to remind everyone that certain comments made on this call today may include forward-looking statements regarding, among other things, expected future financial results, strategies and plans, future operations and the markets in which we operate. These forward-looking statements reflect management’s current beliefs, expectations and assumptions about future events, which are inherently subject to risks and uncertainties that are discussed in detail in today’s earnings release and in the periodic reports and filings we file from time to time with the SEC, including the risks set forth in the final prospectus relating to our IPO.

It is not possible for the company’s management to predict all risks and uncertainties that could have an impact on these forward-looking statements or the extent to which any factor or combination of factors may cause actual results to differ materially from those contained in any forward-looking statement. In light of these risks, uncertainties and assumptions, the results, events or circumstances reflected in the forward-looking statements discussed during this call may not occur, and actual results could differ materially from those anticipated or implied. All of our statements are made based on information available to management as of today, and the company undertakes no obligation to update such statements after the date of this call as a result of new information, future events or changes in our expectations, except as required by law.


Also during this call, we will refer to certain non-GAAP financial measures, which we consider to be important measures of the company’s performance. These non-GAAP financial measures are provided in addition to, and not as a substitute for or superior to, financial results prepared in accordance with U.S. GAAP. A discussion of why we use non-GAAP financial measures and reconciliations between our GAAP and non-GAAP financial measures are available in the earnings release we issued today, which can be accessed through the Investor Relations portion of our website and will also be included in our filings with the SEC. With that, I would like to turn the call over to Jitendra Mohan, CEO of Astera Labs.

Jitendra?

Jitendra Mohan: Thank you, Leslie. Good afternoon, everyone, and thanks for joining our first earnings conference call as a public company. This year is off to a great start with Astera Labs seeing strong and continued momentum along with the successful execution of our IPO in March. First and foremost, I would like to thank our investors, customers, partners, suppliers and employees for their steadfast support over the past six years. We have built Astera Labs from the ground up to address the connectivity bottlenecks to unlock the full potential of AI in the cloud. With your help, we’ve been able to scale the company and deliver innovative technology solutions to the leading hyperscalers and AI platform providers worldwide.

But our work is only just beginning. We are supporting the accelerated pace of AI infrastructure deployments with leading hyperscalers by developing new product categories, while also exploring new market segments. Looking at industry reports over the past several weeks, it is clear that we remain in the early stages of a transformative investment cycle by our customers to build out the next generation of infrastructure needed to support their AI roadmaps. According to recent earnings reports, on a consolidated basis, CapEx spend during the first quarter for the four largest U.S. hyperscalers grew by roughly 45% year-on-year to nearly $50 billion. Qualitative commentary implies continued quarterly growth in CapEx for this group through the balance of the year.

This is truly an exciting time for technology innovators within the cloud and AI infrastructure market, and we believe Astera Labs is well positioned to benefit from these growing investment trends. Against this strong industry backdrop, Astera Labs delivered strong Q1 results with record revenue, strong non-GAAP operating margin and positive operating cash flow, while also introducing two new products. Our revenue in Q1 was $65.3 million, up 29% from the previous quarter and up 269% from the same period in 2023. Non-GAAP operating margin was 24.3%, and we delivered $0.10 of pro forma non-GAAP diluted earnings per share. I will now provide some commentary around our position in this rapidly evolving AI market. Then I will turn the call over to Sanjay to discuss new products and our growth strategy.

Finally, Mike will provide additional details on our Q1 results and our Q2 financial guidance. Complex AI model sizes continue doubling about every six months, fueling the demand for high performance AI platforms running in the cloud. Modern GPUs and AI accelerators are phenomenally good at compute, but without equally fast connectivity, they remain highly underutilized. Technology innovation within the AI accelerator market has been moving forward at an incredible pace, and the number and variety of architectures continue to expand to handle trillion parameter models while improving AI infrastructure utilization. We continue to see our hyperscaler customers utilize the latest merchant GPUs and proprietary AI accelerators to compose unique data center scale AI infrastructure.

However, no two clouds are the same. The major hyperscalers are architecting their systems to deliver maximum AI performance based on the specific cloud infrastructure requirements, from power and cooling to connectivity. We are working alongside our customers to ensure these complex and different architectures achieve maximum performance and operate reliably even as data rates continue to double. As the systems continue to move data faster and grow in complexity, we expect to see our average dollar content per AI platform increase and even more so with the new products we have in development. Our conviction in maintaining and strengthening our leadership position in the market is rooted in our comprehensive intelligent connectivity platform and our deep customer partnerships.

The foundation of our platform consists of semiconductor based and software-defined connectivity ICs, modules and boards, which all support our COSMOS software suite. We provide customers with a complete customizable solution, chips, hardware and software, which maximizes flexibility without performance penalties, delivers deep fleet management capabilities and matches pace with the ever quickening product introduction cycles of our customers. Not only does COSMOS software run on our entire product portfolio, but it is also integrated within our customers’ operating stacks to deliver seamless customization, optimization and monitoring. Today, Astera Labs is focused on three core technology standards: PCI Express, Ethernet and Compute Express Link.

We’re shipping three separate product families, all generating revenue and in various stages of adoption and deployment supporting these different connectivity protocols. Let me touch upon each of these critical data center connectivity standards and how we support them with our differentiated solutions. First, PCI Express. PCIe is the native interface on all AI accelerators, TPUs and GPUs, and is the most prevalent protocol for moving data at high bandwidth and low latency inside servers. Today, we see PCIe Gen 5 getting widely deployed in AI servers. These AI servers are becoming increasingly complex. Faster signal speeds in combination with complex server topologies are driving significant signal integrity challenges. To help solve these problems, our hyperscaler and AI accelerator customers utilize our PCIe Smart DSP Retimers to extend the reach of PCIe Gen 5 between various components within heterogeneous compute architectures.

Our Aries product family represents the gold standard in the industry for performance, robustness and flexibility, and is the most widely deployed solution in the market today. Our leadership position, with millions of critical data links running through our Aries Retimers and our COSMOS software, enables us to do something more: become the eyes and ears to monitor the connectivity infrastructure and help fleet managers ensure their AI infrastructure is operating at full utilization. Deep diagnostics and monitoring capabilities in our chips and extensive fleet management features in our COSMOS software, which are deployed together in our customers’ fleets, have become a material differentiator for us. Our COSMOS software provides the easiest and fastest path to deploy the next generation of our devices.

We see AI workloads and newer GPUs driving the transition from PCIe Gen 5 running at 32 gigabits per second per lane to PCIe Gen 6 running at 64 gigabits per second per lane. Our customers are evaluating our Gen 6 solutions now, and we expect them to make design decisions in the next six to nine months. In addition, while we see our Aries devices being heavily deployed today for interconnecting AI accelerators with CPUs and networking, we also expect our Aries devices to play an increasing role in backend fabrics, interconnecting AI Accelerators to each other in AI clusters. Next, let’s talk about Ethernet. Ethernet protocol is extensively deployed to build large scale networks within data centers. Today, Ethernet makes up the vast majority of connections between servers and top of rack switches.

Driven by AI workloads’ insatiable need for speed, Ethernet data rates are doubling roughly every two years, and we expect the transition from 400 gig Ethernet to 800 gig Ethernet to take place later in 2025. 800 gig Ethernet is based on a 100 gigabits per second per lane signaling rate, which puts tremendous pressure on conventional passive cabling solutions. Like our PCIe Retimers, our portfolio of Taurus Ethernet Retimers helps relieve these connectivity bottlenecks, overcoming reach, signal integrity and bandwidth issues to enable robust 100 gig per lane connectivity over copper. Unlike our Aries portfolio, which is largely sold in a chip format, we sell our Taurus portfolio largely in the form of smart cable modules that are assembled into active electrical cables by our cable partners.

This approach allows us to focus on our strength and fully leverage our COSMOS software suite to offer customization, easy qualification, deep telemetry and field upgradability to our customers. At the same time, this model enables our cable partners to continue to excel at bringing the best cabling technology to our common end customers. We expect 400 gig deployments based on our Taurus smart cable modules to begin to ramp in the back half of 2024. We see the transition to 800 gig Ethernet starting to happen in 2025, resulting in broad demand for AECs to both scale up and scale out AI infrastructure, and strong growth for our Taurus Ethernet Smart Cable module portfolio over the coming years. Last is Compute Express Link, or CXL. CXL is a low latency cache coherent protocol, which runs on top of the PCIe protocol.

CXL provides an open standard for disaggregating memory from compute. CXL allows you to balance memory bandwidth and capacity requirements independently from compute requirements, resulting in better utilization of compute infrastructure. Over the next several years, data center platform architects plan to utilize CXL technology to solve memory bandwidth and capacity bottlenecks that are being exacerbated by the exponential increase in compute capability of CPUs and GPUs. Major hyperscalers are actively exploring different applications of CXL memory expansion. While the adoption of CXL technology is currently in its infancy, we do expect to see increased deployments with the introduction of next generation CXL capable datacenter server CPUs such as Granite Rapids, Turin and others.

Our first-to-market portfolio of Leo CXL memory connectivity controllers is very well positioned to enable our customers to overcome memory bottlenecks and deliver significant benefits to their end customers. We have worked closely with our hyperscaler customers and CPU partners to optimize our solution to seamlessly deliver these benefits without any application level software changes. Furthermore, we have used our COSMOS software to incorporate the significant learnings we have gained over the last 18 months and to customize our Leo memory expansion solution to the different requirements of each hyperscaler. We anticipate memory expansion will be the first high volume use case that will drive design wins into volume production in the 2025 timeframe. We remain very excited about the potential of CXL in datacenter applications and believe that most new CPUs will support CXL and hyperscalers will increasingly deploy innovative solutions based on CXL.

With that, let me turn the call over to our President and COO, Sanjay Gajendra, to discuss some of our recent product announcements and our long-term growth strategy.

Sanjay Gajendra: Thanks, Jitendra, and good afternoon, everyone. Astera Labs is well positioned to demonstrate long-term growth through a combination of three factors. One, we have strong secular tailwinds with increased AI infrastructure investment. Two, the next generation of products within existing product lines is gaining traction. And three, the introduction of new product lines. Over the past three months, we announced two new and significant products that play an important role in enabling next generation AI platforms and provide incremental revenue opportunities as early as the second half of 2024. First, we expanded our widely deployed, field proven Aries Smart DSP Retimer portfolio with the introduction and public demonstration of our Aries 6 PCIe Retimer, which delivers robust, low power PCIe Gen 6 and CXL 3 connectivity between next generation GPUs, AI accelerators, CPUs, NICs, and CXL memory controllers.

Aries 6 is the third generation of our PCIe Smart Retimer portfolio and provides the bandwidth required to support data intensive AI workloads while maximizing utilization of next generation GPUs operating at 64 gigabits per second per lane. Fully compatible with our field deployed COSMOS software suite, Aries 6 incorporates the tribal knowledge we have acquired over the past four years by partnering with and enabling hyperscalers to deploy AI infrastructure in the cloud. Aries 6 also enables a seamless upgrade path from current PCIe Gen 5 based platforms to next generation PCIe Gen 6 based platforms for our customers. With Aries 6, we demonstrated the industry’s lowest power at 11 watts at Gen 6 in a full 16 lane configuration running at 64 gigabits per second per lane, significantly lower than our competitors and even lower than our own Aries Gen 5 Retimer.

Through collaboration with leading providers of GPUs and CPUs such as AMD, ARM, Intel, and NVIDIA, Aries 6 is being rigorously tested at Astera’s Cloud-Scale Interop Lab and in customers’ platforms to minimize interoperability risk, lower system development cost, and reduce time to market. Aries 6 was demonstrated at NVIDIA’s GTC event during the week of March 18th. Aries 6 is currently sampling to two leading AI and cloud infrastructure providers, and we expect initial volume ramps to begin in 2025. We also announced the introduction and sampling of our Aries PCIe and CXL Smart Cable Modules for Active Electrical Cables, or AECs, to support robust, long reach, up to 7 meter copper cable connectivity. This is 3x the standard reach defined in the PCIe spec.

Our new PCIe AEC solution is designed for GPU clustering applications, extending PCIe backend fabric deployments to multiple racks. This new Aries product category expands our market opportunity from within the rack to across racks. As with our entire product portfolio, Aries Smart Cable Modules support our COSMOS software suite to deliver a powerful yet familiar array of link monitoring, fleet management and rack tools, which are customizable for the diverse needs of our hyperscaler customers. We leveraged our expertise in silicon, hardware and software to deliver a complete solution in record time, and we expect initial shipments of the PCIe AECs to begin later this year. We believe this new Aries product announcement represents another concrete example of Astera Labs driving the PCIe ecosystem with technology leadership, through an intelligent connectivity platform that includes silicon chips, hardware modules and the COSMOS software suite.

Over the coming quarters, we anticipate ongoing generational product upgrades to existing product lines and the introduction of new product categories developed from the ground up to fully utilize the performance and productivity capabilities of generative AI. In summary, over the past few years, we have built a great team that is delivering technology that is foundational to deploying AI infrastructure at scale. We have gained the trust and support of our world class customer base by executing, innovating and delivering on our commitments. These tight relationships are resulting in new product developments and an enhanced technology roadmap for Astera. We look forward to continued collaboration with our partners as a new era unfolds, driven by AI applications.

With that, I will turn the call over to our CFO, Mike Tate, who will discuss our Q1 financial results and Q2 outlook.

Mike Tate: Thanks, Sanjay, and thanks to everyone for joining. This overview of our Q1 financial results and Q2 guidance will be on a non-GAAP basis. The primary difference in Astera Labs’ non-GAAP metrics is stock-based compensation and the related income tax effects. Please refer to today’s press release, available on the Investor Relations section of our website, for more details on both our GAAP and non-GAAP Q2 financial outlook, as well as a reconciliation of the GAAP to non-GAAP financial measures presented on this call. For Q1 of 2024, Astera Labs delivered record quarterly revenue of $65.3 million, which was up 29% versus the previous quarter and 269% higher than revenue in Q1 of 2023. During the quarter, we shipped products to all the major hyperscalers and AI accelerator manufacturers.

We recognized revenues across all three of our product families during the quarter, with Aries products being the largest contributor. Aries enjoyed solid momentum in AI based platforms as customers continue to introduce and ramp their PCIe Gen 5 capable AI systems, along with overall strong unit growth from the industry’s growing investment in generative AI. Also, we continue to make good progress with our Taurus and Leo product lines, which are in the early stages of revenue contribution. In Q1, Taurus revenues came primarily from shipments into 200 gig Ethernet based systems, and we expect Taurus revenues to track sequentially higher as we progress through 2024 and begin to ship into 400 gig Ethernet based systems. Q1 Leo revenues were largely from customers purchasing pre-production volumes for the development of their next generation CXL capable compute platforms, expected to launch late this year with the next server CPU refresh cycle.

Q1 non-GAAP gross margin was 78.2%, up 90 basis points compared with 77.3% in Q4 2023. The positive gross margin performance during the quarter was driven by healthy product mix. Non-GAAP operating expenses for Q1 were $35.2 million, up from $27 million in the previous quarter. Within non-GAAP operating expenses, R&D expense was $22.9 million, sales and marketing expense was $6 million and general and administrative expenses were $6.3 million. Non-GAAP operating expenses during Q1 increased largely due to a combination of increased headcount and incremental costs associated with being a public company. The largest delta between non-GAAP and GAAP operating expenses in Q1 was stock-based compensation recognized in connection with our recent IPO and its associated employer payroll taxes, and to a lesser extent our normal quarterly stock-based compensation expense.

Non-GAAP operating margin for Q1 was 24.3% as revenues scaled in proportion with our operating expenses on a sequential basis. Interest income in Q1 was $2.6 million. Our non-GAAP tax provision was $4.1 million for the quarter, which represents a tax rate of 22% on a non-GAAP basis. Pro forma non-GAAP fully diluted share count for Q1 was 147.5 million shares. Our pro forma non-GAAP diluted earnings per share for the quarter was $0.10. The pro forma non-GAAP diluted share count includes the assumed conversion of our preferred stock for the entire quarter, while our GAAP share count only includes the conversion of our preferred stock for the stub period following our March IPO. Going forward, given that all the preferred stock was converted to common stock upon our IPO, those shares will be fully included in the share count for both GAAP and non-GAAP.

Cash flow from operating activities for Q1 was $3.7 million and we ended the quarter with cash, cash equivalents and marketable securities of just over $800 million. Now turning to our guidance for Q2 of fiscal 2024. We expect Q2 revenues to increase from Q1 levels within a range of 10% to 12% sequentially. We believe our Aries product family will continue to be the largest component of revenue and will be the primary driver of sequential growth in Q2. Within the Aries product family, we expect the growth to be driven by increased unit demand for AI servers as well as the ramp of new product designs with our customers. We expect non-GAAP gross margins to be approximately 77% given a modest increase in hardware shipments relative to standalone ICs. We believe as our hardware solutions grow as a percentage of revenue over the coming quarters, our gross margins will begin to trend towards our long-term gross margin model of 70%.

We expect non-GAAP operating expenses to be approximately $40 million as we remain aggressive in expanding our R&D resource pool across headcount and intellectual property, while also scaling our back office functions. Interest income is expected to be $9 million. Our non-GAAP tax rate should be approximately 23% and our non-GAAP fully diluted share count is expected to be approximately 180 million shares. Adding this all up, we are expecting non-GAAP fully diluted earnings per share of approximately $0.11. This concludes our prepared remarks. Once again, we very much appreciate everyone joining the call. And now we’ll open the line for questions. Operator?


Q&A Session


Operator: [Operator Instructions] Our first question will come from the line of Harlan Sur with JPMorgan.

Harlan Sur: Good afternoon, and congratulations on the strong results and guidance following your first quarter as a public company. As you guys mentioned, there are many new AI XPU programs coming to the market: GPU and ASIC AI chip programs, accelerators. In terms of total XPU shipments this year, I think only half is going to be NVIDIA based, so it is starting to broaden out. The good news is, obviously, the Astera team has exposure to all of these XPU programs. It does seem that the pace of deploying these XPU platforms has accelerated even over the past few months. So how much of the strong results and guidance is due to this acceleration and broadening in customer deployments? How much is more just kind of higher Retimer content versus your prior expectations? And then do you guys see the strong momentum continuing into the second half of this year?

Mike Tate: Thanks, Harlan. This is Mike. We started shipping into AI servers really in Q3 of last year, so it’s just the early innings. A lot of our customers have not fully deployed their AI systems. So we’re seeing incremental growth just from adding on the different platforms where we have design wins. But it’s against a backdrop where there’s clearly growing investment in AI, and overall unit growth is also playing out. As we look out to the balance of this year, there are still a lot of programs that have not ramped yet. So we have the highest confidence that the Gen 5 Aries platform has a lot of growth ahead of it, and that continues into 2025 as well.

Harlan Sur: And as you mentioned, there’s been a lot of focus on next gen PCIe Gen 6 platforms, obviously, with the rollout of NVIDIA’s Blackwell based platform. And, obviously, with any market that is viewed as fast growing, you are going to attract competitors. We have seen some announcements by competitors. We know most of the Gen 5 design wins have already been locked up by the Astera team. You’ve been working with customers, as you mentioned, on Gen 6 for some time now. Maybe how do you compare the customer engagement momentum on Gen 6 versus the same period back when you were working with customers on Gen 5?

Sanjay Gajendra: Good question, Harlan. This is Sanjay here. Let me take that. So like you correctly said, Gen 5 still has a lot of legs on it. Let’s be very clear on that. Like Mike noted, we do have platforms that are still ramping and still to come. So to that standpoint, we do expect Gen 5 to be with us for some time. And in terms of Gen 6, again, it’s driven by the pace of innovation that’s happening on the AI side. As you probably know, GPUs are not fully utilized; some reports put it at around 50%. So there’s still a lot of growth in terms of connectivity, which is essentially what’s holding them back, meaning there’s a pace and a need to adopt faster speeds and links. So, with NVIDIA announcing their Blackwell platform, those are the first set of GPUs that have Gen 6 on them.

So from that standpoint, we do expect some of those deployments to happen in 2025. But in general, others are not far behind, based upon public information that’s out there. So, we do expect the cycle time for Gen 6 adoption to perhaps be a little bit shorter than Gen 5, especially for the AI server application, more so than general purpose compute, which is still going to be lagging when it comes to PCIe Gen 6 adoption.

Operator: Your next question will come from the line of Joe Moore with Morgan Stanley.

Joe Moore: Following on from that, can you talk about PCIe Gen 5 in general purpose servers? It seems like, if I look at the CPU penetration of Gen 5, we’re still at a pretty early stage. Do you see growth from general purpose, and what are the applications driving that?

Sanjay Gajendra: Absolutely. On general purpose compute, the main place where the PCIe Retimer gets used tends to be storage connectivity, where you have SSDs on the back of the server. So to that standpoint, there are two things that have been holding it back, or three things perhaps. One is just the focus on AI; most of the dollars are going to AI server applications compared to general compute. The second thing is the ecosystem readiness for Gen 5, primarily on the SSD side, which is starting to evolve with many of the major SSD NVMe players providing or ramping up Gen 5 based NVMe drives. The third one really has been the CPU platforms. If you think about it, both Intel and AMD are on the cusp of introducing their next significant platform, whether it is Granite Rapids from Intel or Turin from AMD.

So that is expected to drive the introduction of new platforms. And if you combine that with the SSDs being ready for Gen 5, and based on the design wins that we already have, you can expect those things to be contributing factors as dollars start flowing back into the general purpose compute side.

Joe Moore: And for my follow-up, you just mentioned Granite Rapids and Turin, which are the first kind of volume platforms supporting CXL 2. The CPUs will be out, but what are you hearing in terms of initial adoption, and how quickly do you think that technology can roll out in 2025?

Sanjay Gajendra: Yes. Let me start off by saying that every hyperscaler is, in some shape or form, evaluating and working with CXL technology. So it’s alive and well. I think where the focus really has been with CXL is on the memory expansion use case, specifically for CPUs. And the expansion could be for reasons like adding more memory capacity for large database applications. And the second use case, of course, is more memory bandwidth, which is for HPC type applications. So the thing that’s been holding it back is the availability of CPUs that support CXL at a production quality level. That will change with Granite Rapids and Turin being available. So at this point, what we can say is that we’ve been providing chips for quite some time.

We’ve been in preproduction and have supported the various evaluation and POC type activities that have happened with our hyperscaler customers. So, to that standpoint, we do expect revenue from the CXL memory expansion use case to start coming in 2025.

Operator: Your next question will come from the line of Tore Svanberg with Stifel.

Tore Svanberg: Yes. Thank you. And let me add my congratulations. My first question is on PCIe Gen 6. So Sanjay, you just mentioned that the design-in cycle is going to be shorter than Gen 5. Since it’s backwards compatible with your Gen 5, and especially given the COSMOS software platform, should we assume that you will basically retain most of those sockets that you already had in Gen 5, and then obviously add some new ones as well for Gen 6?

Sanjay Gajendra: That’s the goal for the company. We have the COSMOS software, and like I noted, PCI Express is one of those protocols which, unlike Ethernet, tends to be a little messy, meaning it’s something that’s been around for a long time. It’s a great technology, but it also requires a lot of handholding. And for us, being in the customers’ platforms, bringing up systems that ramp to millions of devices, has allowed us to understand the nuances: what works, what doesn’t work, how you make the link perform at the highest rate. That tribal knowledge is something that we’ve captured within the COSMOS software we built, running both on our chips and on customers’ platforms. So we do expect that as Gen 6 starts to materialize, a lot of those learnings will carry over.

Now you’re right that there’s been a lot of competition that has come in as well. But we believe that when it comes to competition, they could have a similar product to ours, but no matter what, there is lead time that’s essential when it comes to connectivity type chips, just given the interoperation work and getting the kinks out and so on. Meaning, you could have a perfect chip yet have a failing system. The reason for that is the complexity of the system and how the PCI Express standard is defined. So to that standpoint, I agree with what you said, in the sense that we have the leading position in the Retimer market for PCIe today, and we expect to build on that, both with the new features we have added in PCIe Gen 6 and the AEC product line, and also with the tribal knowledge that we have built by working with our partners over the last three, four years.

Tore Svanberg: That’s a great perspective. And as my follow-up, I had a question on AECs. It sounds like that business is going to start ramping late this year. First of all, is that with multiple cable partners? And then related to that, are you the only company today that has an AEC at 7 meters?

Sanjay Gajendra: I don’t know about being the only one; I would probably ask you to do some research on where the competition is. But from a Retimer standpoint, which goes into this, we do have a leading position. So based on that, and the customer traction that we’re seeing, I would imagine that we are the main provider here. This one is an interesting use case. So far, PCI Express, as you know, was defined to live inside the server. But what is happening now, and this is why we’re excited about PCIe AECs, is that we are opening up a new front in terms of clustering GPUs, meaning interconnecting accelerators. That is where the AECs will play, and that is a new opportunity that goes along with the Ethernet AECs we already provide, which are also used for interconnecting GPUs on the backend network.

So, overall, we do believe that combining our PCIe AEC solution and our Ethernet AEC solution, we’re well set for some of these evolving trends. And we expect revenue to start coming in the latter half of this year. And on PCIe, just to clarify what I initially said, we do believe we are the only one; I just don’t know if there is someone else talking about it that’s not yet in the public domain.

Operator: Your next question will come from the line of Blayne Curtis with Jefferies.

Blayne Curtis: Maybe first one for you, Jitendra. Just curious, you mentioned the different architectures; I think Harlan asked about it. Obviously, you have a lead customer, and it’s a lot of CPU to GPU connections; that’s the nature of the market, who has the volume. But I’m curious, you mentioned the backend fabrics a bunch. Is that still conceptual? Are you seeing designs for it? And maybe just talk about the widening out of applications for what the Retimers are being used for?

Jitendra Mohan: Great question. So, there are many applications where our Retimers are used. Of course, we are most known for the connectivity from the GPU to the head node; that is where a lot of the deployments are happening. But these new applications also speak to how rapidly the AI systems are evolving. Every few months, we see a new AI platform come up, and that opens up additional opportunities for us. And one of those is to cluster GPUs together. In addition to NVLink, of course, there are two main protocols used to cluster GPUs: PCI Express and Ethernet. And as Sanjay just mentioned, we now have solutions available to interconnect GPUs together, whether over PCI Express and/or Ethernet.

Specifically, in the case of PCI Express, some of our customers who want to use PCI Express for clustering GPUs together are now able to do so using our PCI Express Retimers, which are offered in the form of an active electrical cable. So this business is going to be in addition to the sustaining business that we have today in connecting GPUs to head nodes. Now we are connecting GPUs together in a cluster. And as you know, these are very intense, very dense mesh connections. So they can grow very, very rapidly. So we are very excited about where this will grow and starting with some revenue contributions later this year.

Blayne Curtis: And then maybe a question for Mike. The gross margin remained quite high. You said it was mix. I mean, maybe you’re just being kind of conservative with the IPO, but I was just kind of curious how the mix came in. I think it’s mostly Retimers, and I know as the other products start to ramp, that will be the headwind. So, how do you think about the rest of the year? Should we just have it come down gradually with mix as those new products ramp, toward this 70% that you’re guiding to?

Mike Tate: Yes. So just to remind everybody, our standalone ICs carry a pretty high margin relative to our hardware solutions. So when the mix gets a little more balanced between hardware and standalone ICs, we expect our gross margins to trend toward our long-term model of 70%. In Q1, we were heavily weighted to standalone ICs, a very favorable mix, and that’s how we enjoyed the strong gross margins. As we go through the balance of this year and into next year, we will see an increasing mix of our modules, and also add-in cards for CXL as well. So, we think we’ll have a gradual trend down towards our long-term model over time as that mix changes.

Operator: Your next question will come from the line of Thomas O’Malley with Barclays.

Thomas O’Malley: Mike, I just wanted to ask, I know you may not be giving segment details specifically, but could you talk about, to the extent you’re able, what contributed to the revenue in the quarter? And then looking out into June, could you talk about, from a revenue mix perspective, maybe some sequential help on what’s growing? Obviously, the non-IC business is growing, just given the fact that gross margins are pressured a bit, but any color on the segments would be helpful to start.
