diff --git "a/finance-app-tutorial/earnings-data.json" "b/finance-app-tutorial/earnings-data.json"
new file mode 100644
--- /dev/null
+++ "b/finance-app-tutorial/earnings-data.json"
@@ -0,0 +1,37 @@
+[
+  {
+    "symbol": "AAPL",
+    "quarter": 2,
+    "year": 2024,
+    "date": "2024-05-02 20:30:10",
+    "content": "Suhasini Chandramouli: Good afternoon, and welcome to the Apple Q2 Fiscal Year 2024 Earnings Conference Call. My name is Suhasini Chandramouli, Director of Investor Relations. Today's call is being recorded. Speaking first today is Apple's CEO, Tim Cook, and he'll be followed by CFO, Luca Maestri. After that, we'll open the call to questions from analysts. Please note that some of the information you'll hear during our discussion today will consist of forward-looking statements, including, without limitation, those regarding revenue, gross margin, operating expenses, other income and expense, taxes, capital allocation and future business outlook, including the potential impact of macroeconomic conditions on the company's business and results of operations. These statements involve risks and uncertainties that may cause actual results or trends to differ materially from our forecast. For more information, please refer to the risk factors discussed in Apple's most recently filed Annual Report on Form 10-K and the Form 8-K filed with the SEC today, along with the associated press release. Apple assumes no obligation to update any forward-looking statements, which speak only as of the date they are made. I'd now like to turn the call over to Tim for introductory remarks.\nTim Cook: Thank you, Suhasini. Good afternoon, everyone, and thanks for joining the call. Today, Apple is reporting revenue of $90.8 billion and an EPS record of $1.53 for the March quarter. We set revenue records in more than a dozen countries and regions. These include, among others, March quarter records in Latin America and the Middle East, as well as Canada, India, Spain and Turkey. 
We also achieved an all-time revenue record in Indonesia, one of the many markets where we continue to see so much potential. In services, we set an all-time revenue record, up 14% over the past year. Keep in mind, as we described on the last call, in the March quarter a year ago, we were able to replenish iPhone channel inventory and fulfill significant pent-up demand from the December quarter COVID-related supply disruptions on the iPhone 14 Pro and 14 Pro Max. We estimate this one-time impact added close to $5 billion to the March quarter revenue last year. If we remove this from last year's results, our March quarter total company revenue this year would have grown. Despite this impact, we were still able to deliver the records I described. Of course, this past quarter, we were thrilled to launch Apple Vision Pro and it has been so wonderful to hear from people who now get to experience the magic of spatial computing. They describe the impossible becoming possible right before their eyes and they share their amazement and their emotions about what they can do now, whether it's reliving their most treasured memories or having a movie theater experience right in their living room. It's also great to see the enthusiasm from the enterprise market. For example, more than half of the Fortune 100 companies have already bought Apple Vision Pro units and are exploring innovative ways to use it to do things that weren't possible before, and this is just the beginning. Looking ahead, we're getting ready for an exciting product announcement next week that we think our customers will love. And next month, we have our Worldwide Developers Conference, which has generated enormous enthusiasm from our developers. We can't wait to reveal what we have in store. We continue to feel very bullish about our opportunity in Generative AI. We are making significant investments, and we're looking forward to sharing some very exciting things with our customers soon. 
We believe in the transformative power and promise of AI, and we believe we have advantages that will differentiate us in this new era, including Apple's unique combination of seamless hardware, software and services integration, groundbreaking Apple silicon, with our industry-leading neural engines and our unwavering focus on privacy, which underpins everything we create. As we push innovation forward, we continue to manage thoughtfully and deliberately through an uneven macroeconomic environment and remain focused on putting our users at the center of everything we do. Now let's turn to our results for the March quarter across each product category, beginning with iPhone. iPhone revenue for the March quarter was $46 billion, down 10% year-over-year. We faced a difficult compare over the previous year due to the $5 billion impact that I mentioned earlier. However, we still saw growth on iPhone in some markets, including Mainland China, and according to Kantar, during the quarter the two best-selling smartphones in Urban China were the iPhone 15 and iPhone 15 Pro Max. I was in China recently where I had the chance to meet with developers and creators who are doing remarkable things with iPhone. And just a couple of weeks ago, I visited Vietnam, Indonesia and Singapore, where it was incredible to see all the ways customers and communities are using our products and services to do amazing things. Everywhere I travel, people have such a great affinity for Apple, and it's one of the many reasons I'm so optimistic about the future. Turning to Mac. March quarter revenue was $7.5 billion, up 4% from a year ago. We had an amazing launch in early March with the new 13-inch and 15-inch MacBook Air. The world's most popular laptop is the best consumer laptop for AI with breakthrough performance of the M3 chip and its even more powerful neural engine. 
Whether it's an entrepreneur starting a new business or a college student finishing their degree, users depend on the power and portability of MacBook Air to take them places they couldn't have gone without it. In iPad, revenue for the March quarter was $5.6 billion, 17% lower year-over-year, due to a difficult compare with the momentum following the launch of M2 iPad Pro and the 10th Generation iPad last fiscal year. iPad continues to stand apart for its versatility, power and performance. For video editors, music makers and creatives of all kinds, iPad is empowering users to do more than they ever could with a tablet. Across Wearables, Home and Accessories, March quarter revenue was $7.9 billion, down 10% from a year ago due to a difficult launch compare on Watch and AirPods. Apple Watch is helping runners go the extra mile on their wellness journeys, keeping hikers on course with the latest navigation capabilities in watchOS 10, and enabling users of all fitness levels to live a healthier day. Across our watch lineup, we're harnessing AI and machine learning to power lifesaving features like irregular rhythm notifications and fall detection. I often hear about how much these features mean to users and their loved ones and I'm thankful that so many people are able to get help in their time of greatest need. As I shared earlier, we set an all-time revenue record in services with $23.9 billion, up 14% year-over-year. We also achieved all-time revenue records across several categories and geographic segments. Audiences are tuning in on screens large, small and spatial and are enjoying Apple TV+ Originals like Palm Royale and Sugar. And we have some incredible theatrical releases coming this year, including Wolfs, which reunites George Clooney and Brad Pitt. Apple TV+ productions continue to be celebrated as major awards contenders. Since launch, Apple TV+ productions have earned more than 2,100 award nominations and 480 wins. 
Meanwhile, we're enhancing the live sports experience with a new iPhone app, Apple Sports. This free app allows fans to follow their favorite teams and leagues with real-time scores, stats and more. Apple Sports is the perfect companion for MLS Season Pass subscribers. Turning to retail, our stores continued to be vital spaces for connection and innovation. I was delighted to be in Shanghai for the opening of our latest flagship store. The energy and enthusiasm from our customers was truly something to behold. And across the United States, our incredible retail teams have been sharing Vision Pro demos with customers, delighting them with a profound and emotional experience of using it for the very first time. Everywhere we operate and in everything we do, we're guided by our mission to enrich users' lives and leave the world better than we found it, whether we're making Apple Podcasts more accessible with a new transcripts feature or helping to safeguard iMessage users' privacy with new protections that can defend against advances in quantum computing. Our environmental work is another great example of how innovation and our values come together. As we work toward our goal of being carbon-neutral across all of our products by 2030, we are proud of how we've been able to innovate and do more for our customers while taking less from the planet. Since 2015, Apple has cut our overall emissions by more than half, while revenue grew nearly 65% during that same time period. And we're now using more recycled materials in our products than ever before. Earlier this spring, we launched our first-ever product to use 50% recycled materials with the new M3-powered MacBook Air. We're also investing in new solar and wind power in the U.S. and Europe, both to power our growing operations and our users' devices. And we're working with partners in India and the U.S. 
to replenish 100% of the water we use in places that need it most with the goal of delivering billions of gallons of water benefits over the next two decades. Through our Restore Fund, Apple has committed $200 million to nature-based carbon removal projects. And last month, we welcomed two supplier partners as new investors, who will together invest up to an additional $80 million in the fund. Whether we're enriching lives of users across the globe or doing our part to be a force for good in the world, we do everything with a deep sense of purpose at Apple. And I'm proud of the impact we've already made at the halfway point in a year of unprecedented innovation. I couldn't be more excited for the future we have ahead of us, driven by the imagination and innovation of our teams and the enduring importance of our products and services in people's lives. With that, I'll turn it over to Luca.\nLuca Maestri: Thank you, Tim, and good afternoon, everyone. Revenue for the March quarter was $90.8 billion, down 4% from last year. Foreign exchange had a negative year-over-year impact of 140 basis points on our results. Products revenue was $66.9 billion, down 10% year-over-year due to the challenging compare on iPhone that Tim described earlier, which was partially offset by strength from Mac. And thanks to our unparalleled customer satisfaction and loyalty and a high number of customers who are new to our products, our installed base of active devices reached an all-time high across all products and all geographic segments. Services revenue set an all-time record of $23.9 billion, up 14% year-over-year with record performance in both developed and emerging markets. Company gross margin was 46.6%, up 70 basis points sequentially, driven by cost savings and favorable mix to services, partially offset by leverage. Products gross margin was 36.6%, down 280 basis points sequentially, primarily driven by seasonal loss of leverage and mix, partially offset by favorable costs. 
Services gross margin was 74.6%, up 180 basis points from last quarter due to a more favorable mix. Operating expenses of $14.4 billion were at the midpoint of the guidance range we provided and up 5% year-over-year. Net income was $23.6 billion, diluted EPS was $1.53 and a March quarter record, and operating cash flow was strong at $22.7 billion. Let me now provide more detail for each of our revenue categories. iPhone revenue was $46 billion, down 10% year-over-year, due to the almost $5 billion impact from a year ago that Tim described earlier. Adjusting for this one-time impact, iPhone revenue would be roughly flat to last year. Our iPhone active installed base grew to a new all-time high in total and in every geographic segment. And during the March quarter, we saw many iPhone models as the top-selling smartphones around the world. In fact, according to a survey from Kantar, an iPhone was the top-selling model in the U.S., Urban China, Australia, the U.K., France, Germany and Japan. And the iPhone 15 family continues to be very popular with customers. 451 Research recently measured customer satisfaction at 99% in the U.S. Mac revenue was $7.5 billion, up 4% year-over-year, driven by the strength of our new MacBook Air, powered by the M3 chip. Customers are loving the incredible AI performance of the latest MacBook Air and MacBook Pro models. And our Mac installed base reached an all-time high with half of our MacBook Air buyers during the quarter being new to Mac. Also customer satisfaction for Mac was recently reported at 96% in the U.S. iPad generated $5.6 billion in revenue, down 17% year-over-year. iPad continued to face a challenging compare against the launch of the M2 iPad Pro and iPad 10th Generation from last year. At the same time, the iPad installed base has continued to grow and is at an all-time high as over half of the customers who purchased iPads during the quarter were new to the product. 
In addition, the latest reports from 451 Research indicated customer satisfaction of 96% for iPad in the U.S. Wearables, Home and Accessories revenue was $7.9 billion, down 10% year-over-year due to a difficult launch compare. Last year, we had the continued benefit from the launches of the AirPods Pro second-generation, the Watch SE and the first Watch Ultra. Apple Watch continues to attract new customers, with almost two-thirds of customers purchasing an Apple Watch during the quarter being new to the product, sending the Apple Watch installed base to a new all-time high, and customer satisfaction was recently measured at 95% in the U.S. In services, as I mentioned, total revenue reached an all-time record of $23.9 billion, growing 14% year-over-year with our installed base of active devices continuing to grow at a nice pace. This provides a strong foundation for the future growth of the services business as we continued to see increased customer engagement with our ecosystem. Both transacting accounts and paid accounts reached a new all-time high, with paid accounts growing double-digits year-over-year. And paid subscriptions showed strong double-digit growth. We have well over 1 billion paid subscriptions across the services on our platform, more than double the number that we had only four years ago. We continued to improve the breadth and quality of our current services, from creating new games on Arcade and great new shows on TV+ to launching additional countries and partners for Apple Pay. Turning to enterprise, our customers continued to invest in Apple products to drive productivity and innovation. We see more and more enterprise customers embracing the Mac. In healthcare, Epic Systems, the world's largest electronic medical record provider, recently launched its native app for the Mac, making it easier for healthcare organizations like Emory Health to transition thousands of PCs to the Mac for clinical use. 
And since the launch of Vision Pro last quarter, many leading enterprise customers have been investing in this amazing new product to bring spatial computing apps and experiences to life. We are seeing so many compelling use cases, from aircraft engine maintenance training at KLM Airlines to real-time team collaboration for racing at Porsche to immersive kitchen design at Lowe's. We couldn't be more excited about the spatial computing opportunity in enterprise. Taking a quick step back, when we look at our performance during the first half of our fiscal year, total company revenue was roughly flat to the prior year in spite of having one less week of sales during the period and some foreign exchange headwinds. We were particularly pleased with our strong momentum in emerging markets, as we set first-half revenue records in several countries and regions, including Latin America, the Middle East, India, Indonesia, the Philippines and Turkey. These results, coupled with double-digit growth in services and strong levels of gross margin, drove a first-half diluted EPS record of $3.71, up 9% from last year. Let me now turn to our cash position and capital return program. We ended the quarter with $162 billion in cash and marketable securities. We repaid $3.2 billion in maturing debt, and commercial paper was unchanged sequentially, leaving us with total debt of $105 billion. As a result, net cash was $58 billion at the end of the quarter. During the quarter, we returned over $27 billion to shareholders, including $3.7 billion in dividends and equivalents and $23.5 billion through open-market repurchases of 130 million Apple shares. Given the continued confidence we have in our business now and into the future, our Board has authorized today an additional $110 billion for share repurchases, as we maintain our goal of getting to net cash-neutral over time. 
We are also raising our dividend by 4% to $0.25 per share of common stock, and we continue to plan for annual increases in the dividend going forward, as we've done for the last 12 years. This cash dividend will be payable on May 16, 2024 to shareholders of record as of May 13, 2024. As we move ahead into the June quarter, I'd like to review our outlook, which includes the types of forward-looking information that Suhasini referred to at the beginning of the call. The color we are providing today assumes that the macroeconomic outlook doesn't worsen from what we are projecting today for the current quarter. We expect our June quarter total company revenue to grow low-single-digits year-over-year in spite of a foreign exchange headwind of about 2.5 percentage points. We expect our services business to grow double-digits at a rate similar to the growth we reported for the first half of the fiscal year. And we expect iPad revenue to grow double-digits. We expect gross margin to be between 45.5% and 46.5%. We expect OpEx to be between $14.3 billion and $14.5 billion. We expect OI&E to be around $50 million, excluding any potential impact from the mark-to-market of minority investments, and our tax rate to be around 16%. With that, let's open the call to questions.\nSuhasini Chandramouli: Thank you, Luca. We ask that you limit yourself to two questions. Operator, may we have the first question, please?\nOperator: Certainly. We will go ahead and take our first question from Mike Ng with Goldman Sachs. Please go ahead.\nMike Ng: Hey, good afternoon. Thank you very much for the question. I have two. First, I'll ask about the June quarter guidance. The revenue outlook for low-single-digits growth, I was wondering if you could run through some of the product assumptions, iPhone, like what kind of gives you confidence around that? And then on the services momentum, what was better than expected in the quarter? 
And then I just have a quick follow-up.\nLuca Maestri: Hey, Mike. It's Luca. On the outlook, what we said is we expect to grow low-single-digits in total for the company. We expect services to grow double-digits at a rate that is similar to what we've done in the first half of our fiscal year. And we've also mentioned that iPad should grow double-digits. This is the color that we're providing for the June quarter. In services, we've seen a very strong performance across the board. We've mentioned we've had records in several categories, in several geographic segments. It's very broad-based, our subscription business is going well. Transacting accounts and paid accounts are growing double-digits. And also we've seen a really strong performance both in developed and emerging markets. So very pleased with the way the services business is going.\nMike Ng: Great. Thank you. And I wanted to ask about, as Apple leans more into AI and Generative AI, should we expect any changes to the historical CapEx cadence that we've seen in the last few years of about $10 billion to $11 billion per year, or any changes to, you know, how we may have historically thought about the split between tooling, data center and facilities? Thank you very much.\nLuca Maestri: Yes. We are obviously very excited about the opportunity with Gen AI. We obviously are pushing very hard on innovation on every front, and we've been doing that for many, many years. Just during the last five years, we spent more than $100 billion in research and development. As you know, on the CapEx front, we have a bit of a hybrid model where we make some of the investments ourselves. In other cases, we share them with our suppliers and partners. On the manufacturing side, we purchase some of the tools and manufacturing equipment. In some of the cases, our suppliers make the investment. And we do something similar on the data center side. 
We have our own data center capacity and then we use capacity from third parties. It's a model that has worked well for us historically, and we plan to continue along the same lines going forward.\nMike Ng: Excellent. Thank you very much.\nSuhasini Chandramouli: Awesome. Thank you, Mike. Operator, can we have the next question, please?\nOperator: Our next question is from Wamsi Mohan with Bank of America. Please go ahead.\nWamsi Mohan: Yes, thank you so much. Tim, can you talk about the implications to Apple from the changes driven by the EU DMA? You've had to open up third-party app stores, which clearly poses some security risks on the one hand, which can dilute the experience, but also lower payments from developers to Apple. What are you seeing developers choose in these early days, and consumers choose, in terms of these third-party app stores? And I have a follow-up.\nTim Cook: It's really too early to answer the question. We just implemented in March, as you probably know, in the European Union, the alternate app stores and alternate billing, et cetera. So we're focused on complying while mitigating the impacts to user privacy and security that you mentioned. And so that's our focus.\nWamsi Mohan: Okay. Thank you, Tim. And Luca, I was wondering if you could comment a bit on the product gross margins, the sequential step down. You noted both mix and leverage. Any more color on the mix, if you could share if customers are at all starting to mix down across product lines, or is this more a mix across product lines? Just trying to get some color on customer behavior given some of the broader inflationary pressures. Thank you so much.\nLuca Maestri: On a sequential basis, yes, we were down. It's primarily the fact that we had a slightly different mix of products than the previous quarter. Obviously, leverage plays a big role as we move from the holiday quarter into, you know, a more typical quarter. So I would say primarily leverage and a different mix of products. 
I mean, we haven't seen anything different in terms within the product categories, we haven't seen anything particular.\nWamsi Mohan: Thank you so much.\nSuhasini Chandramouli: Thanks, Wamsi. We'll take the next question, please.\nOperator: Our next question is from Erik Woodring with Morgan Stanley. Please go ahead.\nErik Woodring: Great. Thanks so much for taking my questions. Maybe my first one, Tim, you've obviously mentioned your excitement around Generative AI multiple times. I'm just curious how Apple is thinking about the different ways in which you can monetize this technology because historically software upgrades haven't been a big factor in driving product cycles. And so could AI be potentially different? And how could that impact replacement cycles? Is there any services angle you'd be thinking? Any early color that you can share on that? And then I have a follow up, please. Thanks.\nTim Cook: I don't want to get in front of our announcements, obviously. I would just say that we see Generative AI as a very key opportunity across our products. And we believe that we have advantages that set us apart there. And we'll be talking more about it in as we go through the weeks ahead.\nErik Woodring: Okay. Very fair. Thank you. And then Luca, maybe to just follow up on Wamsi's comments or question. There's a broad concern about the headwind that rising commodity costs have on your product gross margins. Wondering if you could just clarify for us if we take a step back and look at all of the components and commodities that go into your products kind of collectively, are we -- are you seeing these costs rising? Are they falling? What tools do you have to try to help and mitigate some rising costs if at all, rising input costs if at all? Thank you so much.\nLuca Maestri: Yes. I mean during the last quarter, commodity costs, and in general, component costs have behaved favorably to us. On the memory front, prices are starting to go up. 
They've gone up slightly during the March quarter. But in general, I think it's been a period, not only this quarter but the last several quarters, where, you know, commodities have behaved well for us. Commodities go in cycles, and so there's obviously always that possibility. Keep in mind that we are starting from a very high level of gross margins. We reported 46.6%, which is something that we haven't seen in our company in decades. And so we're starting from a good point. As you know, we try to buy ahead when the cycles are favorable to us. And so we will try to mitigate if there are headwinds. But in general, we feel, particularly for this cycle, we are in good shape.\nErik Woodring: Thank you so much.\nSuhasini Chandramouli: Great. Thank you, Erik. Operator, we'll take the next question, please.\nOperator: Our next question is from Ben Reitzes with Melius. Please go ahead.\nBen Reitzes: Hey, thanks for the question. And hey, Tim, I was wondering if I could ask the China question again. Is there any more color from your visit there that gives you confidence that you've reached a bottom there and that it's turning? And I know you've been -- you've continued to be confident there in the long-term. Just wondering if there was any color as to when you think the tide turns there? Thanks a lot. And I have a follow-up.\nTim Cook: Yes, Ben, if you look at our results in Q2 for Greater China, we were down 8%. That's an acceleration from the previous quarter in Q1. And the primary driver of the acceleration was iPhone. And if you then look at iPhone within Mainland China, we grew on a reported basis. That's before any kind of normalization for the supply disruption that we mentioned earlier. And if you look at the top-selling smartphones, the Top 2 in Urban China are iPhones. 
And while I was there, it was a great visit and we opened a new store in Shanghai and the reception was very warm and highly energetic, and so I left there having a fantastic trip and enjoyed being there. And so I maintain a great view of China in the long-term. I don't know how each and every quarter goes and each and every week. But over the long haul, I have a very positive viewpoint.\nBen Reitzes: Okay. Hey, thanks, Tim. And then my follow-up, I want to ask this carefully though. It's a -- there's a fear out there that, you may lose some traffic acquisition revenue. And I was wondering if you thought AI from big picture and it doesn't have to be on a long-term basis, I mean from a big picture, if AI is an opportunity for you to continue to monetize your mobile real estate, just how you -- how maybe investors can think about that from a big picture, just given that's been one of the concerns that's potentially been an overhang, of course, due to, you know, a lot of the news and the media around some of the legal cases? And I was wondering if there's just a big-picture color you could give that makes us kind of think about it better and your ability to sort of continue to monetize that real estate? Thanks a lot.\nTim Cook: I think AI, Generative AI and AI, both are big opportunities for us across our products. And we'll talk more about it in the coming weeks. I think there are numerous ways there that are great for us. And we think that we're well-positioned.\nBen Reitzes: Thanks, Tim.\nTim Cook: Yes.\nSuhasini Chandramouli: Thanks, Ben. Can we have the next question, please?\nOperator: Thank you. Our next question is from Krish Sankar with TD Cowen. Please go ahead.\nKrish Sankar: Yes, hi. Thanks for taking my question. Again, sorry to beat the AI haul. But Tim, I know you don't want to like reveal a lot. But I'm just kind of curious, because last quarter you spoke about how you're getting traction in enterprise. 
Is the AI strategy going to be both consumer and enterprise, or is it going to be one after the other? Any color would be helpful. And then, I have a follow-up for Luca.\nTim Cook: Our focus in enterprise has been, through the quarter and the quarters that preceded it, on selling iPhones and iPads and Macs, and we recently added Vision Pro to that. And we're thrilled with what we see there in terms of interest from big companies buying some to explore ways they can use it. And so I see enormous opportunity in the enterprise. I wouldn't want to cabin that to AI only. I think there's a great opportunity for us around the world in the enterprise.\nKrish Sankar: Got it. Very helpful. And then for Luca, you know, I'm kind of curious on -- given the macro environment, on the hardware side, are you seeing a bias towards, like, the standard iPhone versus the Pro model? The reason I'm asking the question is that there's a weaker consumer spending environment, yet your services business is still growing and has amazing gross margins. So I'm just trying to, like, square the circle over there. Thank you.\nLuca Maestri: I'm not sure I fully understand the question, but in general, what we are seeing on the product side is we continued to see a lot of interest at the top of the range of our products. And I think it's a combination of consumers wanting to purchase the best product that we offer in the different categories and our ability to make those purchases more affordable over time. We've introduced several financing solutions, from installment plans to trade-in programs, that reduce the affordability threshold and, therefore, customers tend to want to buy at the top of the range. That is very valuable for us in developed markets, but particularly in emerging markets where the affordability issues are more pronounced. 
But in general, over the last several years and that is also reflected in our gross margins, over the last several years, we've seen this trend, which we think is pretty sustainable.\nKrish Sankar: Got it. Thank you very much, Luca, and thanks, Tim.\nSuhasini Chandramouli: Thank you, Krish. Operator, we'll have the next question, please.\nOperator: Our next question is from Amit Daryanani with Evercore. Please go ahead.\nAmit Daryanani: Thanks for taking my question. I have two as well. You know, I guess, first off on capital allocation, you folks have about $58 billion of net cash right now. As you think about eventually getting to this net cash-neutral target, do you think at some point, Apple would be open to taking on leverage on the balance sheet and continuing the buyback program? Or is it more like once you get to this neutral position, it's going to be about returning free cash flow back to shareholders? I'm just wondering, how do you think about leverage on your balance sheet over time and what sort of leverage do you think you'd be comfortable taking on?\nLuca Maestri: Hey, Amit. This is Luca. I would say one step at a time, we have put out this target of getting to net cash-neutral several years ago and we're working very hard to get there. Our free cash flow generation has been very strong over the years, particularly in the last few years. And so as you've seen this year, we've increased the amount that we're allocating to the buyback. For the last couple of years, we were doing $90 billion, now we're doing $110 billion. So let's get there first. It's going to take a while still. And then when we are there, we're going to reassess and see what is the optimal capital structure for the company at that point in time. Obviously, there's going to be a number of considerations that we will need to look at when we get there.\nAmit Daryanani: Fair enough. I figure it's worth trying anyway. 
If I go back to this China discussion a bit and, you know, Tim, I think your comments around growth in iPhones in Mainland China is really notable. Could you step back, I mean, these numbers are still declining at least Greater China on a year-over-year basis in aggregate. Maybe just talk about what are you seeing from a macro basis in China and then at least annual decline -- or year-over-year declines that we're seeing. Do you think it's more macro driven or more competitive driven over there? That would be helpful.\nTim Cook: Yes, I can only tell you what we're seeing. And so I don't want to present myself as a economist. So I'll steer clear of that. From what we saw was an acceleration from Q1, and it was driven by iPhone and iPhone in Mainland China before we adjust for this $5 billion impact that we talked about earlier did grow. That means the other products didn't fare as well. And so we clearly have work there to do. I think it has been and is through last quarter, the most competitive market in the world. And I -- so I, you know, wouldn't say anything other than that. I've said that before, and I believe that it was last quarter as well. And -- but if you step back from the 90-day cycle, what I see is a lot of people moving into the middle class, a -- we try to serve customers very well there and have a lot of happy customers and you can kind of see that in the latest store opening over there. And so I continue to feel very optimistic.\nAmit Daryanani: Great. Thank you.\nSuhasini Chandramouli: Thanks, Amit. Operator, we'll take the next question, please.\nOperator: Our next question is from David Vogt with UBS. Please go ahead.\nDavid Vogt: Great. Thanks guys for taking my question. I'm going to roll the two together, so you guys have them both. So Luca obviously, I'm trying to parse through the outlook for the June quarter. 
And just based on the quick math, it looks like all things being equal, given what you said, the iPhone business is going to be down mid-single-digits again in the June quarter. And if that's the case and maybe this is for Tim obviously, how are you thinking about the competitive landscape in the context of what you just said maybe outside of China and what changes sort of, the consumer demand or receptivity to new devices because we've been in this malaise for a while. Is it really this AI initiative that a lot of companies are pursuing? And do you think that changes sort of the demand drivers going forward? Or is it just really more of a timing issue in terms of the replacement cycle is a little bit long in the tooth, and we see a bit of an upgrade cycle at some point, maybe later this year into next year? Thanks.\nTim Cook: I do see a key opportunity, as I've mentioned before with Generative AI with all of our devices or the vast majority of our devices. And so I think that if you look out that that's not within the next quarter or so and we don't guide at the product level, but I'm extremely optimistic. And so that -- that's kind of how I view it. In terms of the -- I'll let Luca comment on the outlook portion of it. I think if you step back on iPhone though and you make this adjustment from the previous year, our Q2 results would be flattish on iPhone. And so that's how we performed in Q2.\nLuca Maestri: Yes, David, on the outlook, I'll only repeat what we said before, and this is the color that we're providing for the quarter. We do expect to grow in total, low-single-digits. And we do expect services to grow double-digits, and we expect iPad to grow double-digits for the rest. I'll let you make assumptions and then we will report three months from now.\nDavid Vogt: Great. Thanks guys. I'll get back in the queue.\nSuhasini Chandramouli: Thanks, David. 
Operator, we'll take the next question, please.\nOperator: Our next question is from Samik Chatterjee with JPMorgan. Please go ahead.\nSamik Chatterjee: Hi, thanks for taking my question, and I have a couple as well. Maybe for the first one, your services growth accelerated from 11% growth to 14%. If you can sort of dig into the drivers of where or which parts of services did you really see that acceleration? And why it isn't a bit more sustainable as we think about the next quarter? Because I believe you're guiding more to sort of averaging out the first half of the year for the next quarter. So just curious what were the drivers and why not have it a bit more sustainably sort of improve as we go through the remainder of the year? And I have a quick follow-up. Thank you.\nLuca Maestri: So a number of things on services. First of all, the overall performance was very strong. As I said earlier, all-time records in both developed and emerging markets. So we see our services do well across the world. Records in many of our services categories. There are some categories that are growing very fast also because they are relatively smaller in the scheme of our services business like cloud, video, payment services. You know, those all set all-time revenue records. And so we feel very good about the progress that we're making in services. As we go forward, I'll just point out that if you look at our growth rates a year ago, they improved during the course of the fiscal year last year. So the comps for the services business become a bit more challenging as we go through the year. But in general, as I mentioned, we still expect to grow double-digits in the June quarter at a rate that is very similar to what we've done in the first half.\nSamik Chatterjee: Got it. Got it. And for my follow up, if I can ask you more specifically about the India market. Obviously, you continue to make new records in terms of revenue in that market. 
How much of the momentum you're seeing would you associate with your sort of retail strategy in that market, retail expansion relative to maybe some of the supply chain or the sort of manufacturing changes or strategy you've undergone or taken in that market itself. Any thoughts around that would be helpful?\nTim Cook: Sure. We did grow strong double-digit. And so we were very, very pleased about it. It was a new March quarter revenue record for us. As you know, as I've said before, I see it as an incredibly exciting market and it's a major focus for us. In terms of the operational side or supply chain side, we are producing there, from a pragmatic point of view, you need to produce there to be competitive. And so yes, there the two things are linked from that point of view. But we have both operational things going on and we have go-to-market, and initiatives going on. We just opened a couple of stores last year, as you know, and we see enormous opportunity there. We're continuing to expand our channels, and also working on the developer ecosystem as well. And we've been very pleased that there is a rapidly-growing base of developers there. And so, we're working all of the entire ecosystem from developer to the market to operations, the whole thing. And I just -- I could not be more excited and enthusiastic about it.\nSamik Chatterjee: Got it. Thank you. Thanks for that.\nTim Cook: Yes.\nSuhasini Chandramouli: Thank you, Samik. Operator, we'll have the next question, please.\nOperator: Our next question is from. Please go ahead.\nAaron Rakers: Yes, thanks for taking the questions, and I think I have to have two as well like everybody else. I guess, I'm going to go back to the China question. I guess, at a high level, the simple question is, when we look at the data points that have been repeatedly reported throughout the course of this quarter, I'm curious, Tim, you know, what are we missing? 
Like where do you think people are missing, Apple's iPhone traction within the China market, just at a high level, you know, given the data points that were reported throughout this course of the last quarter?\nTim Cook: I can't address the data points. I can only address what our results are. And we did accelerate last quarter, and the iPhone grew in Mainland China. So that's what the results were. I can't bridge to numbers we didn't come up with.\nAaron Rakers: Okay. And then as a quick follow-up, I know you guys haven't talked about this, you know, quantified it in quite some time. But I'm curious how we would characterize the channel inventory dynamics for iPhone?\nTim Cook: Sure. The -- for the March quarter, we decreased channel inventory during the quarter. We usually decreased channel inventory during the Q2 timeframe. So that's not unusual. And we're very comfortable with the overall channel inventory.\nAaron Rakers: Thank you.\nTim Cook: Yes.\nSuhasini Chandramouli: Thank you, Aaron. Operator, we'll take the next question, please.\nOperator: Our next question is from Richard Kramer with Arete Research. Please go ahead.\nRichard Kramer: Thanks very much. I'm not going to ask about China, but you regularly call out all the rapid growth in many other emerging markets. So is Apple approaching a point where all of those other emerging markets in aggregate might crossover to become larger than your current $70 billion Greater China segments, and maybe investors could look at that for driving growth for the wider business? And then I have a follow-up for Luca. Thanks.\nLuca Maestri: I think, Richard, you're asking a really interesting question. We were looking at something similar recently. Obviously, China is by far the largest emerging market that we have. 
But when we started looking at places like India, like Saudi, like Mexico, Turkey, of course, Brazil and Mexico and Indonesia, the numbers are getting large, and we're very happy because these are markets where our market share is low, the populations are large and growing. And our products are really making a lot of progress with the -- in those markets. The level of excitement for the brand is very high. Tim was in Southeast Asia recently, and the level of excitement is incredibly high. So it is very good for us. And then -- and certainly, the numbers are getting larger all the time. And so the gap as you compare it to the numbers in China is reducing, and hopefully, that trajectory continues for a long time.\nRichard Kramer: Okay. And then as a follow-up, maybe for either of you, I mean, you're coming up on four years from what was incredibly popular iPhone 12 cycle. And, you know, given you're struggling to reduce your net -- your -- reach your net neutral cash position and your margins are sort of near highs, do you see ways to deploy capital more to spur replacement demand in your installed base either with greater device financing, more investment in marketing, more promotions. I mean, do you feel like you needed to produce those sort of margins or is it a more important to spur growth with replacement? Thanks.\nTim Cook: I think innovation spurs the upgrade cycle, and as one thing, of course, there's economic factors as well that play in there. And what kind of offerings there are from our carrier partners and so forth. And so there's a number of variables in there. But we work all of those, and you know, we price our products for the value that we're delivering. And so that's how we look at it.\nLuca Maestri: And if I can add to Tim's comments, Richard, one of the things that when you look over the long arc of time that maybe is not fully understood is that we've gone through a long period of very strong dollar. 
And what that means given that our company sells more than 60% of our revenue outside the United States. The demand for our products in those markets is stronger than the results that we report just because of the translation of those local currencies into dollars, right? And so that is something to keep in mind as you look at our results, right? And so we are making all the investments that are needed and Tim has talked about innovation. Obviously, we made a lot of progress with financing solutions, with trade-in programs and so on, and we will continue to make all those investments.\nRichard Kramer: Okay. Super. Thanks, guys.\nSuhasini Chandramouli: Thank you, Richard. Operator, can we take our last question, please.\nOperator: Our next question is from Atif Malik with Citi. Please go ahead.\nAtif Malik: Hi. Thank you for taking my questions, and I have two questions as well. First for Tim, for enterprise, specifically, what are some of the top two or three use cases on Vision Pro you're hearing most excitement? And then I have a follow-up for Luca.\nTim Cook: Yes, the great thing is, I'm hearing about so many of them. I wouldn't say that one has emerged as the top, right now. The most impressive thing is that similar to the way people use a Mac, you use it for everything. People are using it for many different things in enterprise, and that varies from field service to training to healthcare related things like preparing a doctor for pre-op surgery or advanced imaging. And so the -- it commands control centers. And so it's an enormous number of different verticals. And you know our focus is on -- is growing that ecosystem and getting more apps and more and more enterprises engaged. And the event that we had recently, I can't overstate the enthusiasm in the room. It was extraordinary. And so we're off to a good start, I think, with the enterprise.\nAtif Malik: Great. 
And then Luca, I believe you mentioned that for the March quarter, the commodity pricing environment was favorable. Can you talk about what you're assuming for commodity pricing on memory and et cetera for the June quarter and maybe for the full-year?\nLuca Maestri: Yes, we provide guidance just for the current quarter. So I'll tell you about the, you know, the guidance. We're guiding to again to a very high level of gross margins, 45.5% to 46.5%. Within that guidance, we expect memory to be a slight headwind, not a very large one, but a slight headwind. And the same applies for foreign exchange. Foreign exchange will have a negative impact sequentially of about 30 basis points.\nAtif Malik: Thank you.\nSuhasini Chandramouli: Thank you, Atif. A replay of today's call will be available for two weeks on Apple podcasts as a webcast on apple.com/investor and via telephone. The number for the telephone replay is 866-583-1035. Please enter confirmation code 0467138 followed by the pound sign. These replays will be available by approximately 5:00 P.M. Pacific Time today. Members of the press with additional questions can contact Josh Rosenstock at 408-862-1142, and financial analysts can contact me, Suhasini Chandramouli, with additional questions at 408-974-3123. Thank you again for joining us." + }, + { + "symbol": "NVDA", + "quarter": 2, + "year": 2024, + "date": "2023-08-23 22:17:10", + "content": "Operator: Good afternoon. My name is David, and I'll be your conference operator today. At this time, I'd like to welcome everyone to NVIDIA's Second Quarter Earnings Call. Today's conference is being recorded. All lines have been placed on mute to prevent any background noise. After the speakers’ remarks, there will be a question-and-answer session. [Operator Instructions] Thank you. Simona Jankowski, you may begin your conference.\nSimona Jankowski: Thank you. Good afternoon, everyone and welcome to NVIDIA's conference call for the second quarter of fiscal 2024. 
With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer. I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the third quarter of fiscal 2024. The content of today's call is NVIDIA's property. It can't be reproduced or transcribed without our prior written consent. During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, August 23, 2023, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website. And with that, let me turn the call over to Colette.\nColette Kress: Thanks, Simona. We had an exceptional quarter. Record Q2 revenue of $13.51 billion was up 88% sequentially and up 101% year-on-year, and above our outlook of $11 billion. Let me first start with Data Center. Record revenue of $10.32 billion was up 141% sequentially and up 171% year-on-year. Data Center compute revenue nearly tripled year-on-year, driven primarily by accelerating demand from cloud service providers and large consumer Internet companies for HGX platform, the engine of generative AI and large language models. 
Major companies, including AWS, Google Cloud, Meta, Microsoft Azure and Oracle Cloud as well as a growing number of GPU cloud providers are deploying, in volume, HGX systems based on our Hopper and Ampere architecture Tensor Core GPUs. Networking revenue almost doubled year-on-year, driven by our end-to-end InfiniBand networking platform, the gold standard for AI. There is tremendous demand for NVIDIA accelerated computing and AI platforms. Our supply partners have been exceptional in ramping capacity to support our needs. Our data center supply chain, including HGX with 35,000 parts and highly complex networking has been built up over the past decade. We have also developed and qualified additional capacity and suppliers for key steps in the manufacturing process such as [indiscernible] packaging. We expect supply to increase each quarter through next year. By geography, data center growth was strongest in the U.S. as customers direct their capital investments to AI and accelerated computing. China demand was within the historical range of 20% to 25% of our Data Center revenue, including compute and networking solutions. At this time, let me take a moment to address recent reports on the potential for increased regulations on our exports to China. We believe the current regulation is achieving the intended results. Given the strength of demand for our products worldwide, we do not anticipate that additional export restrictions on our Data Center GPUs, if adopted, would have an immediate material impact to our financial results. However, over the long term, restrictions prohibiting the sale of our Data Center GPUs to China, if implemented, will result in a permanent loss of opportunity for the U.S. industry to compete and lead in one of the world's largest markets. 
Our cloud service providers drove exceptionally strong demand for HGX systems in the quarter, as they undertake a generational transition to upgrade their data center infrastructure for the new era of accelerated computing and AI. The NVIDIA HGX platform is the culmination of nearly two decades of full stack innovation across silicon, systems, interconnects, networking, software and algorithms. Instances powered by the NVIDIA H100 Tensor Core GPUs are now generally available at AWS, Microsoft Azure and several GPU cloud providers, with others on the way shortly. Consumer Internet companies also drove the very strong demand. Their investments in data center infrastructure purpose-built for AI are already generating significant returns. For example, Meta recently highlighted that since launching Reels, AI recommendations have driven a more than 24% increase in time spent on Instagram. Enterprises are also racing to deploy generative AI, driving strong consumption of NVIDIA powered instances in the cloud as well as demand for on-premise infrastructure. Whether we serve customers in the cloud or on-prem through partners or direct, their applications can run seamlessly on NVIDIA AI enterprise software with access to our acceleration libraries, pre-trained models and APIs. We announced a partnership with Snowflake to provide enterprises with an accelerated path to create customized generative AI applications using their own proprietary data, all securely within the Snowflake Data Cloud. With the NVIDIA NeMo platform for developing large language models, enterprises will be able to make custom LLMs for advanced AI services, including chatbots, search and summarization, right from the Snowflake Data Cloud. Virtually, every industry can benefit from generative AI. For example, AI copilots such as those just announced by Microsoft can boost the productivity of over 1 billion office workers and tens of millions of software engineers. 
Billions of professionals in legal services, sales, customer support and education will be able to leverage AI systems trained in their field. AI copilots and assistants are set to create new multi-hundred billion dollar market opportunities for our customers. We are seeing some of the earliest applications of generative AI in marketing, media and entertainment. WPP, the world's largest marketing and communication services organization, is developing a content engine using NVIDIA Omniverse to enable artists and designers to integrate generative AI into 3D content creation. WPP designers can create images from text prompts with responsibly trained generative AI tools and content from NVIDIA partners such as Adobe and Getty Images using NVIDIA Picasso, a foundry for custom generative AI models for visual design. Visual content provider Shutterstock is also using NVIDIA Picasso to build tools and services that enable users to create 3D scene backgrounds with the help of generative AI. We've partnered with ServiceNow and Accenture to launch the AI Lighthouse program, fast tracking the development of enterprise AI capabilities. AI Lighthouse unites the ServiceNow enterprise automation platform and engine with NVIDIA accelerated computing and with Accenture consulting and deployment services. We are collaborating also with Hugging Face to simplify the creation of new and custom AI models for enterprises. Hugging Face will offer a new service for enterprises to train and tune advanced AI models powered by NVIDIA DGX Cloud. And just yesterday, VMware and NVIDIA announced a major new enterprise offering called VMware Private AI Foundation with NVIDIA, a fully integrated platform featuring AI software and accelerated computing from NVIDIA with multi-cloud software for enterprises running VMware. 
VMware's hundreds of thousands of enterprise customers will have access to the infrastructure, AI and cloud management software needed to customize models and run generative AI applications such as intelligent chatbots, assistants, search and summarization. We also announced new NVIDIA AI enterprise-ready servers featuring the new NVIDIA L40S GPU built for the industry standard data center server ecosystem and BlueField-3 DPU data center infrastructure processor. L40S is not limited by [indiscernible] supply and is shipping to the world's leading server system makers (ph). L40S is a universal data center processor designed for high-volume data center scaling out to accelerate the most compute-intensive applications, including AI training and inferencing, 3D design, visualization, video processing and NVIDIA Omniverse industrial digitalization. NVIDIA AI enterprise ready servers are fully optimized for VMware Cloud Foundation and Private AI Foundation. Nearly 100 configurations of NVIDIA AI enterprise ready servers will soon be available from the world's leading enterprise IT computing companies, including Dell, HP and Lenovo. The GH200 Grace Hopper Superchip which combines our ARM-based Grace CPU with Hopper GPU entered full production and will be available this quarter in OEM servers. It is also shipping to multiple supercomputing customers, including Atmos (ph), National Labs and the Swiss National Computing Center. And NVIDIA and SoftBank are collaborating on a platform based on GH200 for generative AI and 5G/6G applications. The second generation version of our Grace Hopper Superchip with the latest HBM3e memory will be available in Q2 of calendar 2024. We announced the DGX GH200, a new class of large memory AI supercomputer for giant AI language models, recommender systems and data analytics. 
This is the first use of the new NVIDIA [indiscernible] switch system, enabling all of its 256 Grace Hopper Superchips to work together as one, a huge jump compared to our prior generation connecting just eight GPUs over [indiscernible]. DGX GH200 systems are expected to be available by the end of the year, with Google Cloud, Meta and Microsoft among the first to gain access. Strong networking growth was driven primarily by InfiniBand infrastructure to connect HGX GPU systems. Thanks to its end-to-end optimization and in-network computing capabilities, InfiniBand delivers more than double the performance of traditional Ethernet for AI. For billion-dollar AI infrastructures, the value from the increased throughput of InfiniBand is worth hundreds of [indiscernible] and pays for the network. In addition, only InfiniBand can scale to hundreds of thousands of GPUs. It is the network of choice for leading AI practitioners. For Ethernet-based cloud data centers that seek to optimize their AI performance, we announced NVIDIA Spectrum-X, an accelerated networking platform designed to optimize Ethernet for AI workloads. Spectrum-X couples the Spectrum-4 Ethernet switch with the BlueField-3 DPU, achieving 1.5x better overall AI performance and power efficiency versus traditional Ethernet. BlueField-3 DPU is a major success. It is in qualification with major OEMs and ramping across multiple CSPs and consumer Internet companies. Now moving to gaming. Gaming revenue of $2.49 billion was up 11% sequentially and 22% year-on-year. Growth was fueled by GeForce RTX 40 Series GPUs for laptops and desktop. End customer demand was solid and consistent with seasonality. We believe global end demand has returned to growth after last year's slowdown. We have a large upgrade opportunity ahead of us. Just 47% of our installed base have upgraded to RTX and about 20% have a GPU with an RTX 3060 or higher performance. 
Laptop GPUs posted strong growth in the key back-to-school season, led by RTX 4060 GPUs. NVIDIA's GPU-powered laptops have gained in popularity, and their shipments are now outpacing desktop GPUs in several regions around the world. This is likely to shift the seasonality of our overall gaming revenue a bit, with Q2 and Q3 as the stronger quarters of the year, reflecting the back-to-school and holiday build schedules for laptops. In desktop, we launched the GeForce RTX 4060 and the GeForce RTX 4060 Ti GPUs, bringing the Ada Lovelace architecture down to price points as low as $299. The ecosystem of RTX and DLSS games continues to expand. 35 new games added DLSS support, including blockbusters such as Diablo IV and Baldur’s Gate 3. There's now over 330 RTX accelerated games and apps. We are bringing generative AI to gaming. At COMPUTEX, we announced NVIDIA Avatar Cloud Engine or ACE for games, a custom AI model foundry service. Developers can use this service to bring intelligence to non-player characters. And it harnesses a number of NVIDIA Omniverse and AI technologies, including NeMo, Riva and Audio2Face. Now moving to Professional Visualization. Revenue of $375 million was up 28% sequentially and down 24% year-on-year. The Ada architecture ramp drove strong growth in Q2, rolling out initially in laptop workstations with a refresh of desktop workstations coming in Q3. These will include powerful new RTX systems with up to 4 NVIDIA RTX 6000 GPUs, providing more than 5,800 teraflops of AI performance and 192 gigabytes of GPU memory. They can be configured with NVIDIA AI enterprise or NVIDIA Omniverse inside. We also announced three new desktop workstation GPUs based on the Ada generation. The NVIDIA RTX 5000, 4500 and 4000, offering up to 2x the RT core throughput and up to 2x faster AI training performance compared to the previous generation. 
In addition to traditional workloads such as 3D design and content creation, new workloads in generative AI, large language model development and data science are expanding the opportunity in pro visualization for our RTX technology. One of the key themes in Jensen's keynote [indiscernible] earlier this month was the convergence of graphics and AI. This is where NVIDIA Omniverse is positioned. Omniverse is OpenUSD's native platform. OpenUSD is a universal interchange that is quickly becoming the standard for the 3D world, much like HTML is the universal language for the 2D [indiscernible]. Together, Adobe, Apple, Autodesk, Pixar and NVIDIA form the Alliance for OpenUSD. Our mission is to accelerate OpenUSD's development and adoption. We announced new and upcoming Omniverse cloud APIs, including RunUSD and ChatUSD to bring generative AI to OpenUSD workloads. Moving to automotive. Revenue was $253 million, down 15% sequentially and up 15% year-on-year. Solid year-on-year growth was driven by the ramp of self-driving platforms based on [indiscernible] or associated with a number of new energy vehicle makers. The sequential decline reflects lower overall automotive demand, particularly in China. We announced a partnership with MediaTek to bring drivers and passengers new experiences inside the car. MediaTek will develop automotive SoCs and integrate a new product line of NVIDIA's GPU chiplet. The partnership covers a wide range of vehicle segments from luxury to entry level. Moving to the rest of the P&L. GAAP gross margins expanded to 70.1% and non-GAAP gross margin to 71.2%, driven by higher data center sales. Our Data Center products include a significant amount of software and complexity, which is also helping drive our gross margin. Sequential GAAP operating expenses were up 6% and non-GAAP operating expenses were up 5%, primarily reflecting increased compensation and benefits. 
We returned approximately $3.4 billion to shareholders in the form of share repurchases and cash dividends. Our Board of Directors has just approved an additional $25 billion in stock repurchases to add to our remaining $4 billion of authorization as of the end of Q2. Let me turn to the outlook for the third quarter of fiscal 2024. Demand for our Data Center platform for AI is tremendous and broad-based across industries and customers. Our demand visibility extends into next year. Our supply over the next several quarters will continue to ramp as we lower cycle times and work with our supply partners to add capacity. Additionally, the new L40S GPU will help address the growing demand for many types of workloads from cloud to enterprise. For Q3, total revenue is expected to be $16 billion, plus or minus 2%. We expect sequential growth to be driven largely by Data Center with gaming and ProViz also contributing. GAAP and non-GAAP gross margins are expected to be 71.5% and 72.5%, respectively, plus or minus 50 basis points. GAAP and non-GAAP operating expenses are expected to be approximately $2.95 billion and $2 billion, respectively. GAAP and non-GAAP other income and expenses are expected to be an income of approximately $100 million, excluding gains and losses from non-affiliated investments. GAAP and non-GAAP tax rates are expected to be 14.5%, plus or minus 1%, excluding any discrete items. Further financial details are included in the CFO commentary and other information available on our IR website. In closing, let me highlight some upcoming events for the financial community. We will attend the Jefferies Tech Summit on August 30 in Chicago, the Goldman Sachs Conference on September 5 in San Francisco, the Evercore Semiconductor Conference on September 6 as well as the Citi Tech Conference on September 7, both in New York. And the BofA Virtual AI conference on September 11. 
Our earnings call to discuss the results of our third quarter of fiscal 2024 is scheduled for Tuesday, November 21. Operator, we will now open the call for questions. Could you please poll for questions for us? Thank you.\nOperator: Thank you. [Operator Instructions] We'll take our first question from Matt Ramsay with TD Cowen. Your line is now open.\nMatt Ramsay: Yes. Thank you very much. Good afternoon. Obviously, remarkable results. Jensen, I wanted to ask a question of you regarding the really quickly emerging application of large model inference. So I think it's pretty well understood by the majority of investors that you guys have very much a lockdown share of the training market. A lot of the smaller market -- smaller model inference workloads have been done on ASICs or CPUs in the past. And with many of these GPT and other really large models, there's this new workload that's accelerating super-duper quickly on large model inference. And I think your Grace Hopper Superchip products and others are pretty well aligned for that. But could you maybe talk to us about how you're seeing the inference market segment between small model inference and large model inference and how your product portfolio is positioned for that? Thanks.\nJensen Huang: Yeah. Thanks a lot. So let's take a quick step back. These large language models are fairly -- are pretty phenomenal. It does several things, of course. It has the ability to understand unstructured language. But at its core, what it has learned is the structure of human language. And it has encoded or within it -- compressed within it a large amount of human knowledge that it has learned by the corpuses that it studied. What happens is, you create these large language models and you create as large as you can, and then you derive from it smaller versions of the model, essentially teacher-student models. It's a process called distillation. 
And so when you see these smaller models, it's very likely the case that they were derived from or distilled from or learned from larger models, just as you have professors and teachers and students and so on and so forth. And you're going to see this going forward. And so you start from a very large model and it has a large amount of generality and generalization and what's called zero-shot capability. And so for a lot of applications and questions or skills that you haven't trained it specifically on, these large language models miraculously has the capability to perform them. That's what makes it so magical. On the other hand, you would like to have these capabilities in all kinds of computing devices, and so what you do is you distill them down. These smaller models might have excellent capabilities on a particular skill, but they don't generalize as well. They don't have what is called as good zero-shot capabilities. And so they all have their own unique capabilities, but you start from very large models.\nOperator: Okay. Next, we'll go to Vivek Arya with BofA Securities. Your line is now open.\nVivek Arya: Thank you. Just had a quick clarification and a question. Colette, if you could please clarify how much incremental supply do you expect to come online in the next year? You think it's up 20%, 30%, 40%, 50%? So just any sense of how much supply because you said it's growing every quarter. And then Jensen, the question for you is, when we look at the overall hyperscaler spending, that buy is not really growing that much. So what is giving you the confidence that they can continue to carve out more of that pie for generative AI? Just give us your sense of how sustainable is this demand as we look over the next one to two years? So if I take your implied Q3 outlook of Data Center, $12 billion, $13 billion, what does that say about how many servers are already AI accelerated? Where is that going? 
So just give some confidence that the growth that you are seeing is sustainable into the next one to two years.\nColette Kress: So thanks for that question regarding our supply. Yes, we do expect to continue increasing ramping our supply over the next quarters as well as into next fiscal year. In terms of percent, it's not something that we have here. It is a work across so many different suppliers, so many different parts of building an HGX and many of our other new products that are coming to market. But we are very pleased with both the support that we have with our suppliers and the long time that we have spent with them improving their supply.\nJensen Huang: The world has something along the lines of about $1 trillion worth of data centers installed, in the cloud, in enterprise and otherwise. And that $1 trillion of data centers is in the process of transitioning into accelerated computing and generative AI. We're seeing two simultaneous platform shifts at the same time. One is accelerated computing. And the reason for that is because it's the most cost-effective, most energy effective and the most performant way of doing computing now. So what you're seeing, and then all of a sudden, enabled by generative AI, enabled by accelerated compute and generative AI came along. And this incredible application now gives everyone two reasons to transition to do a platform shift from general purpose computing, the classical way of doing computing, to this new way of doing computing, accelerated computing. It's about $1 trillion worth of data centers, call it, $0.25 trillion of capital spend each year. You're seeing the data centers around the world are taking that capital spend and focusing it on the two most important trends of computing today, accelerated computing and generative AI. And so I think this is not a near-term thing. 
This is a long-term industry transition and we're seeing these two platform shifts happening at the same time.\nOperator: Next, we go to Stacy Rasgon with Bernstein Research. Your line is open.\nStacy Rasgon: Hi, guys. Thanks for taking my question. I was wondering, Colette, if you could tell me like how much of Data Center in the quarter, maybe even the guide is like systems versus GPU, like DGX versus just the H100? What I'm really trying to get at is, how much is like pricing or content or however you want to define that [indiscernible] versus units actually driving the growth going forward. Can you give us any color around that?\nColette Kress: Sure, Stacy. Let me help. Within the quarter, our HGX systems were a very significant part of our Data Center as well as our Data Center growth that we had seen. Those systems include our HGX of our Hopper architecture, but also our Ampere architecture. Yes, we are still selling both of these architectures in the market. Now when you think about that, what does that mean from both the systems as a unit, of course, is growing quite substantially, and that is driving in terms of the revenue increases. So both of these things are the drivers of the revenue inside Data Center. Our DGXs are always a portion of additional systems that we will sell. Those are great opportunities for enterprise customers and many other different types of customers that we're seeing even in our consumer Internet companies. The importance there is also coming together with software that we sell with our DGXs, but that's a portion of our sales that we're doing. The rest of the GPUs, we have new GPUs coming to market that we talk about the L40S, and they will add continued growth going forward. But again, the largest driver of our revenue within this last quarter was definitely the HGX system.\nJensen Huang: And Stacy, if I could just add something. You say it’s H100, and I know you have a mental image of it in your mind. 
But the H100 is 35,000 parts, 70 pounds, nearly 1 trillion transistors in combination. Takes a robot to build – well, many robots to build because it’s 70 pounds to lift. And it takes a supercomputer to test a supercomputer. And so these things are technology marvels, and the manufacturing of them is really intensive. And so I think we call it H100 as if it’s a chip that comes off of a fab, but H100s go out really as HGXs sent to the world’s hyperscalers and they’re really, really quite large system components, if you will.\nOperator: Next, we go to Mark Lipacis with Jefferies. Your line is now open.\nMark Lipacis: Hi. Thanks for taking my question and congrats on the success. Jensen, it seems like a key part of the success -- your success in the market is delivering the software ecosystem along with the chip and the hardware platform. And I had a two-part question on this. I was wondering if you could just help us understand the evolution of your software ecosystem, the critical elements. And is there a way to quantify your lead on this dimension like how many person years you've invested in building it? And then part two, I was wondering if you would care to share with us your view on the -- what percentage of the value of the NVIDIA platform is hardware differentiation versus software differentiation? Thank you.\nJensen Huang: Yeah, Mark, I really appreciate the question. Let me see if I could use some metrics, so we have a run time called AI Enterprise. This is one part of our software stack. And this is, if you will, the run time that just about every company uses for the end-to-end of machine learning from data processing, the training of any model that you like to do on any framework you'd like to do, the inference and the deployment, the scaling it out into a data center. It could be a scale-out for a hyperscale data center. It could be a scale-out for enterprise data center, for example, on VMware. You can do this on any of our GPUs. 
We have hundreds of millions of GPUs in the field and millions of GPUs in the cloud and just about every single cloud. And it runs in a single GPU configuration as well as multi-GPU per compute or multi-node. It also has multiple sessions or multiple computing instances per GPU. So from multiple instances per GPU to multiple GPUs, multiple nodes to entire data center scale. So this run time called NVIDIA AI enterprise has something like 4,500 software packages, software libraries and has something like 10,000 dependencies among each other. And that run time is, as I mentioned, continuously updated and optimized for our installed base for our stack. And that's just one example of what it would take to get accelerated computing to work. The number of code combinations and type of application combinations is really quite insane. And it's taken us two decades to get here. But what I would characterize as probably our -- the elements of our company, if you will, are several. I would say number 1 is architecture. The flexibility, the versatility and the performance of our architecture makes it possible for us to do all the things that I just said, from data processing to training to inference, for preprocessing of the data before you do the inference to the post processing of the data, tokenizing of languages so that you could then train with it. The amount of -- the workflow is much more intense than just training or inference. But anyways, that's where we'll focus and it's fine. But when people actually use these computing systems, it's quite -- requires a lot of applications. And so the combination of our architecture makes it possible for us to deliver the lowest cost ownership. And the reason for that is because we accelerate so many different things. The second characteristic of our company is the installed base. You have to ask yourself, why is it that all the software developers come to our platform? 
And the reason for that is because software developers seek a large installed base so that they can reach the largest number of end users, so that they could build a business or get a return on the investments that they make. And then the third characteristic is reach. We're in the cloud today, both for public cloud, public-facing cloud because we have so many customers that use -- so many developers and customers that use our platform. CSPs are delighted to put it up in the cloud. They use it for internal consumption to develop and train and to operate recommender systems or search or data processing engines and whatnot all the way to training and inference. And so we're in the cloud, we're in enterprise. Yesterday, we had a very big announcement. It's really worthwhile to take a look at that. VMware is the operating system of the world's enterprise. And we've been working together for several years now, and we're going to bring together -- together, we're going to bring generative AI to the world's enterprises all the way out to the edge. And so reach is another reason. And because of reach, all of the world's system makers are anxious to put NVIDIA's platform in their systems. And so we have a very broad distribution from all of the world's OEMs and ODMs and so on and so forth because of our reach. And then lastly, because of our scale and velocity, we were able to sustain this really complex stack of software and hardware, networking and compute and across all of these different usage models and different computing environments. And we're able to do all this while accelerating the velocity of our engineering. It seems like we're introducing a new architecture every two years. Now we're introducing a new architecture, a new product just about every six months. And so these properties make it possible for the ecosystem to build their company and their business on top of us. 
And so those in combination make us special.\nOperator: Next, we'll go to Atif Malik with Citi. Your line is open.\nAtif Malik: Hi. Thank you for taking my question. Great job on results and outlook. Colette, I have a question on the core L40S that you guys talked about. Any idea how much of the supply tightness can L40S help with? And if you can talk about the incremental profitability or gross margin contribution from this product? Thank you.\nJensen Huang: Yeah, Atif. Let me take that for you. The L40S is really designed for a different type of application. H100 is designed for large-scale language models and processing just very large models and a great deal of data. And so that's not L40S' focus. L40S' focus is to be able to fine-tune models, fine-tune pretrained models, and it'll do that incredibly well. It has a Transformer Engine. It's got a lot of performance. You can get multiple GPUs in a server. It's designed for hyperscale scale-out, meaning it's easy to install L40S servers into the world's hyperscale data centers. It comes in a standard rack, standard server, and everything about it is standard and so it's easy to install. L40S also is with the software stack around it and along with BlueField-3 and all the work that we did with VMware and the work that we did with Snowflake and ServiceNow and so many other enterprise partners. L40S is designed for the world's enterprise IT systems. And that's the reason why HPE, Dell, and Lenovo and some 20 other system makers building about 100 different configurations of enterprise servers are going to work with us to take generative AI to the world's enterprise. And so L40S is really designed for a different type of scale-out, if you will. It's, of course, large language models. It's, of course, generative AI, but it's a different use case. 
And so the L40S is going to -- is off to a great start and the world's enterprise and hyperscalers are really clamoring to get L40S deployed.\nOperator: Next, we'll go to Joe Moore with Morgan Stanley. Your line is open.\nJoseph Moore: Great. Thank you. I guess the thing about these numbers that's so remarkable to me is the amount of demand that remains unfulfilled, talking to some of your customers. As good as these numbers are, you sort of more than tripled your revenue in a couple of quarters. There's a demand, in some cases, for multiples of what people are getting. So can you talk about that? How much unfulfilled demand do you think there is? And you talked about visibility extending into next year. Do you have line of sight into when you get to see supply-demand equilibrium here?\nJensen Huang: Yeah. We have excellent visibility through the year and into next year. And we're already planning the next-generation infrastructure with the leading CSPs and data center builders. The demand – easiest way to think about the demand, the world is transitioning from general-purpose computing to accelerated computing. That's the easiest way to think about the demand. The best way for companies to increase their throughput, improve their energy efficiency, improve their cost efficiency is to divert their capital budget to accelerated computing and generative AI. Because by doing that, you're going to offload so much workload off of the CPUs, but the available CPUs is -- in your data center will get boosted. And so what you're seeing companies do now is recognizing this -- the tipping point here, recognizing the beginning of this transition and diverting their capital investment to accelerated computing and generative AI. And so that's probably the easiest way to think about the opportunity ahead of us. This isn't a singular application that is driving the demand, but this is a new computing platform, if you will, a new computing transition that's happening. 
And data centers all over the world are responding to this and shifting in a broad-based way.\nOperator: Next, we go to Toshiya Hari with Goldman Sachs. Your line is now open.\nToshiya Hari: Hi. Thank you for taking the question. I had one quick clarification question for Colette and then another one for Jensen. Colette, I think last quarter, you had said CSPs were about 40% of your Data Center revenue, consumer Internet at 30%, enterprise 30%. Based on your remarks, it sounded like CSPs and consumer Internet may have been a larger percentage of your business. If you can kind of clarify that or confirm that, that would be super helpful. And then Jensen, a question for you. Given your position as the key enabler of AI, the breadth of engagements and the visibility you have into customer projects, I'm curious how confident you are that there will be enough applications or use cases for your customers to generate a reasonable return on their investments. I guess I ask the question because there is a concern out there that there could be a bit of a pause in your demand profile in the out years. Curious if there's enough breadth and depth there to support a sustained increase in your Data Center business going forward. Thank you.\nColette Kress: Okay. So thank you, Toshiya, on the question regarding our types of customers that we have in our Data Center business. And we look at it in terms of combining our compute as well as our networking together. Our CSPs, our large CSPs are contributing a little bit more than 50% of our revenue within Q2. And the next largest category will be our consumer Internet companies. And then the last piece of that will be our enterprise and high performance computing.\nJensen Huang: Toshi, I'm reluctant to guess about the future and so I'll answer the question from the first principle of computer science perspective. It has been recognized for some time now that brute-forcing general purpose computing is just not the way to go. 
Using general purpose computing at scale is no longer the best way to go forward. It's too energy costly, it's too expensive, and the performance of the applications are too slow. And finally, the world has a new way of doing it. It's called accelerated computing and what kicked it into turbocharge is generative AI. But accelerated computing could be used for all kinds of different applications that's already in the data center. And by using it, you offload the CPUs. You save a ton of money, an order of magnitude in cost and an order of magnitude in energy, and the throughput is higher and that's what the industry is really responding to. Going forward, the best way to invest in the data center is to divert the capital investment from general purpose computing and focus it on generative AI and accelerated computing. Generative AI provides a new way of generating productivity, a new way of generating new services to offer to your customers, and accelerated computing helps you save money and save power. And the number of applications is, well, tons. Lots of developers, lots of applications, lots of libraries. It's ready to be deployed. And so I think the data centers around the world recognize this, that this is the best way to deploy resources, deploy capital going forward for data centers. This is true for the world's clouds and you're seeing a whole crop of new GPU specialty -- GPU specialized cloud service providers. One of the famous ones is CoreWeave and they're doing incredibly well. But you're seeing the regional GPU specialist service providers all over the world now. And it's because they all recognize the same thing, that the best way to invest their capital going forward is to put it into accelerated computing and generative AI. We're also seeing that enterprises want to do that. 
But in order for enterprises to do it, you have to support the management system, the operating system, the security and software-defined data center approach of enterprises, and that's all VMware. And we've been working several years with VMware to make it possible for VMware to support not just the virtualization of CPUs but a virtualization of GPUs as well as the distributed computing capabilities of GPUs, supporting NVIDIA's BlueField for high-performance networking. And all of the generative AI libraries that we've been working on is now going to be offered as a special SKU by VMware's sales force, which is, as we all know, quite large because they reach some several hundred thousand VMware customers around the world. And this new SKU is going to be called VMware Private AI Foundation. And this will be a new SKU that makes it possible for enterprises. And in combination with HPE, Dell, and Lenovo's new server offerings based on L40S, any enterprise could have a state-of-the-art AI data center and be able to engage generative AI. And so I think the answer to that question is hard to predict exactly what's going to happen quarter-to-quarter. But I think the trend is very, very clear now that we're seeing a platform shift.\nOperator: Next, we'll go to Timothy Arcuri with UBS. Your line is now open.\nTimothy Arcuri: Thanks a lot. Can you talk about the attach rate of your networking solutions to your -- to the compute that you're shipping? In other words, is like half of your compute shipping with your networking solutions more than half, less than half? And is this something that maybe you can use to prioritize allocation of the GPUs? Thank you.\nJensen Huang: Well, working backwards, we don't use that to prioritize the allocation of our GPUs. We let customers decide what networking they would like to use. And for the customers that are building very large infrastructure, InfiniBand is, I hate to say it, kind of a no-brainer. 
And the reason for that is because the efficiency of InfiniBand is so significant, some 10%, 15%, 20% higher throughput for $1 billion infrastructure translates to enormous savings. Basically, the networking is free. And so, if you have a single application, if you will, infrastructure or it’s largely dedicated to large language models or large AI systems, InfiniBand is really a terrific choice. However, if you’re hosting for a lot of different users and Ethernet is really core to the way you manage your data center, we have an excellent solution there that we had just recently announced and it’s called Spectrum-X. Well, we’re going to bring the capabilities, if you will, not all of it, but some of it, of the capabilities of InfiniBand to Ethernet so that we can also, within the environment of Ethernet, allow you to – enable you to get excellent generative AI capabilities. So Spectrum-X is just ramping now. It requires BlueField-3 and it supports both our Spectrum-2 and Spectrum-3 Ethernet switches. And the additional performance is really spectacular. BlueField-3 makes it possible and a whole bunch of software that goes along with it. BlueField, as all of you know, is a project really dear to my heart, and it’s off to just a tremendous start. I think it’s a home run. This is the concept of in-network computing and putting a lot of software in the computing fabric is being realized with BlueField-3, and it is going to be a home run.\nOperator: Our final question comes from the line of Ben Reitzes with Melius. Your line is now open.\nBenjamin Reitzes: Hi. Good afternoon. Good evening. Thank you for the question, putting me in here. My question is with regard to DGX Cloud. Can you talk about the reception that you're seeing and how the momentum is going? And then Colette, can you also talk about your software business? What is the run rate right now and the materiality of that business? And it does seem like it's already helping margins a bit. 
Thank you very much.\nJensen Huang: DGX Cloud's strategy, let me start there. DGX Cloud's strategy is to achieve several things: number one, to enable a really close partnership between us and the world's CSPs. We recognize that many of our -- we work with some 30,000 companies around the world. 15,000 of them are startups. Thousands of them are generative AI companies and the fastest-growing segment, of course, is generative AI. We're working with all of the world's AI start-ups. And ultimately, they would like to be able to land in one of the world's leading clouds. And so we built DGX Cloud as a footprint inside the world's leading clouds so that we could simultaneously work with all of our AI partners and help blend them easily in one of our cloud partners. The second benefit is that it allows our CSPs and ourselves to work really closely together to improve the performance of hyperscale clouds, which is historically designed for multi-tenancy and not designed for high-performance distributed computing like generative AI. And so to be able to work closely architecturally to have our engineers work hand in hand to improve the networking performance and the computing performance has been really powerful, really terrific. And then thirdly, of course, NVIDIA uses very large infrastructures ourselves. And our self-driving car team, our NVIDIA research team, our generative AI team, our language model team, the amount of infrastructure that we need is quite significant. And none of our optimizing compilers are possible without our DGX systems. Even compilers these days require AI, and optimizing software and infrastructure software requires AI to even develop. It's been well publicized that our engineering uses AI to design our chips. And so the internal -- our own consumption of AI, our robotics team, so on and so forth, Omniverse teams, so on and so forth, all needs AI. And so our internal consumption is quite large as well, and we land that in DGX Cloud. 
And so DGX Cloud has multiple use cases, multiple drivers, and it's been off to just an enormous success. And our CSPs love it, the developers love it and our own internal engineers are clamoring to have more of it. And it's a great way for us to engage and work closely with all of the AI ecosystem around the world.\nColette Kress: And let's see if I can answer your question regarding our software revenue. In part of our opening remarks that we made as well, remember, software is a part of almost all of our products, whether they're our Data Center products, GPU systems or any of our products within gaming and our future automotive products. You're correct, we're also selling it in a standalone business. And that stand-alone software continues to grow where we are providing both the software services, upgrades across there as well. Now we're seeing, at this point, probably hundreds of millions of dollars annually for our software business, and we are looking at NVIDIA AI enterprise to be included with many of the products that we're selling, such as our DGX, such as our PCIe versions of our H100. And I think we're going to see more availability even with our CSP marketplaces. So we're off to a great start, and I do believe we'll see this continue to grow going forward.\nOperator: And that does conclude today's question-and-answer session. I'll turn the call back over to Jensen Huang for any additional or closing remarks.\nJensen Huang: A new computing era has begun. The industry is simultaneously going through 2 platform transitions, accelerated computing and generative AI. Data centers are making a platform shift from general purpose to accelerated computing. The $1 trillion of global data centers will transition to accelerated computing to achieve an order of magnitude better performance, energy efficiency and cost. Accelerated computing enabled generative AI, which is now driving a platform shift in software and enabling new, never-before possible applications. 
Together, accelerated computing and generative AI are driving a broad-based computer industry platform shift. Our demand is tremendous. We are significantly expanding our production capacity. Supply will substantially increase for the rest of this year and next year. NVIDIA has been preparing for this for over two decades and has created a new computing platform that the world’s industry -- world’s industries can build upon. What makes NVIDIA special are: one, architecture. NVIDIA accelerates everything from data processing, training, inference, every AI model, real-time speech to computer vision, and giant recommenders to vector databases. The performance and versatility of our architecture translates to the lowest data center TCO and best energy efficiency. Two, installed base. NVIDIA has hundreds of millions of CUDA-compatible GPUs worldwide. Developers need a large installed base to reach end users and grow their business. NVIDIA is the developer’s preferred platform. More developers create more applications that make NVIDIA more valuable for customers. Three, reach. NVIDIA is in clouds, enterprise data centers, industrial edge, PCs, workstations, instruments and robotics. Each has fundamentally unique computing models and ecosystems. System suppliers like OEMs, computer OEMs can confidently invest in NVIDIA because we offer significant market demand and reach. Four, scale and velocity. NVIDIA has achieved significant scale and is 100% invested in accelerated computing and generative AI. Our ecosystem partners can trust that we have the expertise, focus and scale to deliver a strong road map and reach to help them grow. We are accelerating because of the additive results of these capabilities. We’re upgrading and adding new products about every six months versus every two years to address the expanding universe of generative AI. 
While we increased the output of H100 for training and inference of large language models, we’re ramping up our new L40S universal GPU for scale, for cloud scale-out and enterprise servers. Spectrum-X, which consists of our Ethernet switch, BlueField-3 Super NIC and software helps customers who want the best possible AI performance on Ethernet infrastructures. Customers are already working on next-generation accelerated computing and generative AI with our Grace Hopper. We’re extending NVIDIA AI to the world’s enterprises that demand generative AI but with the model privacy, security and sovereignty. Together with the world’s leading enterprise IT companies, Accenture, Adobe, Getty, Hugging Face, Snowflake, ServiceNow, VMware and WPP and our enterprise system partners, Dell, HPE, and Lenovo, we are bringing generative AI to the world’s enterprise. We’re building NVIDIA Omniverse to digitalize and enable the world’s multi-trillion dollar heavy industries to use generative AI to automate how they build and operate physical assets and achieve greater productivity. Generative AI starts in the cloud, but the most significant opportunities are in the world’s largest industries, where companies can realize trillions of dollars of productivity gains. It is an exciting time for NVIDIA, our customers, partners and the entire ecosystem to drive this generational shift in computing. We look forward to updating you on our progress next quarter.\nOperator: This concludes today's conference call. You may now disconnect." + }, + { + "symbol": "MSFT", + "quarter": 2, + "year": 2024, + "date": "2024-01-30 21:32:11", + "content": "Operator: Greetings, and welcome to the Microsoft Fiscal Year 2024 Second Quarter Earnings Conference Call. At this time, all participants are in a listen-only mode. A question-and-answer session will follow the formal presentation. [Operator Instructions] As a reminder, this conference is being recorded. 
I would now like to turn the conference over to your host, Brett Iversen, Vice President of Investor Relations. Please go ahead.\nBrett Iversen: Good afternoon, and thank you for joining us today. On the call with me are Satya Nadella, Chairman and Chief Executive Officer; Amy Hood, Chief Financial Officer; Alice Jolla, Chief Accounting Officer; and Keith Dolliver, Corporate Secretary and Deputy General Counsel. On the Microsoft Investor Relations website, you can find our earnings press release and financial summary slide deck, which is intended to supplement our prepared remarks during today's call and provides the reconciliation of differences between GAAP and non-GAAP financial measures. More detailed outlook slides will be available on the Microsoft Investor Relations website when we provide outlook commentary on today's call. Microsoft completed the acquisition of Activision Blizzard this quarter, and we are reporting its results in our More Personal Computing segment beginning on October 13, 2023. Accordingly, our Xbox content and services revenue growth investor metric includes the net impact of Activision. Additionally, our press release and slide deck contain supplemental information regarding the net impact of the Activision acquisition on our financial results. On this call, we will discuss certain non-GAAP items. The non-GAAP financial measures provided should not be considered as a substitute for or superior to the measures of financial performance prepared in accordance with GAAP. They are included as additional clarifying items to aid investors in further understanding the company's second-quarter performance in addition to the impact these items and events have on the financial results. All growth comparisons we make on the call today relate to the corresponding period of last year, unless otherwise noted. 
We will also provide growth rates in constant currency when available as a framework for assessing how our underlying businesses performed, excluding the effect of foreign currency rate fluctuations. Where growth rates are the same in constant currency, we will refer to the growth rate only. We will post our prepared remarks to our website immediately following the call until the complete transcript is available. Today's call is being webcast live and recorded. If you ask a question, it will be included in our live transmission, in the transcript, and in any future use of the recording. You can replay the call and view the transcript on the Microsoft Investor Relations website. During this call, we'll be making forward-looking statements which are predictions, projections, or other statements about future events. These statements are based on current expectations and assumptions that are subject to risks and uncertainties. Actual results could materially differ because of factors discussed in today's earnings press release, in the comments made during this conference call, and in the Risk Factors section of our Form 10-K, Forms 10-Q, and other reports and filings with the Securities and Exchange Commission. We do not undertake any duty to update any forward-looking statement. And with that, I'll turn the call over to Satya.\nSatya Nadella: Thank you, Brett. It was a record quarter driven by the continued strength of Microsoft Cloud, which surpassed $33 billion in revenue, up 24%. We’ve moved from talking about AI to applying AI at scale. By infusing AI across every layer of our tech stack, we are winning new customers and helping drive new benefits and productivity gains. Now I'll highlight examples of our momentum and progress starting with Azure. Azure again took share this quarter with our AI advantage. 
Azure offers the top performance for AI training and inference and the most diverse selection of AI accelerators, including the latest from AMD and NVIDIA, as well as our own first-party silicon, Azure Maia. And with Azure AI, we provide access to the best selection of foundation and open-source models, including both LLMs and SLMs, all integrated deeply with infrastructure, data, and tools on Azure. We now have 53,000 Azure AI customers; over one-third are new to Azure over the past 12 months. Our new models-as-a-service offering makes it easy for developers to use LLMs from our partners like Cohere, Meta, and Mistral on Azure, without having to manage underlying infrastructure. We have also built the world's most popular SLMs, which offer performance comparable to larger models, but are small enough to run on a laptop or mobile device. Anker, Ashley, AT&T, EY, and Thomson Reuters, for example, are all already exploring how to use our SLM Phi for their applications. And we have great momentum with Azure OpenAI Service. This quarter we added support for OpenAI's latest models, including GPT-4 Turbo, GPT-4 with Vision, DALL-E 3, as well as fine-tuning. We are seeing increased usage from AI-first start-ups like Moveworks, Perplexity, SymphonyAI, as well as some of the world's largest companies. Over half of the Fortune 500 use Azure OpenAI today, including Ally Financial, Coca-Cola, and Rockwell Automation. For example, at CES this month, Walmart shared how it's using Azure OpenAI Service along with its own proprietary data and models to streamline how more than 50,000 associates work and transform how its millions of customers shop. More broadly, customers continue to choose Azure to simplify and accelerate their cloud migrations. Overall, we are seeing larger and more strategic Azure deals with an increase in the number of $1 billion-plus Azure commitments. 
Vodafone, for example, will invest $1.5 billion in Cloud and AI services over the next 10 years as it works to transform the digital experience of more than 300 million customers worldwide. Now on to data. We are integrating the power of AI across the entire data stack. Our Microsoft Intelligent Data Platform brings together operational databases, analytics, governance, and AI to help organizations simplify and consolidate their data estates. Cosmos DB is the go-to database to build AI-powered apps at any scale powering workloads for companies in every industry from AXA and Kohl's to Mitsubishi and TomTom. KPMG, for example, has used Cosmos DB including its built-in native vector search capabilities along with Azure OpenAI service to power an AI assistant, which it credits with driving an up to 50% increase in productivity for its consultants. All-up, Cosmos DB data transactions increased 42% year-over-year and for those organizations who want to go beyond in-database vector search, Azure AI search offers the best hybrid search solution. OpenAI is using it for retrieval augmented generation as part of ChatGPT. And this quarter, we made Microsoft Fabric generally available, helping customers like Milliman and PwC go from data to insights to action, all within the same unified SaaS solution. Data stored in Fabric's multi-cloud data lake, OneLake increased 46% quarter-over-quarter. Now on to developers. From GitHub to Visual Studio, we have the most comprehensive and loved developer tools for the era of AI. GitHub revenue accelerated to over 40% year-over-year, driven by all-up platform growth and adoption of GitHub Copilot, the world's most widely deployed AI developer tool. 
We now have over 1.3 million paid GitHub Copilot subscribers, up 30% quarter-over-quarter, and more than 50,000 organizations use GitHub Copilot Business to supercharge the productivity of their developers, from digital natives like Etsy and HelloFresh to leading enterprises like Autodesk, Dell Technologies, and Goldman Sachs. Accenture alone will roll out GitHub Copilot to 50,000 of its developers this year. And we're going further, making Copilot ubiquitous across the entire GitHub platform with new AI-powered security features, as well as Copilot Enterprise, which tailors Copilot to an organization's code base and allows developers to converse with it in natural language. We're also the leader in low-code, no-code development, helping everyone create apps, automate workflows, analyze data, and now build custom copilots. More than 230,000 organizations have already used AI capabilities in Power Platform, up over 80% quarter-over-quarter, and with Copilot Studio, organizations can tailor Copilot for Microsoft 365 or create their own custom copilots. It is already being used by over 10,000 organizations, including An Post, Holland America, and PG&E. In just weeks, for example, both PayPal and Tata Digital built copilots to answer common employee queries, increasing productivity and reducing support costs. We're also using this AI moment to redefine our role in business applications. Dynamics 365 once again took share as organizations use our AI-powered apps to transform their marketing, sales, service, finance, and supply-chain functions. And we are expanding our TAM by integrating Copilot into third-party systems too. Our Sales Copilot has helped sellers at more than 30,000 organizations, including Lumen Technologies and Schneider Electric, to enrich their customer interactions using data from Dynamics 365 or Salesforce. And with our new Copilot for Service, employees at companies like Northern Trust can resolve client queries faster. 
It includes out-of-the-box integrations to apps like Salesforce, ServiceNow, and Zendesk. With our industry and cross-industry clouds, we're tailoring our solutions to meet the needs of specific industries. In healthcare, DAX Copilot is being used by more than 100 healthcare systems including Lifespan, UNC Health and UPMC to increase physician productivity and reduce burnout. And our Cloud for Retail was front and center at NRF with retailers from Canadian Tire Corporation, to Leatherman and Ralph Lauren sharing how they will use our solutions across the shopper journey to accelerate time to value. Now on to future of work. A growing body of evidence makes clear the role AI will play in transforming work. Our own research as well as external studies show as much as 70% improvement in productivity, using generative AI for specific work tasks. And overall early Copilot for Microsoft 365 users were 29% faster in a series of tasks like searching, writing, and summarizing. Two months in, we have seen faster adoption than either our E3 or E5 suites as enterprises like Dentsu, Honda, Pfizer, all deploy Copilot to their employees. And we are expanding availability to organizations of all sizes. We're also seeing a Copilot ecosystem begin to emerge. ISVs like Atlassian, Mural, and Trello, as well as customers like Air India, Bayer, and Siemens have all built plug-ins for specific lines of business that extend Copilot's capabilities. When it comes to Teams, we again saw record usage as organizations brought together collaboration chat, meetings, and calling on one platform and Teams has also become a new entry point for us. More than two-thirds of our enterprise Teams customers buy Phone, Rooms or Premium. All this innovation is driving growth across Microsoft 365. 
We now have more than 400 million paid Office 365 seats, and organizations like BP, Elanco, ING Bank, Mediaset, WTW all chose E5 this quarter to empower their employees with our best-in-class productivity apps along with advanced security, compliance, voice, and analytics. Now on to Windows. In 2024, AI will become a first-class part of every PC. Windows PCs with built-in neural processing units were front and center at CES, unlocking new AI experiences to make what you do on your PC easier and faster, from searching for answers and summarizing emails to optimizing performance and battery efficiency. Copilot in Windows is already available on more than 75 million Windows 10 and Windows 11 PCs, and with our new Copilot key, the first significant change to the Windows keyboard in 30 years, we're providing one-click access. We also continue to transform how Windows is experienced and managed with Azure Virtual Desktop and Windows 365, introducing new features that make it simpler for employees to access and IT teams to secure their cloud PCs. Usage of cloud-delivered Windows increased over 50% year-over-year. And all-up, Windows 11 commercial deployments increased 2 times year-over-year as companies like HPE and Petrobras rolled out the operating system to employees. Now on to security. The recent security attacks, including the nation-state attack on our corporate systems that we reported a week and a half ago, have highlighted the urgent need for organizations to move even faster to protect themselves from cyber threats. It's why last fall we announced a set of engineering priorities under our Secure Future Initiative, bringing together every part of the company to advance cyber security protection across both new products and legacy infrastructure. And it's why we continue to innovate across our security portfolio as well as our operational security posture to help customers adopt a Zero Trust security architecture. 
Our industry-first unified security operations platform brings together our SIEM, Microsoft Sentinel; our XDR, Microsoft Defender; and Copilot for Security to help teams manage an increasingly complex security landscape. And with Copilot for Security, we're now helping hundreds of early access customers, including Cmax, Dow, LTI Mindtree, McAfee, Nucor Steel, significantly increase their SecOps team's productivity. This quarter, we extended Copilot to Entra, Intune, and Purview. All-up, we have over 1 million customers, including more than 700,000 who use four or more of our security products, like Arrow Electronics, DXC Technology, Freeport-McMoRan, Insight Enterprises, JB Hunt, and the Mosaic Company. Now on to LinkedIn. LinkedIn is now helping over 1 billion members learn, sell, and get hired. We continue to see strong global membership growth driven by member sign-ups in key markets like Germany and India. In an ever-changing job market, members are staying competitive through skill-building and knowledge-sharing. Over the last 12 months, members have added 680 million skills to their profiles, up 80% year-over-year. Our new AI-powered features are transforming the LinkedIn member experience, everything from how people learn new skills to how they search for jobs and engage with [indiscernible]. New AI features, including more personalized emails, also continue to increase business ROI on the platform, and our hiring business took share for the sixth consecutive quarter. And more broadly, AI is transforming our search and browser experience. We are encouraged by the momentum. Earlier this month, we achieved a new milestone with 5 billion images created and 5 billion chats conducted to-date, both doubling quarter-over-quarter. And both Bing and Edge took share this quarter. We also introduced Copilot as a standalone destination across all browsers and devices, as well as a Copilot app on iOS and Android. 
And just two weeks ago, we introduced Copilot Pro providing access to the latest models for quick answers and high-quality image creation and access to Copilot for Microsoft 365 personal and family subscribers. Now on to gaming. This quarter we set all-time records for monthly active users in Xbox PC, as well as mobile, where we now have over 200 million monthly active users alone, inclusive of Activision Blizzard King. With our acquisition, we have added hundreds of millions of gamers to our ecosystem, as we execute on our ambition to reach more gamers on more platforms. With cloud gaming, we continue to innovate to offer players more ways to experience the games they love where and when and how they want, hours streamed increased 44% year-over-year. Great content is key to our growth and across our portfolio, I've never been more excited about our line-up of upcoming games. Earlier this month, we shared exciting new first-party titles coming this year to Xbox PC and Game Pass including Indiana Jones. And we've also announced launching significant updates this calendar year to many of our most durable franchises, which brings in millions of players each month, including Call of Duty, Elder Scrolls Online, and Starfield. In closing, we are looking forward to how AI-driven transformation will benefit people and organizations in 2024. With that, I'll hand it over to Amy.\nAmy Hood: Thank you, Satya, and good afternoon, everyone. This quarter, revenue was $62 billion, up 18% and 16% in constant currency. When adjusted for the prior year's Q2 charge, operating income increased 25% and 23% in constant currency, and earnings per share was $2.93, which increased 26% and 23% in constant currency. Results exceeded expectations and we delivered another quarter of double-digit top and bottom-line growth. Strong execution by our sales teams and partners drove share gains again this quarter across many of our businesses, as Satya referenced. 
In our commercial business, strong demand for our Microsoft Cloud offerings, including AI services, drove better-than-expected growth and large long-term Azure contracts. Microsoft 365 suite strength contributed to ARPU expansion for our Office commercial business, while new business growth continued to be moderated for standalone products sold outside the Microsoft 365 suite. Commercial bookings were ahead of expectations and increased 17% and 9% in constant currency on a low expiry base. The strength in long-term Azure contracts mentioned earlier, along with strong execution across our core annuity sales motions, including healthy renewals, drove our results. Commercial remaining performance obligation increased 17% and 16% in constant currency to $222 billion; roughly 45% will be recognized in revenue in the next 12 months, up 15% year-over-year. The remaining portion recognized beyond the next 12 months increased 19%. And this quarter, our annuity mix was 96%. In our consumer business, the PC and advertising markets were generally in line with our expectations. PC market volumes continued to stabilize at pre-pandemic levels [Technical Difficulty] Gaming console market was a bit smaller. As a reminder, my Q2 commentary includes the net impact of Activision from the date of acquisition, inclusive of purchase accounting, integration, and transaction-related expenses. The net impact includes adjusting for the movement of Activision content from our prior relationship as a third-party partner to first-party. At a company level, Activision contributed approximately 4 points to revenue growth, was a 2 point drag on adjusted operating income growth, and had a negative $0.05 impact to earnings per share. This impact includes $1.1 billion from purchase accounting adjustments, integration and transaction-related costs, such as severance-related charges related to last week's announcement. 
FX was roughly in line with our expectations on total company revenue, segment-level revenue, COGS, and operating expense growth. Microsoft Cloud revenue was $33.7 billion, ahead of expectations, and grew 24% and 22% in constant currency. Microsoft Cloud gross margin percentage was 72%, relatively unchanged year-over-year. Excluding the impact of the change in accounting estimate for useful lives, gross margin percentage increased roughly 1 point, driven by improvement in Azure and Office 365, partially offset by the impact of scaling our AI infrastructure to meet growing demand. Company gross margin dollars increased 20% and 18% in constant-currency and gross margin percentage increased year-over-year to 68%. Excluding the impact of the change in accounting estimate, gross margin percentage increased roughly 2 points, even with the impact of $581 million from purchase accounting adjustments, integration, and transaction-related costs from the Activision acquisition. Growth was driven by improvement in devices, as well as the improvement in Azure and Office 365 as just mentioned. Operating expenses increased 3% with 11 points from the Activision acquisition, partially offset by 7 points of favorable impact from the prior year Q2 charge. The Activision impact includes $550 million from purchase accounting adjustments, integration, and transaction-related cost. At a company level, headcount at the end of December was 2% lower than a year ago. Operating margins increased roughly 5 points year-over-year to 44%. Excluding the impact of the change in accounting estimate, operating margins increased roughly 6 points driven by the higher gross margin noted earlier, the favorable impact from the prior year Q2 charge, and improved operating leverage through disciplined cost control. Now, to our segment results. 
Revenue from Productivity and Business Processes was $19.2 billion and grew 13% and 12% in constant currency, ahead of expectations, primarily driven by better-than-expected results in LinkedIn. Office commercial revenue grew 15% and 13% in constant currency. Office 365 commercial revenue increased 17% and 16% in constant currency, in line with expectations, driven by healthy renewal execution and ARPU growth from continued E5 momentum. Paid Office 365 commercial seats grew 9% year-over-year to over 400 million with installed base expansion across all customer segments. Seat growth was again driven by our small and medium business and frontline worker offerings, offset by the continued growth trends in new standalone business noted earlier. Office commercial licensing declined 17% and 18% in constant currency with continued customer shift to cloud offerings. Office Consumer revenue increased 5% and 4% in constant currency with continued momentum in Microsoft 365 subscriptions, which grew 16% to 78.4 million. LinkedIn revenue increased 9% and 8% in constant currency, ahead of expectations, driven by slightly better-than-expected performance across all businesses. In our Talent Solutions business, bookings growth was again impacted by the weaker hiring environment in key verticals. Dynamics revenue grew 21% and 19% in constant currency, driven by Dynamics 365, which grew 27% and 24% in constant currency with continued growth across all workloads. Bookings growth was impacted by weaker new business, primarily in Dynamics 365 ERP and CRM workloads. Segment gross margin dollars increased 14% and 12% in constant currency and gross margin percentage increased slightly year-over-year. Excluding the impact of the change in accounting estimate, gross margin percentage increased roughly 1 point, primarily driven by improvement in Office 365. Operating expenses decreased 5% and 6% in constant currency, with 5 points of favorable impact from the prior-year Q2 charge. 
Operating income increased 26% and 24% in constant currency. Next, the Intelligent Cloud segment. Revenue was $25.9 billion, increasing 20% and 19% in constant currency, ahead of expectations, with better-than-expected results across all businesses. Overall server products and cloud services revenue grew 22% and 20% in constant currency. Azure and other cloud services revenue grew 30% and 28% in constant currency, including 6 points of growth from AI services. Both AI and non-AI Azure services drove our outperformance. In our per-user business, the Enterprise Mobility and Security installed base grew 11% to over 268 million seats with continued impact from the growth trends in new standalone business noted earlier. In our on-premises server business, revenue increased 3% and 2% in constant currency, ahead of expectations, driven primarily by the better-than-expected demand related to Windows Server 2012 end of support. Enterprise and partner services revenue increased 1% and was relatively unchanged in constant currency with better-than-expected performance across enterprise support services and industry solutions. Segment gross margin dollars increased 20% and 18% in constant currency and gross margin percentage was relatively unchanged. Excluding the impact of the change in accounting estimate, gross margin percentage increased roughly 1 point, driven by the improvement in Azure noted earlier, partially offset by the impact of scaling our AI infrastructure to meet growing demand. Operating expenses decreased 8% and 9% in constant currency, with 9 points of favorable impact from the prior year Q2 charge. Operating income grew 40% and 37% in constant currency. Now to More Personal Computing. Revenue was $16.9 billion, increasing 19% and 18% in constant currency, in line with expectations overall. Growth includes 15 points of net impact from the Activision acquisition. 
Windows OEM revenue increased 11% year-over-year, ahead of expectations, driven by slightly better performance in higher monetizing consumer markets. Windows Commercial products and cloud services revenue increased 9% and 7% in constant currency, below expectations primarily [Technical Difficulty] period revenue recognition from the mix of contracts. Annuity billings growth remains healthy. Devices revenue decreased 9% and 10% in constant currency, ahead of expectations due to stronger execution in the commercial segment. Search and news advertising revenue ex-TAC increased 8% and 7% in constant currency, relatively in line with expectations, driven by higher search volume, offset by negative impact from a third-party partnership. And in gaming, revenue increased 49% and 48% in constant currency, with 44 points of net impact from the Activision acquisition. Total gaming revenue was in line with expectations, as stronger-than-expected performance from Activision was offset by the weaker-than-expected console market noted earlier. Xbox content and services revenue increased 61% and 60% in constant currency, driven by 55 points of net impact from the Activision acquisition. Xbox hardware revenue grew 3% and 1% in constant currency. Segment gross margin dollars increased 34% and 32% in constant currency with 17 points of net impact from the Activision acquisition. Gross margin percentage increased roughly 6 points year-over-year, driven by higher devices gross margin and sales mix shift to higher-margin businesses. Operating expenses increased 38% with 48 points of impact from the Activision acquisition, partially offset by 6 points of favorable impact from the prior year Q2 charge. Operating income increased 29% and 26% in constant currency. Now back to total company results. Capital expenditures, including finance leases, were $11.5 billion, lower than expected due to delivery of a third-party capacity contract shifting from Q2 to Q3. 
Cash paid-for PP&E was $9.7 billion. These data center investments support our cloud demand, inclusive of needs to scale our AI infrastructure. Cash flow from operations was $18.9 billion, up 69% driven by strong cloud billings and collections on a prior year comparable that was impacted by lower operating income. Free cash flow was $9.1 billion, up 86% year-over-year, reflecting the timing of cash paid-for property and equipment. This quarter, other income and expense was in line with expectations, negative $506 million, driven by interest expense and net losses on investments, partially offset by interest income. Our effective tax rate was approximately 18%. And finally, we returned $8.4 billion to shareholders through dividends and share repurchases. Now moving to our Q3 outlook, which unless specifically noted otherwise is on a US dollar basis. First FX, based on current rates, we expect FX to increase total revenue and segment-level revenue growth by less than 1 point. And we expect no impact to COGS and operating expense growth. In commercial bookings, strong execution across our core annuity sales motions, including healthy renewals along with long-term Azure commitments should drive healthy growth on a growing expiry base. Microsoft Cloud gross margin percentage should decrease roughly 1 point year-over-year, excluding the impact from the accounting estimate change, Q3 cloud gross margin percentage will be relatively flat as improvement in Office 365 and Azure will be offset by sales mix shift to Azure, as well as the impact of scaling our AI infrastructure to meet growing demand. We expect capital expenditures to increase materially on a sequential basis, driven by investments in our cloud and AI infrastructure and the flip of a delivery date from Q2 to Q3 from a third-party provider noted earlier. As a reminder, there can be normal quarterly spend variability in the timing of our cloud infrastructure build-out. Next to segment guidance. 
In Productivity and Business Processes, we expect revenue of $19.3 billion to $19.6 billion, or growth between 10% and 12%. In Office Commercial, revenue growth will again be driven by Office 365 with seat growth across customer segments and ARPU growth through E5. We expect Office 365 revenue growth to be approximately 15% in constant currency. While it's early days for Microsoft 365 Copilot, we're excited by the adoption we've seen to date and continue to expect revenue to grow over time. In our on-premises business, we expect revenue to decline in the low 20s. In Office Consumer, we expect revenue growth in the mid-to-high single-digits, driven by Microsoft 365 subscriptions. For LinkedIn, we expect revenue growth in the mid-to-high single digits, driven by continued growth across all businesses. And in Dynamics, we expect revenue growth in the mid-teens, driven by Dynamics 365. For Intelligent Cloud, we expect revenue of $26 billion to $26.3 billion, or growth between 18% and 19%. Revenue will continue to be driven by Azure, which, as a reminder, can have quarterly variability, primarily from our per-user business and from in-period revenue recognition, depending on the mix of contracts. In Azure, we expect Q3 revenue growth in constant currency to remain stable to our stronger-than-expected Q2 results. Growth will be driven by our Azure consumption business with continued strong contribution from AI. Our per-user business should see benefit from Microsoft 365 Suite momentum, though we expect continued moderation in seat growth rates given the size of the installed base. In our on-premises server business, we expect revenue growth in the low-to-mid single-digits with continued hybrid demand, including licenses running in multi-cloud environments. 
And in Enterprise and Partner Services, revenue should decline approximately 10% on a high prior year comparable for enterprise support services. In More Personal Computing, we expect revenue of $14.7 billion to $15.1 billion, or growth between 11% and 14%. Windows OEM revenue growth should be relatively flat as PC market unit volumes continue at pre-pandemic levels. In Windows Commercial products and cloud services, customer demand for Microsoft 365 and our advanced security solutions should drive revenue growth in the mid-teens. As a reminder, our quarterly revenue growth can have variability, primarily from in-period revenue recognition, depending on the mix of contracts. In Devices, revenue should decline in the low-double-digits as we continue to focus on our higher-margin premium products. Search and news advertising ex-TAC revenue growth should be in the mid-to-high single-digits, about 8 points higher than overall search and news advertising revenue, driven by continued volume strength. And in gaming, we expect revenue growth in the low 40s, including approximately 45 points of net impact from the Activision acquisition. We expect Xbox content and services revenue growth in the low-to-mid 50s, driven by approximately 50 points of net impact from the Activision acquisition. Hardware revenue will decline year-over-year. Now back to company guidance. We expect COGS between $18.6 billion and $18.8 billion, including approximately $700 million of amortization of acquired intangible assets from the Activision acquisition. We expect operating expenses of $15.8 billion to $15.9 billion, including approximately $300 million from purchase accounting, integration, and transaction-related costs from the Activision acquisition. Other income and expenses should be roughly negative $600 million, as interest income will be more than offset by interest expense and other losses. 
As a reminder, we are required to recognize gains or losses on our equity investments, which can increase quarterly volatility. We expect our Q3 effective tax rate to be in line with our full-year rate, which we now expect to be approximately 18%. Now, some additional thoughts on the full-fiscal year. First, FX: assuming current rates remain stable, we now expect FX to increase Q4 and full-year revenue growth by less than 1 point. We continue to expect no meaningful impact to full-year COGS or operating expense growth. Second, Activision: for the full-year FY 2024, we expect Activision to be accretive to operating income, when excluding purchase accounting, integration and transaction-related costs. At a total company level, we delivered strong results in H1 and demand for our Microsoft Cloud continues to drive the growth in our outlook for H2. Our commitment to scaling our cloud and AI investment is guided by customer demand and a substantial market opportunity. As we scale these investments, we remain focused on driving efficiencies across every layer of our tech stack and disciplined cost management across every team. Therefore, we expect full-year operating margins to be up 1 to 2 points year-over-year, even as AI capital investments drive COGS growth. This operating margin expansion excludes the impact from the Activision acquisition and the headwind from the change in useful lives last year. In closing, we are focused on execution so our customers can realize the benefits of AI productivity gains as we invest to lead this AI platform wave. With that, let's go to Q&A, Brett.\nBrett Iversen: Thanks, Amy. We'll now move over to Q&A. Out of respect for others on the call, we request that participants please only ask one question. Joe, can you please repeat your instructions?\nOperator: [Operator Instructions] And our first question comes from the line of Mark Moerdler with Bernstein Research. Please proceed.\nMark Moerdler: Thank you very much. 
Congratulations on the strong quarter and thanks for letting me ask the question. Amy, you've discussed Azure being stable and you deliver Azure growth stability, but if we drill in one layer, we see Azure AI [aiming] (ph) to become a bigger portion of the revenue. I understand that separating what is directly AI revenue versus other IaaS, PaaS revenue that are leveraging well driven by AI is difficult, can you help me with two related questions? Optimization has been stabilizing and at some point, it should be part of the revenue flow. How should we think about what happens then, do we see non-directly AI consumption being flattish or do we see a rebound as the cloud shift continues and the need for data of inferencing grows? Second point. On AI, where are we in the journey from training driving most of Azure AI usage to inferencing? When do you think we start to see pick-up in non-Microsoft inferencing kick in, when do you think we could hit the point where inferencing is the bigger part of the driver? Thank you.\nSatya Nadella: You want me to go first and...\nAmy Hood: You go first and I'll take the technical.\nSatya Nadella: Yes, let me -- just on the inferencing and training, most of what you've seen for the most part is all inferencing. So, none of the large model training stuff is in any of our higher numbers at all. What small batch training, so somebody is doing fine-tuning or what have you, that will be there, but that's sort of a minor part. So, most of what you see in the Azure number is broadly inferencing. And Mark, I think it may be helpful to sort of think about, like what is the new workload in AI? The new workload in AI, obviously, in our case starts with one of the frontier -- I mean, starts with the Frontier model Azure OpenAI. But it's not just about just one model, right. 
So, you first -- you take that model, you do RLHF, you may do some fine-tuning, you do retrieval, which means you are sort of either heating some storage meter or you're heating some compute meters. And so to -- and by the way, you will also distil a large model to a small model and that would be a training perhaps. But that's a small batch training that uses essentially inference infrastructure. So, I think that's what's happening. So you could even say these AI workloads themselves will have a lifecycle which is they'll get rebuilt, then there'll be continuously optimized over time. So, that's sort of one-side. And I think if I understand your question, what's happening with the traditional optimization, and I think last quarter we said. One, we're going to continue to have these cycles where people will build new workloads, they will optimize the workloads and then they'll start new workloads. So I think that that's what we continue to see. But that period of massive, I'll call it, optimization only and no new workloads start, that I think has ended at this point. So what you're seeing is much more of that continuous cycles by customers, both whether it comes to AI or whether it comes to the traditional workloads.\nAmy Hood: No, maybe I'll just add just a few things to that. I think whether you use the word lapping, these optimization comparables or the comparables easing, is all sort of the same thing, that we're getting to that point, in H2 that's absolutely true. We'd like to talk about the contribution of AI specifically for the reason Satya talked about, these are -- this is starting to see the application of AI at scale. And we want to be able to show people, this is how that point will work, it's inferencing workloads where people are expecting productivity gains, other benefits that grow revenue and so, I do think about those as both related. 
And ultimately the TAM that we go after is best sort of across both of those, both AI workload and I guess, “non-AI workload” although to Satya's point, you need all of it.\nMark Moerdler: Perfect. Thank you very much for the deep answer.\nBrett Iversen: Thanks, Mark. Joe, next question, please.\nOperator: Our next question comes from the line of Brent Thill with Jefferies. Please proceed.\nBrent Thill: Good afternoon. Amy, the margin improvement is pretty shocking to most considering the investments that you and Satya are putting into AI. I'm curious if you could just walk through, how this is possible and what you're seeing so far in some of the costs that you're trying to manage as you scale up AI?\nAmy Hood: Thanks, Brent. First of all, thanks for the question. The teams are obviously been hard at work on this topic. We do point out that, Q2 because of the impact of the charge a year ago, you're seeing larger margin improvement than I would say, sort of a run-rate margin improvement. So, let me first say that. Secondly, the absolute margin improvement has also been very good and it speaks to, I think one of the things Satya talked about and I reiterated a bit, which is that, we want really to make sure we're making investments, we're making them in consistency across the tech stack. The tech stack we're building, no matter what team is on, is inclusive of AI enablement. And so think about as building that consistency without needing to add a lot of resources to do that. It's been a real pivot of our entire investment infrastructure to be working on this work. And I think that's important, because it means you're shifting to an AI-first position, not just in the language we use, but in what people are working on day-to-day. That does obviously create a leverage opportunity. There has also been really good work put in by many teams on improving the gross margin of the products; we talked about it with Office 365, we talked about in Azure core. 
We even talked about it across our devices portfolio, where we've seen material improvements over the course of the year. And so, when you kind of take improvements at the gross margin level, plus this consistency of re-pivoting our workforce toward the AI-first work we're doing, without adding material number of people to the workforce, you end up with that type of leverage. And we still need to be investing. And so, the important part, invest towards the thing that's going to shape the next decade and continue to stay focused on being able to deliver your day-to-day commitments. And so it's a great question. And hopefully, that helps piece apart a few of the components.\nBrent Thill: Thanks, Amy.\nBrett Iversen: Thanks, Brent. Joe next question, please.\nOperator: Our next question comes from the line of Kash Rangan with Goldman Sachs. Please proceed.\nKash Rangan: Hi, thank you very much. A superb quarter of great improvements. Just one question for you Satya. Cloud computing changed the tech stack in ways that we could not imagine 10 years back, the nature of the database layer, the operating system layer, every layer just changed dramatically. How do you foresee generative AI changing the tech stack as we know it? Thank you so much.\nSatya Nadella: Yes, I think it's going to have a very foundational impact. In fact, you could say the core compute architecture itself changes, everything from power, power density to the data center design, to what used to be the accelerator, now is the sort of the main CPU, so to speak, or the main compute unit. And so, I think in the network, the memory architecture, all of it. So as the core computer architecture changes, I think every workload changes. And so yes, so there is a full, like, take our data layer, the most exciting thing for me in the last year has been to see how our data layer has evolved to be built for AI, right? 
If you think about Fabric, one of the genius of Fabric is to be able to say, let's separate out storage from the compute layer. In compute we'll have traditional SQL, we’ll have Spark. And by the way, you can have an Azure AI job on top of the same data lake, so to speak, or the lakehouse pattern. And then the business model you can combine all of those different compute. So that's the type of compute architecture. So it's sort of a -- so that's just one example. The tool stuff is changing. Office, I mean if you think about what -- if I look at Copilot; Copilots extensibility with GPT, Copilot apps to the Copilot stack, that's another sort of part of what's happening to the tech stack. So yes, I mean, definitely builds. I mean. I do believe, being in the cloud has been very helpful to build AI. But now, AI is just redefining what it means to have, what the cloud looks like, both at the infrastructure level and the app model.\nKash Rangan: Terrific. Thank you so much.\nBrett Iversen: Thanks, Kash. Joe, our next question, please.\nOperator: Our next question comes from the line of Karl Keirstead with UBS. Please proceed.\nKarl Keirstead: Thank you. I wanted to return to AI, the six point AI lift to Azure is just extraordinary. But I wanted to ask you about your progress in standing up the infrastructure to meet that demand. If you feel like Microsoft is supply GPU-constrained. Is the success you've had maybe working through some of the scaling bottlenecks that some of the other cloud infrastructure providers have talked about, a little bit maybe on the infrastructure scaling front might be interesting. Thank you.\nAmy Hood: Thanks, Karl. Maybe I'll start and Satya feel free to add on. I think we feel really good about where we have been in terms of adding capacity. You started to see the acceleration in our capital expense starting almost a year ago, and you've seen us scale through that process. 
And that is going toward, as we talked about, servers and also new data center footprints to be able to meet what we see as this demand and really changing demand as we look forward. And so, I do feel like the team has done a very good job. I feel like, primarily obviously, this is being built by us, but we've also used third-party capacity to help when we could have that help us in terms of meeting customer demand. And I think looking forward, you'll see, as we guide toward it, accelerated capital expense to continue to be able to add capacity in the coming quarters, given what we see in terms of pipeline.\nBrett Iversen: Thanks, Karl. Joe, next question, please.\nOperator: Our next question comes from the line of Brad Zelnick with Deutsche Bank. Please proceed.\nBrad Zelnick: Great, thank you so much for taking the question. The early market feedback that we're all hearing on Microsoft 365 Copilot is very powerful. Can you provide more granularity on what you're seeing in terms of adoption trends versus perhaps other new product introductions in the past, what if anything is holding it back, and how much of a priority is it to get it in the hands of customers? To what lengths might you go to incentivize just getting it out in the market? Thank you.\nSatya Nadella: No, thank you for the question, Brad. So, a couple of things. In my comments I said increase in relation to our previous suites like, let's say, E3 or E5. We're about two months in, and it's definitely much faster than that. And so, from that perspective, it's exciting to see, I'd say, the demand signal, the deployment signal. I was looking at it by tenant, even usage, it's faster than anything else because it's easier, right. I mean, it's sort of -- it shows up in your app, if you click on it, like any ribbon thing and it becomes a daily habit. So in fact, it reminds me a little bit of sort of the back-in-the-day of PC adoption, right. It's kind of -- I think it first starts off with a few people having access. 
There are many companies that are doing standard issue, right. So just like PCs became standard issue at some point after PCs being adopted by early adopters. I think that's the cycle that at least we expect. In terms of what we're seeing, it's actually interesting, If you look at the data we have, summarization, that's what it's like number-one, like I'm doing summarization of Teams meetings inside of Teams, during the meeting, after the meeting, word documents summarization, I get something in email on summarizing. So summarization has become a big deal. Drafts, right, you're drafting emails, drafting documents. So, anytime you want to start something, the blank page thing goes away and you start by prompting and drafting. Chat, to me, the most powerful feature is now you have the most important database in your company, which happens to be the database of your documents and communications. It is now queryable by natural language in a powerful way, right. I can go and say, what are all the things Amy said, I should be watching out for next quarter and it will come out with great detail. And so Chat, summarization, draft, also by the way, actions. One of the most used thing is, here's the Word document, go complete, I mean, create a PowerPoint for me. So, those are the stuff that is also beginning. So, I feel like these all become -- but fundamentally, what happens is, if you remember the PC adoption cycle, what it did was work artifact and work flow changed, right. You can imagine what forecasting was before excel and email and what it was after. So similarly, you'll see work and workflow change as people summarize faster, draft regulatory submissions faster. Chat to get knowledge from your business. And so, those are the things that we are seeing as overall patterns.\nAmy Hood: And maybe just to add two points. 
One of the exciting things as I said for some companies, it's going to be standard issue like PC, for other companies, they may want to do a land with a smaller group, see the productivity gains and then expand. And so being able to lift some of the seat requirements that we did earlier this month, it's really going to allow customers to be able to use that approach too. And the other thing I would add, we always talk about in enterprise software, you sell software, then you wait and then it gets deployed, and then after deployment, you want to see usage. And in particular, what we've seen and you would expect this, in some ways with Copilot even in the early stages, obviously, deployment happens very quickly. But really what we're seeing is engagement grow. As to Satya's point on how you learn and your behavior changes you see engagement grow with time. And so I think those are just to put a pin in that, because it's an important dynamic when we think about the optimism you hear from us.\nBrad Zelnick: Excellent, thank you so much.\nBrett Iversen: Thanks, Brad. Joe, next question, please.\nOperator: Our next question comes from the line of Mark Murphy with J.P. Morgan. Please proceed.\nMark Murphy: Yeah. Thank you very much. Is it possible to unpack the 6 point AI services tailwind, it's just to help us understand which elements ramped up by the three incremental points. For instance, is it more of the open AI inferencing, GitHub Copilot, other Copilots, the Azure OpenAI service, third party LLMs running on Azure. I'm just wondering, where did you see the strongest step-up in that activity?\nAmy Hood: Mark, without getting into tons of line items, it's more simple to think of it as really, it's people adopting it for inferencing at the API generally. I mean that's the easiest way to think about it. 
And we also saw growth in GitHub Copilot which you talked -- which Satya talked about, and we saw a growing number of third parties using it in some small ways for training. But this is primarily an inferencing workload right now in terms of what's driving that number. We tend to think of it that way.\nSatya Nadella: Azure OpenAI and then OpenAI's own APIs on top of Azure would be sort of the major drivers. But there is a lot of the small batch training that goes on, whether it's, let's say, for fine-tuning. And then a lot of people who are starting to use models as a service with all the other new models, but it's predominantly Azure OpenAI today.\nMark Murphy: Thank you.\nBrett Iversen: Thanks, Mark. Joe, next question, please.\nOperator: Our next question comes from the line of Brad Reback with Stifel. Please proceed.\nBrad Reback: Great, thanks very much. Amy, for many, many years in commercial Office 365, seat growth has far outpaced ARPU, and over the last couple of quarters, we're getting a convergence, obviously, as the seat count gets really large. As we look forward, should they run even for a period of time or should we expect ARPU to outpace seat growth here in the short term? Thanks.\nAmy Hood: That's a great question, Brad. Let me split apart the components. And then we can come back to whether they should equalize or just go on, believe it or not, somewhat independent trajectories, and I will explain why I say that. The seat growth, as we talk about, is primarily from, at this point, small and medium-size businesses and really frontline worker scenarios. And to your point, on occasion those are lower ARPU seats, but they are also new seats, and so you see that in the seat count number. And as we get through -- and we've seen that come down a little bit quarter-over-quarter and we've guided for that really to happen again next quarter -- but a very separate thing is being able to add ARPU. 
And traditionally, and again this quarter, right, that's come over-time from E3. Then from E5. And we're continuing to see very healthy seat momentum and you heard very good renewals. So, all of that, right, completely independent in some way from seat growth. Then the next thing, that actually we just talked about, maybe in Brad's question I'm trying to recall is that, you're going to see Copilot revenue will run there as ARPU, right. That won't show a seat growth. So you'll have E3, E5 transition, Copilot, all show-up in ARPU over time, and then you’ll have the seat growth be primarily still small business and frontline worker and maybe new industry scenarios. So, I tend to not really, Brad, think about them as related lines, believe it or not. I think about them as sort of unique Independent motions we run and there is still room for seat growth and obviously with the levers we've talked about, there's room for ARPU growth as well.\nBrad Reback: That's great. Thanks very much.\nBrett Iversen: Thanks, Brad. Joe we have time for one last question.\nOperator: Our last question will come from the line of Tyler Radke with Citi. Please proceed.\nTyler Radke: Thanks for taking my question. Satya your enthusiasm about GitHub Copilot was noticeable on the conference call and at the AI Summit in New York last week. I'm wondering how you're thinking about pricing, obviously, this is driving pretty incredible breakthroughs and productivity for developers. But how do you think about your ability to drive ARPU on the GitHub Copilot over-time and just talk us through how you're thinking about the next phase of new releases there?\nSatya Nadella: Yeah. I mean -- it's -- I always go back to sort of my own conviction that this generation of AI is going to be different, started with the move from 2.5 to 3 of GPT. And then it's use inside of developer scenario with GitHub copilot and so yes. I think this is the place where it's most evolved. 
In terms of its economic benefits or productivity benefits, you see it. We see it inside of Microsoft, we see it in all of the case studies we put out of customers; everybody we talk to has picked it up. But it is the one place where it's becoming standard issue for any developer. It's like, if you take away spell check from Word, I'll be unemployable. And similarly, I think GitHub Copilot becomes core to anybody who is doing software development. The thing that you brought up is a little bit of a continuation of how Amy talked about it, right. So you are going to start seeing people think of these tools as productivity enhancers, right. I mean, if I look at it, our ARPUs have been great, but they're pretty low. You know, even though we've had a lot of success, it's not like we are a high-priced ARPU company. I think what you're going to start finding is, whether it's sales copilot or service copilot or GitHub Copilot or security copilot, they are going to fundamentally capture some of the value they drive in terms of the productivity of the OpEx, right. So it's like, 2 points, 3 points of OpEx leverage on software spend would be the goal. I think that's a pretty straightforward value equation. And so that's the first time -- I mean, this is not something we've been able to make the case for before, whereas now I think we have that case. Then even the horizontal copilot is what Amy was talking about, which is at the Office 365 or Microsoft 365 level; even there, you can make the same argument: whatever ARPU we may even have with E5, now you can see incrementally, as a percentage of the OpEx, how much would you pay for a copilot to give you more time savings, for example. And so yes, I think all up, I do see this as a new vector for us in what I'll call the next phase of knowledge work and frontline work, even, in their productivity and how we participate. 
And I think GitHub copilot, I never thought of the tools business as fundamentally participating in the operating expenses of a company's spend on, let's say, development activity and now you're seeing that transition, it is just not tools, it's about productivity of your dev team.\nBrett Iversen: Thanks, Tyler. That wraps up the Q&A portion of today's earnings call. Thank you for joining us today and we look forward to speaking with all of you soon.\nAmy Hood: Thank you.\nSatya Nadella: Thank you.\nOperator: This concludes today's conference. You may now disconnect. Enjoy the rest of your evening." + }, + { + "symbol": "TSLA", + "quarter": 2, + "year": 2024, + "date": "2024-07-24 01:45:26", + "content": "Travis Axelrod: Good afternoon, everyone and welcome to Tesla's Second Quarter 2024 Q&A Webcast. My name is Travis Axelrod, Head of Investor Relations and I’m joined today by Elon Musk, Vaibhav Taneja, and a number of other executives. Our Q2 results were announced at about 3.00 p.m. Central Time and the Update Deck we published at the same link as this webcast. During this call, we will discuss our business outlook and make forward-looking statements. These comments are based on our predictions and expectations as of today. Actual events or results could differ materially due to a number of risks and uncertainties, including those mentioned in our most recent filings with the SEC. During the question-and-answer portion of today's call, please limit yourself to one question and one follow-up. Please use the raise hand button to join the question queue. Before we jump into Q&A, Elon has some opening remarks. Elon?\nElon Musk: Thank you. So to recap, we saw large adoption exploration in EVs, and then a bit of a hangover as others struggle to make compelling EVs. So there are quite a few competing electric vehicles that have entered the market. 
And mostly they've not done well, but they've discounted their EVs very substantially, which has made it a bit more difficult for Tesla. We don't see this as a long-term issue, but really -- fairly short-term. And we still obviously firmly believe that EVs are best for customers and that the world is headed for fully electrified transport, not just cars, but also aircraft and boats. Despite many challenges, the Tesla team did a great job executing and we did achieve record quarterly revenues. Energy storage deployments reached an all-time high in Q2, leading to record profits for the energy business. And we are investing in many future projects, including AI training and inference and a great deal of infrastructure to support future products. We won't get too much into the product roadmap here, because that is reserved for product announcement events. But we are on track to deliver a more affordable model in the first half of next year. The big -- really by far the biggest differentiator for Tesla is autonomy. In addition to that, we have scale economies and we're the most efficient electric vehicle producer in the world. So, this, anyway -- while others are pursuing different parts of the AI robotic stack, we are pursuing all of them. This allows for better cost control, more scale, quicker time to market, and a superior product, applying not just to autonomous vehicles, but to autonomous humanoid robots like Optimus. Regarding Full Self-Driving and Robotaxi, we've made a lot of progress with Full Self-Driving in Q2, and with version 12.5 beginning rollout, we think customers will experience a step change improvement in how well supervised Full Self-Driving works. Version 12.5 has 5x the parameters of 12.4 and will finally merge the highway and city stacks. The highway stack at this point is still pretty old, so often the issues people encounter are on highway, but with 12.5, we have finally merged the two stacks. 
I still find that most people actually don't know how good the system is, and I would encourage anyone who wants to understand the system better to simply try it out and let the car drive you around. One of the things we're going to be doing, just to make sure people actually understand the capabilities of the car, is when delivering a new car and when picking up a car from service to just show people how to use it and just drive them around the block. Once people use it at all, they tend to continue using it. So it's very compelling. And then this I think will be a massive demand driver; even unsupervised full self-driving will be a massive demand driver. And as we increase the miles between intervention, it will transition from supervised full self-driving to unsupervised full self-driving, and we can unlock massive potential in [V3] (ph). We postponed the sort of Robotaxi product unveil by a couple of months -- it shifted to 10/10, the 10th of October -- because I wanted to make some important changes that I think would improve the vehicle -- sort of Robotaxi, the thing that we are -- the main thing that we are going to show, and we are also going to show off a couple of other things. So moving it back a few months allowed us to improve the Robotaxi as well as add in a couple other things for the product unveil. We're also nearing completion of the South expansion of Giga Texas, which will house our largest training cluster to date. So it will be an incremental 50,000 H100s plus 20,000 of our hardware 4 AI5 Tesla AI computers. With Optimus, Optimus is already performing tasks in our factory. And we expect to have Optimus Production Version 1 in limited production starting early next year. This will be for Tesla consumption. It's just better for us to iron out the issues ourselves. But we expect to have several thousand Optimus robots produced and doing useful things by the end of next year in the Tesla factories. 
And then in 2026, we'll be ramping up production quite a bit, and at that point we'll be providing Optimus robots to outside customers. That will be Production Version 2 of Optimus. For the energy business, this is growing faster than anything else. We are really demand constrained rather than production constrained. So we are ramping up production in our U.S. factory as well as building the Megapack factory in China, which should roughly double our output, maybe more than double -- maybe triple, potentially. So in conclusion, we are super excited about the progress across the board. We are changing the energy system, how people move around, how people approach the economy. The undertaking is massive, but I think the future is incredibly bright. I really just can't emphasize enough the importance of autonomy for the vehicle side and for Optimus. Although the numbers sound crazy, I think Tesla producing at volume with unsupervised FSD essentially enables the fleet to operate like a giant autonomous fleet. And it takes the valuation, I think, to some pretty crazy number. ARK Invest thinks on the order of $5 trillion; I think they are probably not wrong. And long-term, Optimus, I think, achieves a valuation several times that number. I want to thank the Tesla team for strong execution, and I'm looking forward to exciting years ahead.\nTravis Axelrod: Great. Thank you very much, Elon. Vaibhav has opening remarks as well.\nVaibhav Taneja: Thanks. As Elon mentioned, the Tesla team rose to the occasion yet again and delivered on all fronts with some notable records. In addition to those records, we saw our automotive deliveries grow sequentially. I would like to thank the entire Tesla team for their efforts in delivering a great quarter. On the auto business front, affordability remains top of mind for customers, and in response, in Q2 we offered attractive financing options to offset sustained high interest rates. 
These programs had an impact on revenue per unit in the quarter. These impacts will persist into Q3 as we have already launched similar programs. We are now offering extremely competitive financing rates in most parts of the world. This is the best time to buy a Tesla; I mean, if you are waiting on the sidelines, come out and get your car. We had a record quarter on regulatory credits revenue as well. On net, our auto margins remained flat sequentially. It is important to note that the demand for regulatory credits is dependent on other OEMs' plans for the kind of vehicles they are manufacturing and selling, as well as changes in regulations. We pride ourselves on being the company with the most American-made cars and are continuing our journey to further localize our supply chain, not just in the U.S., but in Europe and China as well for the respective factories. As always, our focus is on providing the most compelling products at a reasonable price. We have stepped up our efforts to provide more trims that have an estimated range of more than 300 miles on a single charge. We believe this, along with the expansion of our supercharging network, is the right strategy to combat range anxiety. Since the revision of FSD pricing in North America, we've seen take rates increase meaningfully and expect this to be a driver of vehicle sales as the feature set improves further. Cost per vehicle declined sequentially when we removed the impact of Cybertruck. While we are seeing material costs trending down, note that there is latency on the cost side and such reductions would show up in the P&L when the vehicles built with these materials get delivered. Additionally, as we get into the second half of the year, it is important to note that we are still ramping Cybertruck and Model 3 and are also getting impacted by varying amounts of tariffs on both raw materials and finished goods. 
While our teams are working feverishly to offset these, unfortunately it may have an impact on costs in the near-term. We previously talked about the potential of the energy business, and we now feel excited that the foundation that was laid over time is bearing the expected results. Energy storage deployments more than doubled, with contributions not just from Megapack but also Powerwall, resulting in record revenues and profit for the energy business. Energy storage backlog is strong. As discussed before, deployments will fluctuate from period to period, with some quarters seeing large increases and others seeing a decline. Recognition of storage gigawatt hours is dependent on a variety of factors, including logistics timing as we send units from a single factory to markets across the world, customer readiness and, in the case of EPC projects, construction activities. Moving on to the other parts of the business, service and other gross profit also improved sequentially from the improvement in service utilization and growth in our collision repair business. The impact of our recent reorg is reflected in the restructuring and other line on the income statement. Just to level set, this was a charge of about $622 million recorded in the period, and I want people to remember that we've called it out separately on the financials. Sequentially, our operating expenses excluding such charges reduced despite an increase in spend for AI-related activities and higher legal and other costs. On the CapEx front, while we saw a sequential decline in Q2, we still expect the year to be over $10 billion in CapEx as we increase our spend to bring a 50k GPU cluster online. This new cluster will immensely increase our capabilities to scale FSD and other AI initiatives. We reverted to positive free cash flow of $1.3 billion in Q2. This was despite restructuring payments being made in the quarter, and we ended the quarter with over $30 billion of cash and investments. 
Once again, we've begun the journey towards the next phase for the company with the building blocks being placed. It will take some time, but it will be a rewarding experience for everyone involved. Once again, I would like to thank the entire Tesla team for their efforts.\nTravis Axelrod: Great. Thank you very much, Vaibhav. Now let's go to investor questions. The first question is, what is the status on the Roadster?\nElon Musk: With respect to Roadster, we've completed most of the engineering. And I think there's still some upgrades we want to make to it, but we expect to be in production with Roadster next year. It will be something special, like the whole thing [Indiscernible].\nTravis Axelrod: Fantastic. The next question is about the timing of the Robotaxi event, which we've already covered. So we'll go to the next question: when do you expect the first Robotaxi ride?\nElon Musk: I guess that's really just a question of when we can do unsupervised full self-driving. It's difficult; obviously, my predictions on this have been overly optimistic in the past. So based on the current trend, it seems as though we should get miles between interventions to be high enough -- to be far enough in excess of humans -- that you could do unsupervised possibly by the end of this year. I would be shocked if we cannot do it next year. So next year seems highly probable to me based on [quite simply] (ph) plotting the points on the curve of miles between interventions. That trend exceeds humans for sure next year, so yes.\nTravis Axelrod: Thank you very much. Our third question is, the Cybertruck is an iconic product that wows everyone who sees it. Do you have plans to expand the cyber vehicle lineup to a cyber SUV or cyber van?\nElon Musk: I think we want to limit product announcements to when we have a specific product announcement event, rather than earnings calls.\nTravis Axelrod: Great, thank you. 
Our next question is, what is the current status of 4680 battery cell production and how is the ramp up progressing?\nLars Moravy: Yes, 4680 production ramped strongly in Q2, delivering 51% more cells than Q1 while reducing COGS significantly. We currently produce more than 1,400 Cybertrucks' worth of 4680 cells per week, and we'll continue to ramp output as we drive cost down further towards the cost parity target we set for the end of the year. We've built our first validation Cybertruck with the dry cathode process made on our mass production equipment, which is a huge technical milestone, and we're super proud of that. We're on track for production launch with dry cathode in Q4, and this will enable cell cost to be significantly below available alternatives, which was the original goal of the 4680 program.\nTravis Axelrod: Great. Thank you very much. The next question is, any update on Dojo?\nElon Musk: Yes, so Dojo -- I should preface this by saying I'm incredibly impressed by NVIDIA's execution and the capability of their hardware. And what we are seeing is that the demand for NVIDIA hardware is so high that it's often difficult to get the GPUs. I guess I'm quite concerned about actually being able to get state-of-the-art NVIDIA GPUs when we want them. And I think this therefore requires that we put a lot more effort on Dojo in order to ensure that we've got the training capability that we need. So we are going to double down on Dojo, and we do see a path to being competitive with NVIDIA with Dojo. And I think we kind of have no choice, because the demand for NVIDIA is so high, and it's obviously their obligation essentially to raise the price of GPUs to whatever the market will bear, which is very high. So I think we've really got to make Dojo work, and we will.\nTravis Axelrod: Right. 
The next question is, what type of accessories will be offered with Optimus?\nElon Musk: Optimus is intended to be a generalized humanoid robot with a lot of intelligence. So it's like asking what kind of accessories will be offered with a human. It's really intended to be backward compatible with human tasks. So it would use any accessories that a human would use. Yes.\nTravis Axelrod: Thank you. The next question is, do you feel you're cheating people out of the joys of owning a Tesla by not advertising?\nElon Musk: We are doing some advertising, so -- do you want to say something?\nVaibhav Taneja: Yes, I would say something. Our fundamental belief is that we need to be providing the best products at a reasonable price to consumers. Just to give you a fact, in the U.S. alone in Q2, over two-thirds of our deliveries were to people who had never owned a Tesla before, which is encouraging. We've spent money on advertising and other awareness programs, and we have adjusted our strategy. We're not saying no to advertising, but this is a dynamic play, and we know that we have not exhausted all our options, and we therefore plan to keep adjusting in the latter half of this year as well.\nTravis Axelrod: Great. Thank you very much. The next question is on energy growth, which we already covered in the opening remarks, so we'll move on to the next one. What is the updated timeline for Giga Mexico, and what will be the primary vehicles produced initially?\nElon Musk: Well, we currently are paused on Giga Mexico. I think we need to see just where things stand after the election. Trump has said that he will put heavy tariffs on vehicles produced in Mexico. So it doesn't make sense to invest a lot in Mexico if that is going to be the case. So we kind of need to see where things play out politically. However, we are increasing capacity at our existing factories quite significantly. 
And I should say that the Cybertaxi or Robotaxi will be produced here at our headquarters at Giga Texas.\nTravis Axelrod: All right. Thank you.\nElon Musk: And as well, towards the end of next year, Optimus Production Version 2, the high-volume version of Optimus, will also be produced here in Texas.\nTravis Axelrod: Great. Thank you. Just a couple more. Is Tesla still in talks with an OEM to license FSD?\nElon Musk: There are a few major OEMs that have expressed interest in licensing Tesla full self-driving. And I suspect there will be more over time. But we can't comment on the details of those discussions.\nTravis Axelrod: All right. Thank you. And the last one, any updates on investing in xAI and integrating Grok into Tesla software?\nElon Musk: I should say Tesla is learning quite a bit from xAI. It's been actually helpful in advancing full self-driving and in building up the new Tesla data center. Regarding investing in xAI, I think we need to have shareholder approval of any such investment. But I'm certainly supportive of that if shareholders are -- probably, I think we need a vote on that. And I think there are opportunities to integrate Grok into Tesla's software, yes.\nTravis Axelrod: All right. Thanks very much. And now we will move on to analyst questions. The first question comes from Will Stein from Truist. Will, please go ahead and unmute yourself.\nWill Stein: Great. Thanks so much for taking my question. And this relates a little bit to the last one that was asked. Elon, I share your strong enthusiasm about AI, and I recognize Tesla's opportunity to do some great things with the technology. But there are some concerns I have about Tesla's commercialization, and that's what I'd like to ask about specifically. There were some news stories through the quarter that indicated that you redirected some AI compute systems that were destined for Tesla instead to xAI, or perhaps it was to X, I'm not sure. 
And similarly, a few quarters ago, if you recall, I asked about your ability to hire engineers in this area, and you noted that there was a great desire for some of these engineers to work on projects that you were involved with, but some of them weren't at Tesla; they were instead at xAI or perhaps even X again. So the question is, when it comes to your capital investments, your AI R&D, your AI engineers, how do you make allocation decisions among these various ventures, and how do you make Tesla owners comfortable that you're doing it in a way that really benefits them? Thank you.\nElon Musk: Yes, I mean, I think you're referring to a very old article regarding GPUs. I think that's like 6 or 7 months old. At Tesla, we had no place to turn them on, so it would've been a waste of Tesla capital, because we would just have to order H100s and have no place to turn them on. This wasn't a case of, let's pick xAI over Tesla. The Tesla data centers were full. There was no place to actually put them. We've been working 24/7 to complete the south extension on the Tesla Gigafactory in Texas. That south extension is what will house 50,000 H100s, and we're beginning to move the H100 server racks into place there. But we needed that to be physically complete. You can't just order GPUs and turn them on; you need a data center. It's not possible. So I want to be clear: that was in Tesla's interest, not contrary to Tesla's interest. It does Tesla no good to have GPUs that it can't turn on. That south extension became able to take GPUs really just this week. We are moving the GPUs in there and we'll bring them online. With regard to xAI, there are a few people who only want to work on AGI. What I was finding when trying to recruit people to Tesla was that they were only interested in working on AGI and not on Tesla's specific problems, and they wanted to do a start-up. 
So it was a case of either they do a start-up and I am involved, or they do a start-up and I am not involved. Those are the two choices. This wasn't, they would come to Tesla. They were not going to come to Tesla under any circumstances. So, yes.\nVaibhav Taneja: Yes, I mean, I would even add that AI is a broad spectrum, and there are a lot of things which we are focused on, full self-driving as Tesla and also Optimus, but there's the other spectrum of AI which we're not working on, and that's the kind of work which other companies are trying to do, in this case xAI. So you have to keep in mind that it's a broad spectrum. It's not just one specific thing.\nElon Musk: Yes. And once again, I want to just repeat myself here. I tried to recruit them to Tesla, including saying, you can work on AGI if you want, and they refused. Only then was xAI created.\nWill Stein: I really appreciate that clarification. If I can ask one follow-up, it relates to the new vehicles that you're planning to introduce next year. I understand this is not the venue for product announcements, but when we think about the focus, I've heard on the one hand that the focus is on cost reduction. On the other hand, you also said that the Roadster would come out. Should we expect other, maybe more limited variants -- similar to the cars that you make today, but with some changes or improvements, or some other variability in the form factors? Should we expect that to be a significant part of the strategy in the next year or two?\nElon Musk: I don't want to get into details of product announcements. And we have to be careful of the Osborne effect here. If you start announcing some great thing, it affects our near-term sales. We're going to make great products in the future just like we have in the past, end of story.\nTravis Axelrod: Right. The next question comes from Ben Kallo from Baird. Ben, please go ahead and unmute yourself.\nBen Kallo: Hi. 
Thanks for taking my question. When we think about revenue contribution, with energy growing so quickly and Optimus on the come, how do we think about the overall segments longer term? And do you think that auto revenue will fall below 50% of your overall revenue? And then my follow-up: on the last call you talked about distributed compute on your new hardware. Could you update us and talk a little bit more about that, the timeline for it, and how you would reward customers for letting you use the compute power in their cars? Thanks.\nElon Musk: Yes, I mean, as I've said a few times, I think the long-term value of Optimus will exceed that of everything else at Tesla combined. So just simply consider the usefulness, the utility, of a humanoid robot that can do pretty much anything you ask of it. I think everyone on earth is going to want one. There's 8 billion people on earth, so it's 8 billion right there. Then you've got all of the industrial uses, which is probably at least as much, if not way more. So I suspect that the long-term demand for general purpose humanoid robots is in excess of 20 billion units. And Tesla is the company that has the most advanced humanoid robot in the world and is also very good at manufacturing, which these other companies are not. And we've got a lot of experience -- we are among the world leaders in real-world AI. So I think we are unique in having all of the ingredients necessary for large scale, high utility, generalized humanoid robots. That's why my rough estimate long-term is in accordance with the ARK [ph] Invest analysis of a market cap on the order of $5 trillion -- maybe more -- for autonomous transport, and it's several times that number for general purpose humanoid robots. 
I mean, at that point, I'm not sure what money even means, but in the benign AI scenario, we are headed for an age of abundance where there is no shortage of goods and services. Anyone can have pretty much anything they want. It's a very wild future we're heading for.\nBen Kallo: On the distributed compute?\nElon Musk: Yes, distributed compute, that seems like a pretty obvious thing to do. I think where this distributed compute becomes interesting is with our next generation Tesla AI chip, Hardware 5 or what we're calling AI5, which, from the standpoint of inference capability, is comparable to a B200. And we are aiming to have that in production at the end of next year and scale production in '26. So even if you've got autonomous vehicles that are operating for 50 or 60 hours a week, there's 168 hours in a week. So you have somewhere above, I think, a 100 [indiscernible] net computing. I think we need a better word than GPU, because GPU means graphics processing unit. So there's 100 hours plus per week of advanced AI compute from the fleet, from the vehicles, and probably some percentage from the humanoid robots, where it would make sense to do distributed inference. And if at some point there's a fleet of 100 million vehicles with AI5 and beyond -- because you'll have AI6 and 7 and whatnot -- and there may be billions of humanoid robots, that is just a staggering amount of inference compute that could be used for general-purpose computing. It doesn't have to be used for the humanoid robot or for the car. So I think that's a pretty obvious thing to say; it's more useful than having it do nothing.\nTravis Axelrod: All right. Thank you. The next question comes from Alex Potter from Piper Sandler. Alex, please go ahead and unmute yourself.\nAlex Potter: Perfect. Thanks. I wanted to ask a question on FSD licensing. 
You mentioned that in passing previously; I was just wondering if you can elaborate on the mechanics of how that would work. I guess presumably this would not be some sort of simple plug-and-play proposition; presumably an OEM would need, I don't know, several years to develop its own vehicle platform that's based on FSD. I imagine they would need to adopt Tesla's electrical architecture, compute, sensor stack. So correct me if I'm misunderstanding this, but if you had a cooperative agreement of some kind with another OEM, then presumably it would take several years before you'd be able to recognize licensing revenue from that agreement. Is that the right way to think about that?\nElon Musk: Yes. The OEMs are not real fast. There's not really a sensor suite; it's just cameras. But they would have to integrate our AI computer and have cameras with a 360-degree view. And at least the gateway -- the thing that talks to the internet and communicates with the Tesla system -- you need kind of a gateway computer too. So it's really a gateway computer with cellular and Wi-Fi connectivity, the Tesla AI computer, and enough cameras for a 360-degree view. But given the speed at which the auto industry moves, it would be several years before you would see this in volume.\nAlex Potter: Okay, good. That's more or less what I expected. So then the follow-up here is, if you did sign an FSD licensing agreement with another automaker, when do you think you would disclose that? Would you do it right when you signed the agreement, or only after multiple years have passed and the vehicle is ready to be rolled out?\nVaibhav Taneja: I think it depends on the OEM. I guess we'd be happy either way. Yes, it depends on what kind of arrangement we enter into. 
A lot of those things are not resolved yet, so we'll make that determination as and when we get to that point.\nElon Musk: And the kind of deals that are obviously relevant are only if some OEM is willing to do this in a million cars a year or something significant. If it's like 10,000 or 100,000 cars a year, we can just make that ourselves.\nTravis Axelrod: All right, thank you. The next question comes from Dan Levy from Barclays. Dan, please go ahead and unmute yourself.\nDan Levy: Hi, good evening. Thanks for taking the questions. First, I wanted to start with a question on Shanghai. You've leveraged Shanghai as an export center, really due to its low cost, and that makes sense. But maybe you can just give us a sense of how the strategy changes, if at all, given the implementation of tariffs in Europe. Also, to what extent might your import of batteries from China into the U.S. change given the tariffs? Thank you.\nVaibhav Taneja: Yes. I think I covered some part of it in my opening remarks, but just to give you a little bit more, just on the tariff side, the European authorities did sample certain other OEMs in the first round to establish the tariffs for cars being imported from China into Europe. While we were not picked for an individual examination in the first round, they did pick us up in the second round. They visited our factory; we worked with them and provided them all the information. As a result, we are adjusting our import strategy out of China into Europe. One other thing to note is that in Q2 itself, we started building right-hand drive Model Ys out of Berlin, and we also delivered them in the U.K. We're adjusting as needed, and we will keep adjusting. We're still importing Model 3s into Europe out of Shanghai. And we are still evaluating the best alternative to manage all this given the examination by the European authorities. Like I said, we cooperated with them. 
We are confident that we should get a better rate than what they have imposed for now. But this is literally evolving, and we are adjusting as fast as we can. I would also add that, because of this, you've seen that Berlin is doing more exports into places like Taiwan as well as the U.K., as I just mentioned. So it will keep changing, and we will keep adapting as we go about it.\nDan Levy: Great. Thanks. As a follow-up, I wanted to ask about the Robotaxi strategy, and specifically, the shareholder deck notes that one of the gating factors for the release is regulatory approval. So maybe you could help us understand which regulations specifically are the ones that we should be looking for. Is it FMVSS, the federal standard? And then to what extent does the strategy shift? You've done with FSD more of a nationwide, no-boundary approach. Is the Robotaxi approach one that's more geofenced, so to speak, and more driven by a state-by-state approach?\nElon Musk: I mean, our solution is a generalized solution, unlike what everybody else has. If you look at Waymo, they have a very localized solution that requires high-density mapping. It's quite fragile. So their ability to expand rapidly is limited. Our solution is a general solution that works anywhere. It would even work on a different Earth. So if you rendered a new Earth, it would work on that new Earth. In our experience, once we demonstrate that something is safe enough, or significantly safer than a human, we find that regulators are supportive of deployment of that capability. It's difficult to argue with -- if you've got billions of miles that show that unsupervised FSD is safer than a human, what regulator could really stand in the way of that? They're morally obligated to approve. 
So I don't think regulatory approval will be a limiting factor. I should also say that the self-driving capabilities that are deployed outside of North America are far behind those in North America. So with Version 12.5, and maybe 12.6, pretty soon we will ask for regulatory approval of Tesla supervised FSD in Europe, China, and other countries. And I think we're likely to receive that before the end of the year, which will be a helpful demand driver in those regions, obviously.\nTravis Axelrod: Thank you. Just to …\nElon Musk: Go ahead, Travis.\nTravis Axelrod: As Elon said, in terms of regulatory approval, the vehicles are governed by FMVSS in the U.S., which is the same across all 50 states. The road rules are the same across all 50 states. So creating a generalized solution gives us the best opportunity to deploy in all 50 states reasonably. Of course, there are state and even local and municipal level regulations that may apply to being a transportation company or deploying taxis. But as far as getting the vehicle on the road, that's all federal, and that's very much in line with what Elon was just suggesting about the data and the vehicle itself.\nVaibhav Taneja: And to add to the technology point, the end-to-end network basically makes no assumption about the location. You could add data from different countries, and it would just perform equally well there. There's almost close to zero U.S.-specific code in there. It's all just the data that comes from the U.S.\nElon Musk: Yes. To that end, it's like we as humans can go to other countries and drive with some reasonable amount of adjustment in those countries. And that's how we designed the FSD software. Yes, exactly.\nTravis Axelrod: Great. Thanks guys. The next question comes from George from Canaccord. George, please go ahead and unmute yourself.\nGeorge Gianarikas: Hi, everyone. Thank you for taking my questions. 
Maybe just to expand on the regulatory question for a second. And I could be comparing apples and oranges, but GM canceled their pedal-less, wheel-less vehicle. And according to the company this morning, their decision was driven by uncertainty about the regulatory environment. And from what we understand -- and again, maybe I'm wrong here -- the Robotaxi that has been shown, at least in images to the public, is also pedal-less and wheel-less. Is there a different regulatory concern if you deploy a vehicle like that, one that doesn't have pedals or a wheel, that may not apply to just regular FSD on a traditional Tesla vehicle? Thank you.\nElon Musk: Well, obviously the real reason that they canceled it is because GM can't make it work, not because of the regulators. They're blaming regulators. That's misleading of them to do so, because Waymo is doing just fine in those markets. So it's just that their technology is not far along.\nGeorge Gianarikas: Right. And maybe just as a follow-up, I think you mentioned that FSD take rates were up materially after you reduced the price. Is there any way you can help us quantify what that means exactly? Thank you.\nVaibhav Taneja: Yes, we shared [indiscernible] that we've seen a meaningful increase. I don't want to get into specifics, because we started from a low base, but we are seeing encouraging results. And the key thing here is, like Elon said, you need to experience it, because words can't describe it till the time you actually use it. And that's why we are trying to make sure that every time a car is getting delivered, people are being shown how this thing is working, because when you see it working, you realize how great it is. I mean, just to give you one example -- and again, this is a biased example -- I have a more than 20 mile commute into the factory almost every day. I have zero interventions on the latest stack, and the car just literally drives me there. 
And especially with the latest version, wherein we are also tracking your eye movement, the steering wheel nag is almost not there, as long as you're not wearing sunglasses.\nElon Musk: Well, we are fixing the sunglasses thing. It's coming soon. So you'll be able to have sunglasses on and have the car drive.\nGeorge Gianarikas: Yes.\nElon Musk: There's a number of times I've talked with smart people who live in New York or maybe downtown Boston and don't ever drive and then ask me about FSD. I'm like, you can just get a car and try it. If you're not doing that, you have no idea what's going on.\nTravis Axelrod: Thank you. The next question comes from Pierre from New Street. Pierre, please unmute yourself.\nFerragu Pierre: Hey, guys. Thank you for taking my question. So it's on Robotaxi again. And I completely get it that with a universal solution, we will get regulatory approval eventually, clocking up miles and compute, et cetera. My question is more how you think about deployment, because once you have a car that can drive everywhere, it can replace me, it can replace a taxi, but then to do the ride-hailing service, you need a certain scale. And that means a lot of cars on the road, and so you need an infrastructure to maintain the cars, take care of them, et cetera. So my question is, are you already working on that? Do you have an idea of what your plan to deploy looks like? And is that a Tesla-only plan, or are you looking at partners, local partners, global partners, to do that? And I'll have a quick follow-up.\nElon Musk: Yes. This would just be the Tesla network. You just literally open the Tesla app and summon a car, and it sends a car to pick you up and take you somewhere. And we'll have a fleet that's, I don't know, on the order of 7 million vehicles capable of autonomy soon. 
In the years to come, it'll be over 10 million, then over 20 million. This is immense scale. And the car is able to operate 24/7, unlike a human driver. So there's basically instant scale with a software update. And note, this is for the customer-owned fleet. So you can think of it as being a bit like Airbnb: you can choose to allow your car to be used by the fleet, or cancel that and bring it back. It can be used by the fleet all the time or some of the time, and then Tesla would share the revenue with the customer. But you can think of the giant fleet of Tesla vehicles as a giant sort of Airbnb-equivalent fleet, Airbnb on wheels. Then, in addition, we would make some number of cars for Tesla that would just be owned by Tesla and be added to the fleet. I guess that would be a bit more like Uber. But this would all be the Tesla network. And there's an important clause we've put in every Tesla purchase, which is that the Tesla vehicles can only be used in the Tesla fleet. They cannot be used by a third party for autonomy.\nFerragu Pierre: Okay. And do you think that scales progressively, so you can start in a city with just a handful of cars and grow the number of cars over time? Or do you think there is a critical mass you need to get to, to be able to offer a service that is of competitive quality compared to what Uber would typically be delivering already?\nElon Musk: I guess maybe I'm not conveying this correctly. The entire Tesla fleet basically becomes active. Obviously, maybe there's some number of people who don't want their car to earn money, but I think most people will. It's instant scale.\nTravis Axelrod: Thank you. Our next question comes from Colin from Oppenheimer. Colin, please unmute yourself.\nColin Rusch: Sorry about that, guys. I've got two questions around energy storage. 
With the tight supply in stationary storage, can you talk about your pricing strategy and how you're thinking about saturation in given geographies, given that some of these larger systems are starting to shift wholesale power markets in a pretty meaningful way quickly?\nVaibhav Taneja: So, I mean, we are working with a large set of players in the market, and our pipeline is actually pretty long. And there's actually a long lead time between when you enter into a contract and when delivery starts happening. So far we have good pricing leverage. And now, Mike, chime in on this too.\nUnidentified Company Representative: Yes, I mean, there's a lot of competition from Chinese OEMs, just like there is in the vehicle space. So we're in close contact with our customers and making sure that we're remaining competitive where they need to be competitive to secure contracts to sell power and energy in the markets. We had a really strong contracting quarter and continue to build our backlog for 2025 and 2026. So we feel pretty good about where we are in the market. We realize that competition is strong, but we have a pretty strong value proposition with offering a fully integrated product with our own power electronics and site-level controls. So …\nVaibhav Taneja: Yes, and again, the aspect which people miss or do not fully understand is that there's also a whole software stack which comes with Megapack, right? And that is a unique proposition which is only available to us, and we are using it with other stuff too, but that gives us much more of an edge as compared to the competition.\nElon Musk: Yes, we find customers think they can sort of put together a hodgepodge solution. And sometimes they'll pick that solution, and then that doesn't work, and then they come back to us.\nUnidentified Company Representative: Yes, and we're not really seeing saturation on a global scale. 
There's little pockets of saturation in different markets, but we're more seeing that there's markets opening up given demand on the grid just continues to increase more than anyone expects. So that just opens up markets, really across the world in different pockets.\nVaibhav Taneja: Yes, I mean just even on the AI computer side, right? These GPUs are really powerful already and the amount of new pipeline which we're getting from people for data center backup and things like that is increasing at a pretty large scale.\nColin Rusch: Yes. Thanks. And then the follow-up here is on 4680 process technology and the roll-to-roll process. There's some news around your equipment suppliers. Can you talk about how far along you are in potentially qualifying an incremental supplier around some of those critical process technology steps?\nLars Moravy: Yes, I can talk about that. You're probably referring to the lawsuit that we have with one of our suppliers. Look, I don't think this is going to affect our ability to roll out 4680. We have a very strong IP position in the technology and the majority of the equipment that we use is in-house designed and some of it's in-house built. And so we can take our IP stack and have someone else build it if we need to. So that's not really a concern right now.\nElon Musk: Yes. I think people don't understand just how much demand there will be for grid storage. They really just like the [indiscernible] I think are underestimating this demand by probably orders of magnitude. So the actual total energy output of, say, the U.S. grid, if the power plants can operate at steady state, is at least two to three times the amount of energy it currently produces, because there's a huge difference from peak to trough in terms of power generation. 
So in order for a grid to not have blackouts, it must be able to support the load at the worst minute of the worst day of the year, the coldest or hottest day, which means that for the rest of the year, it's got massive excess power generation capability, but it has no way to store that energy. Once you add battery packs, you can now run the power plants at steady state. Steady state means that basically any given grid anywhere in the world can produce, in terms of cumulative energy in the course of the year, at least twice what it is currently producing, in some cases maybe three times.\nTravis Axelrod: All right. Thank you, Elon. The next question comes from Colin Langan from Wells Fargo. Colin, please unmute yourself.\nColin Langan: Oh, great. Thanks for taking my questions. Can you hear me?\nTravis Axelrod: Yes.\nColin Langan: Yes. Sorry. I guess what we were going to ask is, if Trump wins, there's a higher chance that IRA could get cut. I think Elon, you had commented online that Tesla doesn't survive on EV subsidies. But would Tesla lose a lot of support if IRA goes away? I think Model 3 and Y get IRA help for customers, and I think your batteries get production tax credits. So, just one, can you clarify, if IRA ends, would it be a negative for your profitability in the near-term? Why might it not be a negative? And then, any framing of the current support you get, IRA-related?\nElon Musk: I guess that there would be like some impact, but I think it would be devastating for our competitors. It would hurt Tesla slightly. But long-term it probably actually helps Tesla, would be my guess. I've said this before on earnings calls: the value of Tesla overwhelmingly is autonomy. These other things are in the noise relative to autonomy. So I recommend anyone who doesn't believe that Tesla will solve vehicle autonomy should not hold Tesla stock. They should sell their Tesla stock. 
If you believe Tesla will solve autonomy, you should buy Tesla stock. And all these other questions are in the noise.\nVaibhav Taneja: Yes, I mean, I'll add this just to clarify a few things. At the end of the day, when we are looking at our business, we've always been looking at it whether or not IRA is there, and we want our business to grow healthy without having any subsidies coming in, whichever way you look at it. And that's the way we have always modeled everything. And that is the way internally also, even when we are looking at battery costs. Yes, there are manufacturing credits which we get, but we always drive ourselves to say, okay, what if there is no IRA benefit, and how do we operate in that kind of an environment? And like Elon said, we definitely have a big advantage as compared to the competition on that front. We've delivered it and you can see it in the numbers over the years. So you cannot ignore the fundamental size of the business. And then on top of it, once you add autonomy to it, like Elon said, it becomes meaningless to think about the short-term.\nTravis Axelrod: Okay. I think that's unfortunately all the time we have for today. We appreciate all of your questions. We look forward to talking to you next quarter. Thank you very much and goodbye.\nElon Musk: That's excellent." + }, + { + "symbol": "IBM", + "quarter": 2, + "year": 2024, + "date": "2024-07-24 20:44:06", + "content": "Operator: Welcome and thank you for standing by. At this time, all participants are in a listen-only mode. Today's conference is being recorded. If you have any objections, you may disconnect at this time. Now, I will turn the meeting over to Olympia McNerney, IBM's Global Head of Investor Relations. Olympia, you may begin.\nOlympia McNerney: Thank you. I'd like to welcome you to IBM's Second Quarter 2024 Earnings Presentation. 
I'm Olympia McNerney and I'm here today with Arvind Krishna, IBM's Chairman and Chief Executive Officer, and Jim Kavanaugh, IBM's Senior Vice President and Chief Financial Officer. We'll post today's prepared remarks on the IBM investor website within a couple of hours and a replay will be available by this time tomorrow. To provide additional information to our investors, our presentation includes certain non-GAAP measures. For example, all of our references to revenue and signings growth are at constant currency. We provided reconciliation charts for these and other non-GAAP financial measures at the end of the presentation, which is posted to our investor website. Finally, some comments made in this presentation may be considered forward-looking under the Private Securities Litigation Reform Act of 1995. These statements involve factors that could cause our actual results to differ materially. Additional information about these factors is included in the company's SEC filings. So with that, I'll turn the call over to Arvind.\nArvind Krishna: Thank you for joining us today to discuss IBM's Second Quarter Earnings. We delivered a strong quarter, exceeding our expectations, driven by solid revenue growth, profitability, and cash-flow generation. We had strong performance in software and infrastructure above our model as investment in innovation is yielding organic growth, while consulting remained below model. Our results underscore the continued success of our hybrid cloud and AI strategy and the strength of our diversified business. Let me start with a few comments on the macroeconomic environment. Technology spending remains robust as it continues to serve as a key competitive advantage, allowing businesses to scale, drive efficiencies and fuel growth. As we stated last quarter, factors such as interest rates and inflation impacted timing of decision making and discretionary spend in consulting. 
Overall, we remain confident in the positive macro outlook for technology spending, but acknowledge this impact. It has been a year since we introduced watsonx and our generative AI strategy to the market. We have infused AI across the business: from the tools clients use to manage and optimize their hybrid cloud environments, to our platform products across watsonx.ai, watsonx.data and watsonx.governance, to infrastructure and consulting, you can find AI innovation in all of our segments. For example, in software, our broad suite of automation products like Apptio and watsonx Orchestrate are leveraging AI, and we expect to do the same with HashiCorp once the acquisition is complete. Red Hat is bringing AI to OpenShift AI and RHEL AI. In transaction processing, we are seeing early momentum in watsonx Code Assistant for Z. In infrastructure, IBM Z is equipped with real-time AI inferencing capabilities. In consulting, our experts are helping clients design and implement AI strategies. Our enterprise AI strategy is resonating as we evolve to meet client needs. Let me start by discussing AI models. Choosing the right AI model is crucial for success in scaling AI. While large general-purpose models are great for starting on AI use cases, clients are finding that smaller models are essential for cost-effective AI strategies. Smaller models are also much easier to customize and tune. IBM's Granite models, ranging from 3 billion to 34 billion parameters and trained on 116 programming languages, consistently achieve top performance for a variety of coding tasks. To put costs in perspective, these fit-for-purpose models can be approximately 90% less expensive than large models. Hybrid cloud remains a top priority for clients as flexibility of deployment of AI models across multiple environments and data sovereignty remain a key focus. 
We believe in the power of open innovation and recently announced at IBM Think that we open-sourced IBM's Granite family of models, now available under Apache 2.0 licenses on both Hugging Face and GitHub. We see a parallel to Linux becoming dominant in the enterprise server space, thanks to the speed and innovation offered by open source. We are confident that the same dynamic will play out with AI as we benefit from developer mindshare and community innovation. We also recently launched InstructLab, a tool for more rapid model tuning through synthetic data generation, allowing our clients to more efficiently customize models using their own data and expertise. The last 12 months of AI pilots have made it clear that sustained value from AI requires truly leveraging enterprise data. In summary, our AI strategy is a comprehensive platform play. RHEL AI and OpenShift AI are the foundation of our enterprise AI platform. They combine IBM's open-source Granite LLMs and InstructLab model alignment tools with full-stack optimization, enterprise-grade security and support, and model indemnification. On top of that, we have an enterprise AI middleware platform with watsonx and an embed strategy with our AI assistants infused through our software portfolio and those of our ecosystem partners. In addition, our consulting services are critical in helping clients build their AI strategies from the ground up. We also continue to see our infrastructure segment play a larger role as clients leverage their hardware investments in their AI strategies. Our book of business related to generative AI now stands at greater than $2 billion inception-to-date. The mix is roughly one quarter software and three quarters consulting signings. We believe these strong results highlight our momentum and traction with clients. Our early leadership positions us for long-term success in this transformational technology, which is still in the initial stages of adoption. 
As clients build out their AI strategies, the IT landscape is becoming increasingly complex. Labor demographic shifts further emphasize the importance of optimizing IT spend and automating business processes. We continue to innovate and invest and have created a leading automation portfolio to capture this opportunity, which you can see in our results. This includes Apptio for cost management, capabilities for observability and resource management and, with the announced acquisition of HashiCorp, the automation of cloud infrastructure. The powerful combination of Red Hat Ansible and Terraform will simplify provisioning and configuration of applications across hybrid cloud environments. The latest addition to this portfolio is IBM Concert, also announced at Think, a Gen AI-powered tool which helps clients get end-to-end visibility across business applications. We also recently completed the acquisition of the StreamSets and webMethods assets from Software AG. This acquisition brings together leading capabilities in integration, API management and data ingestion. Let me now spend a minute on the continued strength we are seeing in infrastructure. IBM Z, our mainframe solution, is an integral part of our clients' hybrid cloud environments, driving their most secure and mission-critical workloads. Our latest cycle, z16, is uniquely tailored to offer clients security, scalability and resilience, which help clients address both cybersecurity threats and complex regulatory requirements. z16's Telum processor is a unique differentiator driving real-time, in-line AI inferencing at unprecedented speed and scale for applications like real-time fraud detection. Our storage offerings are also benefiting from generative AI as clients address data readiness and need high-speed access to massive volumes of unstructured data. We continue to invest in innovation and make great progress in emerging technology like quantum computing. 
This quarter, we expanded Qiskit, IBM's quantum computing software, into a comprehensive stack aimed at optimizing performance on utility-scale quantum hardware. These updates aim to enhance the stability, efficiency and usability of Qiskit, supporting advanced quantum algorithm development and fostering broader adoption across various industries. This strong momentum and innovation across the portfolio manifests itself in client adoption. In virtually all industries and geographies, clients leverage IBM solutions to help them transform their operations and create better experiences for end users. Names like Virgin Money, Credit Mutuel and Panasonic all turned to IBM in the quarter. We also continued to strengthen our ecosystem. At our Think event, we announced a series of new AI partnerships with industry leaders like Adobe, AWS, Microsoft, Meta, Mistral, Salesforce and SAP. In May, IBM and Palo Alto Networks announced a partnership to deliver AI-powered security solutions using watsonx. As part of this, Palo Alto is acquiring IBM's QRadar SaaS assets and we are partnering to offer seamless migration for QRadar customers to Cortex XSIAM. IBM will train over 1,000 security consultants on Palo Alto Networks products to drive a significant book of business with them. In summary, we are excited to continue delivering strong results. Given our first-half performance, we are raising our expectations for free cash flow to greater than $12 billion for the year. I will now hand over to Jim to walk you through the details of the quarter. Jim, over to you.\nJim Kavanaugh: Thanks, Arvind. In the second quarter, we delivered $15.8 billion in revenue, $2.8 billion of operating pre-tax income, and $2.43 of operating diluted earnings per share. 
Our 4% revenue growth at constant currency, combined with greater than 200 basis points of operating pre-tax margin expansion, drove 17% operating pre-tax income growth and 11% operating diluted earnings per share growth, highlighting our strong execution. And through the first half, we generated $4.5 billion of free cash flow. Our free cash flow generation is the strongest first-half level we have reported in many years. We are pleased with these results, exceeding our expectations for revenue, profitability, free cash flow, and earnings per share. Revenue growth was led by software and infrastructure. It is clear that our investments in innovation are yielding results and driving strong organic growth across these segments. Software grew by 8% with solid growth across hybrid platform and solutions and transaction processing and strong transactional performance. Infrastructure had great performance, up 3%, delivering growth across IBM Z and distributed infrastructure. Consulting was up 2% and continued to be impacted by a pullback in discretionary spending. Looking at our profit metrics, we expanded operating gross margin by 190 basis points and operating pre-tax margin by 220 basis points over the last year, inclusive of about a 30 basis-point currency headwind to pre-tax margin. Margin expansion was driven by our operating leverage, product mix and ongoing productivity initiatives. Driving productivity is core to our operating and financial model. This includes enabling a higher-value workforce through automation and AI, streamlining our supply chain, aligning our teams by workflow and reducing our real-estate footprint. These actions allow for continued investment in innovation, with R&D up 9% in the first half. This includes investments in both AI and hybrid cloud as well as infrastructure ahead of our next z program in 2025, which we expect to accelerate our organic growth profile over time. 
Our results this quarter reflect broad-based growth and the strength in the fundamentals of our business with revenue up about $300 million, operating pre-tax income up about $400 million, adjusted EBITDA up more than $350 million and free cash flow up about $500 million. For the first-half, we generated $4.5 billion of free cash flow, up $1.1 billion year-over-year. The largest driver of this first-half growth comes from adjusted EBITDA, up about $550 million year-over-year and timing of CapEx. We are a few points ahead of our two-year average attainment levels through the first-half. In terms of cash uses, we returned $3.1 billion to shareholders in the first half in the form of dividends. From a balance sheet perspective, we have a very strong liquidity position with cash of $16 billion, up $2.5 billion since year-end 2023. Our debt balance at the end of the second-quarter was flat with year-end 2023 at $56.5 billion, including $11.1 billion from our financing business. Putting this all together, our business fundamentals remain solid with continued revenue growth, margin expansion, cash generation, and a strong balance sheet with financial flexibility to support our business. Turning to the segments. Software revenue growth accelerated to 8% this quarter. Both hybrid platform and solutions and transaction processing grew as clients leverage the capabilities of our AI and hybrid cloud platforms. This performance reflects the investments we've been making in software, both organically, which drove more than 6 points of the growth as well as acquisitions. As mentioned in January, the software revenue growth drivers for the year include Red Hat growth, the combination of innovation, recurring revenue, and transaction processing, as well as acquisitions. Let me spend a minute on each of these elements. In Red Hat, annual bookings growth accelerated to over 20% this quarter. 
Within that performance, OpenShift annual bookings were up over 40% and RHEL and Ansible growth was double digit. The strength reflects the demand for our hybrid cloud solutions, including app modernization, management automation, generative AI and virtualization. In a subscription-based business, the majority of revenue is under contract for the next two quarters. Think of it as our CRPO for the next six months. This metric is growing in the mid-teens and accelerating more than 5 points versus the first-half of the year. We continue to bring new innovation to our portfolio and it's contributing nicely to our software performance. Our new innovation includes generative AI offerings like watsonx, our AI middleware, watsonx Assistants, the recently-announced IBM Concert and others, which contributed about $0.5 billion to our AI book of business inception to-date. And we delivered good growth across our recurring revenue base, which is about 80% of the annual software revenue. This is evident in hybrid platform and solutions, where our ARR is now $14.1 billion and up 9% since last year. Transaction processing delivered 13% revenue growth. This performance demonstrates the innovation and value of our mission-critical hardware stack across IBM Z, power and storage. The combination of growing demand for capacity, good client renewals, and strong large deal performance fueled our results. And notably, our new generative AI portfolio innovation, watsonx Code Assistant for Z is resonating well with clients. Together, these dynamics contributed to both recurring and transactional software revenue growth again this quarter. Revenue performance this quarter also benefited from our focused M&A strategy, including synergies realized across the portfolio. This included the recent Apptio acquisition. Less than 12 months since closing, we have accelerated annual bookings and are seeing an uptick in ARR growth already in the mid-teens. 
The synergy between Apptio's FinOps offerings and our broader automation portfolio helps clients manage, optimize and automate technology spending decisions. Earlier this month, we completed the acquisition of StreamSets and webMethods from Software AG and expect the HashiCorp acquisition to close by year end. Looking at software profit, gross profit margin expanded and segment profit margin was up over 350 basis points year-to-year, with the latter reflecting operating leverage driven by our revenue scale and mix this quarter. Our consulting revenue was up 2%, consistent with last quarter and largely reflecting organic growth. In April, we discussed that we were seeing solid demand for our large transformational offerings as clients continue to prioritize driving productivity with AI and analytics. At the same time, we saw a pullback on discretionary projects as clients prioritize their spending. The second quarter buying behavior played out much in the same way. Signings for the quarter were $5.7 billion, driven by solid demand for large engagements across finance and supply-chain transformation, cloud modernization, and application development. This contributed to backlog growth of 5% year-over-year and our trailing 12-month book-to-bill remaining over 1.15. Meanwhile, continued discretionary spending constraints impacted our small engagement performance and backlog realization in the quarter. As Arvind mentioned, our book of business in generative AI inception-to-date is greater than $2 billion and about three quarters of it represents consulting signings, with strong quarter-over-quarter momentum. Our extensive industry and domain expertise has placed us in an early leadership role, which is crucial at the onset of a technology shift. IBM has both technology and consulting, which is a unique and powerful combination to help clients navigate this technology transition. 
Similar to previous technology shifts such as the advent of the Internet, globalization, and cloud computing, generative AI is driving the next wave of growth. In a human capital-based business, signings represent clients reprioritizing spend on this technology transition, while there is some potential for lift as the total addressable market expands. We are delivering value in two ways. First, partnering with our clients to design and scale AI solutions, whether that be leveraging AI capabilities of IBM, our partners or a combination. Second, we are developing new ways of working, driving productivity and improving delivery, all with our Consulting Advantage platform. In summary, GenAI is acting as a catalyst for companies to grow revenues, cut costs and change the ways they work, creating a significant opportunity for us. We are seeing this already as IBM is the strategic partner of choice for clients using this technology, including WPP, Elevance Health, and the UK's Department for Work and Pensions. Turning to our lines of business. Business transformation revenue grew 6%, led by finance and supply-chain transformations. Data transformation also contributed to growth. In Technology Consulting, revenue was up 1%. Growth was driven by application modernization services. Application operations revenue declined, reflecting weakness in on-prem custom application management, partially offset by strength in cloud-based application management offerings. Looking at consulting profit, we expanded gross profit margins by 40 basis points, driven by productivity and pricing actions we have taken. Segment profit margin was modestly down, reflecting continued labor inflation and currency. Moving to infrastructure, revenue was up 3%. We're capitalizing on the strong and broad-based demand for our hardware platforms, especially IBM Z. Within hybrid infrastructure, IBM Z revenue was up 8% this quarter. 
We're now more than two years into the z16 cycle and the revenue performance continues to outperform prior cycles. Our clients are facing increasing demands for workloads given rapid business expansion, the complex regulatory environment and increasing cybersecurity threats and attacks. IBM Z addresses these needs with a combination of cloud-native development for hybrid cloud, embedded AI at scale, quantum-safe security, energy efficiency, and strong reliability and scalability. Increasing workloads translates to more Z capacity or MIPS, which are up about threefold over the last few cycles. IBM Z remains an enduring platform for mission-critical workloads, driving both hardware and related software, storage and services adoption. In distributed infrastructure, revenue grew 5%, driven by strength in both power and storage. Power growth was fueled by demand for data-intensive workloads on Power10 led by SAP HANA. Storage delivered growth again this quarter, including growth in high-end storage tied to the z16 cycle and solutions tailored to protect, manage, and access data for scaling generative AI. Looking at infrastructure profit, we delivered solid gross profit margin expansion and segment profit accelerated quarter-to-quarter to the high-teens. Segment profit margin was down 230 basis points in the quarter, reflecting key investments we're making in the business across areas like AI, hybrid cloud and quantum, and almost a point of impact due to currency. Now, let me bring it back to the IBM level to wrap up. We feel good about our performance in the first half with revenue growth reflecting the investments we've been making both organically as well as acquisitions. Our focus on execution and the strength in the fundamentals of our business resulted in strong performance in the quarter across revenue, margin expansion, and growth in profitability and earnings. Looking to the full-year 2024, we are holding our view on revenue. 
We see full-year constant-currency revenue growth in line with our mid-single-digit model, still prudently at the low end. For free cash flow, given the strength of our performance in the first half, we feel confident in raising our expectations to greater than $12 billion, driven primarily by growth in adjusted EBITDA. This also includes a modest contribution resulting from the Palo Alto QRadar transaction, largely offset by related structural actions to address stranded costs. We continue to expect the QRadar transaction to close by the end of the third quarter. On the segments, in Software, we had solid first-half performance, up more than 7%. This performance reflects strength in our recurring revenue base and early traction in GenAI. With this performance, we are raising our view of growth in software to high-single-digits for the year. And given ongoing productivity initiatives and operating leverage, we now expect software segment profit margin to expand by over a point. In Consulting, given the continued pressure we have seen on spending related to discretionary projects, we now expect low-single-digit growth for the year and segment profit margin to expand by about half a point. And given the strength in infrastructure in the first half, we now expect it to be about neutral for the year with segment profit margin in the mid-teens to high-teens. With these segment dynamics, we are raising our expectations of operating pre-tax margin expansion to over half a point year-to-year. And we are maintaining our view of operating tax rate in the mid-teens range, consistent with last year. On currency, given the strengthening of the dollar, we now expect a 100 basis-point to 200 basis-point impact to revenue growth for the year. For the third quarter, we see revenue growth consistent with the full year. For profit, we expect our net income skew through the third quarter to remain a couple of points ahead of the prior year, driven by the strength of our business. 
And again, we expect the gain of the Palo Alto QRadar transaction will be offset by related structural actions to address stranded costs. In closing, we are pleased with our performance this quarter and for the first-half, driving confidence in our updated expectations. We are positioned to grow revenue, expand operating profit and grow free cash flow for the year. Arvind and I are now happy to take your questions. Olympia, let's get started.\nOlympia McNerney: Thank you, Jim. Before we begin the Q&A, I'd like to mention a couple of items. First, supplemental information is provided at the end of the presentation. And then second, as always, I'd ask you to refrain from multi-part questions. Operator, let's please open it up for questions.\nOperator: Thank you. At this time, we'll begin the question-and-answer session of the conference. [Operator Instructions] And our first question comes from Wamsi Mohan with Bank of America. Please state your question.\nWamsi Mohan: Yes, thank you so much. Your long-term model on transaction processing is low-single-digit and you just posted a very strong quarter with 13% growth in the quarter. How should we think about the trajectory of that in 2024 and maybe in 2025? I know, Jim, you noted a few different things, including solid client renewals and some strong large deal performance. Was there anything very episodic or unusually large within that mix as well? Thank you so much.\nJim Kavanaugh: Thanks, Wamsi. I appreciate the question overall. Very important. You know, if you take a step back, you know, we continue to be very pleased with our transaction processing performance overall. You know, if you dial back to when we laid out our mid-term model, we said we converted this to a growth vector, low-single-digit overall. And if you look at the last couple of years, we've been averaging mid-single-digit or better overall. We shifted this now to a growth contributor. And why is that important? 
One, it's a high source of profit and cash, which funds investment flexibility; and two, it provides a very solid incumbency base for the IBM multiplier effect. But if you take a look at it, we are capitalizing on the strength that we've seen over the last three programs of our mainframe cycle. It's really instantiating the enduring value of that platform. Our MIPS over the last few programs are up three times from an installed perspective and over 80% of our clients are growing MIPS on the mainframe. I think that was a very different picture when you dial back five, seven years ago. So, we've taken that portfolio, we've invested now significantly, which I'll come to around watsonx Code Assistant for Z, and we've taken that from a down mid-single-digit portfolio to, now capitalizing on the stack economics of our mainframe execution, a low-single-digit grower. Now for the year, as you heard, we are taking up our guidance, just given the strength of the first half, to mid-single-digit. You know, when you get into 2025, we'll talk about our guidance going forward, but we feel very confident that we can continue growing this and that's why we're investing in bringing out new capabilities like watsonx Code Assistant for Z, which is resonating extremely well.\nOlympia McNerney: Operator, let's take the next question.\nOperator: Our next question comes from Toni Sacconaghi with Bernstein.\nToni Sacconaghi: Yes, thank you for taking the question. I'm wondering maybe you can discuss how you think about AI signings and whether you believe they're really incremental or just a shift in client spending? And part of the reason I ask the question is, it looks like your AI book of business was up about $1 billion sequentially. You're saying three quarters of that is Consulting, so it's $700-plus million in Consulting signings in the quarter. If I take that out, your book-to-bill in the rest of your business is actually down. 
And despite the strong signings, you're lowering your Consulting expectations for the year. So, I'm just wondering, do you think AI investments in Consulting are a shift in spending? Or do you think they're accretive? Or do you actually think they could even be cannibalistic to Consulting spend and more broadly IT spend?\nArvind Krishna: Yes. So Toni, let me start and then Jim will add more color on this topic. First, it's a great question and you laid out some of the dynamics that were going on in there. If we just step back and just look at our comments on the macroeconomic environment, we kind of stated that there is discretionary spend pressure in Consulting. When you do have that pressure, but there is a demand for AI, I would look you in the eye and say, probably the bulk of that demand, not all but the bulk, is indeed a shift from other areas of Consulting. We don't actually believe it's cannibalistic to the point you're pointing out. Now, as time goes on and as people move from early experimentation and proving out the value to wanting to scale and really get the full benefits of generative AI, we do actually believe at that point, even for Consulting, these will turn accretive and additive, but we are still some time away from when that will happen. So, that is just to give you some color and acknowledging that the bulk, but not all, is a shift. Jim?\nJim Kavanaugh: Yes. Thanks Toni for the question. Just building on what Arvind said. I mean, first of all, we're very pleased with the early momentum that we've gotten with our book of business around GenAI, both on the technology side with our watsonx platform and now with our open innovation strategy around RHEL AI, OpenShift, Granite models, InstructLab, etc. But let's just deep -- dive a little deeper into your question about Consulting, because I think when you look at Consulting, first of all, why is it so important right now in an early part of a cycle? 
It's important because it's got to establish IBM Consulting as the strategic provider of choice for enterprises as they're going through what we'd like to call digital transformation 2.0 with GenAI. Everyone is looking for who is going to be their strategic provider and partner. And I think, with over $1.5 billion of book of business in the first 12 months, which by the way is in excess of the ramp we saw play out with hybrid cloud and Red Hat, we're off to a pretty good start. Now, to Arvind's point, you know, in every technology shift, very different dynamics between a Human Capital-based business and a product IP business. In a Human Capital-based business, we do see and we expected clients will shift and reprioritize spending. They're doing that now as they're driving large enterprise transformation projects, which is what our portfolio has been able to capture, and that's why you see nice acceleration in growth in our backlog, up a healthy 5%. But to Arvind's point, we do think once you get through the early cycle, this is an incremental expansion of TAM that drives a long-tail growth vector over time that has multiplier opportunities for us. So when you look at our Consulting book of business, let's dive into the sub-segments. You see business transformation services, which is where a lot of the GenAI plays out right now. That is how do you transform the way you operate HR, finance, supply-chain. We've doubled and accelerated our growth quarter-to-quarter. What you're seeing is a reprioritization and dynamic spending decisions by clients because our AO, where we have a lot of short-term discretionary staff augmentation work, there's a lot of trade-offs between those two. So, it's important for us strategically with our client base, but I think you see how it plays out. Now, just to wrap up the full picture, Software, I think, is fundamentally different. Our software book of business is now $0.5 billion through the first 12 months. 
I think inception-to-date right now, we're about two-thirds subscription and SaaS, one-third perpetual. I think that's contributing nicely about a point of growth. And by the way, that's one of the two components of why we took our software up for the year. So, I think that's predominantly all lift.\nOlympia McNerney: Operator, let's take the next question.\nOperator: Our next question comes from Amit Daryanani with Evercore ISI.\nAmit Daryanani: Thanks for taking my question. I guess, you know, my question is really on the Consulting side. And when I think about this business growing low-single-digits for '24, if I take out some of the M&A contribution, also some of the revenues from the AI book of bill -- book of business that you have at $1.5 billion, is it fair to think that maybe the non-AI Consulting piece actually gets worse in H2 versus H1 for you? If you just talk about the puts and takes on the back half Consulting expectations versus front half, that would be really helpful. And then, you know, I'm curious, if you talk to your customers, what is your sense on the duration of this weakness in Consulting and when do you think it has to come back? Thank you.\nArvind Krishna: Hi. So Amit, let me just start and maybe address the second part of your question first. I actually do not believe there's any secular macro trend around weakness. I think that this is temporal based on a number of factors we have. The geopolitical uncertainty has gone on longer than most people expected and that weighs on people's minds about what might happen, and specifically, the war in Europe as well as the war in the Middle East. Second, inflation has gone on longer than people expected, which has the unfortunate consequence of higher interest rates and that begins to bear on people. If I look at those two altogether, then at the moment you have higher interest rates and inflation, you have wage inflation, which does impact the bottom-line of our clients. 
You put all of that into perspective and is this going to go on for another six months? Likely. Is it going to go on for another year? I'm not so sure, but we got to get through the second half to be able to go there. So, that is why we are optimistic about the medium-term and long-term vector on Consulting. And as Jim answered in the prior question, we do see that this is going to become a tailwind over time, at least for us. Now, in the short-term for the next six months, we do think it holds up a little bit. In terms of answering the specifics and sort of decomposing some of the numbers that you laid out in the first part of your question, I'm going to turn that over to Jim.\nJim Kavanaugh: Yes. Thanks Arvind and thanks Amit for the question overall. You know, let's put this in perspective, right? You go back 90 days ago, how did we see the year kind of playing out with Consulting? We said at that point in time, we had backlog growing nicely mid-single-digit, albeit we did talk about durations going up because large scale transformations were really where the spend was moving to. But we had a solid book-to-bill trailing 12 months over 1.15. We had GenAI momentum that was going to continue throughout the year early in the cycle. We had strategic partnerships, Red Hat growth profile, and we had future acquisitions as we're going to continue to be opportunistic around our M&A criteria and the synergistic value of how Consulting plays to our portfolio. If you look right now, 90 days later, as we look to the second-half, many of those are still playing out. You got GenAI, which arguably was above our own expectations, right now doubling, by the way, in Consulting, our GenAI book of business quarter-to-quarter, strategic partnerships, especially hyperscalers, Red Hat still growing nicely. 
What you're seeing, you know, at the end of the day, those are large scale transformations, lower yield, and that's why Arvind and I are saying these are longer-term growth vectors and tails that will play out into '25, '26 and beyond as we get that strategic provider of choice. But in the interim, what you're seeing is that spending reprioritization around short-term discretionary that I think, you know, everyone in the industry is talking about. We're all dealing with this. The key is we have to win that strategic provider of choice in GenAI. And I would argue we're off to a great start. You look at competitor numbers overall, we got over $1.5 billion of book of business, doubling quarter-to-quarter right now. I think we're in pretty good shape. That's what we're focused on because that will provide the future revenue multiplier effect as we move forward.\nOlympia McNerney: Operator, let's take the next question.\nOperator: Our next question comes from Jim Schneider with Goldman Sachs.\nJim Schneider: Thanks for taking my question. Maybe if I could just ask on a different topic for a second. Can you maybe talk about the environment you see right now for M&A and your intention to continue to drive through acquisitions? And do you believe you have sufficient scale in open-source and DevOps software in particular? And can you maybe comment on the attractiveness of multiples in the public market today relative to the private market?\nArvind Krishna: Hey, Jim, great question and thank you for asking this. Look, on overall M&A, I just want to begin with that our strategy has not changed. We are -- we are disciplined and we are focused. By focused, I mean we stick to the areas that we are investing in: hybrid cloud and artificial intelligence. 
And by discipline, I mean it has to be not just aligned to our strategy, but we expect synergy from the acquisition, especially when the multiples are higher, as you pointed out, and it has to be accretive to free cash flow if it's larger, definitely within two years at the outer end of the range. So having said that, if I look at it right now, we have HashiCorp out there. So, we got to get through that. We expect that to happen in the second-half of this year. We just finished StreamSets and webMethods and we've done a couple of smaller ones in the Consulting space and in other technology tuck-ins.\nJim Schneider: All right.\nArvind Krishna: What do we see going into this space? Are valuations rich? They're reasonably rich. They're not outrageous, I would say, like they had become in parts of late 2020 and 2021. So, I would say that they are more reasonable than then, but they're richer than they were about 18 months ago. There are different dynamics in both the public and the private markets. Public markets are quite variable, I mean, as we can see in some of the multiples. And if you look at multiple-to-revenue, which is not a great metric, let me just acknowledge that, but it is one that's out there. If you look at six, seven, eight, maybe nine or 10 times, we can see our way there for a large deal as long as we have sufficient synergy. Now, for very small deals, that's not even a fair multiple. Very small deals are all about technology and people. In the private markets, we were very pleased with what we got done on StreamSets and webMethods. I would call that a private market deal, not a public market deal. And there, I think it all depends upon what's the property, what is its growth profile, what is the attractiveness of it to the seller versus the buyer, in this case us; all of that plays into those multiples. I do expect that on the private side, valuations will be slightly less, but then the risk of going public or some other exit is also taken away. 
And in some sense, you get a discount for taking that risk off the table. For people who are venture-backed, that's different. They are looking at IPO versus a strategic exit and those are different multiples. But putting all of that together, we remain in the market and M&A is an important part of our growth methodology. We maintain a strong balance sheet for that purpose and we've kind of been clear of that. All that said, this year, we got a big one coming. So, we want to wait and get that done because part of the discipline is also making sure that we kind of digest them at the right rate and pace and put them into our global go-to-market distribution engine.\nOlympia McNerney: Operator, let's take the next question.\nOperator: Our next question comes from Ben Reitzes with Melius Research. Please state your question.\nBen Reitzes: Yes. Hey, thank you. Appreciate it. Jim, I wanted to -- and Arvind, I wanted to see, you know, if the -- it sounds like the margin progress is sustainable for the year. So while I appreciate that you guide to free cash flow and you've raised it a little bit, do you anticipate us being able to flow through the $0.25 of upside on the EPS line? And you know, can -- does that mean earnings is sustainable in the back half? And then I was just wondering if you have any more info on HashiCorp, yes, in terms of the revenue contribution, Street was looking for about $750 million in revenue next year. And on the dilution, there's -- there should be a loss of around $0.30 in interest income. So, just wondering if you have any further views on the net effect to 2025 on that deal. Thanks so much, guys.\nJim Kavanaugh: Hey, Ben, thank you. Appreciate it. Very good question overall. But let's take a step back on your first part of the question around free cash flow. Yes, we're very pleased with the start of the year. Free cash flow of $4.5 billion, up $1.1 billion year-to-year, 4 points above historical attainment. 
It's our largest first half free cash flow generation as far back as I can go and count. So, we're off to a pretty good start and that gives us the confidence overall of how we're positioning the second-half. But the second half and why we took the guidance up is entirely driven by the strength of the fundamentals of our business and flowing through the adjusted EBITDA overachievement. So, read that, although we don't guide on EPS, the strong overachievement of the $0.25 of EPS, we're flowing that through to adjusted EBITDA and that flows through to our guide take-up on free cash flow. The rest of the free cash flow dynamics we've been talking about all year long around, yes, we got benefits of change in retirement plans and cash tax that's going to be a headwind and other balance sheet items, none of that changes. One thing I will bring up, and we said it in the prepared remarks, but just so there's absolute clarity: we do expect to close the Palo Alto transaction here in the third quarter around certain assets of our QRadar business that will obviously generate a gain. We're excited about the new strategic relationship between our two great companies overall, but we will take structural actions to offset that gain to address stranded costs and, oh, by the way, to the second part of your question, to accelerate our productivity initiatives in 2025. Now, to HashiCorp. First of all, the strategic transaction stands on its -- on its own. Arvind went through our M&A criteria. I think there's a very compelling strategic fit around an end-to-end leadership hybrid cloud platform. There's a lot of synergistic value both on product technology and go-to-market, but there's a very attractive financial profile that we talked about 90 days ago: higher revenue growth profile, adjusted EBITDA accretive in 12 months, free cash flow accretive, to Arvind's point, by two years. 
And we do see potential significant near-term cost and operating synergies that lead to about a 30% to 40% free cash flow margin business over a handful of years. Now, when you look at dilution, we understand dilution. I mean, M&A has been an integral part of our financial model for decades. So, underneath that, we understand the purchase growth of those transactions, the synergies of those transactions, the balance sheet capital structure implications of those. And with all that said, our model is to grow mid-single-digit revenue and grow operating leverage so we grow free cash flow quicker than revenue. We don't see that changing in 2025. We see growth profiles around revenue, around operating leverage and around free cash flow overall. And that speaks to the diversity or diversification, I should say, of our business model around productivity. We entered the year, raised it to $3 billion. We're getting out ahead of that again and you see that play out in our margins through the first-half, what, up 180 basis points on pre-tax. So, we've got many levers to deal with this overall. We know how to handle it.\nOlympia McNerney: Operator, let's take the next question.\nOperator: The next question comes from Erik Woodring with Morgan Stanley.\nErik Woodring: Hey guys, thanks so much for taking my question. Arvind or Jim, I'd love if you could just dig into the Red Hat business a bit more. You know, over the last few quarters, you've talked about some very healthy bookings growth numbers ranging anywhere between, call it, 15% and 20%. But we did see growth obviously decelerate by about a point this quarter despite, you know, expectations that it would be flat to maybe increasing for the rest of the year. So, can you just kind of double click on exactly what you're seeing with the Red Hat business today? What's kind of the offset to the strong bookings numbers? And how should we think about Red Hat growth now in constant currency for 2024? 
Thanks so much.\nArvind Krishna: Great question, Erik. So, let's just look at the Red Hat business in terms of how the dynamics function between our clients and ourselves. So, clients come in and create demand, we fulfill that. That shows up as bookings, not as revenue, because the Red Hat business model is a pure consumption business model. Clients pay for what they're consuming and so the bookings then play out. Now, those bookings are a signal of further demand and typically they're anywhere from one-year to three-year worth of revenue that the client is pre-committing to. So, when we enter a year, we can look at the bookings of the previous year and say that that gives us about half the revenue. The other half has to come over the quarters. Now we have a year, not longer, but a year of the double-digit demand that you're talking about; if I remember right, it was 14%, 17%, 14%, 20% in terms of those demands. Now that full-year is there, that points to that for the portion that we can see, and as we get into a quarter, it climbs up from that 50% to 60% to 70% to 80%. And Jim mentioned in his prepared remarks what he called CRPO, or the current remaining performance obligations; we see those sitting around mid-teens for the second half of the year, to answer your question. Now, if that's about 80%, that will translate into low double digits, which is what we can look at and feel quite comfortable on. By the way, we see these early signs of the demand continue into this quarter and likely the half, which means that we expect to continue now in the low double digits going forward. So, I hope that that gives you a sense. But I'm also excited by the underlying product capabilities. We see OpenShift, which is extremely important. It plays into containerization, it plays into virtualization. It's an important element of how our clients exercise hybrid. It has been growing and the demand there grew again at about 40% this past quarter. 
But we also saw acceleration in Linux and Ansible, where both of those demand vectors have grown to the low double-digits. That, given the size of the Linux business, is very good news for us going forward. So, I hope that that gives you some color on those pieces. And a vector that we have not talked about that will play out, but probably into '25 and '26: we are very excited by our two open-source AI projects inside the Red Hat business, our RHEL AI as well as OpenShift AI. And as people begin to deploy at scale, not only on public cloud, but also on-premise, leveraging their hybrid environment, we expect that both of those will also contribute into the Red Hat business, but that will take more time.\nOlympia McNerney: Operator, let's take one last question.\nOperator: Thank you. Our next question comes from Matt Swanson with RBC Capital Markets.\nMatt Swanson: Thank you. Yes. Arvind, if we could pick up right where you left off there. Can you just give us a little more color on the decision to open up the Granite models and the code base? And then really kind of what you're seeing in the market that makes you feel like taking maybe a more developer-focused approach to those, I think, as you put it, fit-for-purpose models, is the right long-term strategy?\nArvind Krishna: So Matt, thank you for asking that question. And there was actually a question on developers before also. So, I'm sorry we didn't get to it fully. We'll get to it now in this question. Look, the whole question comes down to, there was a thesis out there about a year and a half ago that maybe one, maybe two extremely large models were going to run away with the bulk of the market share. We always felt that was both technically and economically infeasible. And I'll describe why. 
If you run an extremely large model on public clouds, the model by its nature is going to be expensive, because a very large model needs a lot more compute, a lot more network, a lot more storage, a lot more memory, and we can see some of those dynamics play out. If you can drop the model size, you can drop all of that by 90%. I would actually tell you a 99% reduction in the compute and memory and network costs, but let's call it 90% just for the sake of argument. So if you are running a -- like one of our clients was describing to me, they run a couple of billion transactions through their internal systems each day. If they had to go service those out to a large public cloud, the bill per day would have come back to be a couple of hundred million dollars. You multiply that by 250, that's kind of an infeasible cost. If you can drop it by 90%, you're now bringing it down to $10 million to $20 million a day. If you can actually run it using some of our Red Hat technologies on-premise, you can drop it by another 50%. You're now talking 5% to 10%. For what it can do, that is a very attractive proposition. So, now getting back to the models. If you have no idea what you're going to do, if you have no idea what you might be looking for, you go to a very large model because it contains all the possible elements. If you have a sense of what you need to do, say, I need to summarize emails, you need an English-language model if you're sitting here in the United States. If you are going to go change your Java or C++ or Python programmers to be more productive, you don't need a model that can write poetry and draw images. You need a model that understands programming languages. So, we are very, very proud of what our team has done. We can produce models that can do these things. So, these are two distinct models, one for programming, one for business language. They are one-tenth or less of the size of the extremely large models. 
But you can look on the leaderboards; they perform just about as well as the largest models. So, that is kind of what our strategy is. However, if our clients want other models, we are also happy to work with other models and we have had that perspective. So, why open-source, since that is part of your question? It's because often we find that clients want to increase the model's efficacy by adding their own unique language. People might want to write emails in a certain way. They might want to program in a certain way. They like comments in a certain style. I call that refining the model. We have a technique called InstructLab, but then clients get concerned: wait, if I add my data, I don't want to give that away, back into a more public format. Can I keep that to myself? So, open-sourcing our models under the Apache license gives our clients the freedom that what they add onto our underlying open model, they can keep to themselves. Now, to the developer point, putting all of that machinery into Red Hat Linux now gives us an avenue to open it up to developers so they can go experiment and play. By the way, I will turn around and tell you that for a developer who's not running production, who's just playing with things like all people do, on a MacBook you can begin to play around with models that are in the low tens of billions of parameters. That's a massive market that opens up. They get the freedom and flexibility that they don't have to give it back to us unless they want to. I am not actually concerned that this gives away the IP. As we have found with Red Hat Linux, or as other people have found with Mongo or with Hadoop, enterprises do look for, and the last few days have certainly shown us this: people look for patching, people look for security, people look for backward compatibility. There's a lot of enterprise reasons why people will still do business with us. 
But the open-source nature of what you asked, I'm so glad you did, allows us to expand that market into the millions of developers who do run Linux on their own machines or their corporate machines or their laptops and they can go experiment, add their innovation and either give it back to the community or actually reserve it for their enterprise. So, that's how we kind of tap into the whole developer ecosystem. So, let me now wrap up the call. In the second quarter of 2024, we executed on our strategy to deliver revenue growth and cash generation. We saw strong performance across our portfolio. We are excited about our early traction in generative AI. We look forward to sharing our progress with you as we move through the rest of the year. Thank you all.\nOlympia McNerney: Thank you, Arvind. Operator, let me turn it back to you to close out the call.\nOperator: Thank you for participating on today's call. The conference has now ended. You may disconnect at this time." + } +] \ No newline at end of file