Advanced Micro Devices (NASDAQ: AMD) is facing a turbulent period as its stock tumbled following disappointing AI-related guidance. While the company posted a solid fourth-quarter revenue of $7.7 billion—up 24% year-over-year—and highlighted strong growth in data center and client segments, investors were left unimpressed due to the lack of clear AI sales guidance. AMD’s stock has slumped over 25% since the end of 2023, significantly underperforming the broader semiconductor index and Nvidia, which has surged 160% in the same period. The lack of an annual AI revenue outlook, coupled with AMD’s struggles to gain traction in the high-margin AI accelerator market, has reinforced concerns that the company is losing ground to Nvidia in generative AI. Additionally, the arrival of DeepSeek—a Chinese startup that claims to train competitive AI models with significantly lower costs—has added fresh concerns about the long-term profitability of AMD’s AI chips.
Lack Of AI Sales Guidance & Slower Data Center Growth
One of the biggest red flags for investors has been AMD’s decision to withhold a specific AI revenue forecast for 2025. In contrast to previous earnings calls, where it provided clear sales projections, the company opted not to set a firm growth target for its Instinct AI accelerator chips, raising doubts about its confidence in competing against Nvidia in generative AI. While CEO Lisa Su reiterated that AMD sees strong demand for its AI GPUs, she acknowledged that AI-related revenue in the first half of 2025 would be similar to the second half of 2024, disappointing investors who had expected continued sequential growth in a rapidly expanding market. Analysts, including Citi and Bank of America, have downgraded AMD’s stock, citing concerns about its slower AI revenue ramp and margin dilution. AMD’s data center segment, despite growing 69% year-over-year to $3.9 billion, failed to meet Wall Street estimates, with revenue falling short of the expected $4.09 billion. The company’s overall AI accelerator business brought in over $5 billion in 2024, but this figure pales in comparison to Nvidia’s AI-related revenue, which has exceeded $100 billion annually. Additionally, AMD’s MI300X accelerator, positioned to compete with Nvidia’s H100, has yet to see adoption at the scale Nvidia enjoys. While AMD has secured key cloud partnerships, including Microsoft, Meta, and IBM, its AI chips remain a secondary choice rather than a primary alternative. Investors were also unsettled by AMD’s guidance for slower data center growth in the first half of 2025, driven by product transitions and a cautious spending environment among major customers. The market reaction underscores the challenge AMD faces: without clear guidance, it is difficult for investors to model future growth, especially when Nvidia continues to report surging AI-related sales.
Rising Threat From DeepSeek & Changing AI Infrastructure Economics
The AI chip market, often framed as a near-duopoly between Nvidia and AMD, is facing a new kind of disruption from DeepSeek, a Chinese startup that recently claimed it developed an advanced AI model with significantly fewer chips and lower overall costs. The announcement shook the AI investment community and contributed to a broader selloff in AI-related stocks, including AMD and Nvidia, as investors recalibrated their expectations for the total addressable market for AI chips. DeepSeek’s claims suggest that AI compute requirements may not scale as aggressively as previously expected, raising concerns that companies may not need to purchase as many high-end GPUs as initially projected. This puts AMD in a precarious position, as its AI strategy relies heavily on capturing a share of the expected multi-billion-dollar AI infrastructure buildout. The company has positioned its Instinct MI300 series as an alternative to Nvidia’s H100 and H200, but if AI firms find ways to train models more efficiently without needing thousands of GPUs, AMD’s growth potential in the segment could be significantly constrained. Adding to these concerns, cloud hyperscalers such as Amazon, Google, and Microsoft are increasingly investing in their own custom AI chips, reducing their reliance on third-party providers like AMD and Nvidia. While AMD maintains that its AI chips will eventually scale into “tens of billions of dollars” in annual revenue, it remains unclear how much of this market will actually be available if more efficient AI training methodologies continue to gain traction. The situation highlights a broader industry shift in which software innovations, such as model efficiency improvements, could meaningfully dampen hardware demand, leaving AMD in a vulnerable position as it plays catch-up to Nvidia’s entrenched dominance in AI accelerators.
Nvidia’s Continued Dominance In Generative AI & Data Center GPUs
Nvidia remains the undisputed leader in AI acceleration, with an estimated 89% share of the data center GPU market in 2024; AMD, by comparison, holds just 10.3%, and its competitive positioning remains a key concern for investors. Despite efforts to close the gap, AMD’s MI300X and upcoming MI350 series GPUs still lag Nvidia’s offerings in both performance and ecosystem support. While AMD has been investing heavily in its ROCm software stack to improve compatibility with AI frameworks like PyTorch and TensorFlow, developers and enterprises remain largely committed to Nvidia’s CUDA ecosystem, which has been refined for nearly two decades and remains the industry standard for AI workloads. This software moat is a major barrier to AMD’s AI aspirations, as even superior hardware specifications may not be enough to drive mass adoption if software compatibility remains an issue. Additionally, Nvidia’s aggressive pace of innovation continues to widen the gap: the company is ramping its next-generation Blackwell AI architecture through 2025, which is expected to deliver another major leap in AI performance. AMD, on the other hand, is still ramping the MI325X, with its MI350 series not expected to enter volume production until mid-2025, meaning it could remain at least one product cycle behind Nvidia. Furthermore, Nvidia’s growing partnership ecosystem, including collaborations with major AI firms like OpenAI, Meta, and Google, ensures that its GPUs remain the preferred choice for AI training and inference at scale. While AMD’s management remains optimistic about its long-term AI growth trajectory, the company has yet to prove it can break Nvidia’s stranglehold on the market. Without a major competitive breakthrough in either performance or ecosystem adoption, AMD may struggle to achieve significant market share gains in the lucrative AI data center space.
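To make the software-ecosystem point concrete, here is a minimal sketch, assuming a ROCm-enabled PyTorch build (an assumption about the environment, not a claim about any specific deployment). ROCm builds of PyTorch expose the familiar torch.cuda API through HIP, so code written against Nvidia GPUs can in principle run unchanged on Instinct accelerators; the barrier AMD faces is less about syntax than about tooling maturity, performance tuning, and library coverage.

```python
import torch

# On a ROCm build of PyTorch, the CUDA-style API is backed by HIP, so the same
# "cuda" device string targets AMD Instinct GPUs; on a CUDA build it targets Nvidia.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A representative AI workload: a large matrix multiply dispatched to whichever
# accelerator (or CPU fallback) the installed PyTorch build supports.
x = torch.randn(4096, 4096, device=device)
w = torch.randn(4096, 4096, device=device)
y = x @ w

print(f"ran on: {device}, output shape: {tuple(y.shape)}")
```

That the same script can target either vendor is precisely why AMD keeps investing in ROCm; whether it delivers competitive throughput on AMD hardware is the separate question the market is still judging.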
Final Thoughts

AMD’s recent stock slump is not a one-off: the stock has delivered largely negative returns over the past six months (and longer). While its valuation multiples may appear attractive (an LTM EV/Revenue multiple of 6.65x, well below peers), one cannot ignore the challenges the company faces as it attempts to gain ground in the AI chip market. While it has made significant strides in data center growth and continues to expand its footprint in cloud computing, the lack of AI revenue guidance, rising competition from disruptive players like DeepSeek, and Nvidia’s continued dominance in generative AI pose significant risks. We believe AMD remains a formidable player in the semiconductor space, but its ability to capture a meaningful share of the AI infrastructure boom is far from certain. With Nvidia maintaining a significant lead and alternative AI training approaches threatening to reduce the need for massive GPU investments, investors should scrutinize AMD’s AI trajectory closely over the coming quarters before buying the stock.
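For readers unfamiliar with the valuation metric cited above, the sketch below shows how an LTM EV/Revenue multiple is generally computed. The input figures are hypothetical placeholders for illustration only, not AMD’s reported numbers.

```python
# Illustrative only: hypothetical placeholder inputs, not AMD's reported figures.
market_cap = 180e9            # equity value (share price x shares outstanding)
total_debt = 3e9              # total borrowings
cash_and_equivalents = 5e9
ltm_revenue = 26e9            # revenue over the last twelve months

# Enterprise value adjusts equity value for net debt.
enterprise_value = market_cap + total_debt - cash_and_equivalents

ev_to_revenue = enterprise_value / ltm_revenue
print(f"LTM EV/Revenue: {ev_to_revenue:.2f}x")
```

A lower multiple than peers can signal cheapness, but only if the market is wrong about the growth and margin risks discussed above.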