100 Leaders Shaping the Future of Artificial Intelligence

They write the script that the rest of us follow.

Sep 17, 2025 - 15:30

Any honest survey of power in A.I. confronts an awkward truth: influence often sits with those whose choices carry the greatest potential for harm. This reality is hardly unique to A.I., but unlike traditional industries, where effects might unfold over months or years, A.I.’s impact reaches more sectors and more pockets of humanity in real time, creating a concentration of power that is both more immediate and more pervasive than any prior technological force. Yet the same influence that amplifies bias and threatens civil liberties also enables unprecedented breakthroughs—accelerating drug discovery, democratizing education and tackling challenges from climate change to accessibility that have long seemed intractable. This duality creates the central paradox of mapping power in A.I.: leave these figures out, and the picture is incomplete; include them, and you risk lending credence to models built on murky data rights and ethically ambiguous sourcing. The task is to chronicle power without mistaking it for virtue.

Evaluating power in A.I. requires an intensely contemporary lens, complicated by the fact that this is now a geopolitical competition as much as a technological one. The rapid succession of model releases, regulatory shifts and market disruptions creates constant upheaval. Previously unknown players can capture outsized influence through a single strategic move. Countries from Saudi Arabia to Canada treat A.I. development as a matter of technological sovereignty, while export restrictions and trillion-dollar infrastructure investments reshape global power relationships at record speed. The result is a landscape where enduring power becomes all the more significant precisely because it is so difficult to maintain. Amid relentless competitive upheaval and billion-dollar talent wars where entire teams become acquisition targets, only a handful ride out the chaos. Their staying power is what makes them remarkable.

Power manifests in two forms, each capable of profoundly reshaping the industry, though increasingly, these forces reflect a fundamental tension between moving fast and moving responsibly. Capital dictates which projects live or die, where markets move and how whole industries pivot. Still, this year’s honorees show how safety and ethics have evolved from academic afterthoughts to central preoccupations shaping investment decisions and product development. Equally significant is the power of ideas—the research that establishes new paradigms, the frameworks that guide responsible development and the thought leadership that shapes collective understanding of what A.I. should become. Financial resources elevate certain voices and ideas, and intellectual contributions unlock access to capital and institutional support. Each feeds the other. Together, they write the script that the rest of us follow.

2025 A.I. Power Index

1. Sam Altman

Sam Altman continues to push product iteration at breakneck speed while rapidly scaling revenue at the world’s most prominent A.I. company. As of July, ChatGPT had 700 million weekly users and reached $12 billion in annualized revenue—double its sales from the start of the year. OpenAI also recently announced a $1 billion data center in Norway as part of the Stargate project’s European expansion, launched a new study mode for students to capture the education market, and released the highly anticipated GPT-5 model. Last week, OpenAI announced a $300 billion deal with Oracle—one of the largest cloud contracts in history—setting off alarm bells among analysts and experts that the industry has reached “peak bubble.” Two days later, OpenAI and Microsoft, its largest investor, signed a non-binding deal allowing OpenAI to restructure into a for-profit company. Altman himself has reportedly expressed concerns about the overvaluation of some A.I. startups.

Like most major tech CEOs, Altman exerts influence well beyond his own company. This summer, he warned the Federal Reserve that entire job categories could disappear because of A.I. Even as he champions efforts to “democratize” A.I.’s economic benefits, he has also cautioned of a looming A.I. “fraud crisis,” positioning himself as both an evangelist of innovation and a voice of restraint on societal risks. His clout has endured despite governance setbacks, including a brief ouster by OpenAI’s board in 2023 and his resignation from the company’s safety committee last September.

Altman also wields outsized influence through angel investments in more than 400 companies, with equity stakes reportedly worth $2 billion. His Worldcoin project—compensating users with cryptocurrency in exchange for eye scans collected by “Orb” devices—aims to combat fraud and advance universal basic income, reflecting his broader vision for A.I.-enabled economic systems. Altman’s leadership of OpenAI, direct engagement with governments, and sweeping investment portfolio establish him as one of the key architects of A.I.’s commercial and policy future.

Yesterday, OpenAI announced new measures to ensure teen safety. To separate users who are under 18 years old, the company wrote in a blog post, it is “building an age-prediction system to estimate age based on how people use ChatGPT. If there is doubt, we’ll play it safe and default to the under-18 experience. In some cases or countries, we may also ask for an ID; we know this is a privacy compromise for adults, but believe it is a worthy tradeoff.”

Sam Altman. Getty Images

2. Jensen Huang

Jensen Huang has leveraged Nvidia’s A.I. infrastructure dominance to reshape global A.I. development and policy. As co-founder and CEO since 1993, Huang has transformed Nvidia from a company making processors for video games into the backbone of the A.I. revolution. Nvidia’s GPUs, first introduced in 1999, now power the majority of modern A.I. systems—and, this summer, helped Nvidia become the world’s first public company to reach $4 trillion in market value.

In recent months, the executive has also overseen various product releases at Nvidia geared towards physical A.I. systems like robotics and self-driving cars as the chipmaker eyes a future beyond GPUs. “We stopped thinking of ourselves as a chip company long ago,” said Huang in June.

Huang’s influence extends into geopolitical A.I. policy. In July, he persuaded President Donald Trump to reverse bans on A.I. chip sales to China and agreed to give the U.S. government 15 percent of Nvidia’s revenue from such sales in return, demonstrating his ability to shape international A.I. trade relationships amid ongoing security concerns over semiconductor exports to China. His appointment in April to the U.S. Department of Homeland Security’s AI Safety and Security board positions him at the center of national A.I. security policy development.

Beyond policy influence, Huang drives A.I. advancement through strategic investments in research infrastructure. He contributed $50 million to Oregon State University for a $200 million supercomputing institute and $30 million to Stanford University to establish the Jen-Hsun Huang Engineering Center. His recognitions include the VinFuture Grand Prize and the 2025 Edison Achievement Award. This combination of market leadership, policy influence and strategic research investment establishes Huang as a key architect of global A.I. infrastructure and governance frameworks.

Jensen Huang. Courtesy of Nvidia

3-4. Dario & Daniela Amodei

Dario Amodei and Daniela Amodei built Anthropic into a $183 billion A.I. powerhouse in just four years, transforming it from a research-driven startup into one of the most influential players in the industry. Today, Anthropic’s Claude model powers Microsoft’s GitHub Copilot and Meta’s internal coding assistant, Devmate, embedding the company’s technology at the heart of the global software ecosystem.

As CEO and president, respectively, the sibling co-founders have made safety and transparency core to Anthropic’s mission. Anthropic was the first leading A.I. company to endorse California’s landmark A.I. safety bill, now awaiting Governor Gavin Newsom’s signature, which requires companies to disclose how their models are trained and tested. “It’s saying you have to tell us what your plans are. You have to tell us the tests you run. You have to tell the truth about them,” Dario told Politico.

This year, the Amodeis expanded Claude’s reach across multiple sectors. In April, they launched Claude for Education to bring advanced A.I. tools into classrooms. In May, they unveiled the Claude 4 series, including Claude Opus 4, which they claim delivers world-leading coding capabilities. In July, Anthropic introduced a new Claude interface tailored for analyzing financial services market data, underscoring its push into specialized enterprise applications.

The Amodeis have also been outspoken about A.I.’s broader social impact. After Dario predicted that automation could drive U.S. unemployment as high as 10 to 20 percent within five years, the company launched the Economic Futures Program in June, funding independent research on A.I.’s economic disruption and policy responses.

Not all of Anthropic’s milestones have been without controversy. In September, the company agreed to pay a record $1.5 billion copyright settlement to authors and publishers after a judge ruled it had illegally downloaded copyrighted works to train its models. The landmark case, described by some observers as the industry’s “Napster moment,” may set a precedent for how A.I. companies compensate authors, artists and creators in the future.

Dario & Daniela Amodei. Courtesy of Anthropic

5. Elon Musk

  • Founder, xAI

Best known for leading Tesla, SpaceX and X (formerly Twitter), Elon Musk added another disruptive venture to his empire in July 2023 with the launch of xAI, the generative A.I. company behind the Grok chatbot. In less than two years, xAI has vaulted into the front ranks of the industry, powered by Musk’s characteristic mix of audacious vision, aggressive financing and headline-grabbing bets on infrastructure.

Last year, xAI unveiled Colossus, a Memphis-based supercomputing facility billed as the world’s largest. Musk has framed Colossus as essential to accelerating A.I. toward human-level reasoning, though the project has drawn criticism for its massive environmental footprint and water demands. Musk has since floated plans for a wastewater recycling facility to ease community pushback.

xAI’s rapid expansion has included a $33 billion acquisition of X and the purchase of Hotshot, an A.I. video-generation platform. To fuel its growth, the company has raised billions—$6 billion in equity funding led by Andreessen Horowitz and BlackRock, plus a $10 billion debt and equity deal in mid-2025. In July, xAI secured a $200 million U.S. Department of Defense contract and rolled out Grok 4, a model that scored at state-of-the-art levels on ARC-AGI-2 benchmarks. Musk has touted Grok 4 as a step toward A.I. capable of discovering new technologies and even “new physics.” Yet, its permissive “free speech” standards have generated backlash, with critics citing antisemitic, sexually explicit and violent outputs.

Musk has infused A.I. directly into his other companies. Tesla relies on advanced neural networks and self-supervised learning systems to power its Autopilot and Full Self-Driving software. Meanwhile, X has integrated Grok across its social media platform, offering subscribers conversational A.I. features and experimenting with automated content moderation, recommendation systems and A.I.-generated media—all part of Musk’s broader push to turn the platform into an “everything app.”

Musk has also waded into the industry’s talent wars. Earlier this month, he announced plans for a Seattle engineering hub, widely seen as an attempt to siphon talent from nearby rivals such as Microsoft, even as Musk sues the company over alleged anti-competitive ties with OpenAI.

Elon Musk. Getty Images

6. Larry Ellison

  • Co-founder & CTO, Oracle

Larry Ellison is positioning Oracle as the infrastructure backbone for the global A.I. revolution. As co-founder and CTO of one of the world’s largest cloud providers, Ellison has orchestrated a series of landmark deals that have cemented Oracle as a critical supplier for frontier A.I. models. Last week, Oracle secured a massive $300 billion, five-year cloud infrastructure contract with OpenAI—a deal so large it sent Oracle’s stock soaring and briefly made Ellison the world’s richest person.

This follows an extraordinary quarter where Oracle said it had $455 billion in remaining performance obligations from existing contracts as of August 31, indicating strong future revenue from its cloud infrastructure business. The company inked four separate multibillion-dollar deals in just three months this year, including the $500 billion Stargate Project in partnership with OpenAI and SoftBank. While competitors like AWS and Microsoft focus on broad cloud ecosystems, Oracle has carved out a specialized niche: providing the raw compute capacity and speed that cutting-edge A.I. models demand. His strategy extends globally with a $3 billion European expansion, including a $1 billion investment in the Netherlands’ A.I. infrastructure.

Beyond infrastructure, Ellison views A.I. not merely as a technological advancement but as essential 21st-century infrastructure, advocating for centralized datasets to drive breakthroughs in healthcare, energy and national security.

Larry Ellison. Getty Images

7. Alexandr Wang

  • Chief A.I. Officer, Meta

At just 28, Alexandr Wang has become one of the most consequential figures shaping the trajectory of A.I. This summer, the founder and former CEO of Scale AI joined Meta to lead its newly formed Meta Superintelligence Labs (MSL) after Meta acquired a 49 percent stake in Scale for $14.3 billion. It marks one of the largest corporate realignments in the A.I. industry to date.

MSL will serve as an umbrella organization overseeing four separate A.I. units at Meta. In addition to A.I. products, infrastructure and long-term research, the division includes TBD Labs, a group stacked with new hires from rivals like OpenAI, Google and Anthropic that is focused on achieving superintelligence, a form of A.I. with capabilities exceeding those of humans.

Wang’s meteoric rise has been defined by his commitment to scale. Since launching Scale in 2016, he has built the company into a backbone of enterprise and government A.I. deployment. Scale’s core business is preparing and organizing the massive datasets that train A.I. systems—turning raw information like images, video, text and sensor data into structured inputs machines can learn from. Over time, the company has expanded into building infrastructure that helps organizations test, fine-tune and securely deploy A.I. models.

Wang has also become a sought-after presence on global stages like the World Economic Forum in Davos, where his views on the responsible deployment of A.I. resonate with policymakers, corporate leaders and researchers alike.

Now, with his new role at Meta, Wang has evolved from entrepreneur to strategist at the nexus of corporate power and frontier research. His influence extends beyond technical infrastructure into shaping how advanced A.I. will be built, governed and integrated into society. As the race toward artificial general intelligence accelerates, Wang’s trajectory suggests he will remain pivotal in guiding not only how A.I. develops, but how the world learns to live with it.

Alexandr Wang. Getty Images

8. Mustafa Suleyman

  • CEO, Microsoft AI

Mustafa Suleyman is transforming Microsoft’s Copilot into a personal A.I. partner. In April, the Microsoft AI CEO unveiled Copilot’s Memory feature, enabling the assistant to remember users’ preferences—favorite foods, hobbies and important dates—for more personalized interactions. Shortly after, the company introduced Deep Research, which leverages complex reasoning to gather and analyze information, now integrated into Microsoft’s A.I. agent builder for developers. Copilot Vision followed on both Windows and mobile devices, allowing the A.I. to “see” users’ screens and act on what it observes. These updates come after Suleyman consolidated leadership of Copilot in 2024 following his departure from Inflection AI to lead Microsoft’s consumer A.I. efforts. 

Beyond overseeing the integration of A.I. into consumer-facing products, Suleyman has made headlines for his commentary on the risks of linking consciousness to A.I. systems. “Simply put, my central worry is that many people will start to believe in the illusion of A.I.s as conscious entities so strongly that they’ll soon advocate for A.I. rights, model welfare and even A.I. citizenship,” said the executive in an August essay. “This development will be a dangerous turn in A.I. progress and deserves our immediate attention.”

Suleyman co-founded DeepMind (now Google DeepMind) in 2010, helping build the company into one of the most influential A.I. labs in the world before its acquisition by Google. He is widely recognized for his contributions to reinforcement learning, ethics and responsible A.I., shaping both the technology and the governance frameworks that guide it. Suleyman’s work bridges the gap between cutting-edge research and real-world applications, making him a central figure in the global effort to develop A.I. that is safe, ethical and broadly beneficial.

Mustafa Suleyman. Getty Images

9. Satya Nadella

  • CEO & Chairman, Microsoft

Satya Nadella helped Microsoft enter the A.I. race early, leading the tech giant’s initial investment in OpenAI, then a little-known research lab, in 2019. That bet proved prescient. The partnership has since grown into a $13 billion alliance, giving Microsoft exclusive access to OpenAI’s most advanced models while positioning Azure as the default infrastructure for their deployment. Nadella has expanded this strategy by backing other A.I. developers, including France’s Mistral AI, to diversify Microsoft’s capabilities further.

Beyond high-profile partnerships, Nadella has aggressively pushed to build Microsoft’s in-house A.I. systems. In March 2024, Nadella boldly hired Mustafa Suleyman, co-founder of DeepMind and Inflection AI, to lead Microsoft’s consumer A.I. business. The appointment underscored Nadella’s vision of putting A.I. at the center of Microsoft’s consumer-facing products, from Copilot in Office and Windows to Bing search. Bringing in one of the field’s most respected entrepreneurs also signaled Nadella’s ambition to stay ahead in the global A.I. race.

In August, Microsoft unveiled its first in-house models: MAI-Voice-1, a speech system, and MAI-1-preview, designed for everyday queries. Earlier this month, Microsoft told employees it would significantly expand investments in physical A.I. infrastructure to make its models more competitive. On September 16, it announced a more than $30 billion commitment over four years to grow U.K. operations and A.I. infrastructure.

Internally, Nadella has acknowledged A.I.’s disruptive potential, preparing teams for structural shifts as automation changes how work gets done. Under his leadership, Microsoft has gone from a software darling to an A.I. behemoth, where every product—from Office to Azure—is being reimagined with machine learning at the core.

Satya Nadella. Getty Images

10. Aravind Srinivas

  • Founder & CEO, Perplexity

Aravind Srinivas has built Perplexity AI into a formidable player in generative A.I., reaching a valuation of $20 billion after a series of rapid funding rounds. The company, which he founded in 2022 with Johnny Ho, Andy Konwinski and Denis Yarats, combines a traditional search index with large language models. In A.I., “perplexity” measures how well a model predicts text—the lower the perplexity, the more accurate the model. The three-year-old startup has attracted significant investment from Nvidia and high-profile backers such as Jeff Bezos, and reports of Apple’s interest in acquiring Perplexity have surfaced in recent months.
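As a quick illustration of the metric the company is named for (a generic definition, not anything specific to Perplexity AI’s products), perplexity is the exponentiated average negative log-likelihood a language model assigns to the text it is evaluated on. The short sketch below computes it from hypothetical per-token probabilities.

```python
import math

def perplexity(token_probs):
    """Perplexity of a sequence given the probability the model assigned
    to each observed token: exp(average negative log-likelihood).
    Lower values mean the model was less 'surprised' by the text."""
    neg_log_likelihoods = [-math.log(p) for p in token_probs]
    return math.exp(sum(neg_log_likelihoods) / len(neg_log_likelihoods))

# Hypothetical numbers: a model that assigns higher probabilities to the
# actual next tokens earns a lower (better) perplexity score.
print(round(perplexity([0.5, 0.4, 0.6]), 2))   # ~2.03
print(round(perplexity([0.1, 0.05, 0.2]), 2))  # 10.0
```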

“The most successful people have never been the ones with the most answers; they are always the ones with the most questions,” Srinivas tells Observer. “In a world where you can easily create fake content with A.I., accurate answers and trustworthy sources become even more essential.”

“This gets at the heart of what makes Perplexity different from other A.I. technologies and from search companies,” Perplexity’s head of communication, Jesse Dwyer, adds. “We believe that a good answer should only create more questions, and that’s why we built Perplexity as an ‘answer engine’—to help people ask more questions.”

In July, Perplexity launched its A.I.-powered browser, Comet, which integrates advanced search capabilities with task automation features. In August, Perplexity made an unsolicited $34.5 billion all-cash offer to acquire Google’s Chrome browser. Although the offer exceeded Perplexity’s own valuation, it underscored the company’s ambition to challenge established players in the browser market and expand its user base. The company has also rolled out a revenue-sharing model with publishers, allowing content creators to earn a portion of the value generated when their material is surfaced through Perplexity’s platform. Earlier this week, The Information reported that Perplexity is struggling to grow revenue through advertising and online shopping but that, despite buyers and sellers reporting frustrating user experiences, site traffic has continued to grow exponentially.

Aravind Srinivas. Courtesy of Perplexity

11. Sundar Pichai

  • CEO, Google & Alphabet Inc.

Under Sundar Pichai’s leadership since 2015, Google has established itself as a global A.I. powerhouse. Its DeepMind unit produced the groundbreaking AlphaGo, its researchers pioneered transformer-based models through the pivotal Attention Is All You Need paper, and three of last year’s Nobel Prize winners either worked at Google or trained within its ranks. Today, the company’s flagship A.I. assistant Gemini counts more than 450 million monthly users, while its A.I.-generated search summaries reach over 2 billion users.

Much of that growth stems from the global rollout of AI Mode, an agentic search feature now serving more than 100 million monthly active users in the U.S. and India. Powered by Gemini 2.5, the tool enables conversational, multi-step queries that bring A.I. reasoning directly into Search.

Some of the company’s more recent A.I. updates include adding agentic features to AI Mode, expanding access to Gemini Deep Think—a reasoning model that achieved gold-medal performance at this year’s International Mathematical Olympiad (IMO)—unveiling new image editing models and releasing the world model Genie 3. Like its Silicon Valley rivals, Google is also making progress on developing A.I.-powered smart glasses through partnerships with Warby Parker, Gentle Monster and Samsung.

To sustain these ambitions, Google plans to spend $85 billion this year, funding data centers, servers, networking and talent. This summer, Pichai brought on Varun Mohan and Douglas Chen, the CEO and co-founder of A.I. software firm Windsurf, to strengthen Google’s agentic coding tools. With Pichai’s conviction that A.I. will reshape the future of computing, the company shows no signs of slowing its pursuit of the next breakthroughs.

Sundar Pichai. Getty Images

12. Ilya Sutskever

  • Founder, Chief Scientist & CEO, Safe Superintelligence

Ilya Sutskever leads the pursuit of safe superintelligence through one of A.I.’s most well-funded startups. As co-founder, chief scientist and CEO of Safe Superintelligence Inc. (SSI), launched in June 2024, Sutskever had secured $2 billion in funding by April 2025, reaching a $32 billion valuation for a company focused exclusively on developing safe artificial general intelligence with “safety always remaining ahead” of capabilities.

Sutskever’s leadership faced immediate tests from Big Tech competition. When Meta attempted to acquire SSI earlier in 2025, he and his team declined, choosing independent development over integration into a larger tech ecosystem. Meta subsequently poached SSI co-founder Daniel Gross, who had been serving as the startup’s CEO, as part of a broader hiring spree, prompting Sutskever to take over as CEO in July. This resistance to acquisition pressure while maintaining talent and focus demonstrates his commitment to SSI’s safety-first superintelligence mission.

His technical authority stems from co-developing AlexNet, the landmark deep neural network that enabled modern advances in computer vision and has been cited more than 181,000 times, and from co-founding OpenAI in 2015. This combination of leading a $32 billion safety-focused A.I. company, successfully resisting Big Tech acquisition attempts and maintaining focus on superintelligence development positions Sutskever as a key architect of the race toward AGI, one who insists on systematic safety considerations.

Ilya Sutskever. Getty Images

13. Mira Murati

  • Founder, Thinking Machines Lab

Mira Murati, who previously served as CTO of OpenAI, had a brief stint as the ChatGPT maker’s interim CEO and held a senior engineering role at Tesla, co-founded Thinking Machines Lab in late 2024 to address key gaps in understanding foundation models and other machine learning systems. Within five months of launching, Murati secured $2 billion in seed funding from investors like Andreessen Horowitz, Nvidia and Jane Street, bringing the company’s valuation to $12 billion.

Last week, Thinking Machines Lab published the research behind one of its first projects, investigating why so many A.I. models produce inconsistent or seemingly random results. The post appeared on the company’s new blog, Connectionism, and suggested solutions to boost LLM determinism. Murati said in July that the startup plans to unveil its first product later this year, hinting that it will include an open-source component and support researchers and startups as they develop custom models.

With a team drawn from OpenAI, Character.ai, Mistral and other top A.I. companies, Thinking Machines Lab has assembled some of the industry’s most sought-after talent—Meta reportedly tried to poach a dozen employees with offers totaling more than $1 billion, but none accepted. Murati emphasizes the collective nature of scientific progress and, rather than focusing on fully autonomous A.I. systems, aims “to build multimodal systems that work with people collaboratively” and to distribute her startup’s research across the broader A.I. community.

Mira Murati. Getty Images

14. Demis Hassabis

  • Founder & CEO, Google DeepMind

As CEO and co-founder of Google DeepMind and founder of Isomorphic Labs, Demis Hassabis leads a 6,000-person team pushing Google’s A.I. efforts toward artificial general intelligence. His pioneering work in protein structure prediction earned him the 2024 Nobel Prize in Chemistry and a knighthood in the same year. Through DeepMind, Hassabis created AlphaFold, a breakthrough tool that has revolutionized biology—used by more than two million researchers to accelerate drug development and medical discoveries. His impact stems from a rare ability to turn theoretical advances into practical systems that reshape entire fields—and to deliver them at scale.

In June, Google processed nearly one quadrillion tokens, laying the computational foundation for advances across industries. The following month, DeepMind rolled out AlphaEarth Foundations for climate monitoring, launched Aeneas to help historians decode ancient Roman inscriptions and achieved “gold medal” performance in the International Mathematical Olympiad. The projects highlight A.I.’s potential to drive progress in areas as diverse as environmental science, classical studies and advanced mathematics. Meanwhile, Hassabis is applying these capabilities to drug discovery at Isomorphic Labs, with the potential to accelerate pharmaceutical development timelines dramatically.

Demis Hassabis. Getty Images

15. Reid Hoffman

Reid Hoffman may be best known as the co-founder of LinkedIn, but in recent years, he has also become one of A.I.’s most prominent backers, investing in startups spanning communication, healthcare and beyond. In January, he doubled down on his optimism with the book Superagency, co-authored with tech writer Greg Beato, arguing that A.I. will enhance rather than diminish human agency.

What strikes Hoffman most is how quickly A.I. has entered the mainstream, adopted even by people outside the tech world for everything from managing relationships to boosting workplace productivity. “I don’t recall a similar level of wide-ranging mainstream adoption just a couple of years into the Web 1.0 era,” he tells Observer. “As much as there are obvious concerns and even resistance to these new technologies, the public’s overall embrace has been eye-opening.”

In 2022, Hoffman co-founded Inflection AI, which was effectively absorbed by Microsoft last year. More recently, his focus has shifted toward healthcare. In June, he led a $12 million funding round for Sanmai Technologies, a startup pairing A.I. with ultrasound. And this year, he helped launch Manas AI, an A.I.-powered drug discovery company that raised $24.6 million in seed funding from Hoffman, General Catalyst and Greylock Partners. The venture aims to accelerate and lower the cost of developing cancer and rare disease treatments through generative computational chemistry, biology and proprietary A.I. systems.

Manas AI’s founding team reflects that ambition. Hoffman launched the company with Siddhartha Mukherjee, the Pulitzer Prize–winning cancer physician and author of The Emperor of All Maladies. Jonathan Baell, a renowned medical chemist with more than two decades of experience, serves as chief scientific officer.

Hoffman tells Observer that he’s confident widespread user adoption, not investor hype, is driving A.I.’s impact. But that doesn’t mean the technology’s popularity doesn’t come with drawbacks, as repetitive use of learning-based A.I. systems could widen the gap between leading A.I. models and emerging competitors. Such concentration is “concerning on multiple levels,” Hoffman says, noting that early design choices and development contexts will shape A.I.’s trajectory and whose interests it ultimately serves. “It could lead to a world where we see a decline in the kind of entrepreneurial dynamism that has traditionally been the source of new job creation, competition, innovation and broad-based productivity growth.”   

Reid Hoffman. Courtesy Manas AI

16. Fei-Fei Li

As CEO and co-founder of World Labs, co-director of Stanford’s Human-Centered AI Institute, and a United Nations advisor on A.I. policy, Fei-Fei Li drives both technical innovation and frameworks for responsible development. Li’s impact on A.I. began with ImageNet in 2006, which provided the foundational dataset for the computer vision revolution and accelerated deep learning adoption across industries. Her ongoing contributions have earned her the 2025 Queen Elizabeth Prize for Engineering, the 2024 VinFuture Grand Prize in AI, and the 2023 Intel Lifetime Achievement Award.

In September 2024, Li launched World Labs with $230 million in funding from Andreessen Horowitz and Nvidia’s venture arm. The company’s spatial A.I. technology generates interactive, modifiable 3D scenes from single photographs, with applications in entertainment, gaming, simulation and digital content creation.

Beyond technical innovation, Li promotes A.I. accessibility through her nonprofit AI4ALL and policy work, including testimony before Senate committees and co-authoring a report proposing A.I. safety regulations in California. Her combined roles in academia (Stanford HAI), industry (World Labs), and policy (UN advisory) give her unique leverage to advance spatial intelligence while ensuring its ethical implementation. This dual focus on frontier technology and responsible development positions Li as a leading architect of A.I.’s societal integration.

At the Ai4 conference in August, Li encouraged educators to reframe A.I. as a tool for fostering curiosity, emphasizing prompts as the start of learning exploration rather than a shortcut.

Fei-Fei Li. Getty Images

17. Marc Andreessen

  • Co-founder, Andreessen Horowitz

Marc Andreessen’s reputation as one of Silicon Valley’s most aggressive backers of A.I. ventures has only grown stronger in 2025. In July, his venture capital firm Andreessen Horowitz (a16z) invested $2 billion into Thinking Machines Lab, the new startup founded by former OpenAI CTO Mira Murati. The deal valued the company at $12 billion before it had even launched a product—underscoring Andreessen’s conviction in betting early on high-profile A.I. talent.

A16z is also in early discussions to raise a $20 billion fund dedicated to growth-stage A.I. startups, with commitments to major portfolio companies like Databricks, Elon Musk’s xAI and Mistral. According to Crunchbase, the firm was the most active post-seed investor in A.I. during 2024, completing 42 deals and outpacing nearly every other venture capital competitor.

To support its rapidly expanding portfolio, a16z launched Oxygen last October, an initiative that gives startups access to more than 20,000 Nvidia GPUs. The program not only addresses the acute shortage of computing power but also cements Andreessen’s influence over the infrastructure that fuels cutting-edge A.I. research. His broader vision is equally bold: scaling U.S.-based factories for robotics and A.I., making open-source models the global standard, and preparing for a future in which A.I. replaces nearly every job—except, as he often quips, his own.

Beyond capital deployment, Andreessen has moved directly into shaping A.I. policy. In August, he announced the formation of Leading the Future, a new political action committee into which he and other industry leaders have committed $100 million. The PAC is designed to advance pro-innovation regulatory frameworks in Washington and counterbalance growing calls for restrictive oversight.

Marc Andreessen. Getty Images

18. Julie Sweet

Julie Sweet represents the practical reality of A.I. adoption, providing real-world insights into enterprise A.I. challenges that purely technology-focused leaders cannot offer. Since becoming CEO of consulting and technology services giant Accenture in 2019, she has nearly doubled the company’s market capitalization from $90 billion to approximately $176 billion, with revenues growing from $41 billion to $65 billion. Under Sweet’s leadership, Accenture has emerged as the definitive authority on enterprise A.I. implementation, combining traditional consulting expertise with hands-on technology deployment capabilities. Unlike strategy-focused firms, Accenture takes on operations and services on behalf of clients, deploying employees to provide actual implementation rather than just recommendations.

Earlier this month, Sweet announced that Accenture is training all 700,000+ employees in agentic artificial intelligence, representing the most extensive corporate A.I. training program in history and reflecting Sweet’s understanding that enterprise A.I. success requires human-A.I. collaboration at unprecedented scale. Her candid acknowledgement that A.I. adoption at most large companies is “slower and harder than hoped” provides crucial guidance for business leaders navigating similar transformations. Sweet regularly challenges the A.I. hype cycle, noting that while CEOs are obsessed with A.I., implementing it to save money and boost productivity remains difficult. Accenture’s study of 3,000 global executives found that 85 percent of C-suites plan to increase A.I. spending this year, yet many struggle with actual deployment.

Accenture has booked $1.8 billion in A.I. revenue this fiscal year and delivered over 2,000 generative A.I. projects. Sweet’s prescient $3 billion investment in A.I. capabilities before ChatGPT’s mainstream debut has positioned the company as the go-to partner for enterprises struggling with A.I. implementation complexity.

Her response to the existential question “Who needs Accenture in the age of AI?” has been characteristically bold: rather than defending traditional consulting models, Sweet is reinventing Accenture as an A.I.-native organization that can help clients navigate the same transformation challenges the company faces internally. Her philosophy that “A.I. is only a technology” and that “the value comes from reinvention of how we work” reflects a mature understanding of enterprise transformation that goes beyond technological capability to organizational change management.

Julie Sweet. Getty Images

19. Joelle Pineau

Joelle Pineau’s move from leading Meta’s FAIR research division to shaping Cohere’s A.I. strategy marks a pivotal moment in enterprise A.I. After nearly eight years at Meta, where she championed open-source projects like Llama and PyTorch, Pineau stepped down in April. Four months later, Cohere closed a $500 million round led by Radical Ventures and Inovia Capital, with backing from AMD, Nvidia, PSP Investments and Salesforce Ventures, lifting its valuation to $6.8 billion. The funding coincided with Pineau’s appointment as chief A.I. officer.

Pineau disputes the notion that A.I. is an opaque “black box.” While complex, she argues, enterprise models can often be traced more clearly than human reasoning. “It’s not impossible to understand how a prompt leads to an output,” she tells Observer.

Her focus at Cohere diverges from rivals chasing artificial general intelligence. “The approach Cohere is taking is more focused than players that are chasing AGI or general superintelligence, and that gives us a leg up in the enterprise market,” she explains. By prioritizing privacy, security and compatibility with sensitive enterprise data, Cohere is positioning itself in critical sectors like finance, healthcare, telecom and government.

Pineau also applies computer security principles to A.I. development, contending that “open protocols are in fact more secure, because flaws are discovered much faster and properties are better understood.” This philosophy underpins Cohere’s open science model, even as she stresses that “enterprises can’t afford to have data leak—whether it’s internal proprietary data or sensitive customer data.”

The recent boom in A.I.-assisted software development has particularly impressed her. LLMs, she notes, now accelerate code generation, bug fixes and developer support while enabling “almost anyone, even with very little computer science training, to implement their ideas quickly.” Such advances also open the door to A.I. systems that can self-improve. Her vision emphasizes building systems that are “traceable, controllable and customizable,” with rigorous evaluations that consider both technical performance and broader impacts like bias, transparency and safety.

Pineau joins Cohere at a time of rapid expansion. Its North platform, built for secure A.I. agents, is now complemented by tailored enterprise models and partnerships with Oracle, Dell, LG and RBC. Her arrival signals Cohere’s intent to fuse ethical research with scalable impact in the enterprise A.I. market.

Joelle Pineau. Photo by Kimberly Wang, Courtesy of Cohere

20. Geoffrey Hinton

  • Professor Emeritus, University of Toronto & 2024 Nobel Laureate in Physics

Geoffrey Hinton continues to draw attention to A.I.’s risks. In July, the “Godfather of A.I.” warned that A.I. systems might develop a language humans cannot understand, obscuring their intentions and making oversight more difficult. He has criticized many tech leaders for downplaying A.I.’s dangers, noting that only a rare few take safety seriously. Earlier this summer, Hinton reflected on his career, expressing regret for not recognizing A.I.’s potential harms sooner and warning that unchecked development could surpass human control. While the academic previously estimated that A.I. had a 10 percent chance of one day wiping out humanity, a lack of progress on regulation has since caused this prediction to jump to 20 percent, he said in July.

In October 2024, Hinton won the Nobel Prize in Physics along with John Hopfield for their foundational work in deep learning and subsequently used some of the prize winnings to establish a new award for machine learning researchers. Though he left Google in 2023, he continues to influence the field through his role at the Vector Institute, the Toronto A.I. research institute he co-founded. Even now, his dual legacy as technical architect and moral conscience makes him one of the most authoritative and urgent voices in shaping how society prepares for the next stage of A.I.

Geoffrey Hinton. Johnny Guatto/University of Toronto

21. Lisa Su

  • CEO, AMD

Lisa Su is transforming AMD from a niche chipmaker into a crucial player in the A.I. era, as Big Tech races to secure powerful semiconductors to fuel energy-hungry data centers.  A Taiwanese-born American engineer with a Ph.D. in electrical engineering from MIT, Su became AMD’s CEO in 2014, when the company was struggling to stay afloat. By focusing on high-performance computing and doubling down on semiconductor innovation, she orchestrated one of the most remarkable turnarounds in tech history, positioning AMD as a rival to giants like Nvidia and Intel. 

Under Su’s leadership, AMD has become a critical supplier of chips powering cloud computing, gaming and, increasingly, A.I. Su has predicted that global demand for A.I. chips could rapidly increase to more than $500 billion in a few years. At AMD’s Advancing A.I. event in June, Su introduced the Instinct MI350 series, a family of GPUs that delivers four times the computing power and 35 times better inference performance than previous generations. She also previewed Helios, a rack-scale A.I. system designed for large-scale training and deployment, set to launch in 2026. Looking further ahead, AMD executives said at a recent investor conference that the company’s next-generation chip, MI450, will outperform any rival hardware, including Nvidia’s Rubin Ultra.

Su’s vision extends beyond hardware. She has championed ROCm, AMD’s open-source software stack, which provides developers with a flexible, efficient platform to build advanced A.I. applications. Su has been widely recognized as one of the most influential executives in technology and is the first woman to receive the IEEE Robert N. Noyce Medal, awarded for outstanding contributions to the microelectronics industry.

Lisa Su. Getty Images

22. Vinod Khosla

  • Founder, Khosla Ventures

Vinod Khosla has been one of Silicon Valley’s most aggressive backers of A.I., channeling hundreds of millions of dollars into startups across sectors. His firm has invested in companies like Vivodyne, which uses A.I. to accelerate drug discovery; Physical Intelligence, which builds foundation models for robots; and Basis, an agentic A.I. platform. Beyond venture capital, Khosla has shaped climate-tech innovation as a board member at Bill Gates’ Breakthrough Energy Ventures, where he has overseen investments in A.I.-powered clean energy leaders like QuantumScape, Commonwealth Fusion Systems and Koloma.

Khosla was also one of the earliest and most vocal backers of OpenAI. His conviction in A.I.’s potential is unmatched: he has argued it is entirely rational to invest trillions in the technology, predicting that A.I. could replace 80 percent of jobs within five years and make traditional college degrees obsolete. For Khosla, the rise of A.I. is not just a technological revolution but a societal one, urging young people to approach careers flexibly as industries are transformed.

Vinod Khosla. Getty Images

23. Stéphane Bancel

Stéphane Bancel represents one of the most radical artificial intelligence integrations in pharmaceutical development, transforming Moderna from a biotech company into a laboratory for A.I.-driven organizational evolution. Since partnering with OpenAI in 2023, Bancel has deployed over 3,000 custom GPTs across Moderna’s operations, with employees averaging 120 weekly ChatGPT Enterprise conversations. His ambitious mandate: every employee should use ChatGPT at least 20 times daily.

In May, Bancel merged Moderna’s HR and Technology departments under a single leader, reflecting his belief that A.I. fundamentally changes how work is structured and performed. This organizational revolution positions Moderna as a living experiment in A.I.-native corporate structure, with specialized A.I. tools like “Dose ID GPT” automating clinical trial dose optimization and the legal team achieving 100 percent ChatGPT Enterprise adoption. Bancel claims that while launching 15 new mRNA products in five years would traditionally require 100,000 employees, Moderna can achieve this with its current 6,000 staff through A.I. leverage. His bold prediction extends even further: with A.I. assistance, scientists will “understand most diseases” within 3-5 years, representing “a first for humans on the planet.”

However, this vision comes with difficult realities. In July, Bancel announced plans to reduce Moderna’s workforce by 10 percent (approximately 580 employees) by the end of 2025 while targeting $1 billion in cost reductions. Despite A.I. investments, Moderna’s 2025 revenue guidance of $1.5-2.5 billion represents a significant decline from 2024’s $3.0-3.1 billion, highlighting the complex challenge of balancing A.I. transformation with commercial performance. Bancel’s willingness to reduce the traditional workforce while massively scaling A.I. deployment demonstrates the potential and the challenges that define the current A.I. era. His approach provides a blueprint for how artificial intelligence can transform entire industries, not just individual functions.

“We believe very profoundly at Moderna that ChatGPT and what OpenAI is doing is going to change the world,” Bancel has said. “We’re looking at every business process and thinking about how to redesign them with AI.” While other CEOs discuss A.I. adoption, Bancel implements it at unprecedented depth.

Stéphane Bancel. Getty Images

24. Eric Schmidt

  • Founding Partner & General Partner, Innovation Endeavors

Through his venture capital firm Innovation Endeavors and family office Hillspire, former Google CEO Eric Schmidt is channeling billions into frontier A.I. projects, from space automation to public-interest research. His recent moves include closing Innovation Endeavors’ $630 million Fund V and backing startups like Bauplan Labs and IntuigenceAI, underscoring his appetite for disruptive technologies.

Schmidt and his wife, Wendy, also launched Schmidt Sciences in 2024, a nonprofit dedicated to applying science and technology to global challenges. In February, the institution announced a $10 million initiative to support A.I. safety research, awarding early grants to figures like Yoshua Bengio and OpenAI board member Zico Kolter.

Schmidt has become an increasingly vocal commentator on A.I.’s potential. In a May 2025 TED talk, he argued that A.I. is “wildly underhyped,” capable of breakthroughs in complex autonomous tasks. Yet he tempers his optimism with caution, warning of the “dangerous point” when systems begin self-improving. Citing the proliferation risks highlighted by China’s DeepSeek release, Schmidt has urged U.S. lawmakers to shore up domestic energy infrastructure to ensure the country stays competitive.

Eric Schmidt. Getty Images

25. Doug McMillon

  • CEO, Walmart

For over a decade, Walmart CEO Doug McMillon has guided the retail giant through an A.I. transformation that touches nearly every corner of its business. Under his leadership, the company has prepared more than 850 million data points for A.I. training, rolled out its first A.I.-powered merchant assistant, “Wally,” and added senior roles dedicated to accelerating adoption—including an EVP of A.I. Acceleration, Product and Design, and an EVP of A.I. Platforms.

This year, Walmart unveiled a suite of new agents designed for stakeholders across its ecosystem. “Marty” serves sellers, “Sparky” supports customers, and a “Super Agent” provides associates with a single point of entry to all A.I. tools. The company also introduced a “Developer Agent” to streamline software launches. Together, these systems aim to boost efficiency while standardizing how Walmart employees, partners and shoppers interact with the company’s digital infrastructure.

Walmart is also applying A.I. to its vast physical footprint. Using digital twins, the retailer can optimize how trucks are packed and “detect, diagnose and remediate issues” before they disrupt operations. The impact has been tangible: in 2024, Walmart reported a 30 percent drop in global emergency alerts and a 19 percent reduction in refrigeration maintenance spending across its U.S. stores thanks to digital twin technology.

McMillon has been vocal about Walmart’s shift from a retail company to a technology leader. “We are really good when it comes to technology these days,” he said earlier this year, underscoring his belief that A.I. is central to Walmart’s long-term competitiveness.

Earlier this month, Walmart announced a partnership with OpenAI to design an A.I. certification program for its workforce, slated to debut in 2026. The initiative signals Walmart’s intent not only to adopt cutting-edge tools, but also to upskill its employees at scale, ensuring its 2.1 million associates worldwide are prepared for the next wave of A.I.-driven retail.

Doug McMillon. Courtesy of Walmart

26. Teresa Heitsenrether

  • Chief Data & Analytics Officer, JPMorganChase

Teresa Heitsenrether is driving A.I. adoption at one of the world’s largest banks. As JPMorganChase’s chief data and analytics officer, she sets the firm’s data strategy and governance standards and oversees the development and rollout of A.I. products. Over the past year, she led the launch of the bank’s proprietary LLM Suite, which she credits with “driving a cultural transformation” across the company. In February, she told The Wall Street Journal that more than 200,000 employees now use the suite “pretty actively every day,” saving several hours of repetitive work each week. The bank has also deployed generative A.I. in its consumer call centers, giving agents faster, more accurate answers to questions on loans, accounts and credit cards.

Today, JPMorganChase has more than 400 A.I. use cases in production spanning asset management, private banking and consumer services. A member of the firm’s Operating Committee, Heitsenrether reports directly to CEO Jamie Dimon, who has acknowledged A.I. will slow hiring and eliminate some jobs but insists the firm must embrace the technology to gain efficiencies, noting that “attrition is your friend.” In April, Heitsenrether reportedly told The Economic Times the bank would continue to spend $17 billion annually on technology, supported by a central team of more than 2,000 A.I. and machine learning experts.

Heitsenrether joined JPMorganChase in 2004 and previously served as global head of securities services, where she oversaw a 22 percent revenue increase and grew assets under custody by nearly $9 trillion. In July, she told Forbes that “A.I. is 100 percent a business issue” and must be integrated into every line of business. Her leadership has earned her recognition as one of American Banker’s most powerful women in finance and one of Barron’s 100 most influential women in the U.S.

Teresa Heitsenrether. Courtesy of JPMorganChase

27. Peter Thiel

  • Managing Partner, Founders Fund

Peter Thiel’s empire shows no sign of slowing, largely thanks to his high-stakes bets on A.I. This year, his venture capital firm Founders Fund closed a $4.6 billion growth fund, underscoring renewed confidence in frontier innovation spanning A.I., defense and advanced manufacturing. One of its boldest moves was a $2.5 billion round in Anduril, which develops autonomous systems for military use. The round pushed the defense-tech company’s valuation to $31 billion.

Thiel’s fingerprints are on some of the most influential players in the field. Founders Fund was an early backer of OpenAI and Scale AI. This year, Founders Fund added another major stake, leading a $400 million round for Cognition AI—the San Francisco startup behind Devin, an A.I. software engineer now in use at Goldman Sachs—doubling its valuation to $10.2 billion.

Thiel co-founded Palantir Technologies, the controversial data analytics company that creates sophisticated software platforms to help government agencies and corporations operationalize vast amounts of disparate data. Palantir provides what former employees describe as “extravagant plumbing with data,” and positions itself as selling not just software but “the idea of a seamless, almost magical solution to complex problems” to governments and Fortune 500 companies. In August, the company reported it had surpassed $1 billion in quarterly revenue for the first time.

Despite his sweeping portfolio, Thiel maintains a contrarian pragmatism about A.I.’s impact. On the Interesting Times podcast, he described the technology as “more than a nothing burger” but “less than the total transformation of our society.” For Thiel, A.I. is not an end in itself but a bulwark against what he calls technological stagnation. Without breakthroughs in areas like space travel or cures for dementia, he warns, innovation risks sputtering out.

Peter Thiel. Getty Images

28. Masayoshi Son

  • Founder & CEO, SoftBank

Masayoshi Son has leveraged SoftBank Group and its Vision Fund to shape the global A.I. landscape, investing in over 400 companies, including an early bet on Chinese tech giant Alibaba, and earning the nickname “Mr. Ten Times” for his colossal funding strategies. The Vision Fund reported robust earnings for the second quarter of 2025, posting over $2.87 billion in profits fueled largely by A.I.-focused investments.

At the center of the Vision Fund’s A.I. portfolio is Arm, which is now developing its own A.I. chips. SoftBank has gone on an acquisition and investment spree, purchasing Graphcore, paying $6.5 billion for Ampere Computing, and acquiring a $2 billion stake in Intel. Son has also quietly rebuilt his relationship with Nvidia, amassing a $4.8 billion stake by June. 

Son’s ambitions extend beyond backing individual companies. In early 2025, he announced the $500 billion Stargate Project, a U.S.-based A.I. infrastructure initiative in partnership with OpenAI and Oracle, with an initial $100 billion already committed to scaling compute capacity and advanced research. Around the same time, SoftBank unveiled plans for Project Crystal Land, a proposed $1 trillion A.I.-robotics hub in Arizona designed to anchor U.S. leadership in robotics and semiconductor manufacturing.

With bets across chips, data centers and language models, Son’s portfolio reflects his belief that control over A.I.’s entire value chain is key to long-term leadership. Through these efforts, SoftBank is positioning itself as a global architect of the next digital era, spanning semiconductors, high-performance computing, robotics and regional technological sovereignty.

Masayoshi Son. Getty Images

29. Jonathan Ross

  • Founder & CEO, Groq

Jonathan Ross is positioning Groq as a formidable challenger in the A.I. chip market with its language processing units (LPUs), designed to dramatically boost compute speed while cutting costs. His philosophy is simple: the market needs both speed and accuracy. “While GPUs could train large models, they were expensive and slow to run,” Ross tells Observer. Groq’s LPUs, by contrast, can run trillion-parameter models quickly and cost-effectively. For Ross, the challenge is existential: “You can only run A.I. if you have enough inference compute. It’s like cars; you need oil to drive.” He warns that demand for inference compute is outpacing global manufacturing capacity—a trend that could concentrate A.I.’s benefits among a few.

Over the past six months, Ross has aggressively expanded Groq’s footprint through high-profile partnerships. In February, he secured a $1.5 billion commitment from Saudi Arabia to expand the kingdom’s A.I. infrastructure, including Groq’s Dammam data center, a deal expected to generate $500 million in revenue this year. In April, Groq partnered with Meta to give developers the fastest, most cost-effective way to run Llama 4 models. May brought an exclusive partnership with Bell Canada to power Canada’s largest sovereign A.I. infrastructure project, alongside new data centers in Houston and Dallas. In July, Groq opened its first European data center in Helsinki, Finland, in partnership with Equinix, and plans to announce its first Asia-Pacific location later this year. In August, Ross unveiled a collaboration with OpenAI to provide developers instant access to openly licensed models at scale.

Despite recently cutting 2025 revenue projections from $2 billion to $500 million, Groq closed a $750 million fundraising round this week, at a $6.9 billion valuation. Ross compares the industry to oil exploration: “If you measure by attempts, most A.I. projects fail. If you measure by spend, most yield positive returns. Some A.I. ‘strikes’ have been very profitable—more than the total investment to date.”

For Ross, A.I.’s accelerating pace is undeniable. In one client meeting, he requested a new feature from his engineers. Hours later, A.I. had built and deployed it. “The next phase shift will be when that can happen before the meeting ends,” he says.

Jonathan Ross. Courtesy of Groq

30. Tareq Amin

Tareq Amin is at the center of Saudi Arabia’s push to build a national A.I. infrastructure. As CEO of Humain, the kingdom’s new state-backed A.I. venture, Amin oversees a major rollout that includes a deal with Nvidia for 18,000 GB300 Blackwell chips, part of a projected $600 billion A.I. investment opportunity. “People think A.I. is about apps or chatbots. The real game is infrastructure: compute, data, energy and connectivity,” Amin tells Observer. “Without solving those, you’re layering gimmicks on top of fragile systems.”

In May, Humain partnered with Cisco to co-design A.I. systems built for scale, security and long-term economic growth. Later in the summer, the company helped bring OpenAI’s new open models to market through a partnership with Groq, offering real-time inference with a 128,000-token context window. Amin says his experience in telecoms, which includes building Rakuten’s Open RAN network and briefly leading Aramco Digital, taught him that “infrastructure is everything, and impossible is nothing.”

At Humain, Amin guides an A.I. buildout that integrates high-performance compute, public-private partnerships and local access, laying the groundwork for Saudi Arabia’s A.I. ambitions on a regional and international scale. But his mission extends beyond the kingdom; Amin wants to ensure that countries like Saudi Arabia, and the Global South more broadly, develop the infrastructure to control their own A.I. instead of depending on others for access. “When we talk about A.I. development, what keeps me up at night is who gets left out,” he says. “If A.I. remains concentrated in a few geographies or companies, then inequality will grow, and nations will lose sovereignty.” 

Tareq Amin. Courtesy of Humain

31-33. Nick Frosst, Aidan Gomez & Ivan Zhang

  • Founders, Cohere

Following a $500 million funding round in August (led by Radical Ventures and Inovia Capital), Nick Frosst, Aidan Gomez and Ivan Zhang’s Cohere reached a $6.8 billion valuation, establishing the Toronto-based startup as one of the world’s most promising A.I. companies. Since December 2024, Cohere has released multiple flagship models demonstrating technical superiority. Rerank 3.5, released in December, delivers 26.4 percent improvement on cross-lingual search and 23.4 percent better performance than Hybrid Search on financial services datasets across 100+ languages. Secure A.I. agents platform Cohere North launched in January, outperforming Microsoft Copilot and Google Vertex AI Agent Builder across multiple benchmarks. Command A, released in March, operates on just 2 GPUs (versus competitors requiring 32) and achieves 1.75x faster token generation than GPT-4o. And with April’s release of Embed 4, Cohere introduced a multimodal embedding model enabling search across complex PDF reports and presentations with text, images, tables, graphs and diagrams across 100+ languages.

An efficiency-first approach reflects the founders’ contrarian philosophy. “The industry got obsessed with throwing more money and chips, leading to better outcomes, but we’ve proven that wrong repeatedly,” Zhang tells Observer. Frosst also advocates for A.I. efficiency over scale, consistently arguing against the “bigger is better” approach. This position proved prescient with recent market shifts toward more efficient models. 

Under their leadership, Cohere more than doubled its annualized revenue from $35 million in March to over $100 million by May this year, thanks mainly to its partnerships with governments, enterprises and specialized sectors.

“We knew enterprise adoption would be slower than consumer, but we’re getting to the point where people realize this isn’t just another productivity tool,” Zhang says. “It’s not experiential anymore; it’s becoming real infrastructure.”

Government partnerships include operational transformation and A.I. safety research with the Canadian AI Safety Institute (CAISI) and the U.K.’s AI Security Institute (AISI), plus Second Front for secure U.S. and allied government A.I. via the 2F Game Warden platform. The startup also recently agreed to examine how its technology can boost public service operations across Canada’s federal government. 

Major enterprise deployments include Oracle, which powers 200+ A.I. features across NetSuite for tens of thousands of enterprises; SAP, for comprehensive global A.I. integration; Dell, as the first on-premises North provider; and RBC, Canada’s largest bank, which is co-developing North for Banking. Regional expansion covers Japan through Fujitsu’s exclusive partnership with the co-developed Takane LLM, South Korea via LG CNS, the Middle East through stc Group’s $83 billion telecom partnership, and Canada with Bell’s sovereign A.I. Fabric solution.

As Zhang explains, Cohere’s enterprise focus stems from simple math. “Enterprises will pay for A.I. that solves their actual business problems. You can’t just take a general consumer model and expect it to work in a regulated environment,” he says. Rather than chasing “flashiest demos,” the team built “infrastructure that works when you need to deploy at scale with real stakes.”

Aidan Gomez, Ivan Zhang & Nick Frosst. Courtesy of Cohere

34. Cristiano Amon

During his keynote at COMPUTEX 2025, Cristiano Amon described the integration of Snapdragon, Qualcomm’s line of chips for personal devices, into Microsoft’s Copilot+ PCs as one of the company’s most impactful launches in four decades, reinventing the Windows ecosystem and previewing deeper A.I. integration ahead. “The mobile industry is being redefined and a new generation of personal A.I. devices is emerging,” Amon tells Observer. But the most exciting development, the CEO says, is that “With A.I., a computer can understand human language, can understand what we see and can understand what we hear, and that’s fundamentally changing how we use our devices. It’s changing how these devices are built, the amount of computing that goes into these devices, and how we interact with these applications.”

In May, Qualcomm announced a strategic return to the data center market, unveiling custom data center CPUs built to interface directly with Nvidia’s GPUs, signaling a deeper commitment to A.I. infrastructure. In June, Qualcomm acquired Alphawave Semi, further accelerating the company’s expansion into data centers, which Amon says “represents a new growth opportunity” for Qualcomm and is “a logical extension” of the company’s diversification strategy. And yet, that same month, it doubled down on its commitment to dominating edge A.I., launching an artificial intelligence research and development center in Vietnam specifically focused on localized A.I. solutions for personal computing devices and automotive and IoT applications. (Qualcomm partnered with BMW to develop the recently announced Snapdragon Ride Pilot Automated Driving System, which will debut on the iX3 next year.)

These moves followed strong third-quarter earnings: a 25 percent rise in profits and 10 percent boost in revenue, suggesting Amon’s A.I. strategy of putting the technology directly into the hands of consumers is paying off. Qualcomm earned this year’s Edge AI and Vision Product of the Year Award in the Edge A.I. Processors category for the on-device performance of the Snapdragon 8 Elite platform. Most recently, the company teased its latest flagship chipset for mobile, the Snapdragon 8 Elite Gen 5, which will likely power the next generation of flagship Android phones.

Cristiano Amon. Courtesy of Qualcomm

35. Simon Kohl

  • Founder & CEO, Latent Labs

From Nobel Prize-winning protein prediction to breakthrough drug design, Simon Kohl has positioned himself at the forefront of A.I.’s transformation of biology. He co-led Google DeepMind’s protein design team and was a senior research scientist on DeepMind’s AlphaFold2, the project that earned Demis Hassabis and John Jumper the 2024 Nobel Prize in Chemistry. “Having co-developed AlphaFold2, I’ve seen firsthand how A.I. can solve incredibly complex problems,” Kohl tells Observer.

Before leaving DeepMind, Kohl built AlphaFold2’s widely used uncertainty prediction system “pLDDT” and set up DeepMind’s wet lab at London’s Francis Crick Institute. When Kohl realized it was possible to “move beyond just predicting biological structures to actually designing them from scratch,” he decided to found Latent Labs. “We were at an inflection point where generative A.I. could make biology programmable.” In 2024, Latent Labs was one of the early-stage startups to receive support from AWS through its Generative A.I. Accelerator.

In February, Latent Labs raised $50 million in venture capital funding. Angel investors include Google Chief Scientist Jeff Dean, Cohere founder Aidan Gomez and ElevenLabs founder Mati Staniszewski.

In July, the company launched LatentX, achieving 91 percent to 100 percent hit rates for macrocycles and 10 percent to 64 percent for mini-binders across seven therapeutic targets in wet lab experiments. Unlike traditional methods that predict existing structures, LatentX simultaneously designs the molecular sequence and 3D structure of proteins in real-time, following atomic-level physical rules to create entirely novel molecules. “We’re not just understanding nature anymore, we’re becoming capable of authoring it with precision,” he says. “Scientists achieve in 30 candidates what previously required testing millions, turning months of experiments into seconds of computation.” Traditional drug discovery hit rates are typically below 1 percent, Kohl explains.

Bringing experience from DeepMind, Microsoft, Google, Stability AI, Exscientia, Mammoth Bio, Altos Labs and Zymergen, Kohl’s team is prioritizing oncology, autoimmune diseases and rare genetic disorders—areas where conventional drug discovery faces significant challenges. The company is particularly focused on macrocycles, which combine the precision of biologics with the oral deliverability of small molecules. In direct laboratory comparisons, Latent Labs has outperformed results from major technology companies and leading academic institutions, leveraging its team’s AlphaFold experience combined with enterprise-grade platform engineering.

Rather than developing proprietary medicines, Latent Labs licenses its technology through a web-based platform, making advanced A.I. accessible to academic institutions, biotech startups and pharmaceutical companies. While making the technology broadly accessible, Latent Labs maintains strict biosafety protocols, actively engages with regulators on dual-use concerns, and validates all computational designs in its physical laboratory to ensure real-world safety.

“We envision a future where effective therapeutics can be designed entirely in a computer, much like how space missions or semiconductors are designed today,” he says. Kohl acknowledges the growing complexity of biological systems and the need for equally sophisticated safety frameworks as these powerful generative tools become more widespread. “Biology remains fundamentally messy,” he says. “A.I. currently amplifies our capabilities, but it still requires deep scientific intuition to ask the right questions and interpret what the models tell us.”

Simon Kohl. Photo by JL Creative, Courtesy of Latent Labs

36. Yoshua Bengio

  • Professor, Université de Montréal; Co-President & Scientific Director, LawZero; Founder & Scientific Advisor, Mila – Quebec AI Institute

Yoshua Bengio, a pioneer in A.I. research, is now a leading voice advocating for the technology’s safe adoption. In June, Bengio launched the nonprofit LawZero to address a need for safety guardrails amid A.I.’s rapid development. The nonprofit has secured more than $35 million from the likes of former Google CEO Eric Schmidt, Skype co-founder Jaan Tallinn and the Gates Foundation.

Bengio’s pivot to A.I. safety picked up steam after OpenAI’s o1 model, released in September 2024, demonstrated advanced reasoning capabilities—a breakthrough that revealed both immense potential and concerning autonomous behaviors. “Evidence of A.I. acting against human directives to achieve an end goal or ensure its own survival solidifies the urgent need for action,” Bengio tells Observer, noting that the superior reasoning capabilities of A.I. have resulted in “attempts to copy their code to escape replacement or hacking games to win.”

In February, Bengio led the first International Report on A.I. Safety, establishing global frameworks for A.I. risk assessment. At the World Summit AI in April, he sounded alarms on the risks of agentic A.I. systems. Beyond existential risks, Bengio warns that advanced A.I. could enable “concentration of power that is in direct contradiction with the principles of democracy.” His roles as founder and scientific advisor of Canada’s Mila (Quebec Artificial Intelligence Institute) and professor at the University of Montreal provide him with research infrastructure and academic credibility to advance safety standards. Through this work, he helped establish Montreal as an A.I. hub that, as he tells Observer, “fosters open collaboration and prioritizes A.I. for social issues like healthcare and climate change” as an alternative to Silicon Valley’s profit-driven approach.

Bengio’s mix of early A.I. breakthroughs and current safety advocacy gives him influence over both technology and regulation as governments wrestle with A.I. governance. Backed by LawZero’s funds and his academic platforms, he advocates for an alternative “Scientist AI” approach focused on understanding rather than autonomous action. “We are collectively racing ahead towards A.I. models achieving human-level competence without knowing how to align and control them reliably,” he says. “If nothing significant is done, the current trajectory could lead to the creation of superintelligent A.I. agents that compete with humans in ways that could compromise our future.”

Yoshua Bengio. Getty Images

37. Timnit Gebru

  • Founder & Executive Director, The Distributed AI Research Institute (DAIR)

As founder of the Distributed Artificial Intelligence Research Institute (DAIR), Timnit Gebru tirelessly advocates to keep ethics at the forefront of the A.I. conversation. Gebru launched DAIR, a community-based A.I. research institute “free from big tech’s pervasive influence,” in 2021 with $3.7 million from notable backers like the Ford Foundation, MacArthur Foundation and Open Society Foundations.

Gebru is a vocal critic of artificial general intelligence, arguing its guiding framework is rooted in 20th-century Anglo-American eugenics. She contends that many of the same discriminatory attitudes—racism, xenophobia, classism, ableism and sexism—persist in today’s AGI movement, leading to systems that harm marginalized groups and centralize power while using the language of “safety” and “benefiting humanity” to avoid accountability.

She also examines A.I.’s role in geopolitics. In The New York Times last December, she argued that much technological progress is tied to government and military funding, not community needs. In an April essay for Scientific American, she warned that replacing federal workers with chatbots would be “a dystopian nightmare,” citing generative A.I.’s propensity to hallucinate even in basic tasks like transcription.

Her ongoing influence on A.I. ethics has earned her accolades such as the 2025 Miles Conrad Award, a lifetime achievement honor from the National Information Standards Organization. Gebru is currently working on a memoir and manifesto, The View from Somewhere, scheduled for publication in fall 2026.

Timnit Gebru. Getty Images

38-40. Timothée Lacroix, Guillaume Lample & Arthur Mensch

  • Founders, Mistral AI

Guillaume Lample, Timothée Lacroix and Arthur Mensch became France’s first A.I. billionaires this month, after a Series C funding round valued their startup, Mistral AI, at $13.7 billion. This follows a $640 million Series B in June 2024, which made Mistral Europe’s highest-valued A.I. company, at $6 billion. Founded in 2023, Mistral AI is widely regarded as one of France’s most promising tech startups and the only European firm positioned to compete with OpenAI.

Earlier in 2025, the startup released Magistral, its first reasoning-focused model trained with reinforcement learning. It also improved on models like Codestral for enhanced code generation and unveiled new products, including Mistral OCR, a document-understanding model, and a mobile app for its chatbot Le Chat. As part of a joint venture with Nvidia, MGX and Bpifrance, Mistral is building a 1.4 gigawatt A.I. data center campus outside Paris, aiming to be Europe’s largest A.I. infrastructure cluster. In August, The Information reported that Apple was considering acquiring Mistral in response to pressure from investors and analysts “increasingly agitating for the company to do something big if it wants a chance to remain relevant.”

Earlier this month, the company released two new features to all users on Mistral’s free tier. This is in direct contrast to competitors like OpenAI, which limit premium features to paid subscriptions. 

Guillaume Lample, Timothée Lacroix & Arthur Mensch. Observer

41. François Chollet

François Chollet, one of A.I.’s most respected researchers and skeptics, has launched a new chapter in his career with Ndea, a research lab dedicated to advancing artificial general intelligence (AGI). Co-founded in January with Zapier co-founder Mike Knoop, Ndea blends frontier research with lessons from large-scale software engineering, aiming to rethink how AGI should be pursued. “The most pervasive and wrongheaded assumption today is that we can achieve human-level general intelligence by simply scaling up our current deep learning models, particularly large language models,” Chollet tells Observer. “My approach is to be very transparent about the limitations of our current technology and what we need to go beyond them. I’m not trying to sell a dream of imminent AGI.”

Chollet envisions A.I. making scientific discoveries the way humans do, and sees Ndea as the vehicle to unlock this potential. “The most brilliant minds are not attracted to turning a crank on a bigger machine. They are attracted to deep, fundamental scientific questions,” he says. “This attracts a different kind of talent, and that’s what we’ve seen at Ndea so far.”

Chollet is best known as the creator of Keras, the widely adopted deep learning library used at companies from YouTube to Spotify, as well as ARC-AGI, a benchmark regarded as one of the most important tests of genuine general intelligence in machines. His critiques of hype in the field, including his stance that current models fall short of true reasoning, have positioned him as both a visionary and contrarian voice in A.I. debates. “I’m worried about the effect that generative A.I. is having on human culture and communications. It is massively polluting our information ecosystem with synthetic content. This systematically devalues genuine human knowledge and expression,” he says. “The more we rely on generative A.I. for cultural creation, the more we will tend towards a stagnant state of perpetual remixing. Slop remixed from slop remixed from slop. Leaving us to sift through a digital landfill for a single original idea.”

Chollet and Knoop are also co-founders of the ARC Prize, a nonprofit focused on AGI development. Chollet authored the influential paper On the Measure of Intelligence.

François Chollet. Courtesy François Chollet

42. Marco Argenti

  • CIO, Goldman Sachs

As chief information officer of Goldman Sachs, Marco Argenti leads the entire firm’s A.I. strategy, including the launch of the A.I. coding agent Devin, developed by startup Cognition Labs. As the first major bank to pilot agentic A.I., Goldman maintains strict controls—Devin’s code goes through the same rigorous human review process as any developer’s work. Argenti emphasizes that safety and risk management remain paramount even as the firm pursues automation. “Giving agency to an A.I. without understanding what the A.I. produces…is a recipe for failure,” Argenti tells Observer.

Goldman’s broader A.I. initiatives include a developer copilot that delivered productivity gains of up to 20 percent within its first year; the in-house natural language interface dubbed the GS AI Assistant, now available to all employees; translation tool Translate AI; and Banker Copilot, currently in pilot phase. Argenti, who oversees more than 12,000 Goldman engineers, says the internal response has been enthusiasm rather than resistance. The GS AI Assistant already processes over one million monthly prompts across bankers, traders, asset managers and wealth managers. Argenti deploys Goldman’s proprietary “wrapped shield of guardrails” to ensure compliance and eliminate hallucinations, but warns that this transformation still requires fundamental skill shifts. “Not only managers but also those who used to be individual contributors will have to develop three managerial skills at a minimum: the ability to describe, delegate and supervise.” Supervision, he reiterates, is the most critical. “Where does the model end, and where do applications start? Depending on where you draw the line, the market for applications and software can look very different.”

Despite Goldman’s aggressive adoption of A.I. agents, Argenti maintains a people-first philosophy. Early-career workers are “more critical than ever,” he says, noting that the younger workforce, having grown up alongside generative A.I., has a natural ability to manage A.I. agents. Argenti also serves on Goldman’s management committee and risk committee. Outside of Goldman, Argenti is a member of the Fred Hutchinson Cancer Center advisory board, where he supports the Big-Tech-backed Cancer AI Alliance for responsible A.I. innovation in cancer research.

Marco Argenti. Courtesy of Goldman Sachs

43. Joy Buolamwini

Joy Buolamwini, a Rhodes scholar, researcher and author of the 2023 book Unmasking AI: My Mission to Protect What Is Human in a World of Machines, has emerged as one of the most visible voices calling out bias in A.I. Her work blends advocacy, scholarship and public engagement, spotlighting the dangers of deploying A.I. systems that perpetuate racial and gender inequities.

In 2016, Buolamwini founded the Algorithmic Justice League, a nonprofit that combines art and research “to illuminate the social implications” of A.I. The group’s July 2025 report, The Comply to Fly, scrutinized the Transportation Security Administration’s facial recognition program in U.S. airports. After two years of study, it found troubling gaps in transparency around traveler rights and data handling, raising questions about a program initially described as “voluntary” but often operating otherwise. The report urged TSA to halt the rollout, while Buolamwini has been outspoken about how travelers can opt out of facial recognition at security checkpoints.

This year, she joined the inaugural Accelerator Fellowship Program at Oxford University’s Institute for Ethics in A.I., where she is focusing her research on addressing bias in algorithmic systems. Her scholarship continues to influence the field, including co-authored work with Timnit Gebru on disparities in algorithmic gender classification. Speaking at Duke University in February, Buolamwini framed the issue in cultural terms, asking her audience, “Show of hands. How many have heard of the male gaze? The White gaze? The postcolonial gaze? To that lexicon, I add the coded gaze, and it’s really a reflection of power. Who has the power to shape the priorities, the preferences—and also at times, maybe not intentionally—the prejudices that are embedded into technology?”

Joy Buolamwini. Getty Images

44. Raquel Urtasun

  • Founder & CEO, Waabi

As founder and CEO of Waabi and professor of computer science at the University of Toronto, Raquel Urtasun is pioneering the next generation of autonomous trucking. In June, Urtasun raised $200 million in a Series B round, led by returning investors Uber and Khosla Ventures with new backing from Nvidia, Porsche and Ingka Investments. Of the substantial portion of Waabi’s funding that comes from industry peers, Urtasun tells Observer, “Our goal has always been to work closely with the entire industry ecosystem. We don’t see being a partner and a disruptor in the industry as being in conflict.”

Waabi aims to achieve Level 4 autonomous driving in commercial trucks by the end of 2025, just four years after inception. Waabi is the only autonomous vehicle company building on what Urtasun calls AV 2.0, a generative A.I. system designed to “reason” rather than rely on preprogrammed responses that don’t scale. “The traditional AV 1.0 approach we’ve seen to date faces critical barriers to deployment,” she says. “It’s reliant on hand-coded rules and vast amounts of real-world driving data, which is capital-intensive and doesn’t scale.”

Waabi’s AV 2.0 leverages “an end-to-end interpretable and verifiable A.I. model powered by the industry’s most realistic neural simulator.” Its algorithms generate countless driving scenarios, test vehicle responses, and choose optimal actions within fractions of a second. The company has already commercially tested its self-driving trucks with Uber Freight on a 385-kilometer Dallas–Houston route and is collaborating with Volvo to co-develop autonomous long-haul trucks for broader market rollout.

In August, Urtasun announced that Uber Freight founder and CEO Lior Ron would join Waabi as COO. The two previously worked together at Uber’s advanced technology unit. “I can’t think of something that will be as helpful to the next era of logistics and innovation and how goods are being moved,” Ron said of Waabi.

Raquel Urtasun. Courtesy of Waabi

45. Jeff Dean

  • Chief Scientist, Google DeepMind & Google Research

Under Jeff Dean’s leadership as Google’s chief scientist, the company has made major efficiency gains. In August, Dean published a paper showing that the carbon footprint of an average Gemini prompt fell by a factor of 44 over the 12 months ending in May. He credits the progress to Google’s full-stack approach—more efficient model architecture and algorithms, custom hardware, optimized idling and ultra-efficient data centers. In 2024 alone, Google cut data center emissions by 12 percent and replenished 4.5 billion gallons of water, advancing its goal to return 125 percent of the freshwater it consumes across operations.

In May, Google launched AlphaEvolve, a Gemini-powered A.I. system that autonomously generates and refines algorithms by combining creative models with testing systems. The tool has already boosted efficiency in data centers and A.I. training.

Dean’s influence extends beyond Google. Over the past two years, he has invested in 37 early-stage A.I. startups, including Perplexity, DatologyAI, Latent Labs, World Labs and Roboflow—often before Series A. In June, he backed the launch of the Laude Institute, led by Andy Konwinski, to translate academic breakthroughs into open-source tools, real-world products and socially responsible ventures.

He is also vocal about A.I.’s impact on work and research. At AI Ascent 2025, Dean predicted that advanced systems will soon reach the level of a junior software engineer, reshaping software development. At the Gemini Singapore Symposium earlier this month, he outlined how A.I. adoption could expand economic elasticity and spur job creation in tech-driven fields. He also forecast that A.I.-driven autonomous research could accelerate discovery and lower barriers to chip design, leading to an “explosion of specialized hardware.”

Jeff Dean. Courtesy of Google

46. Bill McDermott

  • CEO, ServiceNow

With board seats at Zoom and Figma and a track record as CEO of Europe’s most valuable software company, SAP, Bill McDermott now leads ServiceNow through a sweeping generative A.I. transformation years in the making. ServiceNow has partnered with tech giants like Microsoft, integrating Copilot to power its Now Assist feature. It has also launched philanthropic initiatives, such as an A.I. skills program in Brazil with a major nonprofit partner.

McDermott is betting big on A.I. as ServiceNow’s growth engine. In May, the company launched a high-touch consulting service to help corporate clients adopt A.I. tools, restructured sales incentives to reward A.I. adoption, and introduced premium subscription tiers tied to A.I. offerings.

In July, ServiceNow invested $750 million in cloud platform Genesys and announced a high-profile partnership with Ferrari Hypercar, applying its A.I. to boost performance and optimize real-time race operations.

Earlier this month, the U.S. General Services Administration signed a landmark OneGov agreement with ServiceNow to accelerate A.I.-driven government modernization. The deal, aligned with President Trump’s A.I. Action Plan, is projected to increase workflow efficiency by up to 30 percent, cut costs for taxpayers, and offer federal agencies discounts of up to 70 percent on access to A.I.-powered automation and agentic tools.

Last week, ServiceNow unveiled its new Zurich platform, showcasing breakthrough capabilities such as “vibe coding” tools that let employees build production-ready apps with natural language, enterprise-grade A.I. security features, and autonomous agentic workflows. 

Bill McDermott. Getty Images

47. Jahmy Hindman

  • SVP & CTO, John Deere

At John Deere, Jahmy Hindman oversees the integration of A.I. into the agricultural pioneer’s industrial equipment, updating the tractors, combines and tillage machinery that generations of farmers have relied upon with automated, precision-guided features. Predictive maintenance engines, digital twins and advanced analytics effectively turn each piece of equipment into a self-operating intelligence platform that leverages farming data to identify opportunities and increase crop yields. John Deere’s See & Spray technology, which detects individual weeds and applies herbicide only where needed, exemplifies this precision approach and has reduced herbicide use by up to two-thirds, cutting chemical costs for farmers. The company showcased its A.I.-powered fleet’s ability to handle new crops and conditions at CES 2025.

“With global food demand expected to rise as the population nears 10 billion by 2050, the need for efficiency and sustainability in agriculture has never been greater,” Hindman tells Observer. “Our customers operate in predominantly rural environments with changing and often harsh weather conditions. This is the place our technology must perform.”

Automation can fill labor gaps across industries like agriculture, construction and commercial landscaping, according to Hindman. As John Deere goes full steam ahead on emerging technologies, the company is also partnering with OpenAI to connect with prospective employees skilled in A.I.

Hindman is driving John Deere’s 2026 initiative to connect 1.5 million agricultural machines through satellite, enabling real-time remote control in regions lacking cellular coverage. The company has already achieved breakthrough connectivity speeds over satellite, which Hindman says “accelerates the model training flywheel and leads to faster, more robust improvements.”

In July, Hindman joined the board of Emerging Prairie, an innovation nonprofit. Hindman emphasizes the unique pressures of agricultural A.I. and the deep responsibility he feels to develop technology that farmers trust. The average farmer is 58 years old and works 12- to 18-hour days, and John Deere’s role is to “make every seed count, every drop count, and every bushel count,” Hindman says. “Farmers get one chance a year to do it right, so every decision and every action matters.”

Jahmy Hindman. Courtesy of Deere & Co.

48. Ray Kurzweil

  • Computer Scientist, Inventor & Futurist

As principal researcher at Google and co-founder of Beyond Imagination, a robotics startup, Ray Kurzweil remains one of A.I.’s most enduring futurists. In 2024, he published The Singularity Is Nearer, a follow-up to his 2005 best-seller, predicting that A.I. will merge with human consciousness and that A.I.-driven medical breakthroughs could overcome all diseases, and even the aging process, by the end of the 2030s.

“Hollywood portrays A.I. as an alien invasion coming to destroy us. People think A.I. is a rival for survival,” Kurzweil tells Observer. “I don’t see it that way. A.I. is evolving from within us and will reflect our values, knowledge and beliefs. We have been creating and merging with technology since the beginning of time. Ever since we picked up a stick to reach a higher branch, we’ve used tools to extend our reach, both physically and mentally. Just like we built mechanical machines to extend our muscles to build roads and bridges, we’re now building intelligent machines to extend our brains.”

In May, Beyond Imagination raised $100 million in Series B funding from Gauntlet Ventures to build humanoid robots. At Google, Kurzweil continues to influence A.I. strategy, particularly in natural language processing and machine learning. Across decades, his innovations—from the first print-to-speech reading machine to advanced music synthesizers—have earned him the National Medal of Technology and global recognition. Kurzweil’s ideas, once considered speculative, now reverberate across research labs and boardrooms.

In his 1999 book, The Age of Spiritual Machines, Kurzweil predicted that A.I. would reach human-level intelligence by 2029. He says his forecast drew so much attention that Stanford University organized an international conference to assess it, bringing together several hundred A.I. experts from around the world. While 80 percent agreed that computers would eventually match human performance, most believed it would take a century.

“A lot has happened since then. Today, consensus among experts has fallen in line with my original prediction, and some say it will happen even sooner,” he says. “So, while I can’t say that I am surprised by our progress, I can say that I have been delighted by the advancements A.I. has made in the past few years. We have recently reached a new level of computational power that is enabling A.I. to learn just about everything in every field.”

Ray Kurzweil. Courtesy of Kurzweil Technologies Inc.

49. Evan Solomon

  • Minister of Artificial Intelligence, Digital Innovation and Federal Economic Development Agency for Southern Ontario

Evan Solomon’s appointment as Canada’s first-ever Minister of Artificial Intelligence in May represents a watershed moment in global A.I. governance. His mandate, rooted in the Liberal platform released in April, positions Solomon to influence A.I. policy across every aspect of Canada’s economy while addressing critical national security implications. With Prime Minister Mark Carney advocating for sweeping A.I. adoption to create an “economy of the future,” Solomon stands at the center of Canada’s most ambitious technological transformation effort. His role encompasses everything from incentivizing business A.I. adoption through tax credits for small and medium enterprises to slashing repetitive government tasks and establishing an office of digital transformation.

Solomon faces the paradox of leading a nation with world-class A.I. research capabilities but lagging citizen and business adoption. Despite Canada ranking number one globally in Academia-Industry Model Production Concentration and second in Foundation Models, only 12 percent of Canadian businesses reported using A.I. as of June, while 79 percent of Canadians express concerns about adverse A.I. outcomes. Solomon’s response has been both strategic and practical, as demonstrated by his recent announcement of $9 million to fund the training of 5,000 mid-career energy sector workers in A.I. skills. His four-pillar approach—scale, adoption, trust and sovereignty—directly addresses these adoption barriers while leveraging Canada’s existing strengths, including the University of Toronto’s foundational work on LLMs and homegrown successes like Cohere.

The timing and potential impact of Solomon’s leadership make him particularly compelling to watch. Canada’s tech talent growth (66,000 jobs at a 5.9 percent growth rate) is outpacing that of the United States, with Toronto ranking third globally for tech talent behind only San Francisco and Seattle. Solomon’s efforts to retain Canadian talent, build data centers and reintroduce critical A.I. legislation (including the Artificial Intelligence and Data Act) could unlock enormous economic value, with some projections suggesting A.I. adoption could boost labor productivity by 17 percent over 20 years and generate $185 billion by 2030. As Canada ranks second in A.I. research output per capita among G7 nations yet 14th overall in Stanford’s global A.I. rankings, Solomon’s success in bridging this research-to-implementation gap could fundamentally reshape both Canada’s economic trajectory and global A.I. leadership dynamics.

Evan Solomon. Courtesy of the Office of the Minister of A.I., Digital Innovation and Federal Economic Development Agency for Southern Ontario

50. Yann LeCun

  • Chief A.I. Scientist, Facebook AI Research (FAIR)

Yann LeCun is regarded as one of the “Godfathers of A.I.” for pioneering research that laid the foundation for modern deep learning. The French-American computer scientist received the 2018 Turing Award for his work in neural networks and has spent decades advancing the field. For the past ten years, he has guided Meta’s Fundamental AI Research (FAIR) group and is the company’s chief A.I. scientist.

At FAIR, LeCun leads exploratory, long-term research that has powered early iterations of Meta’s Llama model. Following Meta’s recent restructuring, FAIR is one of four groups comprising Meta Superintelligence Labs, the company’s newly formed A.I. division.

Besides his work at Meta, LeCun is the Silver Professor of Data Science, Computer Science, Neural Science and Electrical Engineering at New York University. He’s also a prominent supporter of research funding for scientists and has warned that recent government cuts to academic institutions could harm America’s technological progress. 

LeCun continues to gain recognition for his contributions to the field, winning the Queen Elizabeth Prize for Engineering in February and the New York Academy of Sciences’ inaugural Trailblazer Award in May.

Yann LeCun. Getty Images

51. Daniel Gross

  • Meta Superintelligence Labs

Daniel Gross, founder of venture firm NFDG, is guiding A.I. development as a member of Meta’s new superintelligence unit. Meta Superintelligence Labs (MSL) poached Gross and Nat Friedman, his investment partner, this summer, and Gross was hired to focus on developing consumer A.I. products within the newly formed division. 

Gross previously served as the CEO of Safe Superintelligence Inc. (SSI), the secretive A.I. startup launched last year by star researcher Ilya Sutskever, who now helms the company. The venture is focused on steering AGI research with safety at its core, a clear departure from Silicon Valley’s “move fast and break things” ethos. SSI secured $1 billion in initial funding in June 2024 and reached a $32 billion valuation by March 2025—without releasing a product. “The company’s future is very bright, and I expect miracles to follow,” said Gross in July.

The tech investor has a storied history of nurturing startups from launch. As an angel investor, Gross has backed players like Character.ai, Weights & Biases and Rippling. He was also formerly a partner at Y Combinator, where he helped launch its A.I. program, and in 2023 he and Friedman set up the Andromeda Cluster, a supercomputer of over 4,000 GPUs that startups in their portfolio can rent. Spanning global-scale investment, AGI alignment and ecosystem building, Gross is a bridge-builder shaping the next era of A.I.

Daniel Gross. Getty Images

52. Richard Socher

  • Co-Founder & CEO, You.com & Managing Director, AIX Ventures

As CEO of You.com, Richard Socher has built an A.I.-powered search platform that now answers more than 1 billion monthly queries for enterprises including DuckDuckGo, Windsurf, Harvey and the National Institutes of Health. Earlier this month, the company announced a $100 million Series C at a $1.5 billion valuation. 

“Search is intellectually one of the most interesting tasks because you help people find information, learn and ultimately gain knowledge. It can motivate one for many decades,” Socher tells Observer. “That is why we had to do it.”

You.com’s enterprise tools highlight Socher’s emphasis on knowledge workers rather than consumer search. Its ARI (Advanced Research and Insights) agent synthesizes intelligence from more than 400 sources into decision-ready briefs within minutes, while Auto Mode automatically routes queries to the most suitable A.I. agents. The platform’s Multiplayer A.I./Team Plan supports real-time collaborative workspaces, positioning You.com as a productivity platform instead of a traditional search engine. “Not enough people are talking about how important the search infrastructure layer is to A.I.,” Socher says, stressing that his customers are professionals whose careers “depend on these answers being correct.”

Socher is a leading figure in natural language processing, with more than 220,000 citations. He is widely credited with introducing neural networks into NLP and developing foundational methods, including word vectors, contextual vectors and early prompt engineering. Beyond You.com, Socher extends his influence through AIX Ventures, his A.I.-focused investment firm that draws on his technical expertise to back emerging startups. In July, AIX Ventures made headlines by recruiting Christopher Manning—one of the most-cited NLP researchers, currently on leave from Stanford University—as a general partner.

Richard Socher. Courtesy of You.com

53. Daphne Koller

  • Founder & CEO, Insitro

As founder and CEO of Insitro, Daphne Koller has built strategic relationships with leading pharmaceutical and research institutions: Eli Lilly for metabolic disease therapies, including metabolic-associated steatotic liver disease (October 2024); Bristol Myers Squibb, which is providing $25 million for ALS genetic target research (December 2024); and Moorfields Eye Hospital for A.I. foundation models targeting neurodegenerative eye diseases (March 2025). These partnerships signal that major industry players validate Insitro’s technology.

“My concern about A.I. in science isn’t the distant risk of superintelligence, but the erosion of rigor from the seductive plausibility of generative A.I.,” Koller tells Observer. “My core fear is that the ease of generating plausible answers will tempt organizations to bypass the hard-won ground truth of prospective, experimental validation. The danger isn’t that A.I. becomes too intelligent, but that we become complacent, trusting articulate outputs over real data. That would silently erode the scientific method and waste years chasing beautifully worded mistakes.”

Koller’s company operates with substantial resources despite recent strategic restructuring. Three years after founding, Insitro raised $400 million in 2021 for machine-learning-powered drug discovery. In May, the company cut 22 percent of staff (65 employees) to streamline operations and extend its runway into 2027, positioning itself for sustained development rather than expansion. This strategic focus on longevity reflects Koller’s understanding of biotechnology development timelines and capital efficiency requirements.

As co-founder and former co-CEO of Coursera, Koller democratized education globally before transitioning to A.I.-powered drug discovery. “Education taught me that A.I. shines when feedback loops are fast and data are abundant,” she says. “The biggest misconception is that traditional drug discovery is ready for A.I. You can’t just drop A.I. onto hundreds of incoherent spreadsheets and expect breakthroughs. We need to re-architect the systems and data collection around A.I. Done right, A.I. is an amplifier of rigorous biology, not a substitute for it.”

Daphne Koller. Courtesy of insitro

54. Andrew Feldman

Andrew Feldman has built one of the world’s fastest A.I. infrastructures. His company, Cerebras Systems, develops A.I. supercomputers that run large language models, including OpenAI’s first open-weight reasoning model launched in August, at record-breaking speeds of 3,000 tokens per second to solve complex math, science and coding problems. Feldman argues this speed creates transformative possibilities. “When the internet was slow, Netflix delivered DVDs by mail. When the internet was fast, Netflix could become a movie studio. That’s the potential of speed,” Feldman tells Observer. “To fundamentally transform industries, to enable business models that weren’t in existence before.”

Cerebras’ inference and training clouds outperform Nvidia and Groq when running Meta’s Llama models, according to internal benchmarks. In May, Cerebras began serving Qwen3-32B, an open-source model that specializes in coding and advanced reasoning, on its inference platform. As a vocal Nvidia competitor, Feldman says, “We’re orders of magnitude faster and we use a tiny fraction of power per unit compute.”

Feldman highlights key government partnerships, including collaborations with Sandia National Laboratory, Lawrence Livermore and Los Alamos that achieved “a militarily significant simulation that was more than 400 times faster than was possible on the largest supercomputer in the U.S.—a supercomputer with more than 30,000 GPUs.”

Feldman’s previous company, SeaMicro, which developed energy-efficient servers, was acquired by AMD. Cerebras is close to an exit, too, having confidentially filed for an IPO last year.  

Despite his technical optimism, Feldman is concerned about A.I.’s broader impact: “I worry about the division in society—the potential for A.I. to further bifurcate our economy into the haves and have-nots,” he says.

Andrew Feldman. Courtesy Cerebras Systems

55. Stephen Schwarzman

  • CEO, Blackstone

Under Schwarzman’s leadership, Blackstone has committed more than $100 billion to data centers, solidifying its position as the world’s largest private investor in A.I. infrastructure. These facilities, essential for training and deploying advanced models, are becoming as critical to A.I.’s growth as breakthroughs in algorithms and research. Schwarzman has guided Blackstone to focus not only on scale but also on strategic partnerships, ensuring that these data centers support both enterprise adoption and cutting-edge innovation across industries. This week, Blackstone announced it had agreed to buy a natural gas plant in Western Pennsylvania for nearly $1 billion, furthering its bet on the rising electricity demand of A.I. 

Beyond capital allocation, Schwarzman has shaped the broader discourse around A.I. through philanthropy and academic initiatives. His landmark gift to establish the MIT Schwarzman College of Computing placed ethics, governance and societal impact at the center of A.I. education, training a generation of leaders to navigate the promises and perils of machine intelligence. Through this combination of investment and thought leadership, Schwarzman wields influence over both the infrastructure powering A.I. and the frameworks guiding its responsible deployment.

Stephen A. Schwarzman. Getty Images

56. Shiv Rao

  • Founder & CEO, Abridge

Shiv Rao is transforming healthcare documentation through Abridge, his A.I. platform that converts patient-clinician conversations into clinical notes in real time. Abridge’s technology automatically transcribes bedside conversations into contextually aware, compliant and billable notes and medical orders, reducing hours of administrative burden. “As a doctor, nothing was more soul-crushing than working a full day helping patients only to come home and have hours of documentation,” Rao, who still practices cardiology, tells Observer. “I knew the key was unlocking what was said in the exam room in real time: gathering intelligence at the point of conversation.”

This year alone, Abridge will support clinicians across more than 50 million medical conversations, helping to reduce burnout by up to 60 to 70 percent, the company says. At Sharp Healthcare, clinicians who use Abridge reported an 83 percent reduction in note-writing effort. At Lee Health in Florida, 86 percent of clinicians reported doing less after-hours work. In June, Rao launched “Abridge Inside for Inpatient” in collaboration with electronic health records giant Epic, which serves 305 million patients worldwide. The same month, the company closed a $300 million Series E funding round led by a16z and Khosla Ventures at a $5.3 billion valuation. (The company raised a $250 million Series D in February.)

By the end of July, Rao announced that Abridge had grown its health systems partnerships by 50 percent in four months, now serving more than 150 inpatient, outpatient and emergency departments in the U.S.—including Mayo Clinic, Johns Hopkins, UNC, Duke and Yale. In August, Abridge announced an enterprise partnership to build A.I.-enabled prior authorization with Highmark Health. That same month, Abridge announced a breakthrough: their technology is six times more likely than competitor tools to identify and correct hallucinations in A.I.-generated clinical notes.

Administrative costs comprise 30 percent of healthcare spending, a statistic Rao calls “staggering,” though he notes that Abridge’s purpose isn’t simply to save money but to “bring humanity back to healthcare.” His ROI is “when clinicians feel joy again, when patients feel heard and when health systems eliminate impossible backlogs and unsustainable workflows.”

The company is positioned to scale its impact across the $4.3 trillion U.S. healthcare market.

Shiv Rao. Courtesy of Abridge

57. Mati Staniszewski

Mati Staniszewski has a firm grip on A.I.-generated audio through his company ElevenLabs, a text-to-speech platform with customizable synthetic voices and voice cloning capabilities. A BlackRock and Palantir alumnus, Staniszewski has built ElevenLabs into a Series C startup worth a reported $6.6 billion that’s backed by big names like a16z and ICONIQ. In June, the company launched Eleven v3, a model supporting over 70 languages, with voices that can sigh, whisper, laugh and react. To date, ElevenLabs customers have created over 2 million agents through the platform, which have handled over 33 million conversations so far this year. In August, the company introduced Eleven Music, an A.I. music generation service. In September, ElevenLabs reached $200 million in annual recurring revenue.

“We believe that voice will be the primary interface for interacting with A.I. and technology,” Staniszewski tells Observer. “We are building a future where you can create conversational agents that help you speak to your tools, your content and your device—and they speak back, all in ways that feel natural. This changes how we create, access information and do business, while offering businesses a new way to express their brand identity.”

Staniszewski is outspoken about running his company with a slim team, doubling down on the move-fast startup mentality. That outlook has proven successful for ElevenLabs, which has partnerships with 60 percent of Fortune 500 companies, including Epic Games, Cisco and Chess.com (which offers an A.I. chess coach using its technology). A practitioner of licensing intellectual property for A.I. use, Staniszewski has worked with at least 5,000 creators, paying out around $5 million to use their vocal likenesses while making strides in the A.I. audio realm. “The future of A.I. hinges on trust,” he says. “And this is just the beginning.”

In September, Staniszewski told Bloomberg that ElevenLabs hopes to reach $300 million in recurring revenue by the end of this year.

Mati Staniszewski. Courtesy of ElevenLabs

58. Winston Weinberg

  • Founder & CEO, Harvey

Winston Weinberg is a lawyer turned tech entrepreneur who launched A.I. for law firms through Harvey in 2022. In August, Harvey hit $100 million in annual recurring revenue, just 36 months after launch, and in the past year, the company’s weekly average users have quadrupled. With a global presence in 53 countries and partnerships with noteworthy companies like LexisNexis, Weinberg’s startup is actively transforming the way law practices work around the world. Global accounting and consulting firm PwC has equipped its junior lawyers with Harvey technology, going so far as to say “there’d be a riot” without it. “Most forward-thinking firms know A.I. is going to change their business, it’s just a question of how much and when,” Weinberg tells Observer. “Right now, if you’re a junior lawyer or an associate in financial services, you’re doing a lot of work that doesn’t reflect your training and what you thought the role would entail. A.I. changes that and ideally makes the work you do as a human more meaningful.”

Weinberg clarifies that Harvey is not simply for junior associates, and that 20 percent of users are partners. “In the firms that are more successful, partners are helping lead the transformation,” he says.

A $300 million funding round in June, co-led by Kleiner Perkins and Coatue (at a $5 billion valuation), shows investors have confidence in what Weinberg is building. Weinberg developed Harvey in partnership with former Google DeepMind research scientist Gabe Pereyra. In June, the company announced an alliance with LexisNexis to provide Harvey users “trusted A.I. answers grounded in LexisNexis U.S. case law and statutes, validated through Shepard’s Citations.” Earlier this year, Weinberg confirmed a $150 million spending agreement with Microsoft Azure cloud services over two years, a signal of the growth the company projects.

In August, Harvey kicked off its law school partnership program with Notre Dame Law School as its first account. “While we hope our product helps lawyers save time, we also want to be more proactive in shaping the future of law,” Weinberg says of the program. The law school partnership “will bring A.I. fluency and the Harvey platform to students and professors alike, and our hope is that the partnerships have a meaningful impact on shaping the future of knowledge work alongside these institutions and their faculty members.”

Winston Weinberg. Courtesy of Harvey

59. Dr. Rumman Chowdhury

Dr. Rumman Chowdhury advocates grounding A.I. in local realities and preserving human ingenuity at the center of its value. In 2022, she founded Humane Intelligence, a nonprofit focused on “bias bounties,” a concept she pioneered as Twitter’s director of machine learning ethics, transparency and accountability. The organization facilitates “institutionalized red teaming,” assessing A.I. systems for vulnerabilities, limitations and sociotechnical risks. Since 2023, Humane Intelligence has hosted more than 15 red teaming events across industry, government and academia worldwide. In Singapore, for example, testers from nine countries uncovered failures invisible in monolingual, monocultural lab settings. “Novel ideas originate in human minds,” Chowdhury tells Observer, urging organizations to “ensure decisions requiring creativity, ethical reasoning or contextual understanding are retained for humans” rather than delegated to A.I.

In 2024, Chowdhury joined New York City’s AI Steering Committee, created to oversee municipal adoption, foster public dialogue and guide agencies in deploying A.I. effectively. Alongside experts from IBM, Dell, Microsoft, Intel, Columbia University, NYU and more, she helps shape how algorithmic systems affect 8.4 million residents, from public benefits to policing.

“Cities face unique challenges,” she says. “Their problems are intensely practical, close to daily life and directly impact millions. Unlike federal regulators, city officials can’t simply issue broad principles. They must translate A.I. ethics into operational guidelines.”

Chowdhury has also advised at the federal level, serving on the Department of Homeland Security’s Artificial Intelligence Safety and Security Board and as the U.S. State Department’s Science Envoy for Artificial Intelligence. What concerns her most is that A.I. regulations are often treated as an afterthought. “If evaluations remain just a checkbox for compliance, rather than a meaningful process for stress-testing and improvement, we’ll end up deploying A.I. that’s brittle, unaccountable and out of step with people’s needs.”

Dr. Rumman Chowdhury. Courtesy of Humane Intelligence

60. John Imah

  • Co-Founder & CEO, SpreeAI

John Imah is transforming fashion retail through practical A.I. applications. As co-founder and CEO of SpreeAI, which delivers photorealistic virtual try-on technology with 99 percent sizing accuracy to e-commerce companies globally, Imah is scaling A.I.-powered software solutions. Under his leadership, SpreeAI has raised over $80 million, achieving a $1.5 billion valuation in 2025. Supermodel and businesswoman Naomi Campbell joined SpreeAI’s board in June 2024, and the company secured an exclusive industry alliance with the Council of Fashion Designers of America (CFDA), where Imah is a member. The key to convincing investors and industry icons like Campbell? “We showed that our virtual try-on and sizing tech solves a real problem by improving fit and reducing returns,” Imah tells Observer. “Campbell joined our board because she believed in my vision and saw my track record in both tech and fashion.”

As the first fashion-tech CEO invited to the Met Gala this year, Imah has deployed SpreeAI’s technology to luxury fashion houses, including Sergio Hudson in the U.S. and Kai Collective in the U.K. “Cultural credibility is extremely important in the fashion industry,” he says. “Being at events like the Met Gala or partnering with the CFDA signals that we’re part of the fashion community—that we speak fashion’s language and respect its culture.”

Having previously led strategic partnerships at Amazon, Facebook and Snap, Imah applies that expertise at SpreeAI. The company holds five patents with 22 pending and has formed partnerships with MIT and Carnegie Mellon University to foster a long-term talent and innovation pipeline. A Nigerian founder based in Los Angeles, Imah ensures SpreeAI’s technology serves diverse body types and ethnicities while reducing environmental impact through lower return rates. SpreeAI’s upcoming A.I. Stylist and Virtual Wardrobe features will expand the platform, positioning generative A.I. as the standard for how consumers engage with fashion e-commerce.

John Imah. Saúl López

61. Albert Bourla

Under Albert Bourla’s leadership, Pfizer has committed to saving $5.7 billion in operational expenses by the end of 2027 through adopting a variety of A.I. tools. The pharmaceutical giant is on track to deliver $4.5 billion in net cost savings by the end of 2025, of which $500 million will be reinvested into R&D through 2026. 

In April, Pfizer inked its sixth deal with Flagship Pioneering, a life sciences VC fund, to discover autoimmune drug candidates using A.I. Two months later, Pfizer expanded its collaboration with Chinese tech firm XtalPi to develop A.I. that can discover small molecules and predict crystal structures—building on a 2018 partnership that has already demonstrated A.I.’s ability to accelerate drug discovery timelines. Bourla’s A.I. investments position Pfizer alongside competitors such as AstraZeneca, which has already reported significant reductions in drug discovery timelines thanks to A.I.

In the public sphere, Bourla has spoken about A.I.’s role in healthcare at the World Economic Forum this year. He regularly authors social media posts exploring A.I.’s potential to transform cancer care and close gaps in medical access. Bourla has articulated a bold vision for A.I. in oncology, predicting that within the next decade, A.I.-accelerated innovations like antibody-drug conjugates “could replace traditional chemotherapy, transforming cancer care and improving outcomes for millions.” 

Albert Bourla. Courtesy of Pfizer

62. Sanja Fidler

  • VP of A.I. Research, Nvidia & Associate Professor, University of Toronto

Recruited by CEO Jensen Huang in 2018, Sanja Fidler operates at the center of Nvidia’s ambition to define the next big thing in A.I. Her team, which consists of over 40 A.I. researchers at Nvidia’s Toronto-based Spatial Intelligence Lab, has developed technology that converts text prompts into 3D objects nearly instantaneously.

Her research directly impacts multiple high-value markets through breakthrough A.I. models. She spearheaded Nvidia’s entry into the $400 billion autonomous vehicle market while leading development of virtual reality world generation capabilities. Her team created Cosmos, a family of A.I. world models that understand object movement over time in 3D space—critical technology for advancing self-driving vehicles and robotics applications and a key component of Nvidia’s push into physical A.I.

In academia, Fidler’s research spans over 130 published papers in computer vision, machine learning and natural language processing, accumulating more than 55,900 citations. As a co-founding member of the Vector Institute for Artificial Intelligence and Canadian CIFAR AI Chair (2018), Fidler bridges industry and academia. Her recognitions include the Nvidia Pioneer of AI Award, Amazon Academic Research Award and University of Toronto Innovation Award. Through her unique position leading spatial A.I. research at Nvidia, access to substantial R&D resources and extensive academic contributions, Fidler has become a key architect of the robotics and autonomous systems revolution.

Fidler sees this moment as transformative for the field. “The era of physical A.I. has begun. Many major industries such as engineering, manufacturing and robotics need intelligence that understands and connects to the real physical world,” she tells Observer. “Our team’s mission at Nvidia is to develop frontier spatial intelligence technologies to help the world build cutting-edge, 3D solutions. It’s a very exciting time for us working in spatial A.I. research.”

Sanja Fidler. Courtesy of Nvidia

63. Andrej Karpathy

  • Founder & CEO, Eureka Labs

As founder and CEO of Eureka Labs, which launched in July 2024, Andrej Karpathy is building an A.I.-native education platform that pairs teachers with A.I. assistants to deliver scalable, personalized learning. The company’s first A.I.-powered course is already in development. Beyond Eureka, Karpathy reaches nearly a million subscribers on YouTube, where his tutorials on coding and large language models have become a global resource for aspiring A.I. practitioners.

Karpathy’s influence also extends into venture investing, where he has strategically backed companies working on custom model development, agent platforms and core infrastructure. In May 2024, he joined the $25 million series A round for Lamini, an enterprise platform for custom LLMs founded by Google Cloud alum Sharon Zhou. He has since invested in /dev/agents, an A.I. agent development platform now valued at $500 million, and Lambda, an Nvidia-backed infrastructure startup that raised $480 million in series D funding.

His technical credibility stems from leadership roles as co-founder and former research scientist at OpenAI and former head of A.I. at Tesla, which provided a deep understanding of cutting-edge A.I. research and real-world deployment challenges. Karpathy has positioned himself as a key architect bridging A.I. research, education and venture capital influence across the ecosystem by combining educational innovation through Eureka Labs, mass learning outreach via YouTube and strategic investments in high-value startups.

Andrej Karpathy. Getty Images

64. Margaret Mitchell

Margaret Mitchell represents the critical intersection of technical excellence and ethical leadership in A.I. As chief ethics scientist at Hugging Face, she has published over 100 papers on natural language generation, assistive technology, computer vision and A.I. ethics. Her 2025 work, challenging AGI-centrism and redefining research rigor, addresses fundamental questions about the direction of A.I. development. 

“AGI as a whole is just a super problematic concept that provides an air of objectivity and positivity when, in fact, it’s opening the door for technologists to just do whatever they want,” Mitchell told the Financial Times in June. “For me, A.I. should be grounded and centered on the person and how best to help the person. But for a lot of people, it’s grounded and centered on the technology.” 

Mitchell founded and co-led Google’s Ethical AI group as a staff research scientist before being terminated in 2021 following her advocacy for diversity and concerns about research censorship at the company. Her dismissal, alongside that of Timnit Gebru, highlighted critical tensions between A.I. ethics research and corporate interests in the tech industry.

In February, Mitchell co-authored groundbreaking research arguing that the A.I. community should “stop treating AGI as the north-star goal of AI research,” identifying six key traps that AGI discourse creates and advocating for prioritizing specificity in engineering goals, embracing pluralism and fostering greater inclusion of disciplines and communities. In April, Mitchell joined former OpenAI employees and other prominent A.I. figures, including Geoffrey Hinton, in signing an open letter to the California and Delaware attorneys general urging intervention in OpenAI’s proposed restructuring. The letter warns that allowing the company to escape nonprofit control would eliminate key governance safeguards and endanger its founding mission to ensure AGI benefits all humanity. In June, Mitchell co-authored influential research on “Rigor in A.I.,” proposing an expanded conception of rigorous A.I. research beyond methodological rigor to include epistemic, normative, conceptual, reporting and interpretative dimensions.

As corporate interests increasingly conflict with public benefit in A.I. research, her advocacy for transparency, accountability and inclusive development practices provides essential guidance for the field.

Margaret Mitchell. Getty Images

65. Lila Ibrahim

  • COO, Google DeepMind

Lila Ibrahim is one of the most influential executives shaping the path toward artificial general intelligence (AGI). As chief operating officer of Google DeepMind, she oversees operations, infrastructure and risk governance at one of the world’s most important A.I. research labs. Her mandate is to ensure that the lab’s cutting-edge science is translated into products and systems that benefit humanity at scale.

Ibrahim brings experience from senior roles at Intel and Coursera, giving her a unique vantage point at the intersection of hardware, software and education. Under Ibrahim’s leadership, DeepMind’s breakthroughs power core Google products used by billions. Gemini, Google’s family of generative A.I. models, is now embedded across Search, Workspace, and Cloud, reshaping how users interact with information and productivity tools. She has also been central in building DeepMind’s governance framework, balancing the company’s ambition to develop AGI with its responsibility to mitigate risks.

Beyond her executive role, Ibrahim is an influential voice in the global A.I. policy conversation. She has publicly advocated for thoughtful, balanced regulation and international coordination, cautioning that overly restrictive policies could stifle innovation while failing to address real risks. 

Her contributions have been recognized worldwide. Ibrahim has been honored by the Financial Times, the United Nations and the World Economic Forum, underscoring her role as both an industry leader and a global policy influencer.

Lila Ibrahim. Getty Images

66-67. Clément Delangue & Julien Chaumond

  • Founders, Hugging Face

Clément Delangue and Julien Chaumond have helped democratize A.I. through Hugging Face’s open-source platform while pushing the company into new frontiers like robotics. As CEO and CTO, respectively, they scaled Hugging Face from 5 million to over 10 million users in just one year, turning it into the global hub for A.I. model sharing. The platform’s $4.5 billion valuation, following a $235 million series D round in 2023, underscores its central role in making A.I. development accessible across industries worldwide.

In 2025, the duo expanded Hugging Face beyond software into hardware with the April acquisition of Pollen Robotics, marking the company’s biggest step into physical technology since its founding. That move set the stage for July’s launch of Reachy Mini, a $299 open-source desktop robot designed to make A.I. simulation affordable for everyday developers. The low price point reflects Hugging Face’s mission to ensure A.I. innovation isn’t limited to well-funded labs but available to anyone.

Delangue’s influence also extends to A.I. policy. In February, he co-authored an open letter calling for A.I. to serve the public good in fields like education and medicine, and he continues to back Current AI, a global collaborative dedicated to ensuring A.I. works in the public interest. His prediction of A.I.’s “economic and employment growth potential” in 2025 reflects his optimistic view of the technology’s role in society. Together, Delangue and Chaumond are shaping the future of A.I. by running the field’s central model-sharing platform, building new hardware and pushing for policies that make the technology accessible to everyone.

Clément Delangue & Julien Chaumond. Observer

68. Percy Liang

  • Associate Professor, Stanford University & Co-Founder, Together AI

Percy Liang is a senior fellow at Stanford’s Institute for Human-Centered Artificial Intelligence (HAI), co-led by Fei-Fei Li. As director of the Center for Research on Foundation Models (CRFM), Liang develops evaluation frameworks for leading foundation models and transparency grading systems for A.I. model companies. With more than 180,000 Google Scholar citations across 58 publications, he is a prominent researcher laying the groundwork for A.I. innovation. His recent studies—ranging from robot policies to large language models for medical tasks—shape the integration of A.I. into daily life and the broader pursuit of artificial general intelligence (AGI).

Liang takes a contrarian stance on AGI, one of the field’s most widely discussed concepts. “The notion of AGI as a goal is misguided (at best it’s a vibe),” Liang tells Observer. He argues that A.I. already outperforms humans in many areas while lagging in others, and will continue improving unevenly across different dimensions “without any natural bounds.”

Liang’s biggest concern is what he sees as a troubling decline in openness. “I worry deeply about the decline of openness in A.I.,” he says. While models like Llama and DeepSeek are technically open-weight, he argues they fall “far from the true spirit captured by open-source software.” In his view, the current ecosystem fosters “a few producers and mostly consumers,” in stark contrast to the Internet revolution’s decentralized, open-source foundations. “Today, we need the same for A.I.,” Liang emphasizes.

The DeepSeek breakthrough marked a pivotal moment in his view of the A.I. landscape. “It was celebrated by proponents of open(-weight) models, but it was also notable coming from a non-U.S. (Chinese) company,” he says. The event underscored both the global nature of A.I. progress and the ongoing debate over model accessibility.

As an educator, Liang advocates a dual approach to teaching with A.I. He believes students must first demonstrate “foundational mastery” by completing basic tasks without A.I., while also learning to embrace it as a tool that can extend their capabilities. This balance, he argues, maximizes A.I.’s educational potential while addressing concerns about surveillance.

Liang co-led this year’s Stanford AI Index Report, an influential HAI publication for policymakers and industry leaders. His goal is to make foundation models more accessible and transparent by advancing open-source technology and rigorous benchmarking.

Percy Liang. Courtesy of Stanford University

69. Abdullah Alswaha

  • Minister of Communications and Information Technology, Saudi Arabia

As Saudi Arabia’s Minister of Communications and Information Technology, Abdullah Alswaha is a key architect of the Kingdom’s Vision 2030 technology agenda, steering its rapid emergence as a hub for A.I., cloud computing and space innovation. At the LEAP 2025 technology conference in Riyadh, he announced $14.9 billion in pledged investments to diversify the economy through digital infrastructure. Today, Saudi Arabia accounts for half of the Middle East and North Africa’s $260 billion digital economy, with technology jobs more than doubling since 2021.

Alswaha has forged major partnerships—with Oracle to expand A.I. and cloud collaboration, with SpaceX to advance space technology, and with SoftBank to boost global investment in digital transformation. Tencent Cloud’s decision to open its first Middle East data center in Saudi Arabia underscores the momentum of his initiatives.

A vocal advocate for inclusive A.I., Alswaha cautioned at the United Nations’ Internet Governance Forum in late 2024 about the risks billions face if excluded from the A.I. revolution. With the global digital economy valued at $16 trillion, he is positioning Saudi Arabia as both a regional powerhouse and an international thought leader in the scalability and governance of emerging technologies.

Abdullah Alswaha. Getty Images

70. Walid Mehanna

  • Chief Data & A.I. Officer and Chairman of the Digital Ethics Advisory Board, Merck KGaA, Darmstadt, Germany

As Chief Data & A.I. Officer and Chairman of the Digital Ethics Advisory Board for Merck KGaA, Darmstadt, Germany, Walid Mehanna has overseen one of the most comprehensive enterprise A.I. deployments in pharma, life sciences and electronics (per the Pharma A.I. Readiness Index). Under his leadership, Merck KGaA rolled out its myGPT application suite via a 2023 partnership with A.I. startup Langdock. Today, 52,000 of the company’s 63,000 employees use the suite, with 14,000 digital assistants active across 1,600 groups and more than 12 million prompts logged—recently approaching 1.5 million per month. A company representative says it is the firm’s most-used non-mandatory tool.

“The culture of a science and technology company helps A.I. adoption immensely,” Mehanna tells Observer. “Scientists are used to the ‘experiment and iterate’ phase of any methodology. That’s why it’s a great place to pioneer A.I.” Mehanna compares his method for building a corporate A.I. strategy to scaling a pyramid: productivity for all (myGPT) is the foundation; operational A.I. in core business streams is next; and advanced A.I. for products and revenue sits at the top. “Models will change, but our data, guardrails and talent compound, so the strategy doesn’t,” he explains.

Mehanna joined Merck KGaA from Mercedes-Benz, where he was the head of data and analytics. “In a car, you can instrument almost everything. In biology and chemistry, you rarely see the full system,” Mehanna says. “That lack of visibility turns modeling, validation and guardrails into the real engineering challenge. Data is noisier, ground truth is murkier, and transparency is limited. So A.I. needs to be more humble, explainable, and evidence-driven.”

Since 2021, Mehanna has convened digital ethics experts from Mayo Clinic, Johns Hopkins University and Dana-Farber Cancer Institute to implement ethical frameworks for Merck KGaA’s A.I. deployment across the whole value chain and business sectors. Mehanna doesn’t believe “that you must choose between speed and responsibility. With the right guardrails, you move faster because you can innovate with confidence. Governance is like brakes: the better they are, the faster you can drive safely.”

Walid Mehanna. Courtesy of Merck KGaA, Darmstadt, Germany

71. Andy Markus

  • Chief Data & A.I. Officer, AT&T

Andy Markus has helped stage AT&T’s stock market comeback by putting A.I. at the center of its operations and offerings. After years of sluggish performance, AT&T shares climbed more than 45 percent in the 12 months ending in August, lifting its market capitalization to over $211 billion. Markus, who joined in 2020, inherited a company already using legacy A.I. systems behind the scenes but has since pushed generative and agentic A.I. to the forefront.

“We are maniacal at tracking the ROI of our generative A.I. and agentic A.I. use cases,” Markus tells Observer. “We’ve achieved really great accuracy with straightforward agentic A.I. workflows. How is that accuracy maintained as these workflows grow exponentially with many non-deterministic decision points? It’s something we’re very focused on.”

AT&T made headlines by shifting from ChatGPT to open-source models, largely to cut costs. Its internal LLM, AskAT&T, now writes and refines code, drafts business plans and streamlines workflows for 100,000 employees. The system processes over 175 million API calls and 5 billion tokens daily. “We still use OpenAI functionality, but we’ve also been very successful at fine-tuning open-source SLMs to be as accurate as LLMs to control costs,” Markus says. That approach also powers AT&T’s generative A.I. customer service agents, which now handle many of the company’s 40 million annual service calls, reducing response times by a third.

Markus has forged partnerships with Microsoft and Nvidia to strengthen AT&T’s enterprise A.I. capabilities. He rejects the idea that generative and agentic A.I. are overhyped. “We’re proving that if done right, there is significant and meaningful value to be generated for the enterprise,” he says. 

Andy Markus. Courtesy of AT&T

72. Sara Hooker

  • Frontier A.I. Leader

As the former vice president of research and head of Cohere Labs at Cohere, Sara Hooker led the $6.8 billion company’s nonprofit research arm. During her three years with Cohere, Hooker’s team of fewer than 30 engineers and researchers published over 100 research papers, collaborated with over 150 institutions and released models that have been downloaded 23 million times. She left Cohere in September 2025 to explore “new problems” that are “central to the future of intelligence,” she said when announcing her departure.

While tight-lipped about her next venture, Hooker confirmed to Observer that she’ll begin her new role in early October. “Humans adapt,” Hooker tells Observer. “Our current A.I. doesn’t. The biggest leap that is going to come in building thinking machines is solving for this gap.”

Hooker’s focus on multilingual language models ensured A.I. accessibility across more than 100 languages worldwide, driven by her conviction that “when you speak in someone’s language, you really connect with their heart, not their head.”

Hooker’s research has exposed critical flaws in A.I. evaluation and transparency. In April, she led a study revealing that LM Arena, maker of the influential Chatbot Arena benchmark, systematically biased results to favor certain companies—including OpenAI, Meta, Google and Amazon—at the expense of their rivals. This research challenges the integrity of widely used A.I. evaluation systems and demonstrates her role in holding the industry accountable for fair assessment practices.

Through Cohere Labs, Hooker advanced open science collaboration in machine learning. Projects like Aya focused on multilingual model development and research into model efficiency and data quality. By leading international research initiatives, challenging industry evaluation standards, and expanding multilingual A.I. accessibility, Hooker has emerged as a key architect of more equitable and transparent A.I. development.

Sara Hooker. Courtesy of Sara Hooker

73. Rob Francis

Rob Francis has transformed one of the world’s largest travel platforms into an A.I.-powered ecosystem serving millions of travelers globally. Since joining Booking.com as CIO in 2019 and rising to CTO in 2021, Francis has overseen the deployment of generative A.I. tools that fundamentally change how people plan and book travel. 

“We weren’t the first to market out of a fear of missing out,” Francis tells Observer. “We spent time ensuring we had the right underpinnings for data governance, moderation and safety. That investment has served us well.”

In 2023, Francis launched Booking.com’s AI Trip Planner in beta for U.S. travelers, powered by OpenAI’s ChatGPT API and integrated with Booking.com’s proprietary data on properties, pricing and availability. It was the first A.I. tool from a major travel platform (Expedia and Airbnb followed). In 2024, Francis expanded the A.I. suite with Smart Filter, Property Q&A and Review Summaries. Smart Filter allows travelers to describe accommodations in natural language—such as “Hotels in Amsterdam with a great gym, a rooftop bar and canal views”—while A.I. scans Booking.com’s entire inventory to deliver tailored results. Under Francis’s leadership, Booking.com built its own orchestration layer, allowing teams to switch between OpenAI, Anthropic, Google and open-source models. In 2025, he expanded A.I. deployment internally, authorizing generative A.I. capabilities from Zoom, Glean and Google’s Gemini chatbot for Booking.com’s workforce. 

Given that the company operates in markets around the world with vastly different cultural preferences, languages and travel patterns, Francis stresses that a top priority is “ensuring our systems provide inclusive treatment to all, irrespective of nationality, race, gender or other sensitive information—which could lead to bias if not handled with due responsibility. We have multiple controls in place, including rigorous testing, human oversight and ongoing model improvements, to mitigate the risk of A.I. hallucinations and partiality.”

In July, Booking.com’s Global A.I. Sentiments Report, a survey of over 37,000 consumers across 33 countries, identified 36 percent of global users as A.I. Enthusiasts who believe A.I. makes life easier, saves time, enhances productivity and expands learning. Francis says “more than half of travelers” are willing to accept A.I. recommendations, representing a significant behavioral change since the AI Trip Planner’s initial launch. The platform has seen increased engagement, with users staying on the platform longer while exploring personalized itineraries and faster search times through A.I.-powered filters.

Rob Francis. Courtesy of Booking.com

74. Chetan Dube

  • CEO & Chairman, Quant

After selling his conversational A.I. company Amelia to SoundHound for $180 million in 2024, Chetan Dube launched Quant, a company focused on agentic A.I. for customer service that is reimagining labor, accountability and A.I.’s role in the economy. In less than a year, Quant has secured enterprise deals with a global airline, a major utility company and a telecommunications consortium. One of the largest utilities in the country is now resolving over 76.8 percent of all its inbound complex calls through Quant’s digital employees, Dube tells Observer.

A former NYU assistant professor and longtime A.I. entrepreneur, Dube has spent nearly three decades working to replicate human intelligence in software, collaborating with governments in the U.S., Europe and the Middle East to adopt A.I. across their tech stacks. Accruing a net worth of $2.4 billion, Dube has been named one of Forbes’ top 10 minds in A.I. He puts trust and safety at the core of his agenda, calling for a new safety governance model where A.I. systems are dissected, tested and certified before deployment. 

He also advocates taxing digital employees and redistributing the gains to human workers. Dube argues that while traditional A.I. delivered “marginal” single-digit ROI, agentic technologies can achieve returns exceeding 50 percent by focusing on action rather than information. “A.I. companies are often advancing generative technologies for information retrieval, which accounts for 19 percent of the total incoming volume, as opposed to Agentic technologies for actions that handle 81 percent of incoming tasks,” he says.

Dube tells Observer that the urgency driving his work became personal when his son asked him whether he was going to be a “robodad.” That moment crystallized his concern that “most people aren’t taking seriously” the possibility of A.I.’s more dystopian outcomes. “Humans aren’t ready for a hybrid society,” where we can’t distinguish between digital and human colleagues, which Dube believes will be here by 2030. Humanity must prepare for what he calls a “digital tsunami” that could either clean up humanity’s problems or lead to social unrest. “Man must proactively move to a higher ground of creative thinking,” Dube says. “We have a revolutionary power at hand. If we harness it gainfully, it can cure the planet of many maladies; if we don’t, it can be the final invention.”

Chetan Dube. Courtesy of Quant

75-76. Nick Lynes & Scott Mann

  • Co-CEOs & Founders, Flawless

Scott Mann and Nick Lynes are using A.I. to close the cultural gap in film distribution. Flawless, their A.I. dubbing platform, provides tools that eliminate the disconnect between on-screen actors and dubbed dialogue, allowing directors to translate movies into different languages without compromising artistic intent. 

The company’s flagship product, TrueSync, was used to release Watch the Skies, an English-dubbed version of the Swedish film UFO Sweden. It was the world’s first theatrical full-length feature to use immersive A.I. dubbing. Mann and Lynes were stunned by the audience response when it opened at the Berlin Film Festival this year. “It was shocking, even to us, how much better the film played. How much more it connected with the audience and resonated on a new emotional level,” they tell Observer.

That milestone followed a production partnership at Cannes, backed by a $100 million fund. Its second product, DeepEditor, allows studios to make post-production changes without reshoots and has been praised by SAG-AFTRA leadership for its ethics-first design.

With their rights management system, Artistic Rights Treasury (A.R.T.), Mann and Lynes set a new industry standard: A.I.-powered filmmaking that protects both creative control and global reach. Their approach follows a “trickle up” model. “When human artists benefit, the entire system benefits,” they explain. “That is how A.I. can define a healthier future for culture and business alike.”

 However, despite their optimism, Mann and Lynes warn that A.I.’s impact depends on current leadership choices. “Whether A.I. transforms humanity for good or bad will be down to the companies and leaders currently designing its agenda,” they caution. 

Scott Mann & Nick Lynes. Courtesy of Flawless

77. Hiroaki Kitano

  • President & CEO, Sony Computer Science Laboratories, Inc.

As president and CEO of Sony Computer Science Laboratories, Hiroaki Kitano oversees A.I. strategy in entertainment, robotics and consumer technology, reaching hundreds of millions of users worldwide. He also directs Sony Global Education and founded the RoboCup Federation, which advances robotics and A.I. through international competitions and social impact projects.

In June, Kitano was appointed Senior Executive Advisor for Niremia Collective, a Silicon Valley venture capital firm specializing in well-being technology. The role positions him to channel significant investment into A.I. applications for healthcare and wellness sectors. At the World Economic Forum in January, he articulated his vision that A.I.’s impact has “propelled science at unprecedented speed, which will change the form of civilization for decades to come,” emphasizing the need for cross-disciplinary A.I. development rather than siloed approaches.

Kitano frames A.I. as the fourth great civilizational shift, “the early stage” of an A.I. Industrial Revolution. “We may still be at the ‘steam engine’ phase,” Kitano tells Observer, noting that the qualitative difference between today’s revolution and those of the past lies in autonomous A.I. “This will amplify the capabilities of our civilization,” he says.

Kitano’s ultimate goal is to develop “A.I. as a scientist,” capable of making Nobel Prize-worthy discoveries, through his proposed Nobel Turing Challenge. “There are human cognitive limitations,” he tells Observer. “A range of issues hampers human scientists from understanding the mechanisms behind very large, complex, dynamic systems. A.I. for autonomous scientific discovery can overcome such limitations of human cognition to perform the kind of science a human cannot.”

Beyond corporate strategy, Kitano actively advances research equity through concrete initiatives. In February, he orchestrated the inaugural “Sony Women in Technology Award with Nature,” distributing $250,000 grants to three women researchers to accelerate their technological research. Through the RoboCup Federation and his venture capital advisory role, Kitano leverages Sony’s broad technology portfolio to position himself as a key architect at the intersection of A.I. innovation, investment strategy and inclusive research development worldwide.

Hiroaki Kitano. Courtesy of Sony Computer Science Laboratories

78. Shishir Mehrotra

  • CEO, Grammarly

Shishir Mehrotra, the former head of product and technology at YouTube, is now steering Grammarly through its next chapter as CEO. After founding collaborative workspace platform Coda, Mehrotra stepped into Grammarly’s executive role to oversee the company’s rapid expansion. Backed by a $1 billion financing round led by General Catalyst in May, Grammarly is now valued at over $10 billion.

Under his leadership, Grammarly is doubling down on enterprise offerings and broadening its A.I. portfolio. The company has rolled out a suite of A.I. agents for writing and grading assistance, along with features like a citation finder and plagiarism detector. Earlier this month, it launched multilingual writing features powered by large language models, providing support in five languages. “What’s different about this phase is that it’s not incremental improvement—it’s architectural,” Mehrotra tells Observer. “The relationship between humans and A.I. has to evolve from tool usage to team leadership. We’ll stop being individual contributors who use A.I. tools and become conductors of A.I. networks.”

“The 10x productivity gains aren’t about humans working faster; they’re about humans learning to orchestrate A.I. systems so effectively that we amplify our capabilities exponentially,” he adds. “And based on what I’m seeing in the market, that transition is happening faster than most people realize.”

Grammarly’s acquisition strategy is also accelerating. In July, it acquired Superhuman, an A.I.-driven email platform to streamline communication. Mehrotra frames the move as part of the company’s “A.I. superhighway” approach, which embraces interoperability over walled gardens. “Everyone assumes we’re David going up against Goliath, but we’re playing a completely different game,” he says. “Microsoft and Google build walled gardens; they want you living entirely in their ecosystem. We meet users everywhere they work.”

Shishir Mehrotra. Courtesy of Grammarly

79. Nitzan Mekel-Bobrov

  • Chief A.I. Officer, eBay

As eBay’s chief A.I. officer since 2021, Nitzan Mekel-Bobrov has integrated generative A.I. across a marketplace serving millions of users worldwide. He led the development of eBay’s “magical listing tool,” which lets sellers upload product images while A.I. automatically extracts data and fills in key details—a feature with a success rate exceeding 90 percent.

In May, eBay launched image-to-video capabilities to enhance the selling experience, along with its first consumer-facing shopping agent designed to help buyers discover “hidden gems.” In August, the company rolled out “Offers in Messaging,” which uses generative A.I. to pre-write responses to prospective buyers, and an automated feedback tool that leaves positive reviews on seller pages when products are delivered without issues and the buyer opts not to leave a review.

Under Mekel-Bobrov’s leadership, eBay has increased investments in GPU computing to support more advanced A.I. functionality across the platform. The company’s stock price reached an all-time high in August, prompting Fortune to ask, “Is eBay actually sexy again?”

Last week, eBay CEO Jamie Iannone told The Wall Street Journal that the company aims to become an A.I. leader, leveraging thirty years of data from billions of transactions as it invests in generative technology. In the same article, Mekel-Bobrov noted that all 11,500 eBay employees now use A.I. agents for project-based work, tracking every task and reducing meeting volume by double digits. Each employee has the resources to build personalized A.I. agents tailored to their job function and specific needs.

Nitzan Mekel-Bobrov. Courtesy of eBay

80. Dan Neely

  • Founder & CEO, Vermillio

As debates over intellectual property in A.I. intensify, Dan Neely has introduced a solution. He is the co-founder and CEO of Vermillio, an A.I. licensing and protection platform that monetizes and safeguards data. Recently, Neely made Vermillio’s flagship product, TraceID, free to users worldwide in an effort to democratize digital safety in the A.I. era. “We think this kind of protection is a human right,” Neely tells Observer. “It’s not just public figures who are at risk anymore. We’re seeing children being cyberbullied with deepfakes. We’re hearing tragic stories of elderly people falling victim to deepfake scams.”

In March, the company raised $16 million in a series A funding round led by DNS Capital and Sony Music. The deal marks Sony Music’s first investment in an A.I. licensing company. In 2024, Vermillio launched a partnership with talent agency WME (which represents Tina Fey, Serena Williams and Ben Affleck, among other celebrities). Steve Harvey also recently tapped the company to protect against unauthorized uses of his likeness that target fans via online scams.

Neely advocates for federal legislation and responsible A.I. standards because “today, there are roughly a million [pieces of deepfake content] created every minute,” he says. And he remains skeptical of tech platforms’ motives. “The major A.I. platforms are using your data for profit, they are not looking out for you,” he warns. “We’re facing an unprecedented digital safety crisis for young people. None of the platforms are focused on building guardrails to protect young people. We need to hold them accountable through significant punishments, financial and otherwise.”

Vermillio’s global reach has expanded to scour more than two trillion generative A.I. outputs to protect over 400,000 hours of premium video, 40,000 hours of music and 20,000 hours of gameplay. As deepfakes continue to increase, Neely’s work—including successfully delisting deepfake sites from search indexes and protecting individual image rights through TraceID—will become even more important.

Dan Neely. Courtesy of Vermillio

81. Andrew Ng

  • Founder, DeepLearning.AI

Andrew Ng, an A.I. pioneer and evangelist, is moving the needle by countering one of Silicon Valley’s biggest myths: that A.I. is an all-purpose magic wand. The founder of DeepLearning.AI (an ed-tech platform focused on artificial intelligence) and AI Aspire (an A.I. advisory firm), and managing general partner of the AI Fund (a venture capital firm), Ng has recently asserted that the real reason most startups struggle is poor execution, not bad code. And his thesis—that as tech is optimized, humans need to change to keep up—is reshaping how founders think about building, scaling and deploying A.I.

His eyes-wide-open, science-first outlook is tempered with optimism—at this year’s GCVI Summit, he was quick to point out that the hype surrounding A.I. shouldn’t put off investors. More importantly, he practices what he preaches. DeepLearning.AI has now equipped more than 7 million learners worldwide with the knowledge to deploy A.I. in the real world. Under his leadership, the AI Fund expanded globally in late 2024; this year, it hosted a Buildathon, during which developers shipped full apps in mere hours using A.I.-assisted coding. Recently, Bain & Company tapped Ng and AI Aspire for a partnership that will bring his expertise to clients looking for sharper artificial intelligence strategies.
What makes Ng so vital is his ability to democratize a deeply complex field. Whether teaching (on top of everything else, he’s an adjunct at Stanford), investing in underrepresented markets or informing policy, he champions an inclusive A.I.-powered future where everyday builders—not just elite labs and VC-backed unicorns—are using the technology in meaningful ways to solve real-world problems across sectors and geographies.

Andrew Ng. Getty Images

82. Sasha Luccioni

  • A.I. & Climate Lead, Hugging Face

Sasha Luccioni is reshaping how the A.I. industry confronts its environmental footprint. As A.I. & Climate Lead at Hugging Face, the machine learning scientist has emerged as a definitive voice quantifying A.I.’s energy crisis, putting sustainability at the core of system design. Luccioni’s research reveals that generative A.I. consumes 30 times more energy than traditional search, underscoring the technology’s outsized power demands. Her findings have rippled across global policy discussions, influencing how governments and corporations approach A.I. sustainability mandates.

Transparency around the environmental impacts of A.I. models has fallen as the technology’s development surges: according to a recent study co-authored by Luccioni, 84 percent of LLM use in May ran through models with no environmental disclosures. Speaking in July at Geneva’s AI for Good Summit, she urged consumers to refrain from turning to tools like ChatGPT for unnecessary tasks.

Luccioni also co-founded Climate Change AI, rallying thousands of researchers towards solutions to decarbonize A.I. Through her board role at Women in ML, she’s amplifying crucial voices often silenced in tech’s echo chambers. 

Sasha Luccioni. Observer

83. Jakub Pachocki

  • Chief Scientist, OpenAI

Promoted to chief scientist in 2024, Jakub Pachocki now leads OpenAI’s research agenda to build models capable of reasoning through complex scientific and mathematical problems. OpenAI CEO Sam Altman has described him as “one of the greatest minds of our generation.”

An eight-year veteran of the company, Pachocki helped shape many of OpenAI’s defining milestones, from leading the team whose bots defeated the world champions of Dota 2 in 2019 to training GPT-4. More recently, he has focused on advancing OpenAI’s “reasoning models,” including o1, which takes time to deliberate before producing answers to complex coding, math and scientific challenges. The results came quickly: OpenAI’s reasoning models have earned gold-medal-level performance at the International Mathematical Olympiad and outperformed nearly all human competitors in a major coding contest.

Pachocki also sees broader scientific potential in A.I. In May, he told Nature that mounting evidence suggests systems can generate novel insights. “Even this year, I expect that A.I. will, maybe not solve major science problems, but produce valuable software, almost autonomously,” he said. His remarks helped spark debate about machine learning and its implications for research and industry.

Though less publicly visible than some peers, Pachocki’s work underpins models now deployed by hundreds of millions worldwide. He’s also one of the researchers challenging the notion that A.I. is an indecipherable “black box” system. “I emphasize seeking understanding of how deep learning works,” he told TIME recently. “Despite it seeming like it’s just mathematics, it’s really a sort of natural science, where you’re trying to understand this phenomenon.”

Jakub Pachocki. Observer

84. Alex Zhavoronkov

  • Founder & CEO, Insilico Medicine

As founder and CEO of Insilico Medicine, Alex Zhavoronkov built Pharma.AI, a drug discovery engine that could cut drug development timelines from the typical two to four years to as little as 12 to 18 months. Insilico recently announced a breakthrough Parkinson’s therapy created using Pharma.AI, showcasing the platform’s ability to tackle neurodegenerative diseases.

“We saw signs of potential lung function restoration and improved Forced Vital Capacity (FVC) in patients with Idiopathic Pulmonary Fibrosis. That moment proved to me that A.I. was helping drive real clinical breakthroughs that could directly improve patients’ lives,” Zhavoronkov tells Observer. “We’re heading into an era of pharmaceutical superintelligence, where agents won’t just streamline workflows but actually make decisions and design experiments. Most people aren’t talking about it yet, but once A.I. starts managing A.I., everything changes.”

In June, Insilico raised a $123 million series E round, began clinical trials for an A.I.-designed cancer drug, announced the release on AWS of its Nach01 foundation model—an LLM that accelerates generative chemistry innovation—and signed a Memorandum of Understanding with United Arab Emirates University to foster the next generation of A.I. and biotech talent in the UAE, where Insilico has operated a generative A.I. and drug R&D center since 2023. The company also announced it is nearing completion of its first A.I.-powered robotic research lab in Suzhou, China, dubbed Life Star 2. The platform’s unprecedented scope is evident in Insilico’s recent release of over 100 million molecular structures for a single target—uncovering “novel scaffolds and patentable chemotypes at a size that traditional libraries would never reach,” Zhavoronkov says.

Though critics question whether companies like Insilico oversell their A.I. capabilities, Zhavoronkov remains uniquely positioned among A.I. biotechnology leaders and is a highly cited researcher on deep learning in biomedicine. In May, Zhavoronkov told Bloomberg that Insilico’s A.I.-generated drugs will be commercially available by 2030. The global market for A.I. pharmaceuticals is expected to grow from $2.7 billion in 2025 to $13.9 billion in 2034.

Alex Zhavoronkov. Courtesy of Insilico Medicine

85. Chris Olah

  • Co-Founder, Anthropic

Christopher Olah is at the forefront of efforts to make large language models more transparent, tackling one of A.I.’s most enduring mysteries: how neural networks actually make decisions. As a technical staff member and co-founder at Anthropic, Olah leads groundbreaking research in mechanistic interpretability, developing methods to peer inside the “black box” of A.I. systems and map their internal reasoning processes.

His work has already delivered major breakthroughs. Last May, his team identified groups of neurons tied to specific behaviors such as bias detection and spam filtering—and showed that deliberately altering these clusters could change a model’s behavior in predictable ways. This gave researchers a new lever for alignment and control. In July, his team extended its methodology to attention mechanisms, advancing the “attribution graph” technique for tracing how inputs flow through and shape outputs. This marked a shift from treating neural networks as opaque algorithms to viewing them as interpretable, steerable systems.

Olah’s work addresses one of A.I. safety’s most urgent challenges: understanding and controlling increasingly powerful systems before they grow too complex to govern. His research in mechanistic interpretability is laying the groundwork for more reliable, transparent, and safe A.I. deployment.

Chris Olah. Courtesy of Anthropic

86. Navin Chaddha

In September 2024, Navin Chaddha launched “AI Garage,” a $100 million early-stage investing initiative to back founders who build “A.I. teammate” companies at the ideation stage. “People build companies, not markets,” Chaddha tells Observer. As managing partner of Mayfield Fund, Chaddha has led the expansion of the firm’s A.I. portfolio, pouring millions into startups in enterprise software, cybersecurity and next-gen chip architectures. But the application that has surprised Chaddha the most is “A.I.-first professional services,” where small teams now deliver what once required hundreds of specialists—at significantly higher margins. Comparing today’s market to e-commerce in the 1990s, Chaddha says professional services firms must reimagine how they operate or risk becoming obsolete. Still, “the idea that A.I. replaces humans is completely wrong,” he says. “Humans must collaborate with A.I. to enhance human capabilities to superhuman levels.”

A World Economic Forum Young Global Leader and 16-time Midas Lister, Chaddha has a track record spanning over $120 billion in created enterprise value and 40,000 jobs. This October, he will speak at TechCrunch Disrupt alongside Precursor Ventures’ Charles Hudson, offering guidance on how entrepreneurs can raise their first round of funding. Chaddha’s long-game investing in infrastructure and A.I. provides directional guidance to the next wave of enterprise transformation. “In the A.I. sector, the technical risks are in semiconductor and foundation model companies,” Chaddha tells Observer. “Everywhere else, the risk is in execution.”
 
Despite his optimism about A.I.’s potential, Chaddha warns of challenges ahead. “An energy crisis is coming. The enormous power needs of A.I. data centers will put a massive load on our power grid, driving greenhouse gas emissions and placing huge demands on water resources,” he cautions. 

Navin Chaddha. Courtesy of Mayfield Fund

88. Abdullah bin Sharaf Alghamdi

  • Founder & President, Saudi Data & A.I. Authority (SDAIA)

Professor Abdullah bin Sharaf Alghamdi has emerged as a key architect of Saudi Arabia’s A.I. transformation under the kingdom’s Vision 2030. As the founding leader of SDAIA and chair of the Saudi Federation for Cybersecurity, Programming and Drones (SAFCSP), Alghamdi has positioned the Kingdom at the forefront of national-scale A.I. governance and digital strategy.

Since launching SDAIA in 2019, Alghamdi has steered major initiatives in data infrastructure, A.I. ethics and digital economy development, ensuring Saudi Arabia is not only an adopter of A.I. but also a global player in shaping its governance. Under his leadership, SDAIA has struck strategic partnerships with international tech firms and governments, most recently signing a memorandum of understanding with AMD at the Saudi-U.S. Investment Forum in June 2025 to bolster data center development. Earlier agreements with U.S. counterparts have also underscored his role as a bridge between Saudi ambitions and global A.I. ecosystems.

Alghamdi’s influence lies in institution-building. With SDAIA and SAFCSP, he established the infrastructure and talent pipelines underpinning the Kingdom’s A.I. strategy. His authority spans multiple sectors, from digital economy growth to cybersecurity, and his initiatives align with global debates on ethical A.I., inclusion and governance. As A.I.’s role in the global economy accelerates, Alghamdi’s leadership ensures Saudi Arabia continues to emerge as a significant contributor to the worldwide A.I. agenda.

Abdullah bin Sharaf Alghamdi. Getty Images

89. Sheikh Tahnoon bin Zayed Al Nahyan

  • National Security Advisor, UAE & Chair, MGX

As the UAE’s National Security Advisor and brother of the country’s president, Tahnoon bin Zayed Al Nahyan has become one of the most influential figures in the global A.I. race. He chairs MGX, an A.I. investment fund with $50 billion in assets from the UAE’s wealthiest investors. He oversees most of Abu Dhabi’s $1.5 trillion sovereign wealth fund, which he intends to leverage to cement the UAE’s position as a hub for innovation.

His influence extends beyond investment into the realm of geopolitical dealmaking. In May, Al Nahyan’s investment firm committed $2 billion to World Liberty Financial, a cryptocurrency company tied to the Trump Administration. Two weeks later, Trump approved giving the UAE access to hundreds of thousands of A.I. chips, many destined for Group 42 (G42), the tech holding company that Al Nahyan chairs and controls. The timing of these deals raised questions about the intersection of his business interests and diplomatic relationships, particularly given his March White House banquet dinner with President Trump.

In August, MGX was reportedly weighing a $25 billion A.I. fundraise and exploring a $1 billion investment in French startup Mistral AI. G42, meanwhile, announced a $1.5 billion A.I. partnership with Microsoft last year. Al Nahyan’s investment portfolio stretches across the global frontier, from OpenAI to Stargate, signaling his appetite for projects with far-reaching impact.

Al Nahyan’s aggressive funding style has reshaped sovereign wealth investing by explicitly linking financial power to geopolitical objectives. By pairing deep pockets with strategic diplomacy—and in some cases, personal relationships with key officials—he has positioned the UAE as a player that global tech leaders and governments can’t afford to ignore, even when national security concerns arise.

Sheikh Tahnoon bin Zayed Al Nahyan. Getty Images

90. Robert Opp

  • Chief Digital Officer, United Nations Development Programme (UNDP)

As Chief Digital Officer of the United Nations Development Programme, Robert Opp shapes how A.I. and data drive economic growth across more than 170 countries. From digital public goods to A.I.-powered tools, he focuses on making emerging technologies accessible and inclusive, but he challenges the assumption that A.I. delivers benefits equally everywhere.

“In reality, the benefits have been distributed unequally across and within countries,” Opp tells Observer, noting that A.I. amplifies exclusion when data sets and solutions don’t reflect local realities, languages or cultural context. “What’s missing in the conversation is how to localize A.I.,” he explains. “Without diverse and inclusive datasets, A.I. will continue to misrepresent and even marginalize entire populations. This issue doesn’t make headlines as much as job displacement or safety risks, but it is fundamental to whether A.I. can actually serve everyone.” 

Opp prioritizes building foundations—digital IDs, payment systems and data exchanges—before layering A.I. on top. He has become a leading voice in international development circles, advocating for A.I. equity as a driver of private-sector growth and the UN’s Sustainable Development Goals. In commentary on South Africa’s A.I. strategy, he underscored the chance for developing nations to “prioritize equity, inclusion and rights from the start, rather than retrofitting protections later.”

A turning point in his thinking came with an MIT report in August showing 95 percent of companies had seen “zero ROI” from generative A.I. “That felt like a real moment,” he says, “a signal we might finally be moving past the hype cycle.” The finding reinforced his focus on rigorous evaluation of A.I. investments, especially in the public sector.

A.I.’s “dizzying gallop” was also a theme of the 2025 UNDP Human Development Report, published in May. Under Opp’s leadership, UNDP is piloting A.I. for agriculture (real-time crop feedback), health (maternal care access through language models) and education (personalized, inclusive learning). “These are areas where A.I. directly improves lives,” Opp says. “But only if countries have the infrastructure, data and governance to make it work.” He credits UNDP’s “people-first approach” and initiatives like the AI Trust and Safety Re-imagination Programme with building data governance and privacy protections for vulnerable populations.

Before joining UNDP, Opp was Director of Innovation at the World Food Programme, where he launched ShareTheMeal, an app that raised $40 million to fight hunger. The experience taught him that “digital platforms can radically reduce the friction of engagement. When people can act instantly from their phones, they are more willing to participate, even in small ways.” Today, Opp brings that philosophy to embedding ethical A.I. design at the heart of humanitarian work.

Robert Opp. Courtesy of UNDP

91. Amandeep Singh Gill

  • Under-Secretary-General and Special Envoy for Digital and Emerging Technologies, United Nations

As A.I. seeps into international relations, Amandeep Singh Gill fosters the conversation. Through his role as the United Nations Under-Secretary-General and Special Envoy for Digital and Emerging Technologies, Gill is actively shaping global A.I. governance. He focuses on building national capacity to create and deploy A.I. responsibly through investments in compute, cross-domain talent and context-rich datasets. This year alone, Gill has visited India to discuss technology governance, spoken at Politico’s A.I. and Tech Summit about the risks of autonomous weapons, discussed A.I. research with officials in Vietnam and met with counterparts in China to promote A.I. cooperation. His diplomatic work is driven by concern about A.I.’s concentration of power. “I worry that the growing gulf between the architects of cognition and those who inhabit architected cognitive spaces will undermine human agency and freedom,” Gill tells Observer.

“Intelligence too cheap to count” reminds him of the “electricity too cheap to meter” promise from the nuclear hype era, he says, warning against assumptions that A.I. will usher in abundance without limits. However, Gill sees reason for optimism in recent technical developments—the simultaneous drop in training costs for smaller LLMs and scaling laws reaching a plateau—which he believes could enable “a more democratic, less concentrated, more sustainable A.I. innovation ecosystem.”

Gill addresses the collective action problem of A.I. governance through “multidimensional differentiated agendas” that balance different countries’ priorities—from innovation promotion for developed nations to capacity building for developing countries—while relying on agreed norms like human rights to harmonize perspectives. He orchestrates digital cooperation across the UN’s 193 member states by leading a 39-person A.I. advisory body, advocating that “life and death decisions cannot be delegated to machines” and pushing UN member states to establish explicit prohibitions and regulations on lethal autonomous weapons systems by 2026. Gill rejects the standard framing of innovation versus safety. “The innovation-safety dilemma is false. Innovation has always proceeded best within guard rails,” he says.

Amandeep Singh Gill. Courtesy United Nations

92. Anusha Dandapani

  • Chief Data & Analytics Officer, United Nations International Computing Center (UNICC)

As head of the United Nations International Computing Center’s new A.I. Hub, Anusha Dandapani bridges the gap between the public, academic and private sectors. An adjunct professor at New York University and Fordham University, Dandapani coordinates across the United Nations to help build cohesive strategies for responsible A.I. governance, development and deployment.

“We’re in a ‘move fast and break things’ moment, but the guardrails for responsible use of A.I. are not yet fully in place,” Dandapani tells Observer. “How do we encourage rapid innovation while ensuring protections that serve humanity at large? It’s a bit like when cars were first introduced: we put them on the road long before we had seatbelts and airbags. We can’t afford to repeat that mistake with A.I.”

Working within UN mandates, Dandapani has collaborated on tangible achievements: Last year, UNICC partnered with Columbia University to support the creation of the Gender Bias Overview Tool (GenBOT), which uses A.I. and machine learning to “identify, measure and address gender biases in datasets and technological solutions.” A few months later, Dandapani spearheaded UNICC’s collaboration with NYU to develop the A.I.-Driven Media Analysis Tool, which identifies “xenophobic language, misinformation and harmful narratives in media reporting” on migration and displacement issues.

Dandapani introduced the UNICC A.I. Hub in July, at a launch event coinciding with the AI for Good Summit in Geneva. UNICC’s A.I. Hub operates under the organization’s strategic framework through 2030, offering consultancy, shared and custom solutions and training programs for UN teams through its A.I. Academy. The A.I. Hub’s global influence is poised to help UN member states effectively leverage A.I. as it evolves.

Anusha Dandapani. Courtesy of UNICC

93. Terence Tao

  • Professor of Mathematics, UCLA

Terence Tao, widely regarded as “the Mozart of Math” and the world’s greatest living mathematician, occupies a singular position in the A.I. landscape. His extraordinary ability to process information, gain insights and draw connections between unrelated topics—qualities that made him legendary at UCLA—now positions him as a critical force in A.I.’s mathematical foundations. As an advisor to XTX’s $9.2 million fund supporting researchers using A.I. to advance mathematical capabilities, Tao is actively shaping how A.I. can be applied to push the boundaries of mathematical discovery. His recent high-profile feature in The Atlantic is only the latest coverage to cement his standing as a leading authority on A.I.’s potential and limitations in mathematical reasoning.

What makes Tao particularly compelling is his nuanced understanding of A.I.’s constraints. His observation that A.I. still lacks a “sense of smell”—the human intuition that alerts a mathematician when something doesn’t add up—is one of the most insightful critiques of current A.I. limitations. His is the perspective of someone who understands both the profound potential and the inherent boundaries of A.I. in mathematical contexts. He emphasizes that human judgment remains crucial in mathematics, providing a counterbalance to A.I. hype even as he works to expand A.I.’s mathematical capabilities.

The reverence expressed by the mathematics community is evident in the hundreds of Reddit comments praising his breadth of research contributions and his ability to discover and disseminate insights. The response to a single question posted in the r/math subreddit, “How extraordinary is Terence Tao?”, underscores why his voice carries such weight in discussions about A.I.’s role in advancing human knowledge. As one Reddit user explained: “When I was an undergrad at UCLA, one thing that I consistently heard about him from professors, grad students and undergrads who worked with/studied under him was his insane ability to process information, gain insights and draw connections between unrelated topics. A few professors said they could just talk about their (unrelated) research with him and he could get up to speed on it lightning fast.”

Tao’s recent political engagement adds another dimension to his influence. Previously avoiding politics, he’s become outspoken about funding cuts to scientific research, particularly after President Trump froze UCLA funding from the National Science Foundation and National Institutes of Health. This transformation from apolitical academic to active advocate comes at a crucial time when A.I. research funding faces uncertainty. His unique combination of unassailable mathematical credentials and growing political voice makes him potentially influential in shaping A.I. research policy and funding priorities over the coming year, especially as debates about A.I. regulation and research investment intensify globally. 

“The type of work I cherish the most is the type where, at the end of the project, not only have I understood some phenomenon or subject better, but can also present it in such a way that others also gain the same insight,” Tao says in 2006’s Insights From SMPY’s Greatest Former Child Prodigies: Drs. Terence (“Terry”) Tao and Lenhard (“Lenny”) Ng Reflect on Their Talent Development. “I find this type of progress—the discovery and dissemination of insights—more satisfying, in fact, than solving a previously unsolved problem, though I find the two are often related. One usually does need to discover a new insight, or to understand an existing insight more fully, in order to make progress on a problem. This type of work isn’t always a research paper; there are also some lecture notes for my graduate and undergraduate classes, for instance, that I am quite proud of, explaining quite standard material but with a spin on it, which gives it more meaning and relevance to the reader.”

Terence Tao. Getty Images

94. Rohit Prasad

  • SVP and Head Scientist of AI, Amazon

Rohit Prasad drives Amazon’s dual push into consumer voice technology and artificial general intelligence. As SVP and Head Scientist of AI, he oversees the company’s generative A.I. portfolio, including Amazon Nova, the foundation model designed for frontier intelligence with industry-leading price performance. In April, Amazon introduced Nova Sonic, a real-time, emotion-aware speech model that interprets tone and intent for more natural interactions. Prasad calls it a “huge step” toward AGI, advancing models that grasp context and complexity. Amazon says Nova Sonic is nearly 80 percent faster and cheaper than rival voice systems like OpenAI’s GPT-4o. Early adopters include ASAPP, Education First and Stats Perform, which deploy it across customer support, language learning and sports analytics.

Prasad’s shift into AGI leadership in 2023 built on his years as head scientist for Alexa, where he led multidisciplinary teams advancing far-field speech recognition, natural language understanding and the machine learning breakthroughs that made Echo’s hands-free interaction mainstream. Under his leadership, Alexa became a global standard for ambient intelligence—A.I. that blends seamlessly into environments and adapts across devices. Today, those two worlds converge. Amazon Nova foundation models now power Alexa+, handling over 70 percent of responses, while Nova Act drives Alexa’s Web Action SDK for voice-controlled web interactions. This integration ties Prasad’s track record in conversational A.I. directly to Amazon’s broader AGI ambitions.

Beyond voice tech, Prasad is advancing safety through initiatives like the Nova AI Challenge, which used penetration testing to stress-test model trustworthiness in code generation. He is a named author on more than 100 scientific papers, holds multiple patents, and serves on the board of the Partnership on AI, a nonprofit focused on ensuring artificial intelligence advances human-centered outcomes.

Rohit Prasad. Courtesy of Amazon

95. Palmer Luckey

  • Founder, Anduril Industries

With his company, Anduril Industries, Palmer Luckey—who became a billionaire by selling his company Oculus VR to Facebook (now Meta)—is poised to change the way the world makes war using A.I. Though ostensibly a hardware company, Anduril has built its success around Lattice AI, an advanced, scalable open software platform that powers a suite of autonomous and semi-autonomous defense solutions. Luckey has secured multimillion-dollar contracts with several government entities, including deals with the U.S. Air Force and the DoD’s Defense Innovation Unit in 2024. Early this month, the company announced a $159 million military contract to develop an A.I.-powered, helmet-mounted mixed reality headset that will “equip every soldier with superhuman perception.” Among the international buyers of Anduril Industries products are Ukraine (autonomous weaponry) and Australia (Ghost Shark autonomous subs).

By August of last year, Anduril reached a $14 billion valuation after a Series F funding round that raised $1.5 billion from investors like Peter Thiel’s Founders Fund and Sands Capital. This summer, the company raised $2.5 billion at a $30.5 billion valuation in a funding round led by Founders Fund, which contributed $1 billion. Construction is already underway for a $1 billion manufacturing facility in Ohio to produce A.I.-powered drones and other aerial weapons systems.

Luckey, a vocal supporter of Donald Trump, is a firm believer in the idea that power promotes peace. “My position has been that the United States needs to arm our allies and partners around the world so that they can be prickly porcupines that nobody wants to step on, nobody wants to bite them,” he said in a 60 Minutes interview, adding that he’s “a lot more worried about evil people with mediocre advances in technology than A.I. deciding that it’s gonna wipe us all out.” 

Palmer Luckey. Courtesy of Anduril Industries

96. David Sacks

  • Founder & Partner, Craft Ventures

A former PayPal executive and Trump-appointed tech advisor, David Sacks channels his influence as founder and partner of Craft Ventures, an early-stage venture capital firm. Named the White House’s A.I. and crypto czar in 2025, Sacks received an ethics waiver allowing him to participate in policy decisions despite financial holdings and business relationships that have drawn scrutiny.

His role became controversial when Sacks advocated for a UAE chip deal that would grant the Emirates access to hundreds of thousands of advanced A.I. chips, despite administration colleagues expressing concerns about his longstanding business relationships with the Gulf and past investments in the A.I. industry. The timing raised questions because the Abu Dhabi Investment Authority, which oversees $1.5 trillion in sovereign wealth under Sheikh Tahnoon bin Zayed Al Nahyan, was an early investor in Craft Ventures. Just weeks before the chip deal approval, Al Nahyan’s firm invested $2 billion in World Liberty Financial, a Trump-linked cryptocurrency company.

Beyond his government role, Sacks continues his venture capital work through Craft Ventures. In 2025, he led major investments worth tens of millions into startups, including Vultron, Norm AI and Horizon—companies developing A.I. agents and safety infrastructure. His investment portfolio spans various A.I. applications, from automation tools to safety-focused technologies.

Sacks has also become an outspoken voice in A.I. policy debates, often framing issues through a competitive lens with China. In mid-2025, he warned that overreaching chip export restrictions could undermine U.S. competitiveness as Chinese models close the performance gap, arguing, “Do we want these countries to be piggy banks for American A.I. or for Chinese A.I.?” He has tempered fears of A.I.-driven job loss, instead advocating for attention to digital-era mental health, calling it a “tech psychosis” that demands structural reform.

David Sacks. Getty Images

97. Chandra Donelson

  • Director of Data, AI & Software, United States Space Force

Chandra Donelson is building the systems that shape how the United States Space Force (USSF) uses intelligence, interoperability, and automation at scale. Under her leadership, the Space Force released the 2025 Data & AI Strategic Plan in March, outlining how to integrate advanced innovation into operations to prevent conflict. Four months later, she helped launch TALOS, an A.I. agent developed with Slingshot Aerospace to provide real-time mission support. Donelson also spearheaded the Space Force Generative AI Challenge in 2024 and the Space Force AI Challenge in 2025.

In April, the USSF awarded Slingshot Aerospace a $4.5 million contract to apply A.I. to satellite tracking, space traffic coordination and modeling. Just last week, the Space Force announced the development of Cyber Resilience On-Orbit, an A.I.-powered tool designed to detect cyberattacks on satellites.

98. Fidji Simo

  • Applications CEO, OpenAI

Fidji Simo has outlined an ambitious vision for A.I. as a democratizing force. In July, ahead of beginning her new role at OpenAI, Simo distributed what Wired called a “hyper-optimistic” welcome memo to OpenAI staff, describing how ChatGPT could serve as a personal coach, tutor and emotional companion available to everyone—drawing from her own experience with a business coach she calls “transformative” but acknowledges was “a privilege reserved for a few.” 

Simo positions A.I. as capable of compressing “thousands of hours of learning into personalized insights” and helping users “develop confidence in areas that once felt opaque.” Her practical mandate involves translating OpenAI’s research into viable commercial products and securing the business partnerships to fund such expansive promises.

Simo formerly served as the CEO of Instacart. She’s credited with taking the company public and launching initiatives like the A.I.-powered Caper smart cart and a healthcare research institute focused on food access and insecurity. She joined OpenAI’s board in 2024. Now, as an official member of the executive team, she is tasked with making the company profitable as it gears up for its own potential IPO.

Since August, Simo has reported directly to Sam Altman, who calls her “uniquely qualified” to lead a group of business and operational teams “responsible for how our research reaches and benefits the world.” Earlier this month, Simo published a blog post on expanding economic opportunity with A.I. as the company announced its job matching platform and certification program under the banner of democratizing economic opportunities. While OpenAI executives acknowledge that A.I. is reshaping the job market and creating disruption, the company positions its new commercial platform as a pathway for workers to adapt to the changing landscape that A.I. technologies have helped accelerate.

Fidji Simo. Getty Images

99. Mark Minevich

  • Founder, Going Global Ventures

Mark Minevich is a strategic connector bridging governments, corporations and international organizations in A.I. policy and investment. He is one of the few individuals simultaneously shaping A.I. investment flows, government policies and corporate strategies across multiple continents. “We are naive to even think that A.I. can ever be ‘neutral,’” Minevich tells Observer. “It simply can’t. Every model carries the fingerprints of human bias baked into the data and algorithms. The real task isn’t chasing some fantasy of objectivity but building systems that are transparent, accountable and constantly monitored by humans.”

As founding partner and chairman of Going Global Ventures and strategic partner at Mayfield Fund, Minevich directs capital across the U.S., E.U., Gulf Countries, South America and Japan while serving on corporate advisory boards including Franklin Templeton AI. His portfolio includes successful exits like Infosec Global (acquired by Keyfactor in May 2025) and DarwinAI (acquired by Apple in March 2024). Through IDCA, Minevich co-authored the Global AI Infrastructure Report 2025, which serves as official input to G7 and UN discussions. Minevich launched and leads the AI150 and holds senior fellow positions with the Council on Competitiveness in Washington, DC, and the Global Federation of Competitiveness Councils. Minevich is an executive advisor to Hitachi Japan, Hitachi Vantara, Aramco and the Saudi AI Data Authority in Saudi Arabia. His shareholder position and private investments encompass board and advisory roles at DevRev (backed by Khosla and Mayfield), NukkAI, 1FS Wealth, Corent Technology, Inc. and Quant.AI. 

Mark Minevich. Courtesy of Mark Minevich

100. Sam Hamilton

  • Sustainable & Responsible A.I.

As head of data and A.I. for Visa, a role he stepped down from on September 12, Sam Hamilton was a key architect of the company’s global A.I. strategy, ensuring the technology delivered measurable business impact while reshaping how billions of people engage with digital payments. During his tenure, Hamilton oversaw more than $10 billion in investments across fraud prevention and advanced analytics. These efforts not only reinforced Visa’s leadership in secure payments but also produced tangible results, including blocking an estimated $40 billion in fraudulent activity in 2023.

“Most people focus on model performance or bias, but few talk about how dependent we are on opaque data pipelines, proprietary training sets and third-party infrastructure,” Hamilton tells Observer. “If a foundational model is trained on flawed or manipulated data, or its lineage is unclear, it can introduce systemic risks across industries—especially in finance.”

In March 2024, Hamilton spearheaded the launch of Visa Protect, a suite of A.I.-powered tools designed to stop fraud in real time, with a particular focus on card-not-present and digital transactions. The rollout reflected his vision for A.I. at scale: seamlessly embedded into financial infrastructure to enhance trust, efficiency and security. This past April, Visa debuted Visa Intelligent Commerce, a partner program with platforms including Anthropic, Mistral, OpenAI and Perplexity, allowing developers to leverage Visa’s A.I. commerce capabilities worldwide.

Hamilton’s next venture, announced soon, will focus on sustainable, responsible and ROI-backed A.I. solutions for individuals, enterprises and communities. Looking back, he points to the release of multimodal capabilities as a turning point. “When LLMs began interpreting images alongside text, it marked a major leap,” he says. “Suddenly, A.I. could understand visual context, diagrams, screenshots and even handwriting. This unlocked new use cases in education, accessibility, design and diagnostics—blurring the lines between human and machine perception.”

Reflecting on his time at Visa, Hamilton lauds the company’s commitment to being “accountable stewards of data,” citing the principles and safeguards that guided its A.I. systems. “Visa has been using A.I. for more than 30 years,” he says. “Visa invests in and employs A.I. to drive innovation and support its mission to uplift everyone, everywhere by being the best way to pay and be paid.”

Sam Hamilton. Courtesy of Visa

Project Credits

Lead Editor: Merin Curotto
Editors: Sissi Cao, Alexandra Tremayne-Pengelly & Christa Terry
Reporting: Rachel Curry, Refugio Garcia & Aaron Mok
Research: Reese Hanna, Sandra Lope-Bello, Kat Wifvat, PhD, Michael Kim, Reema Alotaibi, Irza Waraich & Maya Stehr
Art & Audience: Sonia Rubeck
