Quant’s Chetan Dube Prepares for the Incoming Digital Tsunami

In this Q&A, Quant CEO Chetan Dube reflects on the seismic shifts A.I. is bringing to work, society and the global economy. He discusses how agentic A.I.'s rise could reshape how we define employment, the role of digital twins in a hybrid human-machine future and the urgent need for new policies like machine taxation and universal basic income to ensure technology serves humanity rather than displaces it.

Sep 18, 2025 - 12:00

Chetan Dube, founder and CEO of Quant, featured in the A.I. Power Index.

Chetan Dube, featured on this year’s A.I. Power Index, has built a $2.4 billion net worth across three A.I. companies over 28 years, culminating in his latest venture, Quant, which he launched in September 2024 immediately after selling his previous company, Amelia, to SoundHound for $180 million. As founder and CEO of Quant, Dube is pursuing what he calls his “lifetime persuasion”—a quest that began in 1998 when he told his doctoral advisor at NYU’s Courant Institute that he could approach the Turing horizon within a couple of summers. Twenty-five years later, advances in neural networks and datasets have finally made his vision of creating “digital employees indistinguishable from humans” possible. Quant’s agentic technologies are already delivering dramatic results, with one major utility company resolving over 76.8 percent of complex customer service calls through Quant’s digital employees, achieving the kind of operational efficiency gains—often exceeding 60 percent—that Dube believes will create an existential crisis for companies that fail to adapt. His work extends beyond business applications to broader societal implications, as he advocates for taxing corporations on the revenue driven by digital workers and redistributing those proceeds to humans as a form of universal basic income. Dube warns that humanity faces a “digital tsunami” by 2030, where distinguishing between digital and human colleagues will become impossible, requiring proactive preparation for what he envisions as a hybrid workforce where humans manage teams of A.I. agents rather than compete with them for traditional jobs.

What’s one assumption about A.I. that you think is dead wrong?

That it is completely utopian or that it is completely dystopian. The future is ours to decide. We have a revolutionary power at hand. If we harness it gainfully, it can cure the planet of many maladies; if we don’t, it can be the final invention.

If you had to pick one moment in the last year when you thought “Oh shit, this changes everything” about A.I., what was it?

When data sets finally reached a size at which convolutional neural networks started producing answer recall that approached human levels of accuracy. Having chased the Turing edict (where machines become indistinguishable from humans) for over two decades, we knew that the thus-far elusive horizon was near.

What’s something about A.I. development that keeps you up at night that most people aren’t talking about?

Humans aren’t ready for a hybrid society. We aren’t ready to pass someone in a hallway and not know whether they are digital or human. We aren’t ready to have digital companions or digital bosses. Yet all of that is going to happen by 2030. The fact is that there is a digital tsunami moving towards us at record speed. We cannot sit on the beaches, thinking everything will be fine, when a 100-foot wall of water is moving towards us. Man must proactively move to the higher ground of creative thinking. Companies must prepare human workers to leverage their digital companions for everyday work and move up the value chain. Governments must develop techniques for redistributing the wealth generated through digital means to provide humans with a universal basic income. This tsunami can be beautiful if we proactively start moving up, as it can clean up all of humanity’s crud and maladies. If we don’t, it can lead to social unrest as humans scramble to redefine themselves in a digital world.

Once, when I was walking with my son to collect the newspaper at the end of the driveway, he asked, “Dad, are you going to be a robodad?” The fact that the Terminator vision of A.I. development is a possibility, however distant, is a frightening thought that most people aren’t taking seriously enough.

You sold Amelia to SoundHound for $180 million in August 2024 and immediately launched Quant in September. What convinced you to start over rather than stay with the acquisition, and how is Quant different from what you built before?

The reason for that relates to my lifetime persuasions. While at Courant Institute of Mathematical Sciences in 1998, I proclaimed to my doctoral advisor that, given a couple of summers, we should be able to extend our work on deterministic finite state machines and approach the Turing horizon. My professor cautioned me, saying that even the father of A.I., John McCarthy, had given up on the problem, stating that it turned out to be tougher than anticipated. I was young and naïve, and set sail.

It so happens that my advisor was right. It wasn’t a couple of summers. Twenty-five summers had passed chasing this ‘end of the rainbow.’ But finally, in 2023, data sets and neural networks had reached a point where agentic technologies made my research pursuits—of developing digital beings indistinguishable from humans—a realistic possibility. It was time for me to embark, with Quant, on creating digital employees that will define a more productive hybrid workforce. Within the first year, several major corporations are making that a reality. For instance, one of the largest utilities in the country is resolving over 76.8 percent of all its inbound complex calls about billing, disruptions and originations through Quant digital employees. It is exciting to see us getting there.

You’ve built a $2.4 billion net worth across three A.I. companies over 28 years. What pattern do you see in corporate A.I. adoption that others miss, and which industries are moving too slowly?

The biggest challenge in the adoption of A.I. has been efficacy. The ROI on A.I. has thus far been marginal, on average in the single digits. Only now, with the adoption of Agentics, can companies achieve ROI in excess of 50 percent. Corporations are jumping in headlong. Companies that don’t achieve 60 percent gains in operational efficiency will face an existential crisis. Even the typically slower adopters in industrials and utilities have built bold plans for Agentic adoption. The problem is that the signal-to-noise ratio is weak. A.I. companies are often advancing generative technologies (for information retrieval, which accounts for 19 percent of total incoming volume) rather than Agentic technologies (for actions, which handle the other 81 percent of incoming tasks). Organizations must cross the subtle yet profound chasm from information (19 percent) to action (81 percent) to realize the promise of steep A.I. returns.

Empirical data from production-grade deployments is now clearing the air. Even typically slower-moving verticals like utilities and industrials have leapfrogged ahead in the adoption of Agentic technologies.

You propose taxing digital employees for their productivity, with the proceeds redistributed to human workers. How do you implement that practically when most companies see A.I. as a cost-cutting tool? How would the funds be distributed?

Great question. Real agentic technologies can potentially eliminate over 60 percent of service costs. These digital workers will work 24/7 and boost global productivity by as much as 40 percent. McKinsey estimates that they’ll generate over $4.4 trillion of value. Corporations can keep some proportion of that, say half, at 30 percent; the other 30 percent should be redistributed to humans to provide a minimum standard of living for all. Machines have to work for humans. They have to be subservient, and we have to enforce that algorithmically. The whole purpose of my research has been to apply A.I. for the betterment of human society. If that is not realized, then what was the point?

In 2014, I was privileged to address some members of the French senate, when one of their scholars challenged me: isn’t the whole purpose of A.I. that man may spend more time at the beach? I strongly believe that A.I. should exist to serve humanity. It should be subservient to humans. If A.I. is left to run amok, we face the challenge of rogue A.I. without the ethics to differentiate between right and wrong. Rather than producing baroque, copious literature on A.I. governance that no one reads (let alone complies with), we need dynamic compliance laws, like HIPAA for A.I.

For any A.I. system to achieve widespread adoption, we need to ensure it passes A.I. compliance tests that validate that the system has been intrinsically taught the ethics of doing the right thing and subservience to the humanity it serves. Once these robosystems are certified as “good citizens” of tomorrow, we should make them pay taxes like the rest of us. Based on the work they do and, commensurately, the “wages” they earn (we can easily assess how many full-time-equivalent humans’ worth of work a robot is doing), we could tax them 30 percent.

This 30 percent should go into a pool and be distributed equally among the humans whose household income is below a certain threshold. It will ensure that no child goes hungry and no mother has to worry about providing for their kids.
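As a concrete illustration of that redistribution mechanism, here is a minimal sketch. The income threshold, household figures and function name are assumptions made for the example, not part of the proposal itself:

```python
# A minimal sketch of the redistribution pool described above, under
# assumed figures. The threshold and household data are hypothetical.

INCOME_THRESHOLD = 30_000.00  # assumed qualifying household income cutoff

def distribute_pool(pool: float, household_incomes: list[float]) -> float:
    """Split the machine-tax pool equally among households below the threshold."""
    qualifying = [income for income in household_incomes if income < INCOME_THRESHOLD]
    return pool / len(qualifying) if qualifying else 0.0

# Three of the four households qualify, so each receives a third of the pool.
print(distribute_pool(165_000.00, [18_000, 25_000, 29_000, 80_000]))  # 55000.0
```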

How complicated will it be for companies to calculate what portion of their revenue A.I. twins generate? 

Most mature organizations maintain good indices of productivity. We can tell that a human does x tickets a day or handles y tasks a day. Based on that, we can assess how many robots are needed to displace that work. Once a digital twin has been “hired” for the job, we should tax it 30 percent of the “wages earned” (what a human would have earned for doing the same task).
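To make that arithmetic concrete, a back-of-the-envelope sketch follows. The productivity index, wage figure and function name are illustrative assumptions, not Quant's actual methodology:

```python
# A hypothetical sketch of the machine-wage levy described above.
# The productivity index and wage figures are assumed for illustration.

HUMAN_TICKETS_PER_DAY = 40      # productivity index: tickets one human handles
HUMAN_ANNUAL_WAGE = 55_000.00   # what a human would earn for the same role
LEVY_RATE = 0.30                # the proposed 30 percent machine tax

def machine_levy(twin_tickets_per_day: float) -> float:
    """Tax a digital twin 30 percent of the human-equivalent wages it earns."""
    fte_equivalent = twin_tickets_per_day / HUMAN_TICKETS_PER_DAY
    imputed_wages = fte_equivalent * HUMAN_ANNUAL_WAGE
    return imputed_wages * LEVY_RATE

# A twin resolving 400 tickets a day does the work of ten humans, so it
# owes 30 percent of ten human salaries per year.
print(f"${machine_levy(400):,.0f}")  # $165,000
```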

The most basic, consumer-friendly A.I.s today (think ChatGPT and Claude) require a surprisingly irritating amount of oversight and quality checks. They often reorganize or misinterpret information. In certain use cases, it takes more time to fact-check generative A.I. results than it would take to tackle things yourself. How will employers know what tasks can reasonably be handed off to a digital twin with minimum oversight or quality-control checks?

Terrific insight. Current large language models need oversight. There is inherent non-determinism in probabilistic convolutional networks, which recall token by token based on a confidence variable, often called temperature. Even if you set the temperature to zero (which in theory forces the model to be deterministic), there is an inherent tendency to drift due to floating-point non-associativity and concurrent thread execution. Floating-point non-associativity means that, in the LLM world, (a+b)+c is not necessarily equal to a+(b+c). Concurrent thread execution refers to the fact that, depending on which concurrent core finishes first, you will get different results. That is why you feel LLMs need oversight. Today.
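The non-associativity point is easy to see in a few lines of Python; the values here are chosen purely for demonstration:

```python
# Floating-point non-associativity in action: (a+b)+c != a+(b+c).
# Any sum mixing large and small magnitudes can reproduce the effect.
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c   # 0.0 + 1.0 -> 1.0
right = a + (b + c)  # c is absorbed by b's huge magnitude -> 0.0

print(left, right, left == right)  # 1.0 0.0 False
```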

When A.I. makes mistakes, such interruptions tend to undermine humans’ trust in digital twins to manage tasks competently, which may be a friction point, as humans may be reluctant to hand things off to their A.I. twins. How do you think about this? 

In bygone days, we used to say: to err is human, but to really screw up takes a machine. That fear is predominantly a limiting factor today in the explosive adoption of agentic technologies, as opposed to generative technologies. If generative technologies get something wrong, you end up with a wrong FAQ or piece of information in your hand; if agentic technologies get something wrong, your trade can go berserk. Progressively, the systems are coming up with ways to mitigate hallucinations, so the variance will become less and less. An important approach in this regard is to layer cognitive determinism on top of these inherently probabilistic neural networks so that Merin’s [Observer’s head of audience and content strategy] digital twins can do their jobs without the systems straying from the ordained execution path.
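One way to picture that layering is a deterministic gate that validates every action a probabilistic planner proposes before it executes. The sketch below is one reading of the idea; the planner stub and action names are hypothetical, not Quant's architecture:

```python
# A minimal sketch of a deterministic control gate layered over a
# probabilistic planner. Planner stub and action names are hypothetical.

ORDAINED_PATH = {"verify_account", "check_balance", "issue_refund"}

def probabilistic_planner(ticket: str) -> str:
    """Stand-in for an LLM call that proposes the next action for a ticket."""
    return "issue_refund"  # in practice, a non-deterministic model output

def execute(ticket: str) -> str:
    action = probabilistic_planner(ticket)
    # Deterministic layer: reject any action outside the ordained path.
    if action not in ORDAINED_PATH:
        raise ValueError(f"Action {action!r} strays from the ordained execution path")
    return action

print(execute("Customer double-billed, requests refund"))  # issue_refund
```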

The rise of digital twins will create a need for user-friendly platforms that enable humans to manage what’s been handed off. There are enterprise systems for this already in the market. Do you foresee a market for consumer-friendly applications to create and manage digital twins? 

Advocacy for human-in-the-loop is key today. Machines may be doing the heavy lifting for common chores, but we need control gates for humans to validate their behavior. Your question is intriguing because it points toward machines governing other machines. We are definitely trending in that direction. Albania’s Prime Minister just appointed an A.I. minister, stating it will eliminate corruption and help Albania leapfrog others. By 2030, you are going to start seeing agentic CEOs in companies, running other autonomous agents for processes ranging from CRM to ERP to legal.

Can you give me an example of the work only a human could do, or will always be able to do better than a digital twin, in media and publishing? Everyone needs a little inspiration these days. 

No LLM will be asking these probing questions that are supposed to steer civilization towards a better future. The power of creative thinking is the forte of humans. For instance, A.I. is playing a crucial role in genomic sequence mapping by automating the analysis of sequencing data. LLMs are used to identify genetic variants with unprecedented accuracy. Pattern matching is what neural networks do very well. But humans are required to interpret the patterns and insights. Humans are the ones who make informed decisions about genetic diseases, drug targets and personalized medicine. While A.I. serves as an invaluable tool, humans continue to be in the driver’s seat. As of yet!
