Nick Beim

Thoughts on the Economics of Innovation

The Week Altruist Shook the Markets

On February 10, Altruist announced a new AI-powered capability called “tax mode” in its Hazel AI product. Public wealth management stocks dropped precipitously, losing $150 billion in value over the next day and a half before settling at a total loss of $130 billion by February 12. It was wealth management’s DeepSeek moment.

What happened? What is Hazel’s “tax mode”? And what does this episode tell us about how markets are thinking about AI’s impact on wealth management?

What Makes Hazel Different

AI in wealth management is not new. Over the past 18 months, we’ve seen the rise of AI note takers and workflow assistants tailored for financial advisors—companies like Jump and Zocks. These tools sit on top of meetings, generate summaries, pre-fill CRM entries, draft follow-up emails, and help with compliance documentation. They are useful, but not transformational.

Hazel is different for one reason that matters enormously: it is integrated directly into the custodial system and has access to custodial data. Note takers such as Jump and Zocks operate at the “conversation layer.” They hear what the client says. They can summarize goals and generate follow-ups. But they do not have direct, structured, authoritative access to portfolio-level, tax-lot-level, account-level custody data. That means they can’t do things like calculate real-time embedded gains, model tax-loss harvesting across accounts, optimize asset location dynamically and identify wash sale conflicts across household accounts. Hazel can.
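The household-level wash-sale check is a good illustration of why custody-level data matters. A minimal sketch of the idea, using hypothetical data structures (the real IRS rule covers “substantially identical” securities, which simple symbol matching only approximates):

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Trade:
    account: str        # any account in the household
    symbol: str
    trade_date: date
    quantity: float     # positive = buy, negative = sell
    realized_pnl: float = 0.0  # for sells: gain (+) or loss (-)

WASH_WINDOW = timedelta(days=30)  # 30 days before or after the loss sale

def wash_sale_conflicts(trades):
    """Flag loss sales with a purchase of the same security in ANY
    household account within the 61-day wash-sale window."""
    sells = [t for t in trades if t.quantity < 0 and t.realized_pnl < 0]
    buys = [t for t in trades if t.quantity > 0]
    return [
        (s, b)
        for s in sells
        for b in buys
        if b.symbol == s.symbol
        and abs(b.trade_date - s.trade_date) <= WASH_WINDOW
    ]
```

The point of the sketch: the flagged purchase can sit in a different account, such as a spouse’s IRA, which is exactly the cross-account visibility a conversation-layer tool lacks.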

Because Altruist is both the advisor’s platform and the custodian, Hazel’s “tax mode” can query the actual ledger of record. It can look at every lot, every gain, every loss, every account registration. It can move from conversation to action without manual reconciliation between systems. That’s a fundamentally different starting point than an AI tool that only listens to a Zoom call.

Why Tax Is the Most Valuable—and Most Difficult to Scale—Service in Wealth Management

In wealth management, advisors often say that tax alpha is the most tangible value they create. Investment returns are partly market-driven. Behavioral coaching is real but hard to quantify. Planning is episodic. But tax management—done correctly—can produce measurable, after-tax outperformance year after year.

And yet, tax optimization is one of the hardest services to scale. Why? Because it demands painstaking research and analysis spanning cross-account coordination, household-level modeling, timing sensitivity, and compliance and documentation, and because it requires advisor judgment.

Today, many advisors still rely on spreadsheets, manual analysis and fragmented tools to deliver tax insights. Even with modern rebalancing software, true household-level, dynamic, AI-driven tax optimization remains complex and labor-intensive.

Hazel’s “tax mode” attacks exactly this bottleneck. It can instantly surface tax-loss harvesting opportunities, simulate gain-offset strategies, optimize asset location across taxable and tax-deferred accounts and draft client-ready explanations in seconds.

In other words, it converts high-skill, low-scale labor into software leverage. That’s why the market reacted.

Why Existing Custodians Are at a Structural Disadvantage

It would be easy to assume that large custodians like Charles Schwab or Fidelity could simply replicate Hazel’s functionality. But that assumption overlooks something structural: legacy custody platforms are not modern wealth infrastructure. Most large custodians operate on decades-old systems layered with multiple acquired platforms, patchwork integrations, batch-processing architectures, rigid data schemas and complex internal permissioning systems. These systems were not built for real-time AI integration. They were built for record-keeping, trade settlement, regulatory reporting and operational stability.

To integrate a true AI tax engine at the core of custody requires clean and normalized data architecture, real-time API access, flexible permissions and modular service design. Altruist, as a modern fintech custodian built in the cloud era, has the architectural advantage. Its custody stack was built with modern technology, APIs and data-layer accessibility in mind. By contrast, retrofitting AI deeply into a legacy custody core is more like open-heart surgery than adding a feature.

This doesn’t mean Schwab or Fidelity won’t respond. It means the speed and depth of response are structurally constrained. Markets are extremely sensitive to that distinction.

Why the Market Reaction Was So Large

The selloff was not just about tax-loss harvesting software. It was about the potential repricing of advisor productivity, platform differentiation, margin structures, custody defensibility and client acquisition economics.

If AI tools embedded in modern custodians materially increase advisor productivity, several second-order effects follow:

  • Advisors may consolidate onto AI-enabled platforms
  • Smaller RIAs could scale faster with fewer staff
  • Tax alpha becomes more standardized
  • Client expectations rise
  • Platform switching becomes more attractive

For firms that earn basis points on trillions of AUM, even a small change in retention, growth, or pricing power compounds dramatically.

Markets price optionality. And they price competitive displacement risk even more aggressively.

One Interpretation: How Much Value Could AI Create?

One way to interpret the $130 billion market drop is not as a measure of value destroyed, but as a real-time, market-implied estimate of potential value transfer in the industry.

Consider:

  • The U.S. wealth management industry oversees roughly $30–40 trillion in advised assets
  • Advisors typically charge 50–100 basis points
  • Even a 5–10 basis point shift in value capture (through better tax optimization, other new AI capabilities and competitive pressure) equates to tens of billions annually
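The arithmetic in the bullets above is easy to verify directly (using the midpoint of the asset range as an illustrative assumption):

```python
advised_assets = 35e12  # midpoint of the $30–40 trillion advised-asset range

# A 5–10 basis point shift in value capture (1 bp = 0.01% of assets)
for bps in (5, 10):
    annual_value = advised_assets * bps / 10_000
    print(f"{bps} bp on $35T -> ${annual_value / 1e9:.1f}B per year")
    # 5 bp -> $17.5B per year; 10 bp -> $35.0B per year
```

That is, tens of billions of dollars annually, consistent with the claim above.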

Who captures this value? Advisors themselves seem well-positioned to capture some of it. If AI-enabled tax optimization allows them to deliver 20–40 basis points of after-tax alpha, or reduce staffing costs by 20–30%, or improve client retention by even 1–2%, the cumulative value creation across the industry could be significant.

Consumers themselves will benefit greatly from faster, more automated and more rigorous tax optimization and from other AI capabilities to come. And some advisors running leaner, more responsive organizations may choose to share some of these gains in the form of lower fees.

One significant set of beneficiaries of this value transfer will be providers of powerful new AI capabilities that are not easily replicated by others. Altruist is off to a strong start in this category as a high-tech, full-stack custodian with a growing suite of AI-enabled capabilities embedded in its Hazel platform. It has already announced additional Hazel modules for financial planning and compliance support to be released in the quarters ahead, and many more will follow.

At Venrock we have invested in a variety of AI-enabled technology companies in wealth management seeking to provide similarly high-value products and services that cannot be easily replicated. In addition to Altruist, these include FINNY (an AI-enabled prospecting platform for advisors with its own data infrastructure), Moment (a fixed income trading and portfolio management platform with its own data and execution infrastructure), Vanilla (AI-enabled estate planning software) and two additional companies that will come out of stealth mode shortly.

The Big Picture

The wealth management industry has historically been insulated from software-style disruption because of some of the core characteristics of advisory work: relationships matter, regulation slows change and switching costs are real.

AI doesn’t eliminate these factors. But it does amplify the advantages of modern infrastructure. Hazel’s significance is not just that it uses AI. It is that it uses AI at the custodial data layer, enabling tax optimization—and ultimately additional capabilities—to be delivered programmatically. That’s why other AI note takers cannot replicate it. That’s why legacy custodians face integration challenges. And that’s why markets erased over $130 billion in market value in days.

The episode suggests something larger: AI’s value in wealth management will not come from replacing advisors, but from radically increasing their leverage. And in a trillion-dollar industry built on basis points and operating leverage, even modest improvements in productivity, tax alpha, and platform differentiation can justify very large valuation swings.

The market reaction may ultimately prove exaggerated. But it revealed something unmistakable: investors believe AI can reshape the economics of wealth management at scale. And when the ledger of record meets the language model, the consequences ripple far beyond a single product announcement.


The Promise of Vertical AI

I had an opportunity to speak with David Weisburd on what happens when general intelligence commoditizes, the promise of vertical AI and the impact of AI in wealth management and defense. We also had a chance to discuss what I think is one of the most interesting cognitive bias traps in venture investing: pattern recognition, which helps you sift through 95% of investment opportunities with extraordinary efficiency, utterly fails you when it comes to some of the most successful venture investments. Love how they digitally dressed me in a tie for the cover photo :). You can listen to the discussion here.


FINNY: Unlocking Organic Growth in Wealth Management

It’s rare for an early-stage B2B startup to see explosive organic growth out of the gate, with 80%+ of customers coming from inbound demand, close rates of 70%+ and sales cycles of 1–2 days. These are signs of exceptional product value, and that’s what FINNY, an AI-driven prospecting platform for wealth management, has delivered with its initial product.

FINNY enables financial advisors to find and engage potential clients who are the best fit for their practices and most likely to convert. And it delivers big: since its launch, FINNY has generated $7.7M in new client assets per advisor annually, which is a big number relative to the product’s low cost. At a 1% annual fee, this translates into $77K in high margin revenue per advisor annually for 20+ years.
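A quick sanity check on the revenue math, using the figures from the paragraph above (the 1% fee is the typical annual advisory rate on assets under management):

```python
new_assets_per_advisor = 7.7e6  # annual new client assets generated per advisor
advisory_fee = 0.01             # 1% annual fee on assets under management

# One year's new assets produce recurring fee revenue every year thereafter
annual_revenue = new_assets_per_advisor * advisory_fee
print(f"${annual_revenue:,.0f} in recurring revenue per advisor per year")
# about $77,000 per advisor per year
```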

Scalable organic growth is the holy grail of the wealth management industry, something that every advisor seeks and few experience. Existing growth methods are painful because they are either incredibly expensive (custodial referrals) or incredibly time consuming (traditional lead gen). The result is an industry that struggles significantly with growth. FINNY brings scalable organic growth to financial advisors by enabling them to reach out to high-conversion targets in a customized yet highly automated, continually optimized way. A few hours of setup launches a campaign that puts high-value prospects on an advisor’s calendar.

The kind of success FINNY has seen is only possible with an exceptional team. FINNY’s founders – Eden Ovadia, Victoria Toli and Theo Janson – are AI engineers who combine deep domain expertise in wealth management, top Silicon Valley product experience and an ability to execute very rapidly. They have already won every major award in their industry – the 2025 Wealthies, the 2025 Morningstar Fintech Annual Showcase, the 2025 Datos Impact Awards, the ThinkAdvisor Luminaries Awards – and were recently named to the Forbes 30 Under 30 list for 2026. They have emerged as clear thought leaders in the industry on how to accelerate organic growth.

It’s a privilege and a lot of fun to work with the team. FINNY is our seventh investment in wealth management, where we continue to see big opportunities for new technology companies. In the emerging AI-driven technology stack for financial advisors, FINNY is a must-have component whose significance will grow meaningfully over time. Watch for significant announcements from FINNY in the year ahead.


Collaborative Intelligence

Much of the early public discussion about AI focused on what AI could do better than humans and where it would replace human labor. The same thing happened with the emergence of computers in the 1960s, personal computers in the 1970s and the internet in the 1990s. This line of thinking is a natural human response to new technologies perceived as a threat, but it misses the far more important point that the big breakthroughs in productivity and intelligence come from human-AI, human-internet and human-computer teaming.

In this podcast discussion with Gautam Mukunda and Shawn Bice, we had a chance to dig into what the collaborative human-AI intelligence of the future will look like, how AI will change human decision-making and how it will impact employment, science and technology innovation. You can listen to the discussion here.


Learning Effects, Network Effects and Runaway Leaders


There’s a new economic force at work in the machine learning revolution that is capable of generating increasing returns to scale, much as network effects did in the internet revolution.

This force is automated learning, and its business impact comes in the form of learning effects: the more a product learns, the more valuable it becomes.

Learning effects have the potential to generate enormous economic value, as network effects do, if companies are able to close this loop and make it self-reinforcing: that is, if their products learn more because they have become more valuable.

This happens when more valuable products attract more users or customers, who provide more and richer data of the kind that enables machine learning models to make these products more valuable still, which attracts more users or customers still, and so on, creating a self-perpetuating cycle.
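One way to see why this loop generates increasing returns is a toy simulation in which value attracts users, users generate data, and data (with diminishing returns, via a log) feeds learning that raises value again. All parameters here are illustrative assumptions, not a model of any real company:

```python
import math

def simulate(learning_efficiency, years=5, users=1_000.0, data=1_000.0):
    """Toy learning-effects flywheel: value -> users -> data -> value."""
    value = 1.0
    for _ in range(years):
        users += 100.0 * value                       # more value attracts more users
        data += users                                # more users generate more data
        value = learning_efficiency * math.log(data) # more data improves the product
    return users

# A firm that learns slightly more efficiently from the same kind of data
# compounds that edge through the loop year after year.
leader = simulate(learning_efficiency=1.2)
laggard = simulate(learning_efficiency=1.0)
```

A modest edge in learning efficiency widens the gap in users, and hence data, every cycle, which is the compounding mechanism behind increasing returns to scale.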


Just as network effects determined who the biggest winners of the internet revolution were, learning effects will determine who the biggest winners of the machine learning revolution will be.

Because they enable increasing returns to scale, they will similarly give rise to a set of companies that become runaway leaders – that are capable of pulling away from their competitors and continuing to increase their leads over time.

Offline Origins
Like network effects, learning effects have always existed in the offline world but have become supercharged in the digital world. In the offline world, learning effects are transmitted through humans: as people learn how a product can become more valuable, they modify it accordingly. Human learning, however, is artisanal, and artisanal learning only scales so quickly.

What’s new and different in the machine learning era is that certain kinds of learning have become automated. Software can learn by itself with exposure to new data and become more valuable in the process. This is a big deal economically: it unlocks a new source of economic value that was previously inaccessible.

A Vast New Power
Learning effects have taken off most significantly at large internet platforms given the immense amount of data they control and their aggressive investment in machine learning to accelerate product innovation: Google in search, ads, photos, translate and Waze; Facebook in search, ads and newsfeed; and Amazon in search, ads, product recommendations and Alexa, to name just a subset for each. These companies recognize that machine learning has granted them a vast new power, and they are eager to take maximum advantage of it.

Perhaps the best pure-play example of the power of learning effects is Tesla, which began as an electric car company but was able to deploy machine learning to extraordinary effect across its fleet to become the category leader in autonomous driving. Tesla’s autonomous driving capabilities make its cars more valuable, which attracts more customers and data, which enables it to improve these capabilities further and attract even more customers, and so on. As a result of its learning effects, Tesla’s rate of innovation and value creation in the autonomous driving area have dwarfed what its competitors have been capable of.

Engineered Growth
Network effects and learning effects generate growth in different ways. Network effects tend to generate growth organically through a kind of gravitational accretion, as individual consumers and businesses pursuing their own self-interest decide to join the largest and most valuable networks, making them larger and more valuable still.

Learning effects similarly benefit from consumers and businesses pursuing their own self-interest to purchase the best products, but they are less the result of gravitational accretion than of finely tuned technology and product development efforts that require constant intervention and recalibration in order to tie together data, intelligence, product innovation and user/customer growth.

As a result, even though learning effects are partially the product of automated learning, they are by no means automatic. The data generated from new customers must be of the right kind and of sufficient volume to enable new learning. This learning must be optimized effectively enough to create new product value. And this value must be strong enough and productized well enough to attract more customers. Any break in this chain means there is no self-reinforcing cycle and hence no learning effects.

Runaway Leaders
Perhaps the most interesting question about learning effects is what conditions make them strong enough to create runaway leaders, as these are the companies that tend to create the vast bulk of enterprise value in the technology startup world.

Learning effects don’t always produce runaway leaders. Just because one company has a head start in learning doesn’t mean other companies can’t acquire more or better data, or learn more efficiently from similar data, to catch up with them and eventually bypass them. It’s an open question today, for example, whether Tesla is pulling away from the pack in autonomous driving or whether others will catch up in the years ahead.

In order for learning effects to produce runaway leaders, a company must secure a definitive advantage over its competitors in one of the component areas of learning effects – data, intelligence, product innovation or user/customer growth – and leverage this into advantages in the others, such that the company can acquire data, learn, innovate and grow not only more rapidly than its competitors do, but more rapidly than they can.

As with learning effects generally, there is nothing automatic about tying these advantages together. It requires excellent execution.

Typically a company is able to jumpstart this cycle by developing a significant data advantage over its competitors. It then must translate this data advantage into an intelligence advantage, as measured by the capabilities of its machine learning models, which requires that its models be as efficient as or more efficient than those of its competitors. This intelligence advantage must then tie to a product innovation advantage that is directly correlated with a user or customer acquisition advantage, and ultimately with an advantage in the size of its user or customer base. Enough customers have to want to buy Teslas because of their autonomous driving capabilities, in other words, rather than because they are electric or cool-looking cars, as the latter doesn’t create a strong enough self-reinforcing cycle. Finally, this user or customer base advantage must enhance the company’s data advantage in the right way to generate additional learning.


Generally the narrower the scope of a product and the greater the degree to which machine learning drives its value, the easier it is to tie these advantages together to create a runaway leader.

Wherever it is possible to tie these advantages together, there will likely be ferocious competition, as with network effects: given the huge premium on winning, startups will race for an initial head start in the competition for scale, hoping to achieve escape velocity and become runaway leaders. The early bird that capitalizes on its head start generally gets all the worms. Other birds need to bootstrap alternative advantages, in the form of more efficient learning engines or access to large and differentiated datasets, in order to have a chance.

Learning Curves: Long, Steep and Perpetual
In order for runaway leaders to be able to maintain their leads over time, there’s an important additional requirement, which is that the learning curves for their products must be long enough and steep enough to enable them to provide increasing product value for an extended period. If the learning curves for their products are short or top off quickly, early leaders will max out on them while they still have viable competitors, and these competitors will be able to catch up. If the learning curves are long and steep, on the other hand, these companies will have sufficient runway to break away from their competitors and maintain their leads over time.

Certain products – particularly those built on highly dynamic datasets – may have perpetual learning curves such that in a rapidly changing world, they can always be meaningfully improved. It’s around these kinds of products that the most valuable runaway leaders will likely develop. Potential examples include search, semantic engines, adaptive autonomous systems and applications requiring a comprehensive real-time understanding of the world.

The Interaction of Learning Effects and Network Effects
Network effects almost always create the opportunity for learning effects, as they involve the generation of ever more data in the form of new network members and interactions. Companies must invest in machine learning to create these learning effects, and they may or may not be successful. They may fail to generate meaningful learning, or they may generate meaningful learning but not learning effects if this learning does not result in more valuable products that lead to the continual acquisition of new data for additional learning.

Conversely, learning effects can create network effects. Tesla, for example, did not benefit from network effects when it was just an electric car company and was not yet focused on autonomous driving. However, once the company outfitted its cars with information sensors to develop autonomous driving capabilities through machine learning, it suddenly began to benefit from network effects: each Tesla became more valuable the larger the fleet became.

Importantly, however, when learning effects create network effects, these network effects do not exist independently of them. They are in effect an expression of the learning effects: learning just happens to take place through a network. If Tesla turned off its machine learning, its network effects would cease to exist.

The reverse, however, is not true. Network effects can give rise to learning effects that can exist independently of them. Facebook’s core network effect of people wanting to be part of the same social network that their friends are, for example, generates lots of new data that machine learning models can learn from. One area where Facebook has invested significantly in machine learning and succeeded in generating learning effects is improving the relevance of its newsfeed. Newsfeed relevance is a different kind of value than the core value around which the company’s network effects are based, although the two clearly reinforce each other. If Facebook stopped growing its user base, it could continue to generate increasing value by improving the relevance of its newsfeed through these learning effects.

Since network effects and learning effects are both functions of customer value, whenever they exist side by side in a product, they always reinforce each other, as each makes the product more valuable in a way that attracts more customers and data.

The most formidable kinds of runaway leaders that tend most strongly toward natural monopoly – Facebook and Google are excellent examples – are those that benefit from network effects and learning effects working in tandem, as their mutual reinforcement means these companies run away from the pack much faster and are generally impossible to catch, provided they also benefit from perpetual learning curves.

Startups vs. Incumbents
Incumbent internet platforms have unsurprisingly been the big winners of the machine learning revolution to date because of their vast data assets and their significant investment in this new technology. Their early dominance has led skeptics to wonder if machine learning is a game that startups can win at all given their relative data disadvantages.

There are huge new datasets and data-rich applications created every day, however, in domains where these and other platforms have little or no presence, which provide an abundance of new opportunities for startups.

In addition, there are many large datasets sitting in organizations that startups are best suited to access because they are better able to provide these organizations with innovative applications to take advantage of them.

And although startups may lack the early edge in data, they always have the advantages of focus and adaptability. Where I believe these advantages will make the biggest difference is that machine learning applications are engines, and startups have the ability to build and tune these engines most precisely to maximize learning effects. They have the ability not only to maximize the amount of learning, and hence value, they create from new data, but to complete the loop and maximize the amount of data, in the form of new customers, they create from new learning.

Only by constantly tightening and amplifying these loops can companies grow rapidly from learning effects and hope to achieve escape velocity to become runaway leaders. As a general rule, startups tend to be better at this than incumbents.

This article was originally published on TechCrunch.