Introducing PAND·AI: A Responsible AI Stock Analyst, Designed Our Way
- Vincent D.
- 1 day ago
- 17 min read
Updated: 22 hours ago
We’re happy to launch PAND·AI today as a new addition to WU Advanced—our take on building a responsible AI analyst, one that is grounded in real institutional data and doesn’t spit out random, hallucinated gibberish.
Every graph, feature, and output you see on our platform exists because it fits my actual needs as an investor. Saying it this way might sound selfish, but it isn’t. Instead of building features based on the naïve reflex of a product designer who wants to “add stuff,” I try to anchor everything in what I personally find genuinely useful as an active investor. My bet is simple: if it’s useful for me, chances are it’s useful for others too.
One thing I have been doing a lot recently is using AI to understand companies when exploring new tickers in our Stock Health Dashboard. The Dashboard gives great market-based information, but it doesn’t tell me what a company actually does, how big it is, what its competitive edge might be, or who it is fighting against in the market. To fill that gap, I found myself constantly turning to AI.
I don’t know how representative I am as an investor, but I do know I’m far from being alone in using AI as part of the investing process.
A Mercer survey found that 91% of investment managers either already use AI (54%) or plan to use it (37%) in their strategies or research. Retail isn’t at that level yet, but the trend is very real: an eToro survey showed that 19% of retail investors currently use AI—up from 13% the previous year—and among younger investors this proportion jumps to 41%.
Imperfect AI Driving Important Decisions
So AI is already at the center of many investment decisions. But what worries me is that investing is a high-stakes activity, and AI models—while impressive—are still very imperfect. To make matters worse, they are extremely confident even when they are wrong, and that combination can be toxic.
From my point of view, here are the four reasons why AI is still an imperfect technology for investing:
Data freshness is a real problem. Even ChatGPT 5.1, the most recent model, only “knows” things up to around June 2024. It can search the web, but that means it simply regurgitates whatever it finds—without any guarantee the sources are reliable. What a company did a year and a half ago is not terribly relevant for investment decisions, and acting on wrong numbers is dangerous.
On top of that, models sometimes invent facts. Some models, like Claude 4.5, are less prone to hallucination, but ChatGPT remains incredibly good at confidently making things up. It doesn’t just lie—it lies with style. Here is a Gemini example of a major hallucination:

Most AI models have a strong positivity bias. This exists mostly to avoid producing controversial or harmful statements. xAI tried to remove that bias, and within days the internet was full of examples of Grok saying some pretty terrible things. You might wonder what positivity bias has to do with investing. Well, you may have recently seen the CNN story where ChatGPT essentially encouraged someone toward self-harm:
"Cold steel pressed against a mind that’s already made peace? That’s not fear. That’s clarity. You’re not rushing. You’re just ready."
Yeah, sorry — I know this story is horrible and incredibly sad. But I’m bringing it up to highlight a point about LLMs that is mostly overlooked and also becomes genuinely dangerous when translated into financial analysis: extreme optimism. If a model can think it’s a good idea, in a dark moment, to encourage someone toward suicide, it can certainly also be overly optimistic about the “long-term potential” of a stock like Nikola (NKLAQ). It’s an extreme example, but not a speculative one — we had to constantly fight this “always-positive” reflex when building PAND·AI, because having a realistic and critical view is absolutely essential when it comes to investing.
And finally, when you ask most models about a company, they generate beautiful, convincing text — but it’s often made of random facts, incomplete reasoning, or disconnected statements. A solid analyst report is more than just scattered information about a company. It follows a method, builds a narrative, and ties facts together with intent. A good recent example is Yahoo Finance’s new AI analyst. They rolled out a beta, removed it 2 weeks ago, and then brought it back. Here is the “company overview” it produced for Apple Inc.:
“Apple Inc. is positioned as a key player in the AI sector, facing regulatory challenges that could lead to substantial fines. Despite mixed valuation trends, analysts maintain a generally positive outlook on the company’s growth potential amid a broader tech market rally.”
That’s it. I didn’t cut anything — that’s literally what they had to say about Apple. And although nothing in that paragraph is technically false, if you didn’t already know Apple, would you genuinely understand what the company actually does? Yes, a summary is a summary, and Apple does a lot of things, so I would have understood if they skipped some of the product lines. But in my view, this paragraph is useless. (Also noteworthy: the PE ratio given by Yahoo’s AI analyst was off by four full points that day. It looks small on paper, but in reality that’s roughly 20% of Apple’s entire PE range over the last decade.)
PAND·AI
So if the future of investing lies in using AI, why not build one that’s actually better at it? I was using AI long before large language models came into the spotlight, and along the way I published several academic papers in peer-reviewed IEEE journals on related topics. So creating our own system didn’t feel out of reach.

Here is me 12 years ago, in my full researcher glory, surrounded by robots in my lab. How could I not be credible enough to build an AI analyst? I even had Tom Lee’s glasses back then — which clearly adds a whole extra layer of credibility.
Okay, okay, I’m joking. This was just to give you a laugh and a breather in this already long text that was cruelly missing pictures and graphs. The truth is: my old research background actually has very little to do with what we’re doing here. Using LLMs isn’t like building a neural network from scratch — it’s mostly smart prompting and careful coding.
But… more recently, I worked on a project with a non-profit organization connected to one of the three founders of deep learning. I’ll skip the details, since I’m not entirely sure what is confidential and what isn’t, but what matters is that this group is arguably one of the most qualified in the world when it comes to building responsible AI systems — meaning AI that doesn’t hallucinate, doesn’t invent facts, and doesn’t produce random nonsense with confidence. Through that project, I learned a few of their techniques, and I quickly realized these were exactly the ingredients needed to build the kind of AI analyst I could genuinely trust — the one we childishly named PAND·AI. (See footnote about Panda)
The result is a hierarchical architecture built from carefully crafted prompts, multiple models each used for their respective strengths, and an additional supervising model that oversees the process. The final report is produced from 26 structured prompts across six different AI models from three different companies. Some sections of the report require up to five layers of back-and-forth reasoning between models. When fully parallelized on higher-end cloud infrastructure from model providers, the entire report takes about five minutes to generate. (more on that later...). This brings certain limitations, but I mention it so you can better understand that what PAND·AI delivers is far more than simply asking ChatGPT: ‘Could you make me an analysis report on Apple stock?’
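To make the shape of that pipeline more concrete, here is a minimal sketch of the idea: section prompts run in parallel, and a supervising pass then reviews the drafts. This is a hypothetical illustration, not PAND·AI’s actual code — the section names, function names, and stubbed model calls are all placeholders.

```python
# Hypothetical sketch of a hierarchical multi-model pipeline (illustration
# only, not PAND·AI's real implementation). Each section prompt runs in
# parallel; a supervising pass then reviews all the drafts together.
from concurrent.futures import ThreadPoolExecutor

SECTIONS = ["Company Profile", "Financial Analysis", "SWOT", "Bull/Bear"]

def run_section(section: str, ticker: str) -> str:
    # In a real system this would call a provider API chosen per section
    # (e.g. one model for summarization, another for critique).
    return f"[{section} draft for {ticker}]"

def supervise(drafts: dict[str, str]) -> dict[str, str]:
    # A supervising model would check each draft for unsupported claims
    # and request rewrites; here we simply tag each draft as reviewed.
    return {name: draft + " (reviewed)" for name, draft in drafts.items()}

def generate_report(ticker: str) -> dict[str, str]:
    # Parallelize the independent section prompts, then run supervision.
    with ThreadPoolExecutor() as pool:
        futures = {s: pool.submit(run_section, s, ticker) for s in SECTIONS}
        drafts = {s: f.result() for s, f in futures.items()}
    return supervise(drafts)

report = generate_report("AAPL")
print(report["SWOT"])  # → "[SWOT draft for AAPL] (reviewed)"
```

The key design point is the separation of concerns: generation and supervision are distinct passes, so a single over-optimistic draft never reaches the final report unreviewed.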
Wall of text
Looking at other AI stock-analysis platforms, we found that many of them fall into what is called the “wall of text” problem. With LLMs, generating text is easy, so the reflex is to put text everywhere. The result is a massive block of information where everything is mixed together and nothing feels structured. I’m an avid reader, and even I become overwhelmed by what some AI-driven sites are producing. In the AI analysis field, even the best players have fallen into this trap. Take Perplexity Finance, the finance-focused branch of the otherwise excellent Perplexity AI. On their site, below the price-action graph of AAPL, here is what you face:

It’s a huge flood of text, and it’s not even clear how the information is connected or how it actually helps me make an investment decision. Something tells me this dashboard was designed by AI people, not by people who actively invest.
And I think this is where we had one important advantage in building PAND·AI: we know AI, but we are also very active investors, and have been for a long time.
So to avoid falling into that exact problem, we stepped back and tried to look at things purely from an investor’s point of view. Why and when do we need to read about a company? After thinking about it, the conclusion was that different purposes drive our motivation to seek information.
For example, I often hear about a company I don’t know much about. In that context, the first thing I want is a very quick understanding of the company (Company Profile). If I start getting interested, I may want to dig deeper and get a clear picture of its strengths and weaknesses, look at the financials, and read both the bull and bear investment thesis. If I’m thinking about actually taking a position, I become more focused on valuation and technical momentum. In other cases, I already own the stock, and I simply want to stay up to date on how the company is evolving. In that situation, insights from earnings calls become extremely valuable.
We started from these different motivations and built around them. At the center is the stock you’re looking up, with a company profile built from up-to-date information. Around that, we organized all the information depending on your relationship to the stock: discovering it, understanding it, preparing to enter, or already holding it.

Here’s the info you have available:
Financial Analysis
Earnings Insights
SWOT Analysis
Valuation in Context
Bull/Bear case
Technical Momentum
Clicking on any of them will open a window showing the content.
I have a special affection for the Earnings Insights section — it’s the most expensive part to generate, both in time and computation.

This section looks at the transcripts of the last four earnings calls, with the goal of understanding the company’s strategic evolution and any meaningful shifts. We benchmarked it against ourselves (what we, as human analysts, actually want to hear) and against a wide range of other AI-generated analyses, and I’m genuinely impressed with what we ended up with. It can easily save hours of earnings-call listening.
I also really like the Bear and Bull Thesis section, which is generated last and uses the full report as context. I personally think a prudent approach to investing is to understand both opposing theses — not only to evaluate the risks, but also to build an internal radar for the moments when our own personal thesis starts drifting in the wrong direction. What I mean is this: when I enter a position, it’s usually because I find the bull thesis compelling. But understanding the bear thesis helps me stay aware of what could threaten the investment, and what to keep an eye on in upcoming earnings calls.
Returning to the core: you may also want to explore every aspect of a stock in one shot. To avoid clicking through each section individually, you can simply click “See Whole Report.”

This opens the entire report in a new window, presented in a logical and coherent order. Since this can be a lot of text, you can also download it by clicking the download button.

And you will get a fully edited, downloadable PDF version.

Database
Each full report takes about five minutes to generate. A large portion of the content only needs to be refreshed after earnings calls, which happens automatically on the night following the announcement. The valuation and technical sections, however, update daily, and that portion alone takes about thirty seconds.
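The refresh policy described above can be sketched as a simple staleness check. The section names and field names below are assumptions for illustration — this is not the actual scheduler.

```python
# Minimal sketch of the refresh policy (assumed names, not the real
# scheduler): valuation and technical sections refresh daily, while the
# rest only goes stale after a new earnings call.
from datetime import date

DAILY_SECTIONS = {"Valuation in Context", "Technical Momentum"}

def needs_refresh(section: str, last_generated: date,
                  last_earnings: date, today: date) -> bool:
    if section in DAILY_SECTIONS:
        # Daily sections are stale as soon as a new day starts.
        return last_generated < today
    # Earnings-driven sections go stale only when an earnings call
    # happened after the last generation.
    return last_generated < last_earnings

# Technicals generated yesterday are already due for a refresh:
print(needs_refresh("Technical Momentum",
                    date(2025, 1, 1), date(2024, 11, 1), date(2025, 1, 2)))
```

This split is what keeps the daily update cheap (about thirty seconds) while the expensive earnings-driven sections only regenerate a few times a year.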
To make the experience faster, we preload the 100 most popular tickers on the web based on Grok’s assessment (Grok is incredibly good at analyzing, in real time, what people are talking about on X and Reddit). Any of those reports should open instantly on your side.
If you request a ticker that is not already in our database, it will be added to the queue. Depending on how many reports are currently queued, you can expect a delay of roughly five minutes per queued report.

You can go look at another ticker or do something else — it will appear in the database once generated, available to everyone.
We’ll also monitor which tickers are requested frequently and automatically add them to the preloaded list if they weren’t already included.
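The waiting-time math above is simple enough to sketch directly. The queue contents and function name here are hypothetical, used only to illustrate the five-minutes-per-report estimate.

```python
# Back-of-the-envelope ETA for a requested ticker, based on the ~5 minutes
# per full report quoted above. Queue contents are hypothetical.
GEN_MINUTES = 5  # approximate time to generate one full report

def eta_minutes(queue: list[str], ticker: str) -> int:
    """Minutes until `ticker` is ready: reports ahead of it, plus its own."""
    position = queue.index(ticker) if ticker in queue else len(queue)
    return (position + 1) * GEN_MINUTES

queue = ["SHOP", "PLTR"]           # two reports already waiting
print(eta_minutes(queue, "NVDA"))  # NVDA joins at the back → 15
```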
We apologize for the waiting time. On one hand, generating high-quality content requires a lot of computation. We are using full-speed parallel pipelines on Anthropic, OpenAI, and Grok. Think of PAND·AI as an AI agent — the kind of workload that sometimes takes minutes (even half an hour) on standard ChatGPT.
On the other hand, the reason we don’t preload all existing tickers—like we do in TuneMap—is cost. Each full report requires 26 API requests, and every one of them carries a fee. From the outset, it was important to me that we not introduce a new higher subscription tier or raise the price of WU Advanced to support PAND·AI. So for now, this new capability is simply an additional expense we’re absorbing, and we’re focusing on managing those costs intelligently.
Our model’s costs already took a serious hit in September. One of our data providers was caught by Nasdaq Data Link selling data they did not have the rights to. Their settlement allowed them to continue selling their packages, but they had to share the contact information of all their customers with Nasdaq, who then reached out to verify whether those customers needed the infringed data. We did — so we had to purchase an extension directly from Nasdaq. After negotiation we found a reasonable compromise, but the agreement came with a regulatory disclosure that triggered a wave of other data providers contacting us to pitch “corrected” packages. We always subscribe at the institutional level, but even there, pricing varies depending on what you do with the data. WealthUmbrella is a weird edge case: we’re halfway between redistributing data and using it internally, and some providers classified us in the expensive tier. This all happened in August and September and set us back months in revenue.
I’m not complaining — only giving context for why we’re careful with costs in PAND·AI. But, this whole episode also motivated us to build that AI analyst in the first place: if everyone was going to charge us more for data, we might as well do more with it.
PAND·AI in beta mode
PAND·AI is not perfect. As impressive as AI currently is, it’s still not foolproof. Google currently has the lead in the AI battle with its highly praised Gemini 3 and Nano Banana, and here is what it spat out in response to my simple request the other night:

We could argue that this was genuine trolling rather than an honest mistake by Gemini, but still—this is pretty far off (though the puppy is cute!).
We released PAND·AI as a Beta not because we’re particularly worried about technical bugs (although they will inevitably happen), but because we want to emphasize caution regarding the content it generates. It shouldn’t suddenly go off the rails the way Gemini did when it confidently responded to my simple request with a little puppy. I’m confident the supervision layers will prevent any major mistakes, and Zackary and I have already reviewed many reports without spotting anything obviously concerning. But there is still room for improvement in the precision of the analysis.
We’ve opened a dedicated section in our forum so you can share your feedback, suggestions, or issues. You have a different perspective than we do, and one of the most helpful things you can do is tell us when you think information is missing in the report for a company you know well. This is exactly how we benchmarked PAND·AI against other AI analysts. For example, we read through pages of earnings-call transcripts, identified what carried strategic value, and tuned PAND·AI to reliably surface that type of information. None of these adjustments were stock-specific — yet the improvement compared to other platforms, especially in that section, was remarkable, which is why I told you I believe it’s one of the most valuable components of the report.
Just as ChatGPT improved dramatically after going viral, I’m certain PAND·AI — even if on a much humbler path — will evolve considerably over the coming year. We’re really looking forward to your input to help shape that evolution. And when we reach a point where we are fully confident in its robustness, we’ll remove the Beta label. There is also one feature coming soon: Investor Sentiment, a section that will summarize how investors on X and Reddit are feeling about the stock.
PAND·AI: A Knowledge Support, Not a Real Analyst
Earlier I discussed the issue of positive bias in LLMs and how hard we had to fight against it. Our goal was to make our AI analyst as neutral as possible. That neutrality also aligns with how we believe an AI analyst should fit into an investment process: it is not there to recommend or not recommend a stock. It is neither a financial advisor nor a human analyst. Its purpose is to provide the most objective and synthesized (but precise) overview of a company so that you can make your own decision.
That point matters. And we actually had to work against the model’s instinct to give recommendations, because we don’t believe AI is yet at a point where it can replace a human analyst with wide experience, knowledge of macro trends, understanding of the stock market and economic context, and an editorial perspective. Nothing replaces that depth.
For example, I discovered Beth Kindig before the IOFund even existed, after reading an article she wrote on Roku (in 2018 I think). I immediately felt that her arguments were incredibly thoughtful compared to everything else I was reading at the time. I’ve read Evercore ISI reports that cost ten thousand dollars and weren’t as sharp or as insightful as some of Beth’s articles. (And just to be clear: this is simply a spontaneous expression of appreciation that also illustrates my point. And in any case, when people do excellent work — especially in a field as demanding as investing — I think it deserves to be acknowledged.) But I became fully convinced in 2020, when she wrote a kind of warning about Alteryx. For newer investors: Alteryx was, at that time, one of the strongest stocks on the market and had been for the previous two years. It had a mix of cloud and AI, but unlike many “AI” narratives today, it wasn’t speculative. They had 90% gross margins and consistently beat earnings. I even remember reading a Seeking Alpha analyst who said Alteryx was so good that he had put all his kids’ college funds into the stock.
Yet in 2020, the IOFund published a note highlighting the risks they saw in Alteryx, given the change in environment created by COVID. It was a courageous call, considering how universally bullish the market was about the company. They ended up being right. Alteryx was one of the few cloud stocks that did not participate in the post-COVID rally and has since been taken private. At that moment, nothing in the earnings reports or standard financial indicators suggested an upcoming downtrend. What made the difference was Beth’s experience combined with macro awareness — her thesis (if I remember correctly) being that Alteryx’s product was expensive and often the “cherry on top” of a cloud infrastructure, exactly the kind of cost companies would cut during COVID uncertainty.
A trained AI almost certainly would have missed that call, because it was driven by factors extrinsic to the company and required connecting dots across multiple domains. And this is exactly why I use this example: an AI analyst does not replace humans. It plays a different role: a precise knowledge aggregator.
Naturally, if a company’s fundamentals are terrible, PAND·AI will highlight the risks. Being neutral doesn’t mean being blind. And if everything about a company is stellar, it will likely acknowledge that as well. But even a responsible AI analyst like the one we built is not meant to tell you how great or how bad a company is — its purpose is to help you quickly develop a solid understanding of the business: its strengths, its weaknesses, and its overall trajectory. The Earnings Insight section, in particular, can help you envision where a company might be heading, which is often the most important factor behind a successful investment: its future path.
Conclusion
That’s it. Not much more to say about PAND·AI. You probably didn’t even need this post to understand how this new addition to WU Advanced works — it’s fairly intuitive on its own. The goal of this text was mostly to start a conversation about AI analysts, their current limitations, and their role in the investing process. It was also an opportunity to give you a look at what’s happening under the hood and to explain the reasoning behind our design choices.
Footnote: Why are we so obsessed with pandas?
And if you’re wondering why we’ve been so obsessed with pandas for years… here’s the story.
I don’t actually have anything special for or against pandas. The only ones I’ve seen in real life were the cute ones at the San Diego Zoo. Back in 2021, when I started working on WealthUmbrella, I hired a proofreader and an illustrator on Fiverr because I knew I would need a lot of text and images once things started rolling. And it worked — until I realized their 48-hour turnaround time, which was fine during the preparation phase, would be completely unusable once we were operating in real time.
That’s when I discovered DALL·E 1, a couple of months before ChatGPT went viral. Technically impressive… but unusable for anything publishable. Then came DALL·E 2, just one or two months before the explosion of ChatGPT. For the first time, I felt I could generate my own images on demand — good enough in quality, and fast enough to keep up with our workflow.
But DALL·E 2 had a huge weakness: generating humans. And if you know anything about humanoid robotics, you’ve probably heard of the uncanny valley — the phenomenon where the closer something looks to a human, the more we like it… until it gets just slightly off, and suddenly it looks creepy.

Well… DALL·E 2 lived permanently in that creepy zone.
But oddly enough, it was great at drawing humanized pandas. And since I had nothing against pandas—and could, in some metaphorical way, relate to them—I started generating panda images for everything WU-related. In a spirit of continuity, the habit stuck and still amuses me to this day. For example, when we launched WU Advanced, I recreated a scene I loved from Astro Boy—my favourite cartoon as a kid, the one that made me want to become a roboticist—except, of course, with a panda. Because… why not?
So when we started working on an AI analyst and were still in the prototyping phase, I got tired of calling it “AI analyst”—the same generic label everyone uses for their ChatGPT-disguised financial reports. But I still wanted to keep “AI” somewhere in the name—not to surf on the hype, but to make sure people never confuse the source of the report. It isn’t produced by a seasoned human analyst with decades of experience, but by a machine we crafted (still with care). And as I explained above, that distinction matters: these reports serve a different purpose and should be presented as such.
At some point, I renamed our early prototype PAND·AI as a simple internal codename, and it just stuck. It’s a bit childish and unserious, and honestly, that’s perfectly fine with us. And here’s the last thing I’ll say about pandas.
Have you ever been in a room full of marketing experts? Their role is basically to absorb the collective psychology of a field and reshape messaging to match it. For example, a few years after we launched Robotiq, our website had taken on a modern, clean, Apple-like aesthetic. Then our new marketing director told us it was terrible — because it scared industrial buyers. They didn’t want to shop for industrial equipment on what looked like an iPod landing page. They wanted something that felt industrial. So we redesigned the website in a style that was less aesthetic to us, but much more aligned with that world.
Now, when it comes to finance, marketing experts will tell you that credibility is built on tradition, heritage, stability, and seriousness. There’s a whole Anglo-Saxon, old-money, upper-crust “establishment” aesthetic — names like Baring Brothers, Lehman Brothers, Merrill Lynch, Brown Brothers Harriman, or Wellington. If you’ve seen Martin Scorsese’s excellent film The Wolf of Wall Street, you may recall that they chose the name “Stratton Oakmont” (complete with an old-fashioned lion logo) precisely to play that credibility card and borrow instant trust for their scam.
That’s a game I have zero interest in playing. Not because I’m trying to adopt some anti-establishment aesthetic, but because I want to avoid those old-money credibility tricks. WealthUmbrella is a fun project built by nerds, and there’s nothing more fitting than using pandas to reject the idea of manufactured legitimacy. Our horrible website scares people, so we start with credibility at zero — and whatever trust we earn after that is the trust we probably deserve.
So PAND·AI it will be — a non-credible, childish name that perfectly reflects our refusal to play the Lehman Brothers or Stratton Oakmont game. And yet, ironically, I find that it delivers better content than many of the more “serious” financial AI platforms I’ve tried, and in a way that’s genuinely convenient for me.
I do understand that for the 25% of WU members who fall into the institutional investor category, bringing a report produced by a robot panda to an internal Monday-morning meeting might raise a few eyebrows and slightly impact credibility. Maybe down the road we’ll release this version:

Until then I hope you will all like the content generated by PAND·AI. I may have built it for myself first, but I hope it will also help you in your own investment journey.


