Freestyle - November 2024
ChatGPT Search and Google, Starlink and T-Mobile, Creepy Meta Glasses, Hims and Amazon Pharmacy, LLMs Scaling
Freestyle is where we examine the changing tides of technology from our front-row seats. These are raw, evolving thoughts—half-baked ideas meant to spark conversation. The real refinement happens when you reply, challenge, and build on what we put out there. 🤝
“Search” ChatGPT vs. Google: Closer to Clarity
When ChatGPT launched in November 2022, it felt like magic—an AI that could hold conversations, answer questions, and synthesize information. But it had a big caveat: the model only knew what the internet knew up to 2021. No current events, no breaking news, no evolving discussions. If you wanted to talk about something happening now, it simply couldn’t.
To patch that hole, OpenAI turned to Retrieval-Augmented Generation (RAG). This clever workaround let ChatGPT query the internet in real time, originally through Bing, and stitch those results into its responses. With RAG, the chatbot finally started to feel like a living thing. It wasn’t just a talking encyclopedia—it was connected.
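For the mechanically curious, here is a minimal sketch of what a RAG loop looks like. The search_web and generate helpers are hypothetical stand-ins rather than OpenAI’s actual plumbing; the point is simply how live search results get stitched into the prompt before the model answers.

```python
# Minimal sketch of a retrieval-augmented generation (RAG) loop.
# `search_web` and `generate` are hypothetical stand-ins, not OpenAI's actual plumbing.

from dataclasses import dataclass


@dataclass
class SearchResult:
    title: str
    url: str
    snippet: str


def search_web(query: str) -> list[SearchResult]:
    """Hypothetical live web search (think a Bing-style API)."""
    raise NotImplementedError("plug in a real search API here")


def generate(prompt: str) -> str:
    """Hypothetical LLM call (think a chat-completions endpoint)."""
    raise NotImplementedError("plug in a real model call here")


def answer_with_rag(question: str) -> str:
    # 1) Retrieve fresh context from the live web.
    results = search_web(question)
    sources = "\n".join(f"[{i + 1}] {r.title}: {r.snippet}" for i, r in enumerate(results))

    # 2) Stitch the retrieved snippets into the prompt so the model can answer
    #    about events past its training cutoff and cite what it used.
    prompt = (
        "Answer the question using only the sources below, citing them by [number].\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )
    return generate(prompt)
```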
Enter ChatGPT Search. Launched in October 2024, ChatGPT Search brings a major shift to OpenAI’s product lineup, making it much more like Perplexity than traditional ChatGPT. The new update doesn’t just generate conversational answers—it cites sources, displays images, includes links, and even suggests recommended follow-up questions (see example below). It feels less like an open-ended conversational tool and more like a generative layer on top of the internet. And OpenAI isn’t hiding its ambitions—it’s going straight after Google with a Chrome extension that intercepts your search queries and redirects them to a ChatGPT window, bypassing Google entirely.
This is the exact bear thesis Wall Street has been waiting for. If chatbots like ChatGPT Search become a better way to answer questions, Google’s dominance could erode. No wonder Google trades at 20x NTM P/E, far below its MAG 7 peers in the 30x–45x range. The fear is simple: if queries move, so does the ad revenue.
But here’s the thing: now that ChatGPT Search is here, I have to ask… is this the thing that will topple Google?
Where ChatGPT Search wins: When I’m researching a new market, synthesizing Reddit reviews on protein powders, or brainstorming a name for a new company, ChatGPT Search feels like the right tool. It’s conversational, iterative, and excels at turning scattered inputs into coherent insights. The addition of citations, links, and suggested follow-ups makes the experience feel polished and actionable. Honestly, some of these behaviors weren’t even possible on Google—they’re market-expansive.
Where Google still rules: Being forced to funnel every query through ChatGPT Search via its Chrome extension instead of Google quickly reveals some cracks. The most obvious is speed—Google’s results are instant. Even though ChatGPT Search has improved, watching text appear over 2–3 seconds feels painfully slow when compared to Google’s immediacy. Then there’s familiarity. Search “amzn ticker” on Google, and you get a clean, perfectly formatted chart with options to expand to 1Y or 5Y, accompanied by relevant news explaining why the stock is moving. On ChatGPT Search, you might eventually see a chart, but not without delays, filler text, a clunky flow, and no context for the latest stock movements.
Google also wins when it comes to news. There are times when I want the opposite of ChatGPT’s clean synthesis—I want the mental junk. The raw, unfiltered flood of text that only the internet can provide. If I’m looking for coverage on the Detroit Pistons’ latest struggles, celebrity gossip, or the most recent presidential election, I want to see everything: the competing headlines, varying takes, and even the mediocre commentary. For news, Google’s breadth and chaos still feel irreplaceable.
For now, the latency and unfamiliarity broke too many of my natural Google searches, so I’ve switched off the ChatGPT Chrome extension. Call me old-fashioned, but I still want to choose when I engage with ChatGPT rather than being forced into it over Google.
The convergence game: ChatGPT Search will improve. Latency, UI, and intuitiveness will get closer to what Google has trained us to expect. But Google isn’t sitting still. Their new AI-driven summaries (see below) are surprisingly good at synthesis. Yes, they’re often buried under ads, and dialogue isn’t as seamless, but they’re improving too.
In many ways, the battle for search dominance is converging toward a middle ground: Google leaning conversational, ChatGPT Search becoming more structured. Two years after the first ChatGPT moment, we’re finally starting to see what that middle might look like—and it’s… messy, for now.
Starlink + T-Mobile: Redefining Connectivity (and CapEx)
When T-Mobile and SpaceX announced their “Coverage Above and Beyond” partnership back in August 2022, it felt like a big deal. The promise? Say goodbye to cellular dead zones by hooking T-Mobile’s terrestrial network up to SpaceX’s Starlink satellite constellation. Using second-generation Starlink satellites with phased array antennas, your standard smartphone can talk directly to the stars—no special gear needed. But beyond the cool tech, if this partnership works, it has the potential to reshape T-Mobile’s business model and give Starlink a new role in the telecom ecosystem.
We’ve traditionally thought of Starlink as a D2C product—a lifeline for rural households or a convenience for travelers. Sure, there’s been some B2B action with airlines, but this partnership marks a step toward Starlink becoming a more integral part of network infrastructure. For T-Mobile, it’s not just about solving dead zones. With Starlink, they could handle main network capacity too, especially for peak data consumption periods that otherwise require costly overbuilding. Just like electric grids overbuild for heat waves, telecom networks must overbuild for New Year’s Eve or live-streamed events. Most of that capacity goes unused during normal days. Instead, why not rely on Starlink for flexible, on-demand capacity? Variable payments could replace some of T-Mobile’s traditional heavy CapEx.
The economics of this partnership are worth exploring, even though we’re purely speculating here without any insight into the actual arrangement. Using some hypothetical napkin math, there are two key benefits to consider: incremental growth and reductions in capital expenditures (CapEx).
Incremental growth: By using Starlink to serve users in previously under-covered areas, T-Mobile could tap into new markets that were uneconomical for traditional tower deployment. T-Mobile currently has 127 million users with an ARPU of ~$48 per month and gross margins of ~65% (~$31/user). If T-Mobile were to pay Starlink $5 per user per month for satellite coverage, margins would decrease to ~54%, but the trade-off could be worthwhile. If this partnership drives just a 2% increase in users, that’s an incremental ~2.5 million users. For Starlink, at $5/month per user, that’s $150 million in high-margin annual recurring revenue. Given it is all incremental margin to T-Mobile, I’m not sure what is to stop Starlink from extracting maximal value, closer to the $31/month margin, instead of the hypothetical $5/month. This hypothetical math illustrates how both companies could win—T-Mobile grows its user base, and Starlink monetizes infrastructure that’s already in orbit.
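To make the napkin math auditable, here is the same arithmetic in a few lines of Python. Every input (the ARPU, the ~65% margin, the $5/month Starlink fee, the 2% user lift) is a hypothetical from the paragraph above, not a disclosed term, and the implied incremental gross profit to T-Mobile is my own derived figure.

```python
# Napkin math for the hypothetical T-Mobile x Starlink growth case.
# All inputs are assumptions from the paragraph above, not disclosed terms.

users = 127_000_000        # current T-Mobile users
arpu = 48.0                # $ per user per month
gross_margin = 0.65        # ~65% today
starlink_fee = 5.0         # hypothetical $ per satellite-served user per month
user_lift = 0.02           # hypothetical 2% incremental users from new coverage

margin_per_user = arpu * gross_margin              # ~$31.20/user/month
satellite_margin = margin_per_user - starlink_fee  # ~$26.20/user/month

new_users = users * user_lift                      # ~2.5M incremental users
starlink_arr = new_users * starlink_fee * 12       # ~$150M/year to Starlink
tmobile_gp = new_users * satellite_margin * 12     # implied incremental gross profit

print(f"gross margin on satellite-served users: {satellite_margin / arpu:.1%}")
print(f"incremental users: {new_users / 1e6:.1f}M")
print(f"Starlink annual recurring revenue: ${starlink_arr / 1e6:.0f}M")
print(f"T-Mobile incremental gross profit: ${tmobile_gp / 1e6:.0f}M")
```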
Reduction in CapEx: Beyond growth, T-Mobile could reduce its reliance on tower construction by offloading flex capacity to Starlink. Telecom networks must build for spikes, leaving capacity underutilized most of the time. While T-Mobile doesn’t report capacity utilization, let’s use a crude assumption: if their network operates at 90% utilization on average days and 100% utilization on spike days, then theoretically 10% of their CapEx could have been saved by switching to Starlink for peak demand. With $8.2 billion of CapEx in the last 12 months—and assuming this spending was for a steady-state population with no aggregate bandwidth growth—T-Mobile could eventually save at least 10% annually, or $820 million. In reality, the savings are likely much higher, since a growing user base implies that more than 10% of their incremental build is for flex capacity.
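And the flex-capacity side of the envelope, again built entirely on the crude 90%/100% utilization assumption rather than anything T-Mobile reports:

```python
# Crude flex-capacity savings estimate (assumptions from the paragraph above).

annual_capex = 8.2e9       # T-Mobile CapEx over the last 12 months
avg_utilization = 0.90     # assumed utilization on a normal day
peak_utilization = 1.00    # assumed utilization on a spike day (NYE, live events)

flex_share = (peak_utilization - avg_utilization) / peak_utilization  # ~10% of the build
capex_savings = annual_capex * flex_share                             # ~$820M/year

print(f"share of the build that exists only for spikes: {flex_share:.0%}")
print(f"theoretical annual CapEx savings: ${capex_savings / 1e9:.2f}B")
```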
The implications go beyond telecoms. Starlink isn’t just filling dead zones for carriers like T-Mobile—it’s poised to complement or even replace the long, expensive fiber backhaul infrastructure that telecoms, cloud providers, and data centers have relied on for decades. Fiber is costly, slow to deploy, and impractical in remote areas. Starlink’s low Earth orbit satellites provide high-speed, low-latency connectivity without ground-based infrastructure, making it an attractive alternative in underserved regions.
This opens new doors. With Starlink, remote regions could become viable hubs for cloud computing, creating new economic opportunities and leveling the playing field. This newfound flexibility could also address energy concerns, allowing data centers to be built in areas with abundant renewable energy sources like wind, solar, or hydroelectric power.
What this hints at is a future where telecoms rely less on heavy, upfront CapEx and embrace scalable, pay-as-you-go models powered by Starlink. For SpaceX, this isn’t just a way to make satellites pay off—it’s a roadmap to redefining connectivity itself. Fixing dead zones? Sure. But the real opportunity is rewriting the rules of how networks, cloud infrastructure, and even entire industries operate.
(For all the dreaming I just did, it’s important to note that competitors like AT&T and Verizon are trying to block the Starlink/T-Mobile service, claiming its satellite transmissions will interfere with ground networks.)
The Dark Side of Meta Glasses
The Meta Ray-Ban smart glasses are an exciting leap forward in wearable tech—finally offering something that feels genuinely useful. Compared to earlier attempts like Oculus, Magic Leap, or Vision Pro, these glasses stand out for being both practical and subtle. Their next iteration, codenamed Orion, could take things to the next level, potentially transforming how we engage with the world and digital information in real time.
Unveiled as a prototype at Meta Connect 2024, Orion is Meta’s first true augmented reality (AR) glasses. They look and feel like regular eyewear but hide some seriously impressive tech under the hood. Using silicon carbide lenses with holographic displays, Orion overlays digital content seamlessly onto the real world. Interaction is intuitive—voice and eye controls are paired with a slick wristband that measures how and when we tap our fingers. Most importantly, the glasses are lightweight and independent—no tethering to a computer or headset. For the first time, it feels like we’re on the verge of something you’d actually wear all day, every day.
In his essay, “The Glasses are the Way,” Ayo nails why this shift is so significant. Unlike smartphones, which demand your divided attention in discrete moments, smart glasses are “always on.” By removing the friction of constantly toggling between a device and the person in front of you, glasses can actually use technology to supercharge human-to-human interaction. The reduced cognitive load allows you to stay present in your surroundings.
Of course, every innovation has its shadow. The video below is a must-watch. Recently, two Harvard students developed I-XRAY, an app that uses the glasses’ always-on recording to capture strangers’ faces, match them with online photos, and compile unsettling amounts of personal data. The implications are terrifying. This isn’t just about privacy breaches—it’s about opening a Pandora’s box of misuse. The balance between utility and ethics feels razor-thin here, and as these technologies mature, the need for robust safeguards becomes increasingly urgent. Whether these glasses become a cornerstone of how we live or a cautionary tale will depend on how we navigate this delicate balance.
Are we ready for a world where our data is exposed at a glance? @CaineArdayfio and I offer an answer to protect yourself here:
tinyurl.com/meet-ixray
— AnhPhu Nguyen (@AnhPhuNguyen1)
4:10 PM • Sep 30, 2024
Hims vs Amazon: A Tele-battle Worth Watching
Hims & Hers emerged during the wave of SPACs that took many unprofitable, growth-oriented companies public—businesses that investors quickly began to overlook or dismiss. For years, Hims was ignored as just another hair loss startup locked in a marketing war with Ro. But fast forward to 2024, and the story feels different. Hims is now a diversified telemedicine platform serving multiple categories: hair loss, dermatology, erectile dysfunction, mental health, and, most importantly, weight loss.
The company’s growth has been fueled by the broader adoption of telemedicine, a trend that gained significant momentum with the ACA’s implementation in 2014. Unlike early telemedicine pioneers like Teladoc, which aimed to address a wide range of healthcare needs, Hims has focused on a narrower scope in exchange for a more profitable business model. Specifically, the company only serves categories where it can generate margins by integrating prescription fulfillment. Hims also bridges the gap between traditional healthcare and consumer convenience, excelling in proactive medicine—where patients already know what they need (e.g., finasteride for hair loss or retinol for skincare) and simply want a fast, hassle-free way to obtain it. This stands in sharp contrast to reactive medicine, where patients seek a doctor’s expertise to diagnose and recommend treatments for more complex or uncertain issues.
Of course, proactive care has its risks—overprescribing being the most obvious—but when handled responsibly, it’s hard to argue against the efficiency and accessibility of this model. Hims’ success suggests it’s not just filling this gap; it’s owning it. By May 2024, the company had established itself as a market leader, showing impressive market share growth data at its investor presentation.
In Q3 2024, Hims posted $401M in revenue and $22.4M in operating income. Annualized, that’s ~$1.6B in revenue and ~$90M in operating income. Even better, because Hims collects cash upfront for multi-month subscriptions, free cash flow (FCF) paints an even brighter picture: $78M in FCF less $25M in stock-based compensation (SBC) for $54M in adjusted FCF in the quarter—or $216M annualized. At a ~$4B enterprise value, Hims trades at 44x operating income or an 18.5x adjusted FCF multiple. With 77% YoY growth last quarter and operating leverage kicking in, it’s easy to imagine profitability expanding further.
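If you want to replay that math, here it is in a few lines of Python. The x4 annualization of a single quarter is a rough convention rather than guidance, the ~$4B enterprise value is just the approximate figure I used above, and small differences versus the text are rounding.

```python
# Back-of-envelope Hims & Hers multiples from the Q3 2024 figures cited above.
# A simple x4 annualization of one quarter is a rough convention, not guidance.

q3_revenue = 401e6
q3_operating_income = 22.4e6
q3_adj_fcf = 54e6            # ~$78M FCF less ~$25M SBC, as cited (inputs are rounded)
enterprise_value = 4.0e9     # approximate EV used in the text

annual_revenue = q3_revenue * 4                    # ~$1.6B
annual_operating_income = q3_operating_income * 4  # ~$90M
annual_adj_fcf = q3_adj_fcf * 4                    # ~$216M

print(f"annualized revenue: ${annual_revenue / 1e9:.1f}B")
print(f"EV / annualized operating income: {enterprise_value / annual_operating_income:.1f}x")  # ~44x
print(f"EV / annualized adjusted FCF: {enterprise_value / annual_adj_fcf:.1f}x")               # ~18.5x
```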
Then there’s the GLP-1 craze. The stock has been whipsawed by GLP-1 headlines, swinging double digits on any news. Ironically, GLP-1s were still a relatively small part of Hims’ business as of Q2 2024. The company first introduced oral weight loss pills like metformin (not a GLP-1) in 2023, which now generate $100M+ in annualized revenue. Only recently did it add Ozempic and Wegovy to its site, alongside a controversial compounded version of semaglutide (the GLP-1 that is the active ingredient in Ozempic and Wegovy) priced at $200/month—10%-20% of the branded prices. The FDA allows compounding under certain conditions: 1) the branded drug is on the shortage list (Ozempic was, but no longer is), or 2) the drug is personalized for an individual patient. Hims claims to be personalizing doses to each patient, justifying its use of compounding. But investors worry that justification is too thin and that the compounding business will eventually wither away.
Reading between the lines in their Q2 2024 earnings call, Hims’ GLP-1 revenue was just <$16M annualized, including both compounded semaglutide and resales of branded Ozempic/Wegovy. While Q3 numbers might be higher, the bulk of the company’s growth still comes from its core categories: hair loss, ED, dermatology, and the less controversial metformin-based weight loss pills.
Then came November’s bombshell: Amazon, via One Medical, announced it will directly compete with Hims, and at lower prices. The news sent Hims’ stock plunging over 25% in a single day. Amazon’s pharmacy strategy has felt scattershot. Since acquiring PillPack in 2018, Amazon has launched Amazon Pharmacy in 2020, purchased One Medical, introduced RxPass (unlimited generics for $5/month), opened a physical pharmacy in NYC, and started selling GLP-1s like Zepbound. Now, it’s competing directly in Hims’ core telemedicine categories. Despite this flurry of activity, Amazon has never disclosed revenue or patient numbers for its pharmacy division, suggesting it’s still a small or underperforming business.
Amazon’s product experience also feels clunky. Finding telemedicine options requires multiple clicks through Amazon Health and Pharmacy pages, blunting Amazon’s huge distribution edge. On One Medical, the new telepharmacy offerings are better integrated but the distribution advantage is limited—One Medical had only 800,000 subscribers as of its last report. There’s also the awkwardness of Amazon users’ prolific password sharing, where purchases of sensitive medications like ED treatments or weight loss drugs might inadvertently expose personal details to family or friends using the same account.
Lastly, Hims’ post-treatment support and patient education stand out, particularly in categories like weight loss, where ongoing care is critical. Weight loss medications, especially GLP-1s, can be intimidating for users after their initial prescription due to potential side effects or the need for dosage adjustments. Hims has invested heavily in resources that allow patients to access educational materials, ask follow-up questions, and modify their dosage or delivery format in real-time. This level of support not only surpasses Amazon’s current offering but is also likely welcomed by pharmaceutical giants Eli Lilly and Novo Nordisk, who lack robust direct-to-consumer channels to educate and assist patients effectively.
Despite Amazon’s entry, Hims’ diversified business across multiple verticals—hair loss, dermatology, ED, mental health, and weight loss—positions it well to weather category-specific disruptions. But its long-term success against Amazon will come down to how quickly it can race ahead across more product surface area.
Have LLMs Hit a Wall?
When it comes to AI investing, two structural questions dominate my thinking: 1) Is the rate of change on LLMs plateauing? and 2) Are LLM companies differentiated? These questions underpin much of the uncertainty—and opportunity—around the future of large language models.
Sam Altman stirred debate last week with his claim that we’ll reach AGI by 2025. While bold and intriguing, such a statement feels frustratingly vague, especially without a shared definition of AGI. It does little to provide clarity on the more pressing and tangible questions about LLM scalability and differentiation.
To dig deeper, I revisited the paper “Situational Awareness,” released five months ago. The paper offers one of the more thoughtful and structured bull cases for why LLMs may have far more room to grow than skeptics suggest. If you’re wrestling with these same questions, I highly recommend starting with Chapter 1—it lays out a coherent framework for thinking through the scaling potential of LLMs.
The paper’s author, Leopold Aschenbrenner, breaks down the scaling trajectory of LLMs over the past four years (from GPT-2 to GPT-4), measured in the image below in OOMs (Orders of Magnitude), and what could drive their growth over the next four. These drivers fall into three categories: Compute, Algorithmic Efficiency, and Unhobbling.
1 - Compute is the most straightforward. It’s about more chips, better chips, and increasing power limits. Nvidia GPUs—so central to today’s AI boom—were originally built for gaming, not deep learning. Only recently have they been tailored for transformer models. For example, only with the H100 did Nvidia add native support for lower-precision formats like fp8 alongside the traditional fp64/fp32, trading numerical precision for the throughput and memory efficiency AI workloads actually need. These shifts don’t just make models faster; they make scaling more viable. As chips get even more specialized, the limits of compute will continue to expand.
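To make the precision point concrete, here is some rough arithmetic on what dropping from fp32 to fp8 does to memory footprint. Treat it as an order-of-magnitude illustration using a hypothetical 70B-parameter model; real training mixes precisions and carries optimizer state on top of the weights.

```python
# Rough illustration of why lower-precision formats matter for scaling.
# Hypothetical 70B-parameter model; real training mixes precisions and
# carries optimizer state on top of the raw weights.

params = 70e9  # 70 billion parameters

bytes_per_param = {"fp32": 4, "fp16/bf16": 2, "fp8": 1}

for fmt, nbytes in bytes_per_param.items():
    weights_gb = params * nbytes / 1e9
    print(f"{fmt:>9}: ~{weights_gb:,.0f} GB just to hold the weights")

# fp32 weights alone (~280 GB) overflow a single 80 GB H100, while the
# fp8 version (~70 GB) fits, before even counting activations or optimizer state.
```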
2 - Algorithmic Efficiency tackles the so-called “we’re running out of data” problem—a common skeptic refrain. The paper reframes this challenge with a clever analogy: when humans read a textbook, we don’t just read it once and call it a day. We reread, reflect, and engage in an internal dialogue to deeply process the material. LLMs, by contrast, skim through data linearly without iteration. Introducing “deep reading” into model training—rereading and reprocessing data—should unlock new efficiencies. There’s also untapped potential in entirely new data sources. Real-world video, for instance, offers a treasure trove of human interaction data that LLMs haven’t yet accessed meaningfully.
And then there’s Daniel Gross’s thought experiment: what if models trained not just on internet text but on images of web pages? A model could learn the spatial relationships between blog posts, comments, and even ad placements—context that current text-based approaches miss entirely. These kinds of shifts could change how models understand relationships and meaning.
3 - Unhobbling is perhaps the most open-ended. It’s about removing the artificial constraints around LLMs to unlock their full potential. The paper outlines several areas where progress is already happening:
Context windows: Imagine a model with permanent memory of all your interactions with it—knowing your preferences, history, and data across every app.
Test-time compute overhang: Today, LLMs are limited by seconds of compute during inference. What if they had years to solve a single complex problem? The implications for reasoning are staggering.
Scaffolding: Instead of a single model solving a problem step-by-step, picture a “team” of models collaborating on different parts of a task (a minimal sketch of the idea follows this list). Early glimpses of this concept are emerging with OpenAI’s o1-preview model.
App collaboration: By integrating directly with user tools, LLMs could dynamically control visual interfaces like coding IDEs, spreadsheets, or entire desktops. OpenAI’s latest desktop app, launched last week, does just this—it can interact with your coding apps directly.
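As flagged in the scaffolding bullet above, here is a toy sketch of the idea: a planner call decomposes the problem, worker calls tackle the pieces, and a synthesizer call merges them. The call_model helper is a hypothetical stand-in for any chat-completions API, not a specific vendor’s SDK.

```python
# Toy "scaffolding" sketch: a team of model calls splits a problem into
# plan -> parallel subtasks -> synthesis. `call_model` is a hypothetical
# stand-in for any chat-completions API, not a specific vendor's SDK.

def call_model(prompt: str, role: str = "worker") -> str:
    """Hypothetical LLM call; swap in a real API client here."""
    raise NotImplementedError("plug in a real model call here")


def solve_with_scaffolding(problem: str) -> str:
    # 1) A planner model decomposes the problem into subtasks.
    plan = call_model(
        f"Break this problem into three independent subtasks, one per line:\n{problem}",
        role="planner",
    )
    subtasks = [line.strip() for line in plan.splitlines() if line.strip()]

    # 2) Worker models tackle each subtask separately (these could run in parallel).
    partial_answers = [call_model(f"Solve this subtask:\n{t}") for t in subtasks]

    # 3) A synthesizer model merges the partial answers into one response.
    combined = "\n\n".join(partial_answers)
    return call_model(
        f"Combine these partial solutions into a final answer to: {problem}\n\n{combined}",
        role="synthesizer",
    )
```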
What I appreciate most about “Situational Awareness” is its ability to cut through vague AGI debates and focus on specific, actionable insights. It has the effect of making the path forward feel clearer—and massively exciting.
🤙
The opinions expressed in this newsletter are my own, subject to change without notice, and do not necessarily reflect those of Timeless Partners, LLC (“Timeless Partners”). This newsletter is an informal collection of thoughts, articles, and reflections that have recently caught my attention. For discussion purposes, I may present perspectives that contradict my own. I fully expect to change my mind on some topics over time and hope that readers approach these ideas with the same mental flexibility.
Nothing in this newsletter should be interpreted as investment advice, research, or valuation judgment. This newsletter is not intended to, and does not, relate specifically to any investment strategy or product that Timeless Partners offers. Any strategy discussed herein may be unsuitable for investors depending on their specific objectives and situation. Investing involves risk and there can be no assurance that an investment strategy will be successful.
Links to external websites are for convenience only. Neither I, nor Timeless Partners, is responsible for the content or use of such sites. Information provided herein, including any projections or forward-looking statements, targets, forecasts, or expectations, is only current as of the publication date and may become outdated due to subsequent events. The accuracy, completeness, or timeliness of the information cannot be guaranteed, and neither I, nor Timeless Partners, assume any duty to update this newsletter. Actual events or outcomes may differ significantly from those contemplated herein.
It should not be assumed that either I or Timeless Partners has made or will make investment recommendations in the future that are consistent with the views expressed herein. We may make investment recommendations, hold positions, or engage in transactions that are inconsistent with the information and views expressed herein. Moreover, it should not be assumed that any security, instrument, or company identified in the newsletter is a current, past, or potential portfolio holding of mine or of Timeless Partners, and no recommendation is made as to the purchase, sale, or other action with respect to such security, instrument, or company.
Neither I, nor Timeless Partners, make any representation or warranty, express or implied, as to the accuracy, completeness or fairness of the information contained in this newsletter and no responsibility or liability is accepted for any such information. By accessing this newsletter, the reader acknowledges its understanding and acceptance of the foregoing statement.