What the New “Generative AI Act Two” Reveals About the Next AI Wave
Sequoia Capital’s “Generative AI Act Two” report describes a shift from hype‑driven model releases to user‑centric, end‑to‑end solutions: foundational models become components rather than products, developer tools grow in importance, RAG and fine‑tuning techniques mature, and the competitive landscape keeps evolving.
Preface
Last September, Sequoia Capital released the “Generative AI Act Two” report. The first act highlighted the emergence of foundational models and novel applications, but those were only the technology’s opening showcase. The second act, centered on user needs, is now beginning, rethinking generative AI from the perspectives of capital, markets, and the ecosystem.
Original report link: https://www.sequoiacap.com/article/generative-ai-act-two/
Overall Interpretation
Many AI companies lack product‑market fit or sustainable competitive advantage, making the overall AI ecosystem enthusiasm unsustainable.
The market is moving into the second act driven by users, solving problems end‑to‑end. Foundational models are used as part of solutions rather than the whole.
The report maps the market distribution of generative AI, its basic tools, and compute providers.
Points That Need Correction
AI‑related technology is developing much faster than previously expected; capabilities once thought to be a decade away are arriving within years.
Compute scarcity proved more pressing than user demand: GPUs, not customers, became the bottleneck, to the point that some business models amount to paying for priority access to compute.
Foundational model providers were expected to stay separate from the application layer, yet the most successful user‑facing apps are vertically integrated.
Competition is intensifying, and as companies race one another, foundational models are becoming more opaque to customers.
The moat lies with customers, not data. While data flywheels can create an edge in specialized domains, data‑based moats are fragile; workflow and user network effects appear more durable.
Relatively Accurate Predictions
Generative AI is extremely hot.
The first killer application, ChatGPT, reached 100 million MAU faster than any prior app (about two months, versus years for Instagram, WhatsApp, YouTube, and Facebook). New killer apps such as Character AI, GitHub Copilot, and Midjourney are emerging.
Developers are key.
The form factor is becoming more complex, shifting from personal productivity tools to system‑level productivity.
Copyright issues are gaining attention.
Where We Stand
AI‑first applications show modest retention: generative AI’s median one‑month retention is about 14%, suggesting users have not yet found enough daily value. In short, the most important challenge for generative AI is proving its value, not just finding customers or demand.
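Retention of this kind is typically computed per cohort: of the users first seen in one month, what share come back the next? A minimal sketch of that calculation, using made‑up activity data (the user IDs and dates are illustrative, not from the report):

```python
from datetime import date

# Hypothetical activity log: user id -> set of dates the user was active.
activity = {
    "u1": {date(2023, 9, 1), date(2023, 10, 2)},   # returned in month two
    "u2": {date(2023, 9, 3)},                      # churned
    "u3": {date(2023, 9, 5), date(2023, 10, 15)},  # returned
    "u4": {date(2023, 9, 9)},                      # churned
}

def month1_retention(activity, cohort_month, next_month):
    """Share of users first seen in cohort_month who return in next_month."""
    cohort = [u for u, days in activity.items()
              if min(days).strftime("%Y-%m") == cohort_month]
    returned = [u for u in cohort
                if any(d.strftime("%Y-%m") == next_month for d in activity[u])]
    return len(returned) / len(cohort)

print(month1_retention(activity, "2023-09", "2023-10"))  # 0.5
```

A 14% median by this measure means roughly one in seven new users is still around a month later, far below the bar consumer apps usually need.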
Emerging reasoning techniques such as chain‑of‑thought, tree‑of‑thought, and reflection improve model capability for richer, more complex tasks, narrowing the gap between expectations and ability. Developers use frameworks like LangChain to orchestrate multi‑step chains.
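The multi‑step chains that frameworks like LangChain orchestrate reduce to a simple pattern: each step’s output becomes the next step’s input. A minimal sketch of that control flow, with `call_model` as a stand‑in stub rather than a real LLM API, so the example runs without network access:

```python
def call_model(prompt: str) -> str:
    # Placeholder for a real LLM call; echoes the prompt so the chain's
    # control flow is runnable and inspectable without any API key.
    return f"[model output for: {prompt}]"

def run_chain(question: str) -> str:
    # Step 1: elicit intermediate reasoning (chain-of-thought style).
    reasoning = call_model(f"Think step by step: {question}")
    # Step 2: condition the final answer on that reasoning.
    answer = call_model(f"Given the reasoning '{reasoning}', answer: {question}")
    return answer

print(run_chain("How many weekdays are in a fortnight?"))
```

Tree‑of‑thought and reflection follow the same shape, but branch over multiple candidate reasoning paths or feed the model’s critique of its own draft back in as another step.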
Transfer‑learning methods like RLHF and fine‑tuning are becoming more accessible, especially with recent fine‑tuning of GPT‑3.5 and Llama‑2, allowing companies to adapt foundational models to specific domains and improve via user feedback. Developers can download open‑source models from Hugging Face and fine‑tune them for high performance.
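Much of what makes fine‑tuning accessible is parameter efficiency: methods such as LoRA freeze the pretrained weights and train only a small low‑rank update. A toy NumPy sketch of that idea (the matrices and data here are synthetic, purely to show the mechanics):

```python
import numpy as np

rng = np.random.default_rng(0)

W = rng.normal(size=(8, 8))          # frozen "pretrained" weight matrix

rank = 2                             # low-rank adapter: only A and B train
A = np.zeros((8, rank))              # zero init, so training starts at W
B = rng.normal(size=(rank, 8)) * 0.01

X = rng.normal(size=(64, 8))         # toy inputs
Y = X @ (W + rng.normal(size=(8, 8)) * 0.1)  # target = perturbed base model

lr = 0.01
for _ in range(200):
    pred = X @ (W + A @ B)           # effective weights = frozen + low-rank delta
    err = pred - Y
    A -= lr * X.T @ err @ B.T / len(X)   # gradient steps on the adapter only;
    B -= lr * A.T @ X.T @ err / len(X)   # W is never touched

loss = float(np.mean((X @ (W + A @ B) - Y) ** 2))
print(loss)
```

The same principle is what lets a downloaded Hugging Face model be adapted to a domain on modest hardware: the trainable parameters are a tiny fraction of the full weight count.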
Retrieval‑augmented generation (RAG) introduces business or user context, enhancing factuality and usefulness. Vector databases from companies such as Pinecone form the backbone of RAG infrastructure.
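At its core, RAG embeds documents as vectors, retrieves the ones nearest the query, and prepends them to the prompt. A self‑contained sketch with a toy bag‑of‑words "embedding" and in‑memory index standing in for a real embedding model and a vector database such as Pinecone (the documents are invented examples):

```python
import math
from collections import Counter

DOCS = [
    "Refunds are processed within five business days.",
    "Premium plans include priority support.",
    "The API rate limit is 100 requests per minute.",
]

def embed(text: str) -> Counter:
    # Toy embedding: word counts. A real system would use a learned
    # embedding model; the retrieval logic below stays the same.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

INDEX = [(doc, embed(doc)) for doc in DOCS]  # stand-in for a vector DB

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    scored = sorted(INDEX, key=lambda p: cosine(q, p[1]), reverse=True)
    return [doc for doc, _ in scored[:k]]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What is the API rate limit?"))
```

Because the retrieved context is injected at query time, the model can answer from business data it was never trained on, which is what gives RAG its factuality gains.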