Memory Computing vs Big Data: Trends, Platforms, and Architecture Choices
This article summarizes a WeChat group Q&A on the current momentum of in‑memory computing, compares TimesTen and SAP HANA, and offers practical advice on building enterprise big‑data platforms, covering cloud vs self‑build, talent, investment, and real‑world case studies.
This article is compiled from the "Efficient Operations" WeChat group "Sit and Discuss" series, presenting the highlights of the August big‑data themed week.
Key Questions
The discussion focused on two main questions:
What is the future of in‑memory computing, given its apparent tension with big‑data trends?
What architecture and selection advice can be offered for building an enterprise big‑data platform?
Q1: Future Trend of Memory Computing
Representative memory‑computing products:
TimesTen – founded in 1996, acquired by Oracle in 2005.
SAP HANA – launched by SAP in 2010 and continuously evolving.
TimesTen
In the mid‑1990s, server memory was measured in tens of megabytes, so a fully in‑memory database was genuinely innovative. Still, TimesTen remained a traditional row‑store OLTP engine made faster by memory, rather than a conceptual breakthrough.
SAP HANA
SAP HANA broke the memory‑capacity barrier by combining terabyte‑scale memory with column‑store technology and compression, targeting data‑analysis and big‑data scenarios. Sold as an integrated appliance, it quickly became an industry reference point for in‑memory computing.
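To see why column stores pair so well with compression, consider run‑length encoding a low‑cardinality column. The sketch below is purely illustrative (it is not HANA's actual implementation, and the `region_column` data is made up): stored column‑wise and sorted, a column with few distinct values collapses into a handful of runs, which is why analytic column stores fit so much data into memory.

```python
def rle_encode(values):
    """Run-length encode a sequence into [(value, run_length), ...]."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1] = (v, runs[-1][1] + 1)  # extend the current run
        else:
            runs.append((v, 1))              # start a new run
    return runs

# A "region" column from a fact table: few distinct values, long runs
# once the table is sorted on this column.
region_column = ["EMEA"] * 4 + ["APAC"] * 3 + ["AMER"] * 5

encoded = rle_encode(region_column)
print(encoded)  # [('EMEA', 4), ('APAC', 3), ('AMER', 5)]
print(len(region_column), "values ->", len(encoded), "runs")
```

In a row store the same values are interleaved with every other column of each row, so runs like these never form; storing each column contiguously is what makes this kind of compression possible.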
Although market dynamics have shifted, the speaker remains optimistic about the future of in‑memory databases, emphasizing real‑time value, improved persistence, consistency, and the decreasing cost of SSD/Flash storage.
Q2: Architecture and Selection Advice for Enterprise Big‑Data Platforms
The recommended considerations are layered:
Self‑build vs. renting (cloud platforms).
If renting, evaluate the completeness of the architecture, technical strength, service commitments, reputation, and price (often evaluated as total cost of ownership, TCO).
If self‑building, anticipate challenges in technology, cost estimation, talent acquisition, and long‑term sustainability.
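A back‑of‑envelope TCO comparison makes the rent‑vs‑build trade‑off concrete. Every figure below is a hypothetical placeholder, not a quote from the discussion; substitute your own vendor pricing, hardware quotes, and staffing costs.

```python
# Hypothetical 3-year TCO comparison: cloud rental vs self-build.
YEARS = 3

# Renting: a managed big-data platform subscription (placeholder figure).
cloud_monthly = 12_000
cloud_tco = cloud_monthly * 12 * YEARS

# Self-building: one-off hardware plus recurring datacenter and staff costs.
hardware = 150_000           # servers, network, racks (one-off)
datacenter_yearly = 20_000   # power, cooling, hosting
ops_staff_yearly = 90_000    # at least one dedicated engineer
selfbuild_tco = hardware + (datacenter_yearly + ops_staff_yearly) * YEARS

print(f"cloud 3y TCO:      {cloud_tco:,}")
print(f"self-build 3y TCO: {selfbuild_tco:,}")
```

Note how the self‑build figure is dominated by recurring staff cost rather than hardware, which echoes the point above: talent, not equipment, is usually the binding constraint.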
Key self‑build challenges:
Talent – the most critical resource, often limited by budget.
Understanding of big data – a clear strategy, long‑term value perception, and business impact are essential.
Investment – not only financial but also decision‑making processes and key personnel.
Ultimately, assess your own business, data, and financial capacity; avoid blindly copying the architectures of large internet companies.
Classic Example: 12306 Ticketing System
In its early years, the 12306 railway ticketing platform faced severe public criticism, especially during peak holiday periods. Attempts to replicate Alibaba's Hadoop‑based architecture failed because the railway system's OLTP requirements are far more complex than typical e‑commerce workloads.
The railway ticketing system must handle billions of database transactions daily, with intricate ticket allocation rules and massive real‑time query loads.
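One reason the allocation rules are so intricate is that rail inventory is segment‑based: a ticket from station A to station C consumes a seat on every leg it covers, so availability for one itinerary depends on all overlapping itineraries. The toy model below (a hypothetical sketch, not the actual 12306 implementation; station names and capacities are invented) shows the core idea.

```python
class SeatInventory:
    """Toy segment-based seat inventory for a single train."""

    def __init__(self, stations, seats_per_leg=2):
        self.stations = stations
        # Free seats on each leg between consecutive stations.
        self.free = {i: seats_per_leg for i in range(len(stations) - 1)}

    def legs(self, origin, dest):
        i, j = self.stations.index(origin), self.stations.index(dest)
        return range(i, j)

    def book(self, origin, dest):
        """Book one seat if every covered leg still has capacity."""
        legs = list(self.legs(origin, dest))
        if all(self.free[l] > 0 for l in legs):
            for l in legs:
                self.free[l] -= 1
            return True
        return False

train = SeatInventory(["Beijing", "Jinan", "Nanjing", "Shanghai"])
print(train.book("Beijing", "Shanghai"))  # True: consumes all three legs
print(train.book("Beijing", "Jinan"))     # True: first leg had 2 seats
print(train.book("Beijing", "Jinan"))     # False: first leg is now full
print(train.book("Jinan", "Shanghai"))    # True: later legs still free
```

Even this toy version shows why a generic key‑value or analytics stack struggles here: every booking is a multi‑leg transaction that must be checked and decremented atomically under massive concurrent load.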
This illustrates that big‑data solutions must be tailored to specific workloads; no single commercial or open‑source database can claim to solve all big‑data problems.
In summary, while OLTP technology is mature, the focus should be on selecting the right data‑analysis tools, aligning with long‑term business strategy, and considering data‑flow scenarios that may become competitive differentiators in the next 3‑5 years.
Efficient Ops
This public account is maintained by Xiaotianguo and friends and regularly publishes widely‑read original technical articles. It focuses on operations transformation, accompanying readers throughout their operations careers.