Application and Exploration of Financial Knowledge Graphs
This article presents a comprehensive overview of financial knowledge graphs, covering their historical evolution, theoretical foundations, technical stack, implementation steps, and real‑world case studies in banking, regulatory technology, and securities, while highlighting community resources for AI and big‑data practitioners.
The talk, presented by Zhang Qiujian, director of Xinghuan Technology's Financial Division, covers the implementation, theory, and technology foundations of financial knowledge graphs, illustrated with three finance‑related cases.
It traces the historical evolution from 1950‑60s semantic networks to 1980s ontology adoption and the transition from text links to data links, culminating in Google’s 2012 Knowledge Graph.
The theoretical section discusses knowledge acquisition through nature, experience, and cultural transmission, and outlines five research approaches—symbolic, connectionist, evolutionary, statistical, and analogical—each forming a distinct school of thought.
The technical stack is described as four layers: theory; data (e.g., Freebase, Knowledge Vault, Wikidata); implementation (acquisition, extraction, fusion, storage, reasoning/modeling, and discovery) using RDF triples and graph databases; and application domains.
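The RDF triple model mentioned above can be illustrated with a minimal sketch. The entity names and predicates below are assumed examples for illustration, not data from the talk:

```python
# Financial facts as RDF-style (subject, predicate, object) triples.
# In production these would live in a triple store or graph database;
# a plain list suffices to show the data model.
triples = [
    ("CompanyA", "guarantees", "CompanyB"),
    ("CompanyB", "subsidiaryOf", "GroupC"),
    ("GroupC", "listedOn", "SSE"),
]

def query(subject, predicate):
    """Return all objects linked to `subject` by `predicate`."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(query("CompanyA", "guarantees"))  # → ['CompanyB']
```

Each fact is one edge in the graph, which is why graph databases map so naturally onto the triple model.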
Application domains span generic areas such as search, robotics, and IoT, and industry‑specific sectors like finance, energy, and healthcare, with pipelines that include data collection, triple extraction, entity disambiguation, storage in Elasticsearch or graph DBs, and building search or QA products.
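The collection → extraction → disambiguation → storage pipeline can be sketched end to end. Everything here is a simplified assumption: the regex pattern, the alias table, and the in-memory dict standing in for Elasticsearch or a graph database:

```python
import re

# Assumed alias table mapping surface forms to a canonical entity name,
# standing in for a real entity-disambiguation step.
ALIASES = {"ACME Inc.": "ACME", "ACME Corporation": "ACME"}

def disambiguate(name):
    return ALIASES.get(name, name)

def extract_triples(text):
    # Naive pattern-based extraction: "<X> acquires <Y>." -> (X, "acquires", Y).
    # Real systems use NLP models; a regex shows the shape of the step.
    for m in re.finditer(r"(.+?) acquires (.+?)\.", text):
        yield (disambiguate(m.group(1)), "acquires", disambiguate(m.group(2)))

# In-memory index (stand-in for Elasticsearch or a graph DB):
# subject -> list of (predicate, object).
index = {}
for s, p, o in extract_triples("ACME Inc. acquires BetaSoft."):
    index.setdefault(s, []).append((p, o))

print(index)  # → {'ACME': [('acquires', 'BetaSoft')]}
```

A search or QA product then resolves a user's query to a canonical entity and reads its edges from the index.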
Case studies include: (1) banking – credit‑guarantee chain analysis via graph traversal; (2) regulatory technology – risk‑monitoring dashboards for the China Banking Regulatory Commission; (3) securities – an intelligent investment research platform featuring company, supply‑chain, and sentiment graphs.
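The banking case's credit-guarantee chain analysis can be sketched as a graph traversal. Companies are nodes, "A guarantees B" is a directed edge, and a circular guarantee chain shows up as a cycle; the company names below are hypothetical:

```python
def find_guarantee_cycles(edges):
    """Find circular guarantee chains via depth-first traversal.

    edges: list of (guarantor, guarantee) pairs.
    Returns each cycle once per starting node (i.e., rotations repeat).
    """
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)

    cycles = []

    def dfs(node, path):
        for nxt in graph.get(node, []):
            if nxt in path:
                # Revisiting a node on the current path closes a cycle.
                cycles.append(path[path.index(nxt):] + [nxt])
            else:
                dfs(nxt, path + [nxt])

    for start in list(graph):
        dfs(start, [start])
    return cycles

# A guarantees B, B guarantees C, C guarantees A and D: one circular chain.
edges = [("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")]
print(find_guarantee_cycles(edges))
# → [['A', 'B', 'C', 'A'], ['B', 'C', 'A', 'B'], ['C', 'A', 'B', 'C']]
```

In a real deployment this traversal runs inside the graph database itself (e.g., as a Cypher or Gremlin query), but the risk signal is the same: a cycle means each company's credit ultimately backs itself.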
The article concludes with information about the DataFun community, which provides slide downloads, QR codes for follow‑up, and promotes the sharing of big‑data and AI practices.
DataFunTalk
Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.