AutoCross: Automatic Feature Crossing for Tabular Data in Real-World Applications
The article presents AutoCross, a system that automatically generates and selects high‑order feature crossings for tabular data using multi‑granularity discretization, beam search, field‑wise logistic regression, and successive mini‑batch gradient descent, achieving strong accuracy and efficiency in large‑scale recommendation scenarios.
In this talk, Luo Yuanfei from Fourth Paradigm introduces AutoCross, a solution for automatic feature engineering on high‑dimensional sparse tabular data commonly encountered in recommendation systems.
Traditional feature crossing methods such as RMI and CMI consider only second‑order interactions and still incur O(n²) complexity in the number of features, while implicit methods such as FM, FFM, and deep neural networks capture interactions but lack interpretability.
AutoCross addresses these issues by supporting high‑order, interpretable feature crossings with low inference cost. Its overall architecture takes raw data and feature types as input, passes them through a flow that includes preprocessing, feature generation, and iterative feature selection, and outputs a feature generator applicable to new data.
The key algorithms are:
Multi‑granularity discretization: each continuous feature is discretized at several granularities (e.g., into 5, 10, or 20 bins), letting downstream feature selection keep the most effective representation.
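As an illustration, discretizing one feature at several bin counts might look like the following minimal sketch (equal‑width binning and the `granularities` parameter are assumptions here, not the paper's exact scheme):

```python
def multi_granularity_discretize(values, granularities=(5, 10, 20)):
    """Discretize one continuous feature at several bin counts.

    Returns {granularity: list of bin indices}, so downstream feature
    selection can keep whichever granularity works best.
    """
    lo, hi = min(values), max(values)
    width = (hi - lo) or 1.0  # avoid division by zero for constant features
    out = {}
    for g in granularities:
        # equal-width bins 0..g-1; clamp the maximum value into the last bin
        out[g] = [min(int((v - lo) / width * g), g - 1) for v in values]
    return out
```

All granularities are kept as candidate representations; the search stage, not a fixed rule, decides which one survives.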
Beam Search: a greedy search that first creates promising second‑order features, then expands them to higher‑order ones, dramatically reducing the exponential search space.
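A minimal sketch of that beam‑search loop (the `evaluate` scoring callback, beam width, and maximum order are placeholders; in AutoCross the scoring would come from the field‑wise LR evaluation described next):

```python
def beam_search_crosses(fields, evaluate, beam_width=2, max_order=3):
    """Greedily grow feature crosses: keep only the best `beam_width`
    candidates at each order instead of enumerating all combinations."""
    beam = [frozenset([f]) for f in fields]
    selected = []
    for _ in range(max_order - 1):
        # expand every surviving cross by one more original field
        candidates = {cross | {f} for cross in beam for f in fields if f not in cross}
        beam = sorted(candidates, key=evaluate, reverse=True)[:beam_width]
        selected.extend(beam)
    return selected
```

Because only `beam_width` crosses survive each round, the search explores high‑order combinations without the exponential blow‑up of exhaustive enumeration.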
Field‑wise Logistic Regression (Field‑wise LR): fixes parameters of already selected features and evaluates candidate features to identify the one that maximally improves model performance, saving computation, communication, and storage.
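The idea can be sketched as follows (a hypothetical simplification: the frozen features' contribution is precomputed as `base_logits`, and only the candidate's scalar weight is fitted by gradient descent):

```python
import math

def fieldwise_lr_eval(base_logits, cand, labels, epochs=100, lr=0.5):
    """Score one candidate feature while all selected features stay frozen.

    base_logits: precomputed logits from the already-selected features.
    cand:        candidate feature values; only its weight w is trained.
    Returns the log-loss after training, for ranking candidates.
    """
    w, n = 0.0, len(labels)
    for _ in range(epochs):
        # gradient of log-loss w.r.t. the single trainable weight w
        grad = sum((1 / (1 + math.exp(-(z + w * x))) - y) * x
                   for z, x, y in zip(base_logits, cand, labels))
        w -= lr * grad / n
    return sum(-y * math.log(p) - (1 - y) * math.log(1 - p)
               for z, x, y in zip(base_logits, cand, labels)
               for p in [1 / (1 + math.exp(-(z + w * x)))]) / n
```

Training one weight per candidate instead of refitting the whole model is what makes evaluating many candidates per beam-search round affordable.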
Successive Mini‑batch Gradient Descent: progressively discards weak candidates while allocating more mini‑batches to promising features, further lowering evaluation cost.
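This works like successive halving over the candidate set, roughly as follows (the `partial_loss(cand, n_batches)` callback is hypothetical; it stands for training and evaluating a candidate on `n_batches` mini‑batches):

```python
def successive_minibatch_select(candidates, partial_loss, schedule=(1, 2, 4)):
    """Evaluate all candidates on a few mini-batches, drop the worse half,
    then spend more mini-batches on the survivors, and so on."""
    alive = list(candidates)
    for n_batches in schedule:
        alive.sort(key=lambda c: partial_loss(c, n_batches))
        alive = alive[:max(1, len(alive) // 2)]
    return alive[0]
```

Weak candidates are eliminated after seeing only a little data, so the bulk of the mini‑batch budget is spent on the few features that actually matter.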
System‑level optimizations include caching feature weights to avoid repeated network and computation overhead, online computation with separate threads for data serialization and feature generation, and data parallelism across multiple processes coordinated by a parameter server.
Experimental results on ten datasets show that adding AutoCross‑generated features to logistic regression or Wide&Deep models consistently outperforms baselines such as CMI and achieves performance comparable to state‑of‑the‑art deep models.
References: Luo et al., 2019 (AutoCross); Rómer et al., 2012; Chapelle et al., 2015; Rendle, 2010 (FM); Juan et al., 2016 (FFM); Guo et al., 2017 (DeepFM).
DataFunTalk
Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.