ExcelFormer: Can a DNN be a Sure Bet for Tabular Prediction? (2024)

Jintai Chen (jtchen721@gmail.com), Univ. of Illinois Urbana-Champaign, Urbana, IL, USA; Jiahuan Yan (jyansir@zju.edu.cn), Zhejiang University, Hangzhou, China; Qiyuan Chen (chenqiyuan1012@gmail.com), Zhejiang University, Hangzhou, China; Danny Z. Chen (dchen@nd.edu), University of Notre Dame, Notre Dame, IN, USA; Jian Wu (wujian2000@zju.edu.cn), Zhejiang University, Hangzhou, China; and Jimeng Sun (jimeng@illinois.edu), Univ. of Illinois Urbana-Champaign, Urbana, IL, USA

(2024; 8 Feb 2024; 17 May 2024)

Abstract.

Data organized in tabular format is ubiquitous in real-world applications, and users often craft tables with biased feature definitions and flexibly set prediction targets of their interest. Thus, a robust, effective, dataset-versatile, and user-friendly tabular prediction approach that can be developed rapidly is highly desired. While Gradient Boosting Decision Trees (GBDTs) and existing deep neural networks (DNNs) have been extensively utilized by professional users, they present several challenges for casual users, particularly: (i) the dilemma of model selection due to their different dataset preferences, and (ii) the need for heavy hyperparameter search, without which their performance is deemed inadequate. In this paper, we delve into this question: Can we develop a deep learning model that serves as a sure bet solution for a wide range of tabular prediction tasks, while also being user-friendly for casual users? We examine three key drawbacks of deep tabular models: (P1) lack of rotational variance, (P2) large data demand, and (P3) overly smooth solutions. We propose ExcelFormer, which addresses these challenges through a semi-permeable attention module that effectively constrains the influence of less informative features to break the DNNs' rotational invariance property (for P1), data augmentation approaches tailored to tabular data (for P2), and an attentive feedforward network to boost the model's fitting capability (for P3). These designs collectively make ExcelFormer a sure bet solution for diverse tabular datasets. Extensive and stratified experiments conducted on real-world datasets demonstrate that our model outperforms previous approaches across diverse tabular data prediction tasks, and the framework is friendly to casual users, offering ease of use without heavy hyperparameter tuning. The code is available at https://github.com/whatashot/excelformer.

Tabular data prediction, Mixup

*: Co-first Authors.

copyright: ACM licensed; journal year: 2024; doi: XXXXXXX.XXXXXXX; conference: SIGKDD Conference on Knowledge Discovery and Data Mining, Aug 25–29, 2024, Barcelona, Spain; isbn: 978-1-4503-XXXX-X/18/06; ccs: Computing methodologies — Artificial intelligence

1. Introduction

(Figure 1: Performance of DNNs versus GBDTs on three example datasets as the number of training samples varies.)

Tabular data is ubiquitous and plays a critical role in real-world applications, spanning diverse domains such as medical prediction (Wu et al., 2022; Yan et al., 2024), market prediction (Wang et al., 2017), and financial risk forecasting (Kim et al., 2020b). However, unlike fields such as image and natural language processing, where data from disparate datasets frequently exhibit similar spatial or sequential feature relations and aligned semantics, tabular data often lack such "common" and "standard" data structures. Tables are typically created by casual users for diverse purposes. The features and targets can be defined subjectively, and table columns (features) are added or removed arbitrarily, sometimes resulting in missing information or added noise. Therefore, while bespoke frameworks following specific inductive biases have thrived in the domains of image and textual data, achieving comparable success on tabular data is notably challenging.

Consequently, users are compelled to undergo computationally intensive hyperparameter search during model development for specific tabular datasets, and there is currently no universally recognized method for selecting a model and determining a set of hyperparameters without comprehensive testing on the target datasets. In this paper, we endeavor to design a DNN framework that serves as a sure bet solution for diverse tabular datasets by solving key drawbacks of existing DNNs. Inspired by (Grinsztajn et al., 2022), we summarize three key drawbacks of current deep tabular models:

(P1) Lack of rotational variance. As each table column holds distinct semantics and tabular data lack rotational invariance (Ng, 2004), rotationally variant algorithms like decision trees are more efficient on tabular datasets. However, DNNs are rotationally invariant algorithms whose worst-case sample complexity grows at least linearly in the number of uninformative features. As mentioned above, tables are created by casual users who frequently include uninformative features, underscoring the importance of the rotational variance property.

(P2) Large data demand. DNNs typically possess larger hypothesis spaces and thus require more training data to obtain robust performance. It is widely noted that DNNs frequently exhibit competitive, and at times superior, performance compared to GBDTs on large-scale tabular datasets, yet their performance tends to be subpar on smaller datasets.

(P3) Overly smooth solutions. Observations suggest that DNNs tend to produce overly smooth solutions, a factor pinpointed by (Grinsztajn et al., 2022) as a contributor to suboptimal performance on datasets featuring irregular decision boundaries (as illustrated in Fig. 3). In contrast, decision trees partition the feature space based on thresholds along each axis, yielding sharp boundaries that have been shown to suit the majority of datasets.

(Figure 2: The overall workflow of ExcelFormer.)

Among these, the large data demand (P2) appears to be the primary obstacle for current deep tabular models: intuitively, rotationally invariant DNNs are data-inefficient (Grinsztajn et al., 2022), but this drawback can be mitigated given sufficient training data (Ng, 2004). Moreover, modern DNNs have been shown to be able to fit any function (Goodfellow et al., 2016). Thus, if enough discrete data points fill the feature space to accurately represent the underlying feature-target functions, DNNs can fit such functions rather than settle for overly smooth solutions. We empirically observed that even though DNNs outperform GBDTs on large tabular datasets, their performance declines significantly and falls short of GBDTs when fewer training samples are used. Three examples are presented in Fig. 1. Thus, boosting the effectiveness of DNNs on small datasets is the key to achieving a sure bet model.

In this paper, we delve into addressing the limitations of existing DNNs and present a robust model, ExcelFormer, for diverse tabular data prediction tasks. To address (P1), we design a novel attention module named the semi-permeable attention (SPA), which selectively permits more informative features to gather information from less informative ones. This results in a noticeably reduced influence of less informative features. A special interaction-attenuation initialization approach is devised to boost this module. This initialization approach sets SPA’s parameters with minimal values, effectively attenuating tabular data feature interactions during the initial stages of training. Consequently, SPA undertakes more cautious feature interactions at the beginning, learning key interactions and aiding ExcelFormer to break the rotational invariance.

To address (P2), we introduce two interpolation-based data augmentation approaches for tabular data: Feat-Mix and Hid-Mix. Interpolation-based data augmentation approaches, such as Mixup (Zhang et al., 2018) and its variants (Verma et al., 2019; Uddin et al., 2020), have demonstrated their effectiveness in computer vision tasks. However, since feature-target functions on tabular data are often irregular (Grinsztajn et al., 2022), simple interpolation methods like Mixup regularize DNNs to behave linearly in between training examples (Zhang et al., 2018), which potentially conflicts with these irregular functions. Therefore, they often fail to improve, and can even degrade, the performance of DNNs. Our Feat-Mix and Hid-Mix avoid such conflicts and respectively encourage DNNs to learn independent feature transformations and to conduct sparse feature interactions.

To address (P3), we employ an attentive feedforward network to replace the vanilla multilayer-perceptron-based feedforward network in the Transformer. Both consist of two fully connected layers, but the substitution integrates an attentive (gating) module that enhances the model's ability to fit irregular boundaries. Following previous work, we utilize Gated Linear Units (GLU) (Shazeer, 2020) and do not introduce any new modules. Experimental results confirm that this simple substitution effectively improves model performance.

Notably, replacing the vanilla self-attention and feedforward network with SPA and the attentive feedforward network incurs no additional computational burden, and the data augmentation approaches come at negligible cost. That is, the proposed ExcelFormer maintains a size comparable to that of cutting-edge tabular Transformers (e.g., FT-Transformer (Rubachev et al., 2022)). Comprehensive and stratified experiments demonstrate the superiority of our ExcelFormer:

(1) Our model outperforms existing GBDTs and DNNs not only on small tabular datasets, where existing DNNs typically underperform GBDTs, but also on large-scale datasets, which traditional DNNs have favored (Sec. 6.2).

(2) Across a spectrum of stratified experiments in terms of feature quantity, dataset scale, and task type, our ExcelFormer consistently outperforms both GBDTs and DNNs. Additionally, we observe that, apart from ExcelFormer, existing approaches each excel on different types of datasets. This underscores ExcelFormer's status as a reliable solution for non-expert users, mitigating the need for intricate model selection (Sec. 6.3 and Sec. 6.4).

(3) Notably, while most existing approaches necessitate intensive hyperparameter tuning through repeated model runs (typically 50-100 times), our model achieves superior performance with pre-fixed hyperparameters, offering a time-efficient and user-friendly solution (Appendix D).

2. Related Work

2.1. Supervised Tabular Data Prediction

While deep neural networks (DNNs) have proven effective in computer vision (Khan et al., 2022) and natural language processing (Vaswani et al., 2017), GBDT approaches like XGBoost continue to be the preferred choice for tabular data prediction tasks (Katzir et al., 2020; Grinsztajn et al., 2022), particularly on smaller-scale datasets, due to their consistently superior performance. To enhance the performance of DNNs, recent studies have focused on developing sophisticated neural modules for (i) handling heterogeneous feature interactions (Gorishniy et al., 2021; Chen et al., 2022; Yan et al., 2023; Chen et al., 2023), (ii) seeking decision paths by emulating decision-tree-like approaches (Katzir et al., 2020; Popov et al., 2019; Arik and Pfister, 2021), or (iii) resorting to conventional approaches (Cheng et al., 2016; Guo et al., 2017) and regularizations (Jeffares et al., 2023). In addition to model designs, various feature representation approaches, such as feature embedding (Gorishniy et al., 2022; Chen et al., 2023), discretization of continuous features (Guo et al., 2021; Wang et al., 2020), and Boolean-algebra-based methods (Wang et al., 2021b), have been well explored. All these efforts suggest the potential of DNNs, but they have not yet surpassed GBDTs in performance, especially on small-scale datasets. Moreover, there have been several attempts (Wang and Sun, 2022; Arik and Pfister, 2021; Yoon et al., 2020; Zhu et al., 2023) to apply self-supervised learning to tabular datasets. However, many of these approaches are dataset- or domain-specific, and transferring these models to distant domains remains challenging due to the heterogeneity across tabular datasets. While pretrained on a substantial dataset corpus, XTab (Zhu et al., 2023) offered only a modest performance improvement due to the limited shared knowledge across datasets. TabPFN (Hollmann et al., 2022) concentrated on classification problems for small-scale tabular datasets and achieved commendable results, but its efficiency wanes when applied to larger datasets and regression tasks. In summary, compared to decision-tree-based GBDTs, DNNs still fall short on tabular data, especially on small-scale datasets, which remains an open challenge.

2.2. Mixup and other Data Augmentations

The vanilla Mixup (Zhang et al., 2018) generates a new sample through convex interpolation of two existing samples, which has proven beneficial on computer vision tasks (Tajbakhsh et al., 2020; Touvron et al., 2021a). However, we have observed that vanilla Mixup may conflict with irregular target patterns (see Fig. 3) and typically achieves inferior performance. For instance, in the context of predicting therapy feasibility, a 70-year-old man (elderly individual) and a 10-year-old boy (young individual) may not meet the criteria for a particular therapy, but an individual with an interpolated feature value (aged 40) would benefit from it. In other words, the vanilla Mixup can lead to overly smooth solutions, which are considered unsuitable (Grinsztajn et al., 2022). ManifoldMix (Verma et al., 2019) applied similar interpolations in the hidden states, which did not fundamentally alter the data synthesis approach of Mixup and exhibited similar characteristics. The follow-up variants CutMix (Yun et al., 2019), AttentiveMix (Walawalkar et al., 2020), SaliencyMix (Uddin et al., 2020), ResizeMix (Qin et al., 2020), and PuzzleMix (Kim et al., 2020a) splice image pieces spatially, preserving local image patterns but not being directly applicable to tabular data. CutMix is used in SAINT (Somepalli et al., 2021) for tabular data prediction, but it is highly impacted by uninformative features, as shown in Table 6. Kadra et al. (2021) investigated various data augmentation techniques aimed at enhancing the performance of MLPs on tabular data; however, these methods were found to be effective only on a limited number of tabular datasets, requiring time-consuming enumeration and testing of the options. In contrast, this paper introduces two novel data augmentation approaches for tabular data, Hid-Mix and Feat-Mix, which avoid the conflicts encountered with Mixup and contribute to ExcelFormer's superior performance.

3. ExcelFormer

The workflow of ExcelFormer is illustrated in Fig. 2. ExcelFormer processes data with the following components: 1) after pre-processing as in Gorishniy et al. (2021), the embedding layer featurizes and embeds tabular features into token-level embeddings; 2) the token-level embeddings are alternately processed by the newly proposed semi-permeable attention modules (SPA) and gated linear units (GLUs); 3) finally, a prediction head yields the target. In the following, we first introduce the novel semi-permeable attention with the interaction attenuated initialization and the GLU-based attentive feedforward network, and then the remaining parts of ExcelFormer.

3.1. Solving (P1) with Semi-Permeable Attention

(Figure 3: An illustration of irregular tabular feature-target patterns (decision boundaries).)

As stated in (Ng, 2004), less informative features make minor contributions to target prediction but still necessitate at least a linear increase in the number of training samples required to learn how to "ignore" them. DNNs are rotationally invariant algorithms, which are data-inefficient, with a worst-case sample complexity increasing at least linearly with the number of uninformative features (Grinsztajn et al., 2022).

Our idea is to incorporate an inductive bias into the self-attention mechanism that selectively restricts the impact of each feature to only those features that are less informative than it, thereby reducing the overall impact of uninformative features on prediction outcomes. We propose a semi-permeable attention module (SPA), as:

(1)  $z^{\prime} = \text{softmax}\left(\frac{(zW_{q})(zW_{k})^{T} \oplus M}{\sqrt{d}}\right)(zW_{v}),$

where $z \in \mathbb{R}^{f \times d}$ is the input embeddings and $z^{\prime}$ the output embeddings, $W_{q}, W_{k}, W_{v} \in \mathbb{R}^{d \times d}$ are learnable matrices, and $\oplus$ is element-wise addition. $M \in \mathbb{R}^{f \times f}$ is an unoptimizable mask, whose element at the $i$-th row and $j$-th column is defined by:

(2)  $M[i,j] = \begin{cases} -\infty & I(\mathbf{f}_{i}) > I(\mathbf{f}_{j}) \\ 0 & I(\mathbf{f}_{i}) \leq I(\mathbf{f}_{j}) \end{cases}$

The function $I(\cdot)$ represents a measure of feature importance, and we use the "mutual information" metric in this paper (see Appendix E for details). If a feature $\mathbf{f}_{i}$ is more informative than $\mathbf{f}_{j}$, $M[i,j]$ is set to $-\infty$ (we use $-10^{5}$ in implementation) and thus the $(i,j)$ entry of the attention map is masked. This prevents the transfer of the embedding of feature $\mathbf{f}_{j}$ to the embedding of $\mathbf{f}_{i}$.

In this way, only more informative features are permitted to propagate information to the less informative ones, and the reverse is not allowed. SPA thus maintains interaction pathways between any two features while constraining the impacts of less informative ones. Intuitively, when training samples are insufficient, some feature interactions conducted by the model may be sub-optimal, as vanilla self-attention has been shown to be data-inefficient (Touvron et al., 2021a). SPA avoids excessive impacts of a noisy feature on prediction outcomes in case some associated interaction pathways are ill-suited. Furthermore, SPA inhibits certain feature transfer pathways, thereby obviating the need for ExcelFormer to learn along some rotational directions. The rotational properties of ExcelFormer thus lie between those of a rotationally invariant counterpart with vanilla self-attention (e.g., FT-Transformer) and a fully rotationally variant model (e.g., a feedforward network conducting no feature interactions). In other words, our SPA partially disrupts DNNs' rotational invariance. In practice, SPA is extended to a multi-head self-attention version, with 32 heads by default.
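Below is a minimal, single-head PyTorch sketch of SPA, assuming per-feature importance scores (e.g., mutual information) have been computed beforehand; the class and function names are illustrative and not taken from the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def build_spa_mask(importance: torch.Tensor, neg_inf: float = -1e5) -> torch.Tensor:
    """Eq. (2): M[i, j] = -inf if feature i is more informative than feature j,
    so a more informative feature cannot receive information from a less
    informative one; 0 otherwise."""
    imp_i = importance[:, None]            # (f, 1), importance of the receiving feature
    imp_j = importance[None, :]            # (1, f), importance of the sending feature
    mask = torch.zeros(importance.numel(), importance.numel())
    mask[imp_i > imp_j] = neg_inf          # block flow from less to more informative
    return mask


class SemiPermeableAttention(nn.Module):
    """Single-head sketch of Eq. (1); the paper uses a multi-head version (32 heads)."""

    def __init__(self, d: int, importance: torch.Tensor):
        super().__init__()
        self.wq = nn.Linear(d, d, bias=False)
        self.wk = nn.Linear(d, d, bias=False)
        self.wv = nn.Linear(d, d, bias=False)
        self.register_buffer("mask", build_spa_mask(importance))  # unoptimizable
        self.scale = d ** 0.5

    def forward(self, z: torch.Tensor) -> torch.Tensor:           # z: (batch, f, d)
        q, k, v = self.wq(z), self.wk(z), self.wv(z)
        logits = (q @ k.transpose(-2, -1) + self.mask) / self.scale
        return F.softmax(logits, dim=-1) @ v
```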

(Figure 4: Illustration of the Hid-Mix and Feat-Mix data augmentation schemes.)

Interaction Attenuated Initialization.

Similar to how SPA disrupts rotational invariance by diminishing feature interactions, we present a specialized initialization approach for SPA to ensure that ExcelFormer starts as a largely non-rotationally invariant model. Notably, if all self-attention operations are removed from a Transformer model, the features are processed individually, which makes the model nearly non-rotationally invariant (setting aside the fully connected layers that fuse features for target prediction). Concurrently, prior research has evidenced the indispensable role of feature interactions (e.g., through self-attention) in Transformer-based models on tabular data (Gorishniy et al., 2021; Yan et al., 2023). Integrating these insights, our interaction attenuated initialization scheme dampens the impact of SPA during the early stages of training, allowing essential feature interactions to grow progressively under the driving force of the data.

Our interaction attenuated initialization scheme builds upon the commonly used He initialization (He et al., 2015) or Xavier initialization (Glorot and Bengio, 2010), by rescaling the variance of an initialized weight $w$ with $\gamma$ ($\gamma \rightarrow 0^{+}$) while keeping the expectation at 0:

(3)  $\text{Var}(w) = \gamma\,\text{Var}_{\text{prev}}(w),$

where $\text{Var}_{\text{prev}}(w)$ denotes the weight variance used in He's or Xavier initialization. In this work, we set $\gamma = 10^{-4}$. To reduce the impact of SPA, we apply Eq. (3) to all the parameters in the SPA module. Thus, ExcelFormer initially behaves like a non-rotationally invariant model.

In fact, for a module with an additive identity shortcut like $y = \mathcal{F}(x) + x$, our initialization approach attenuates the sub-network $\mathcal{F}(x)$ and satisfies the property of dynamical isometry (Saxe et al., 2014) for better trainability. Some previous work (Bachlechner et al., 2021; Touvron et al., 2021b) suggested rescaling the $\mathcal{F}(x)$ path as $y = \eta\mathcal{F}(x) + x$, where $\eta$ is a learnable scalar initialized to 0 or a learnable diagonal matrix with very small elements. Different from these methods, our attenuated initialization directly assigns minuscule values to the weights at initialization. This better suits the flexible learning of whether each feature interaction pathway should be activated, thereby achieving sparse attention.
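A minimal sketch of this initialization, assuming He (Kaiming) initialization as the base scheme and applying the rescaling of Eq. (3) to the SPA projection layers; the helper name is ours, not the authors'.

```python
import math
import torch
import torch.nn as nn


def interaction_attenuated_init_(linear: nn.Linear, gamma: float = 1e-4) -> None:
    # He initialization gives Var_prev(w); multiplying the weights by sqrt(gamma)
    # rescales the variance by gamma while keeping the expectation at 0 (Eq. 3).
    nn.init.kaiming_normal_(linear.weight)
    with torch.no_grad():
        linear.weight.mul_(math.sqrt(gamma))
        if linear.bias is not None:
            linear.bias.zero_()


# Example: attenuate the SPA projections from the sketch above.
# spa = SemiPermeableAttention(d=256, importance=importance)
# for proj in (spa.wq, spa.wk, spa.wv):
#     interaction_attenuated_init_(proj, gamma=1e-4)
```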

3.2. Solving (P3) with GLU layer

The irregularities present in tabular feature-target relationships (Fig. 3 shows an example) make decision trees, which use multiple thresholds to split the feature space, particularly advantageous. In contrast, the standard Transformer employs a two-layer MLP as its feedforward network (FFN), which possesses a lesser degree of non-linear fitting capability. Therefore, we replace the vanilla FFN with a Gated Linear Unit (GLU) layer. Diverging from the standard GLU architecture, we employ the "tanh" activation in lieu of the "sigmoid" activation for better optimization properties (LeCun et al., 2002), as:

(4)  $z^{\prime} = \tanh(\texttt{Linear}_{1}(z)) \odot \texttt{Linear}_{2}(z),$

where $\texttt{Linear}_{1}$ and $\texttt{Linear}_{2}$ are applied along the embedding dimension $d$ of $z$, and $\odot$ denotes element-wise product. Note that both the vanilla FFN and GLU employ two fully connected layers (the FFN is defined by $z^{\prime} = \texttt{Linear}_{1}(\text{ReLU}(\texttt{Linear}_{2}(z)))$), resulting in similar computational costs. The SPA and GLU modules are alternately stacked to form the core structure of ExcelFormer, as shown in Fig. 2.

Existing tabular Transformers (Yan et al., 2023; Gorishniy et al., 2021) use a linear embedding layer to independently map each feature $\mathbf{f}_{i} \in \mathbb{R}$ into an embedding $\mathbf{z}_{i} \in \mathbb{R}^{d}$ by $\mathbf{z}_{i} = \mathbf{f}_{i}W_{i,1} + b_{i,1}$. Here we also use a GLU to replace it, as $\mathbf{z}_{i} = \tanh(\mathbf{f}_{i}W_{i,1} + b_{i,1}) \odot (\mathbf{f}_{i}W_{i,2} + b_{i,2})$, where $W_{i,1}, W_{i,2} \in \mathbb{R}^{1 \times d}$ and $b_{i,1}, b_{i,2} \in \mathbb{R}^{d}$ are learnable parameters. The initial feature embedding $z^{(0)}$ is then obtained by stacking all $\mathbf{z}_{i}$ ($i = 1, 2, \ldots, f$) as $z^{(0)} = [\mathbf{z}_{1}, \mathbf{z}_{2}, \mathbf{z}_{3}, \ldots, \mathbf{z}_{f}]^{T} \in \mathbb{R}^{f \times d}$, as in previous work.
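A minimal PyTorch sketch of the tanh-gated GLU feedforward block (Eq. 4) and of the GLU-style per-feature embedding layer described above; module names, hidden sizes, and the weight initialization scale are illustrative assumptions.

```python
import torch
import torch.nn as nn


class GLULayer(nn.Module):
    """z' = tanh(Linear_1(z)) * Linear_2(z), applied along the embedding dimension."""

    def __init__(self, d: int):
        super().__init__()
        self.linear1 = nn.Linear(d, d)
        self.linear2 = nn.Linear(d, d)

    def forward(self, z: torch.Tensor) -> torch.Tensor:   # z: (batch, f, d)
        return torch.tanh(self.linear1(z)) * self.linear2(z)


class GLUEmbedding(nn.Module):
    """Maps each scalar feature f_i to a d-dimensional token with its own gated projection."""

    def __init__(self, num_features: int, d: int):
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(num_features, d) * 0.01)
        self.b1 = nn.Parameter(torch.zeros(num_features, d))
        self.w2 = nn.Parameter(torch.randn(num_features, d) * 0.01)
        self.b2 = nn.Parameter(torch.zeros(num_features, d))

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (batch, f)
        x = x.unsqueeze(-1)                                # (batch, f, 1)
        return torch.tanh(x * self.w1 + self.b1) * (x * self.w2 + self.b2)
```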

3.3. The Remaining Parts of ExcelFormer

Feature Pre-processing.

Feature values are pre-processed before being fed into ExcelFormer. Numerical features are normalized by quantile transformation, and categorical features are converted into numerical ones using the CatBoost Encoder implemented in the scikit-learn-contrib category_encoders package (https://contrib.scikit-learn.org/category_encoders/catboost.html). This step is similar to previous works (e.g., FT-Transformer (Gorishniy et al., 2021)).
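A hedged sketch of this pre-processing step using scikit-learn's QuantileTransformer and the CatBoostEncoder from the category_encoders package linked above; the column lists and quantile settings are placeholder assumptions.

```python
import pandas as pd
from sklearn.preprocessing import QuantileTransformer
from category_encoders import CatBoostEncoder


def preprocess(train_df: pd.DataFrame, test_df: pd.DataFrame,
               num_cols: list, cat_cols: list, target: str):
    # Quantile-normalize numerical features (fit on training data only).
    qt = QuantileTransformer(output_distribution="normal",
                             n_quantiles=min(len(train_df), 1000))
    train_num = qt.fit_transform(train_df[num_cols])
    test_num = qt.transform(test_df[num_cols])

    # Target-aware CatBoost encoding of categorical features.
    enc = CatBoostEncoder(cols=cat_cols)
    train_cat = enc.fit_transform(train_df[cat_cols], train_df[target])
    test_cat = enc.transform(test_df[cat_cols])
    return (train_num, train_cat), (test_num, test_cat)
```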

Prediction Head.

The prediction head is directly applied to the output of the topmost Transformer block. It contains two fully connected layers that separately compress the information along the token embeddings and fuse the information across features:

(5)  $p = \phi(\texttt{Linear}_{d}(\text{P-ReLU}(\texttt{Linear}_{f}(z^{(L)})))),$

where $z^{(L)}$ is the input, and $W_{f} \in \mathbb{R}^{f \times C}$ and $b_{f} \in \mathbb{R}^{C}$ are the weights of $\texttt{Linear}_{f}$. For multi-class classification tasks, $C$ is the number of target categories and $\phi$ denotes "softmax". For regression and binary classification tasks, $C = 1$ and $\phi$ is the sigmoid function. The fully connected layers $\texttt{Linear}_{f}$ and $\texttt{Linear}_{d}$ are applied along the feature dimension and the embedding dimension of $z^{(L)}$, respectively.
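A minimal sketch of this head (Eq. 5): one linear layer mixes features into $C$ outputs, a PReLU follows, and a second linear layer collapses the embedding dimension; the output activation $\phi$ is left to the caller or the loss. Class and dimension names are illustrative.

```python
import torch
import torch.nn as nn


class PredictionHead(nn.Module):
    def __init__(self, num_features: int, d: int, num_outputs: int):
        super().__init__()
        self.linear_f = nn.Linear(num_features, num_outputs)  # along the feature dim
        self.act = nn.PReLU()
        self.linear_d = nn.Linear(d, 1)                        # along the embedding dim

    def forward(self, z: torch.Tensor) -> torch.Tensor:       # z: (batch, f, d)
        h = self.linear_f(z.transpose(1, 2))                  # (batch, d, C)
        h = self.act(h)
        out = self.linear_d(h.transpose(1, 2)).squeeze(-1)    # (batch, C)
        return out  # apply softmax / sigmoid (phi) externally or inside the loss
```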

4. Solving (P2) with Data Augmentation

A straightforward approach to tackling data insufficiency is to create more training data. While Mixup (Zhang et al., 2018) regularizes DNNs to favor linear behavior between samples and stands as one of the most effective data augmentation methods in computer vision, empirical evidence suggests that it does not perform optimally on tabular datasets (e.g., see Table 5). This discrepancy may be due to the conflict between the model's linear behavior and the irregularity of target functions, as intuitively illustrated in Fig. 3. To address this challenge, we introduce two Mixup variants, Hid-Mix and Feat-Mix, which mitigate these conflicts when creating samples.

Hid-Mix. Our Hid-Mix is applied to the token-level embeddings (and the corresponding labels) after the input samples have been processed by the embedding layer. It randomly exchanges some embedding "dimensions" between two samples (see Fig. 4(a)). Let $z_{1}^{(0)}, z_{2}^{(0)} \in \mathbb{R}^{f \times d}$ be the token-level embeddings of two randomly selected samples, with $y_{1}$ and $y_{2}$ denoting their respective labels. A new sample represented as a token-label pair $(z_{\text{m}}^{(0)}, y_{\text{m}})$ is synthesized by:

(6)  $\begin{cases} z_{\text{m}}^{(0)} = S_{H} \odot z_{1}^{(0)} + (\mathbbm{1}_{H} - S_{H}) \odot z_{2}^{(0)}, \\ y_{\text{m}} = \lambda_{H} y_{1} + (1 - \lambda_{H}) y_{2}, \end{cases}$

where the matrix $S_{H} \in \{0,1\}^{f \times d}$ is formed by stacking $f$ identical $d$-dimensional binary vectors $s_{h}$: $S_{H} = [s_{h}, s_{h}, \ldots, s_{h}]^{T}$. The vector $s_{h}$ has $\lfloor \lambda_{H} \times d \rfloor$ randomly selected elements set to 1 and the rest set to 0. The scalar coefficient $\lambda_{H}$ for the labels is sampled from the $\mathcal{B}eta(\alpha_{H}, \alpha_{H})$ distribution, where $\alpha_{H}$ is a hyper-parameter, and $\mathbbm{1}_{H}$ is an all-one matrix of size $f \times d$. In practice, $\lambda_{H}$ is first sampled from the $\mathcal{B}eta(\alpha_{H}, \alpha_{H})$ distribution; then we randomly select $\lfloor \lambda_{H} \times d \rfloor$ elements to construct the vector $s_{h}$ and the matrix $S_{H}$.

Since the embedding “dimensions” from different samples may be randomly combined in training, ExcelFormer is encouraged to independently and equally handle various embedding dimensions. Considering each embedding dimension as a distinct “profile” version of input data (as each embedding element is projected from a scalar feature value), Hid-Mix regularizes ExcelFormer to behave like a bagging predictor(Breiman, 1996). Therefore, Hid-Mix may also help mitigate the effects of data noise and perturbations, in addition to increasing the amount of training data.
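A minimal sketch of Hid-Mix (Eq. 6) for one pair of samples, assuming labels are already in a mixable form (one-hot vectors or regression scalars); the function and argument names are illustrative.

```python
import torch


def hid_mix(z1: torch.Tensor, z2: torch.Tensor,
            y1: torch.Tensor, y2: torch.Tensor, alpha_h: float = 0.5):
    f, d = z1.shape                                    # token-level embeddings: (f, d)
    lam = torch.distributions.Beta(alpha_h, alpha_h).sample().item()
    s_h = torch.zeros(d)
    s_h[torch.randperm(d)[:int(lam * d)]] = 1.0        # floor(lambda_H * d) ones
    S = s_h.expand(f, d)                               # same pattern for every token
    z_mix = S * z1 + (1.0 - S) * z2
    y_mix = lam * y1 + (1.0 - lam) * y2
    return z_mix, y_mix
```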

Feat-Mix. The idea of Feat-Mix is visualized in Fig. 4. Unlike Hid-Mix, which operates on the embedding dimension, Feat-Mix synthesizes a new sample $(x_{\text{m}}, y_{\text{m}})$ by swapping a subset of features between two randomly selected samples $x_{1}, x_{2} \in \mathbb{R}^{f}$ and blending their labels $y_{1}$ and $y_{2}$ guided by feature importance:

(7)  $\begin{cases} x_{\text{m}} = \mathbf{s}_{F} \odot x_{1} + (\mathbbm{1}_{F} - \mathbf{s}_{F}) \odot x_{2}, \\ y_{\text{m}} = \Lambda y_{1} + (1 - \Lambda) y_{2}, \end{cases}$

where the vector $\mathbf{s}_{F}$ and the all-one vector $\mathbbm{1}_{F}$ are of size $f$; $\mathbf{s}_{F}$ contains $\lfloor \lambda_{F} \times f \rfloor$ randomly chosen elements set to 1 and the remaining elements set to 0, with $\lambda_{F} \sim \mathcal{B}eta(\alpha_{F}, \alpha_{F})$. The coefficient $\Lambda$ is determined by the contributions of $x_{1}$ and $x_{2}$, taking feature importance into account:

(8)  $\Lambda = \frac{\sum_{\mathbf{s}_{F}^{(i)}=1} I(\mathbf{f}_{i})}{\sum_{i=1}^{f} I(\mathbf{f}_{i})},$

where $\mathbf{s}_{F}^{(i)}$ denotes the $i$-th element of $\mathbf{s}_{F}$, and $I(\cdot)$ returns the feature importance computed with mutual information. If feature importance is disregarded, $\Lambda = \lambda_{F}$ (assuming $\lfloor \lambda_{F} \times f \rfloor = \lambda_{F} \times f$) and Feat-Mix degenerates into a form similar to CutMix (Yun et al., 2019). However, due to the presence of uninformative features in tabular datasets, Feat-Mix emerges as a more robust scheme.

As features from two distinct samples are randomly combined to create new samples, Feat-Mix promotes a solution with fewer feature interactions, which aligns with the functionality of our interaction attenuated initialization (see Sec. 3.1). We argue that Feat-Mix not only supplements the training dataset as a data augmentation method, but also encourages ExcelFormer to behave predominantly like a non-rotationally invariant algorithm.
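A minimal sketch of Feat-Mix (Eqs. 7-8) for one pair of raw samples, assuming a precomputed vector of mutual-information importances; names are illustrative.

```python
import torch


def feat_mix(x1: torch.Tensor, x2: torch.Tensor,
             y1: torch.Tensor, y2: torch.Tensor,
             importance: torch.Tensor, alpha_f: float = 0.5):
    f = x1.shape[0]                                    # raw feature vectors: (f,)
    lam = torch.distributions.Beta(alpha_f, alpha_f).sample().item()
    s_f = torch.zeros(f)
    s_f[torch.randperm(f)[:int(lam * f)]] = 1.0        # floor(lambda_F * f) ones
    x_mix = s_f * x1 + (1.0 - s_f) * x2
    # Eq. (8): label coefficient = importance share of the features taken from x1.
    Lam = (importance * s_f).sum() / importance.sum()
    y_mix = Lam * y1 + (1.0 - Lam) * y2
    return x_mix, y_mix
```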

5. Training and Loss Functions

ExcelFormer handles both classification and regression tasks on tabular datasets in a supervised manner. In training, the two proposed data augmentation schemes can be applied successively, as $\textsc{Hid-Mix}(\text{Embedding Layer}(\textsc{Feat-Mix}(x, y)))$, or used independently. However, our tests suggest that ExcelFormer often performs better on a given dataset when only Feat-Mix or only Hid-Mix is used; thus, we use a single scheme per dataset. The cross-entropy loss is used for classification tasks, and the mean squared error loss for regression tasks.
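For illustration, a batch-wise sketch of the successive composition, reusing the `feat_mix` and `hid_mix` helpers sketched above and assuming the model exposes an `embedding` module; in line with the paper, one would typically enable only one of the two schemes for a given dataset.

```python
import torch


def augment_batch(model, x: torch.Tensor, y: torch.Tensor, importance: torch.Tensor):
    b = x.shape[0]
    # Feat-Mix on raw inputs: pair each sample with a shuffled partner.
    perm = torch.randperm(b)
    xm, ym = zip(*[feat_mix(x[i], x[perm[i]], y[i], y[perm[i]], importance)
                   for i in range(b)])
    xm, ym = torch.stack(xm), torch.stack(ym)
    # Embed the mixed inputs, then Hid-Mix the token embeddings with a new pairing.
    z = model.embedding(xm)                            # (b, f, d); assumed interface
    perm2 = torch.randperm(b)
    zm, ym = zip(*[hid_mix(z[i], z[perm2[i]], ym[i], ym[perm2[i]])
                   for i in range(b)])
    return torch.stack(zm), torch.stack(ym)
```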

Table 1. Performance ranks (mean ± std; lower is better) under different ExcelFormer settings; "(t)" denotes tuned and "(d)" default hyperparameters.

ExcelFormer setting: | No DA (t)   | Feat-Mix (d) | Hid-Mix (d) | Mix Tuned (t) | Fully Tuned (t)
XGboost (t)          | 4.20 ± 2.76 | 4.21 ± 2.70  | 4.29 ± 2.73 | 4.34 ± 2.73   | 4.28 ± 2.77
Catboost (t)         | 4.61 ± 2.73 | 4.57 ± 2.69  | 4.63 ± 2.68 | 4.66 ± 2.61   | 4.64 ± 2.68
FTT (t)              | 4.32 ± 2.36 | 4.35 ± 2.35  | 4.41 ± 2.25 | 4.44 ± 2.32   | 4.39 ± 2.37
MLP (t)              | 5.23 ± 2.31 | 5.27 ± 2.34  | 5.26 ± 2.32 | 5.30 ± 2.37   | 5.32 ± 2.33
DCN v2 (t)           | 6.01 ± 2.78 | 5.96 ± 2.75  | 5.99 ± 2.27 | 6.03 ± 2.74   | 6.02 ± 2.73
AutoInt (t)          | 5.70 ± 2.61 | 5.78 ± 2.51  | 5.77 ± 2.56 | 5.88 ± 2.53   | 5.80 ± 2.55
SAINT (t)            | 5.48 ± 2.59 | 5.48 ± 2.55  | 5.56 ± 2.56 | 5.61 ± 2.55   | 5.56 ± 2.58
TransTab (d)         | 6.78 ± 2.52 | 6.80 ± 2.59  | 6.82 ± 2.57 | 6.86 ± 2.59   | 6.87 ± 2.55
XTab (d)             | 8.56 ± 2.20 | 8.68 ± 2.19  | 8.67 ± 2.19 | 8.67 ± 2.19   | 8.71 ± 2.14
ExcelFormer (ours)   | 4.11 ± 2.68 | 3.91 ± 2.60  | 3.62 ± 2.59 | 3.20 ± 2.10   | 3.41 ± 2.12

6. Experiments

6.1. Experimental Setups

Implementation Details. We configure the number of SPA and GLU modules as $L = 3$, set the feature embedding size to $d = 256$, and apply a dropout rate of 0.3 to the attention map. The AdamW optimizer (Loshchilov and Hutter, 2018) is used with default settings. The learning rate is set to $10^{-4}$ without weight decay, and $\alpha_{H}$ and $\alpha_{F}$ for the $\mathcal{B}eta$ distributions are both set to 0.5. These are the default hyperparameters of our ExcelFormer. For hyperparameter fine-tuning, we utilize the Optuna library (Akiba et al., 2019) for all approaches. Consistent with (Gorishniy et al., 2021), we randomly select 80% of the data as training samples and the remaining 20% as test samples. During training, we reserve 20% of the training samples for validation. To fine-tune ExcelFormer, we designate two tuning configurations: "Mix Tuned" and "Fully Tuned". "Mix Tuned" fine-tunes only the data augmentation hyperparameters (for Feat-Mix and Hid-Mix), while "Fully Tuned" optimizes all hyperparameters, including those related to data augmentation and model architecture. A comprehensive description of all settings can be found in Appendix C. We apply early stopping with a patience of 32 for ExcelFormer.

Datasets. A total of 96 small datasets sourced from the Taptap dataset benchmark (https://huggingface.co/datasets/ztphs980/taptap_datasets) were utilized; the criterion for classifying a dataset as small is a sample size of less than 10,000. Besides, 21 larger public tabular datasets, ranging in scale from over 10,000 to 581,835 samples, were also used. Detailed dataset descriptions are provided in Appendix F.

Compared Models. We compare ExcelFormer with two prominent GBDT approaches, XGboost (Chen and Guestrin, 2016) and Catboost (Prokhorenkova et al., 2018), and several representative DNNs: FT-Transformer (FTT) (Gorishniy et al., 2021), SAINT (Somepalli et al., 2021), Multilayer Perceptron (MLP), DCN v2 (Wang et al., 2021a), AutoInt (Song et al., 2019), and TabPFN (Hollmann et al., 2022). We also include two pre-trained DNNs, TransTab (Wang and Sun, 2022) and XTab (Zhu et al., 2023), for reference. The implementations of XGboost and Catboost mainly follow (Gorishniy et al., 2021). Since we aim to extensively tune XGboost and Catboost for their best performance, we increase the maximum number of estimators/iterations (i.e., the number of decision trees) from 2000 to 4096 and the number of tuning iterations from 100 to 500, which gives a more stringent setting and better performance. The settings for XGboost and Catboost are given in Appendix C. We use the default hyperparameters of the pretrained models, TransTab and XTab, and fine-tune them on each dataset; they are not hyperparameter-tuned since their hyperparameter search spaces are not given. For large-scale datasets, FT-Transformer, SAINT, and TabPFN were fine-tuned based on the hyperparameters outlined in their respective papers. The architectures and hyperparameter tuning settings of the remaining DNNs follow (Gorishniy et al., 2021). On small datasets, we tuned each model for 50 iterations per dataset.

Evaluation metrics. We use the area under the ROC Curve (AUC) and accuracy (ACC) for binary classification tasks and multi-class classification tasks. In regression tasks, we employ the negative root mean square error (nRMSE), where the negative sign is introduced to RMSE, aligning its direction with AUC and ACC, such that higher values across all these metrics indicate superior performance. Due to the high diversity among tabular datasets, performance ranks are used as a comprehensive metric, and the detailed results are given in AppendixG.

Table 2. Performance ranks (mean ± std; lower is better) on the larger datasets.

Setting                   | Model               | Rank (mean ± std)
default hyperparameters   | XGboost             | 8.52 ± 1.86
                          | Catboost            | 7.52 ± 2.44
                          | FTT                 | 6.71 ± 1.74
                          | Excel w/ Feat-Mix   | 6.62 ± 2.44
                          | Excel w/ Hid-Mix    | 4.76 ± 1.95
hyperparameter fine-tuned | XGboost             | 4.29 ± 2.59
                          | Catboost            | 6.24 ± 2.39
                          | FTT                 | 5.19 ± 2.60
                          | Excel (Mix Tuned)   | 2.38 ± 1.53
                          | Excel (Fully Tuned) | 2.05 ± 1.40

Table 3. Performance ranks (lower is better) on dataset subgroups stratified by task type, sample size, and feature count; "/" indicates that TabPFN is not applicable.

Setting \ Model: ExcelFormer | FTT (t) | XGb (t) | Cat (t) | MLP (t) | DCNv2 (t) | AutoInt (t) | SAINT (t) | TransTab (d) | XTab (d) | TabPFN (t)

Task type: Classification (51% of datasets)
  Setting: Hid-Mix (d)  3.88 | 4.88 | 5.97 | 5.77 | 6.61 | 6.38 | 6.63 | 6.07 | 6.31 | 9.50 | 4.01
  Setting: Mix Tuned    3.78 | 4.91 | 5.95 | 5.79 | 6.60 | 6.39 | 6.71 | 6.10 | 6.37 | 9.46 | 3.95

Task type: Regression (49% of datasets)
  Setting: Hid-Mix (d)  3.81 | 4.45 | 3.43 | 4.26 | 4.64 | 6.26 | 5.53 | 5.64 | 8.21 | 8.79 | /
  Setting: Mix Tuned    3.17 | 4.49 | 3.53 | 4.28 | 4.74 | 6.32 | 5.68 | 5.72 | 8.23 | 8.83 | /

#. Samples ≥ 500 (43% of datasets)
  Setting: Hid-Mix (d)  3.85 | 4.50 | 4.38 | 5.17 | 5.57 | 5.91 | 5.59 | 5.24 | 6.44 | 8.34 | /
  Setting: Mix Tuned    3.52 | 4.50 | 4.39 | 5.15 | 5.60 | 5.99 | 5.71 | 5.33 | 6.48 | 8.34 | /

#. Samples < 500 (57% of datasets)
  Setting: Hid-Mix (d)  3.45 | 4.34 | 4.22 | 4.23 | 5.02 | 6.05 | 5.90 | 5.79 | 7.10 | 8.92 | /
  Setting: Mix Tuned    3.18 | 4.38 | 4.28 | 4.29 | 5.05 | 6.05 | 5.97 | 5.79 | 7.13 | 8.88 | /

#. Features < 8 (32% of datasets)
  Setting: Hid-Mix (d)  3.45 | 3.84 | 3.98 | 5.08 | 4.23 | 6.32 | 6.16 | 5.32 | 7.52 | 9.10 | /
  Setting: Mix Tuned    3.27 | 3.84 | 4.03 | 5.06 | 4.34 | 6.26 | 6.21 | 5.35 | 7.50 | 9.13 | /

8 ≤ #. Features < 16 (38% of datasets)
  Setting: Hid-Mix (d)  3.76 | 4.26 | 4.44 | 4.61 | 6.31 | 5.75 | 5.39 | 5.69 | 6.61 | 8.17 | /
  Setting: Mix Tuned    3.17 | 4.33 | 4.49 | 4.74 | 6.33 | 5.81 | 5.58 | 5.78 | 6.64 | 8.14 | /

#. Features ≥ 16 (30% of datasets)
  Setting: Hid-Mix (d)  3.62 | 5.19 | 4.41 | 4.17 | 5.05 | 5.93 | 5.81 | 5.64 | 6.33 | 8.84 | /
  Setting: Mix Tuned    3.17 | 5.22 | 4.48 | 4.14 | 5.05 | 6.05 | 5.90 | 5.69 | 6.45 | 8.84 | /

6.2. Model Performances and Discussions

Performances on Small Datasets.

DNNs are typically data-inefficient, so we first investigate whether the proposed ExcelFormer can perform effectively on small datasets. As shown in Table 1, our ExcelFormer consistently outperforms other models that undergo dataset-adaptive hyperparameter tuning, regardless of whether ExcelFormer's own hyperparameters are tuned, which underscores the superiority of our proposed ExcelFormer. We observe that ExcelFormer with Hid-Mix slightly outperforms ExcelFormer with Feat-Mix, and tuning ExcelFormer's hyperparameters improves its performance further. Notably, hyperparameter fine-tuning reduces the standard deviations of the performance ranks, indicating that fine-tuning ExcelFormer yields more consistently superior results. Interestingly, while fine-tuning all hyperparameters ("Fully Tuned") should ideally result in better performance, under the same number of tuning iterations the "Mix Tuned" configuration performs better. This might be attributed to the higher efficiency of tuning only the data augmentation settings. To assess the effectiveness of ExcelFormer's architecture alone, we also conducted experiments excluding all data augmentation (Feat-Mix and Hid-Mix) and compared the result with existing models. Even without Feat-Mix and Hid-Mix, ExcelFormer still outperforms previous approaches, underscoring the superiority of our architecture.

Performances on Larger Datasets.

We further compare our model with three previous state-of-the-art models: XGboost, Catboost, and FTT. Other models are excluded from this comparison due to their relatively inferior performance and the significant computational load on large datasets. Each model is evaluated under two settings: default hyperparameters and dataset-adaptively fine-tuned hyperparameters. As shown in Table 2, our model also outperforms the previous models under both settings. Additionally, ExcelFormer with Hid-Mix and default hyperparameters still achieves performance comparable to prior models that undergo hyperparameter tuning, consistent with the findings on small datasets. Different from the conclusion on small datasets, the Fully Tuned ExcelFormer outperforms the Mix Tuned version on large datasets.

Takeaway. We find that (1) our model performs well both on GBDT-favored smaller datasets and on DNN-favored larger ones, suggesting that our design addresses the existing drawbacks of DNNs in tabular prediction; and (2) even with default hyperparameters, ExcelFormer consistently outperforms hyperparameter-tuned competitors on small datasets and performs competitively with them on larger ones. This implies that users who are not experts in hyperparameter tuning can still obtain a strong solution with our model. Moreover, even for professional users, our model stands out as a top choice, since the hyperparameter-tuned ExcelFormer performs excellently on various tabular prediction tasks.

6.3. Can ExcelFormer be a sure bet solution across various types of datasets?

We further examine rigorously whether our model performs poorly on certain types of tabular datasets, to verify that we have achieved our goal of building a sure bet solution. We divide the datasets into subgroups according to task type, dataset size, and the number of features, and examine ExcelFormer's performance within each subgroup. We adopt two configurations, Hid-Mix (default) and Mix Tuned, for ExcelFormer, while all existing models undergo hyperparameter fine-tuning. As shown in Table 3, ExcelFormer with Hid-Mix (default) exhibits the best performance in all subgroups except for regression tasks, where it slightly lags behind hyperparameter-tuned XGboost. The Mix Tuned ExcelFormer outperforms other models in all subgroups, indicating that ExcelFormer does not exhibit overt dataset-type preferences.

Takeaway. What changes does ExcelFormer bring to the field of tabular data prediction? Referring to Table 3, besides our model, the runner-up positions are held by TabPFN, FTT, XGBoost, and CatBoost in different subgroups. Notably, TabPFN is solely applicable to classification tasks, CatBoost performs well on datasets with numerous features, and FTT excels on datasets with fewer than 16 features. In contrast, our proposed model demonstrates strong performance across all dataset types, which further proves its status as a sure bet solution for tabular datasets.

6.4. Ablation Analysis

Table 4. Ablation of the architectural designs.

baseline | SPA | IAI | GLU | rank (± std)
4.31 ± 0.94
3.87 ± 1.58
3.73 ± 2.04
2.45 ± 1.60
3.71 ± 1.52
2.31 ± 1.46
Table 5. Ablation of the data augmentation approaches (ranks computed separately per backbone).

Backbone       | Data Augmentation | rank (± std)
FT-Transformer | N/A               | 3.28 ± 1.66
               | Mixup             | 3.80 ± 1.39
               | CutMix            | 2.91 ± 1.37
               | Feat-Mix          | 2.50 ± 1.03
               | Hid-Mix           | 2.24 ± 1.00
ExcelFormer    | N/A               | 3.68 ± 1.43
               | Mixup             | 3.46 ± 1.63
               | CutMix            | 2.88 ± 1.21
               | Feat-Mix          | 2.38 ± 1.25
               | Hid-Mix           | 2.36 ± 1.03

Here we investigate the effects of architectural design (SPA, GLU) and data augmentation approaches (Hid-Mix and Feat-Mix), with the results presented in Table4 and Table5, respectively.

In Table 4, the baseline model is an ExcelFormer with the vanilla self-attention module initialized using typical Kaiming initialization (He et al., 2015), along with a vanilla MLP-based FFN. We then evaluate how the designed approaches enhance this baseline. We observe that SPA and IAI individually improve the baseline performance, and their joint usage achieves even better results. GLU also significantly enhances the baseline. These findings suggest that our architectural designs, SPA with IAI and GLU, are all well-suited for tabular data prediction. In the last row, where all these components are utilized (ExcelFormer), their combined use leads to the best results. The effect of SPA and IAI on the rotational invariance property is carefully demonstrated in Appendix B.

Table 5 reports the comparison among various data augmentation techniques on both the FTT backbone (Rubachev et al., 2022) and our ExcelFormer. Note that the performance ranks are computed independently for each backbone, so ranks are not directly comparable across backbones. Mixup demonstrates minimal to no benefit and sometimes even a detrimental impact, which could be attributed to Mixup's interpolation introducing "error" cases or steering the model towards an overly smooth solution. In contrast, CutMix consistently outperforms Mixup, approaching the performance of Feat-Mix, albeit slightly inferior. As discussed in Sec. 4, without considering feature importance, Feat-Mix may regress to CutMix; the feature-importance computation is crucial to mitigate the impacts of uninformative features, which are common in tabular datasets. Further experiments are detailed in Appendix A. Overall, our proposed Feat-Mix and Hid-Mix consistently enhance DNN performance and prove more effective than other Mixup variants.

7. Conclusions

This paper introduces a novel approach that addresses three key limitations of DNNs applied to tabular data prediction. We present a semi-permeable attention module combined with an interaction-attenuated initialization approach, a GLU-based FFN, and two data augmentation approaches, Hid-Mix and Feat-Mix. Integrating these designs, we obtain ExcelFormer, a model of the same size as previous tabular Transformers that significantly outperforms existing GBDTs and DNNs without hyperparameter tuning. Extensive and stratified experiments demonstrate that ExcelFormer stands out as a sure bet solution for tabular prediction. We believe the proposed framework is highly accessible and user-friendly, even for novices working with tabular data.

References

  • Akiba et al. (2019) Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. 2019. Optuna: A next-generation hyperparameter optimization framework. In The ACM SIGKDD International Conference on Knowledge Discovery & Data Mining.
  • Arik and Pfister (2021) Sercan Ö. Arik and Tomas Pfister. 2021. TabNet: Attentive interpretable tabular learning. In The AAAI Conference on Artificial Intelligence.
  • Bachlechner et al. (2021) Thomas Bachlechner, Bodhisattwa Prasad Majumder, Henry Mao, Gary Cottrell, and Julian McAuley. 2021. ReZero is all you need: Fast convergence at large depth. In Uncertainty in Artificial Intelligence.
  • Breiman (1996) Leo Breiman. 1996. Bagging predictors. Machine Learning (1996).
  • Chen et al. (2023) Jintai Chen, Kuanlun Liao, Yanwen Fang, Danny Z. Chen, and Jian Wu. 2023. TabCaps: A Capsule Neural Network for Tabular Data Classification with BoW Routing. In International Conference on Learning Representations.
  • Chen et al. (2022) Jintai Chen, Kuanlun Liao, Yao Wan, Danny Z. Chen, and Jian Wu. 2022. DANets: Deep abstract networks for tabular data classification and regression. In The AAAI Conference on Artificial Intelligence.
  • Chen and Guestrin (2016) Tianqi Chen and Carlos Guestrin. 2016. XGBoost: A scalable tree boosting system. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
  • Cheng et al. (2016) Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, et al. 2016. Wide & deep learning for recommender systems. In Workshop on Deep Learning for Recommender Systems.
  • Glorot and Bengio (2010) Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In International Conference on Artificial Intelligence and Statistics.
  • Goodfellow et al. (2016) Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep Learning. MIT Press.
  • Gorishniy et al. (2022) Yura Gorishniy, Ivan Rubachev, and Artem Babenko. 2022. On Embeddings for Numerical Features in Tabular Deep Learning. In Advances in Neural Information Processing Systems.
  • Gorishniy et al. (2021) Yury Gorishniy, Ivan Rubachev, Valentin Khrulkov, and Artem Babenko. 2021. Revisiting deep learning models for tabular data. In Advances in Neural Information Processing Systems.
  • Grinsztajn et al. (2022) Leo Grinsztajn, Edouard Oyallon, and Gael Varoquaux. 2022. Why do tree-based models still outperform deep learning on typical tabular data?. In Advances in Neural Information Processing Systems.
  • Guo et al. (2021) Huifeng Guo, Bo Chen, Ruiming Tang, Weinan Zhang, Zhenguo Li, and Xiuqiang He. 2021. An embedding learning framework for numerical features in CTR prediction. In ACM SIGKDD Conference on Knowledge Discovery and Data Mining.
  • Guo et al. (2017) Huifeng Guo, Ruiming Tang, Yunming Ye, Zhenguo Li, and Xiuqiang He. 2017. DeepFM: A factorization-machine based neural network for CTR prediction. In International Joint Conference on Artificial Intelligence.
  • He et al. (2015) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In International Conference on Computer Vision.
  • Hollmann et al. (2022) Noah Hollmann, Samuel Müller, Katharina Eggensperger, and Frank Hutter. 2022. TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second. In International Conference on Learning Representations.
  • Jeffares et al. (2023) Alan Jeffares, Tennison Liu, Jonathan Crabbé, Fergus Imrie, and Mihaela van der Schaar. 2023. TANGOS: Regularizing tabular neural networks through gradient orthogonalization and specialization. arXiv preprint arXiv:2303.05506 (2023).
  • Kadra et al. (2021) Arlind Kadra, Marius Lindauer, Frank Hutter, and Josif Grabocka. 2021. Well tuned simple nets excel on tabular datasets. Advances in Neural Information Processing Systems (2021).
  • Katzir et al. (2020) Liran Katzir, Gal Elidan, and Ran El-Yaniv. 2020. Net-DNF: Effective deep modeling of tabular data. In International Conference on Learning Representations.
  • Khan et al. (2022) Salman Khan, Muzammal Naseer, Munawar Hayat, Syed Waqas Zamir, Fahad Shahbaz Khan, and Mubarak Shah. 2022. Transformers in vision: A survey. Comput. Surveys (2022).
  • Kim et al. (2020b) Alisa Kim, Y Yang, Stefan Lessmann, Tiejun Ma, M-C Sung, and Johnnie E. V. Johnson. 2020b. Can deep learning predict risky retail investors? A case study in financial risk behavior forecasting. European Journal of Operational Research (2020).
  • Kim et al. (2020a) Jang-Hyun Kim, Wonho Choo, and Hyun Oh Song. 2020a. Puzzle Mix: Exploiting saliency and local statistics for optimal mixup. In International Conference on Machine Learning.
  • LeCun et al. (2002) Yann LeCun, Léon Bottou, Genevieve B. Orr, and Klaus-Robert Müller. 2002. Efficient backprop. Neural Networks: Tricks of the Trade (2002).
  • Loshchilov and Hutter (2018) Ilya Loshchilov and Frank Hutter. 2018. Decoupled Weight Decay Regularization. In International Conference on Learning Representations.
  • Ng (2004) Andrew Y. Ng. 2004. Feature selection, L1 vs. L2 regularization, and rotational invariance. In International Conference on Machine Learning.
  • Popov et al. (2019) Sergei Popov, Stanislav Morozov, and Artem Babenko. 2019. Neural Oblivious Decision Ensembles for Deep Learning on Tabular Data. In International Conference on Learning Representations.
  • Prokhorenkova et al. (2018) Liudmila Prokhorenkova, Gleb Gusev, Aleksandr Vorobev, Anna Veronika Dorogush, and Andrey Gulin. 2018. CatBoost: Unbiased boosting with categorical features. Advances in Neural Information Processing Systems (2018).
  • Qin et al. (2020) Jie Qin, Jiemin Fang, Qian Zhang, Wenyu Liu, Xingang Wang, and Xinggang Wang. 2020. ResizeMix: Mixing data with preserved object information and true labels. arXiv preprint arXiv:2012.11101 (2020).
  • Rubachev et al. (2022) Ivan Rubachev, Artem Alekberov, Yury Gorishniy, and Artem Babenko. 2022. Revisiting pretraining objectives for tabular deep learning. arXiv preprint arXiv:2207.03208 (2022).
  • Saxe et al. (2014) Andrew M. Saxe, James L. McClelland, and Surya Ganguli. 2014. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. In International Conference on Learning Representations.
  • Shazeer (2020) Noam Shazeer. 2020. GLU variants improve Transformer. arXiv preprint arXiv:2002.05202 (2020).
  • Somepalli et al. (2021) Gowthami Somepalli, Micah Goldblum, Avi Schwarzschild, C. Bayan Bruss, and Tom Goldstein. 2021. SAINT: Improved neural networks for tabular data via row attention and contrastive pre-training. arXiv preprint arXiv:2106.01342 (2021).
  • Song et al. (2019) Weiping Song, Chence Shi, Zhiping Xiao, Zhijian Duan, Yewen Xu, Ming Zhang, and Jian Tang. 2019. AutoInt: Automatic feature interaction learning via self-attentive neural networks. In ACM International Conference on Information and Knowledge Management.
  • Tajbakhsh et al. (2020) Nima Tajbakhsh, Laura Jeyaseelan, Qian Li, Jeffrey N. Chiang, Zhihao Wu, and Xiaowei Ding. 2020. Embracing imperfect datasets: A review of deep learning solutions for medical image segmentation. Medical Image Analysis (2020).
  • Touvron et al. (2021a) Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. 2021a. Training data-efficient image Transformers & distillation through attention. In International Conference on Machine Learning.
  • Touvron et al. (2021b) Hugo Touvron, Matthieu Cord, Alexandre Sablayrolles, Gabriel Synnaeve, and Hervé Jégou. 2021b. Going deeper with image Transformers. In IEEE/CVF International Conference on Computer Vision.
  • Uddin et al. (2020) A. F. M. Shahab Uddin, Mst Sirazam Monira, Wheemyung Shin, Tae Choong Chung, and Sung-Ho Bae. 2020. SaliencyMix: A Saliency Guided Data Augmentation Strategy for Better Regularization. In International Conference on Learning Representations.
  • Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in Neural Information Processing Systems (2017).
  • Verma et al. (2019) Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitliagkas, David Lopez-Paz, and Yoshua Bengio. 2019. Manifold Mixup: Better representations by interpolating hidden states. In International Conference on Machine Learning.
  • Walawalkar et al. (2020) Devesh Walawalkar, Zhiqiang Shen, Zechun Liu, and Marios Savvides. 2020. Attentive CutMix: An Enhanced Data Augmentation Approach for Deep Learning Based Image Classification. In International Conference on Acoustics, Speech and Signal Processing.
  • Wang et al. (2017) Ruoxi Wang, Bin Fu, Gang Fu, and Mingliang Wang. 2017. Deep & cross network for ad click predictions. In ADKDD.
  • Wang et al. (2021a) Ruoxi Wang, Rakesh Shivanna, Derek Cheng, Sagar Jain, Dong Lin, Lichan Hong, and Ed Chi. 2021a. DCN v2: Improved deep & cross network and practical lessons for web-scale learning to rank systems. In The ACM Web Conference.
  • Wang and Sun (2022) Zifeng Wang and Jimeng Sun. 2022. TransTab: Learning Transferable Tabular Transformers Across Tables. In Advances in Neural Information Processing Systems.
  • Wang et al. (2021b) Zhuo Wang, Wei Zhang, Ning Liu, and Jianyong Wang. 2021b. Scalable rule-based representation learning for interpretable classification. Advances in Neural Information Processing Systems (2021).
  • Wang et al. (2020) Zhuo Wang, Wei Zhang, Ning Liu, and Jianyong Wang. 2020. Transparent classification with multilayer logical perceptrons and random binarization. In The AAAI Conference on Artificial Intelligence.
  • Wistuba et al. (2015) Martin Wistuba, Nicolas Schilling, and Lars Schmidt-Thieme. 2015. Learning hyperparameter optimization initializations. In IEEE International Conference on Data Science and Advanced Analytics.
  • Wu et al. (2022) Kevin Wu, Eric Wu, Michael DAndrea, Nandini Chitale, Melody Lim, Marek Dabrowski, Klaudia Kantor, Hanoor Rangi, Ruishan Liu, Marius Garmhausen, et al. 2022. Machine learning prediction of clinical trial operational efficiency. The AAPS Journal (2022).
  • Yan et al. (2023) Jiahuan Yan, Jintai Chen, Yixuan Wu, Danny Z. Chen, and Jian Wu. 2023. T2G-Former: Organizing Tabular Features into Relation Graphs Promotes Heterogeneous Feature Interaction. The AAAI Conference on Artificial Intelligence (2023).
  • Yan et al. (2024) Jiahuan Yan, Bo Zheng, Hongxia Xu, Yiheng Zhu, Danny Chen, Jimeng Sun, Jian Wu, and Jintai Chen. 2024. Making Pre-trained Language Models Great on Tabular Prediction. In ICLR.
  • Yoon et al. (2020) Jinsung Yoon, Yao Zhang, James Jordon, and Mihaela van der Schaar. 2020. VIME: Extending the success of self- and semi-supervised learning to tabular domain. Advances in Neural Information Processing Systems (2020).
  • Yun et al. (2019) Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. 2019. CutMix: Regularization strategy to train strong classifiers with localizable features. In International Conference on Computer Vision.
  • Zhang et al. (2018) Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. 2018. Mixup: Beyond Empirical Risk Minimization. In International Conference on Learning Representations.
  • Zhu et al. (2023) Bingzhao Zhu, Xingjian Shi, Nick Erickson, Mu Li, George Karypis, and Mahsa Shoaran. 2023. XTab: Cross-table Pretraining for Tabular Transformers. In ICML.

Appendix A Why is Feat-Mix superior to CutMix?

The primary distinction between Feat-Mix and CutMix (Yun et al., 2019) is whether feature importance is considered when synthesizing new samples. To explore this difference, we conducted experiments on several datasets using the ExcelFormer architecture as the backbone, both on the original tables and on the tables augmented with additional columns of Gaussian noise. As shown in Table 6, Feat-Mix generally outperforms or matches CutMix on these datasets. On the tables with noisy columns, Feat-Mix suffers only a slight decline in effectiveness (and even improves on the cpu dataset), whereas CutMix exhibits a more pronounced performance drop. Given the prevalence of uninformative features in tabular data (Ng, 2004; Grinsztajn et al., 2022), this comparison of performance and of performance drops under noise underscores the importance of accounting for feature importance during interpolation, and shows that Feat-Mix is the more resilient choice for tabular datasets. By using Feat-Mix instead of CutMix, ExcelFormer becomes a more dependable approach for casual users, in line with the original intent of this work. A minimal sketch of the Feat-Mix idea is given after Table 6.

Table 6. Comparison of Feat-Mix and CutMix on the original tables and on tables augmented with Gaussian-noise columns. Δ denotes the performance drop after adding the noisy columns (smaller is better).

Method | Breast | Diabetes | Campus | cpu | fruitfly | yacht
CutMix | 0.702 | 0.822 | 0.972 | -102.06 | -16.19 | -3.59
Feat-Mix | 0.713 | 0.837 | 0.980 | -79.10 | -15.86 | -0.83
CutMix (with noise) | 0.688 | 0.809 | 0.938 | -115.10 | -17.09 | -4.40
Feat-Mix (with noise) | 0.700 | 0.834 | 0.969 | -74.56 | -16.60 | -0.89
Δ CutMix (↓) | 0.014 | 0.013 | 0.034 | 13.04 | 0.90 | 0.81
Δ Feat-Mix (↓) | 0.013 | 0.003 | 0.011 | -4.54 | 0.74 | 0.06
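For illustration, the following is a minimal sketch of the Feat-Mix idea referenced above: feature columns are swapped between two samples as in CutMix, but the label-mixing coefficient is weighted by feature importance (e.g., the NMI scores of Appendix E) rather than by the raw count of swapped columns. It reflects our understanding of Sec. 4, not the released implementation; feat_mix and its arguments are illustrative names.

```python
import numpy as np

def feat_mix(x1, y1, x2, y2, importance, alpha=0.5, rng=None):
    # x1, x2: 1-D arrays holding the feature values of two samples.
    # importance: per-feature importance scores (e.g., normalized mutual information).
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)                  # Beta-distributed mixing ratio
    keep = rng.random(x1.shape[0]) < lam          # columns kept from sample 1
    x_new = np.where(keep, x1, x2)                # remaining columns come from sample 2

    # CutMix would set the label weight to keep.mean(); Feat-Mix instead weights
    # the label by the importance mass of the columns taken from each sample.
    w = importance / (importance.sum() + 1e-12)
    lam_y = w[keep].sum()
    y_new = lam_y * y1 + (1.0 - lam_y) * y2
    return x_new, y_new
```

When all importances are equal, the label weight reduces to the fraction of swapped columns, which is exactly the CutMix behavior discussed above.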

Appendix B Rotational Variance Property Evaluation

In (P1) of the main paper, we highlight DNNs' rotational invariance, which makes them less efficient for tabular prediction tasks; this limitation motivates our semi-permeable attention (SPA) and interaction-attenuated initialization (IAI). Here we inspect whether ExcelFormer indeed becomes less rotationally invariant with the proposed SPA and IAI. We assess the test performance of ExcelFormer (without data augmentation) when the datasets are randomly rotated, using all binary classification datasets that consist of numerical features and contain fewer than 300 samples. Additionally, assuming that an original table comprises f features, we append f uninformative features generated with Gaussian noise to each dataset. As depicted in Fig. 5 (a), after randomly rotating the datasets, XGBoost and CatBoost exhibit the most significant performance declines, indicating that they are the least rotationally invariant algorithms, in line with the findings of (Ng, 2004). While the performance declines of ExcelFormer and FTT are not as substantial as those of the decision tree-based models, ExcelFormer's performance decreases to a larger extent than FTT's after random rotations, indicating that our ExcelFormer is less rotationally invariant than its counterpart FTT.

Conversely, we conducted additive studies that use FTT as the backbone and incorporate our proposed SPA and IAI into FTT. As shown in Fig. 5 (b), we find that: (i) both SPA and IAI contribute positively to FTT's performance; and (ii) under random dataset rotations, FTT with IAI and SPA shows a more pronounced performance drop, demonstrating that SPA and IAI effectively reduce the rotational invariance of FTT. Additionally, as shown in Fig. 5 (c), ablation studies on the ExcelFormer backbone (with neither Feat-Mix nor Hid-Mix applied) also highlight the value of SPA and IAI in mitigating the rotational invariance of DNN models.
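For reference, a minimal sketch of the dataset perturbation used in this check might look as follows; the ordering of the noise-padding and rotation steps and the standardization are our assumptions rather than the exact protocol.

```python
import numpy as np
from scipy.stats import ortho_group

def rotate_and_pad(X, seed=0):
    # X: (n, f) numerical feature matrix of a binary classification dataset.
    rng = np.random.default_rng(seed)
    n, f = X.shape
    # Append f uninformative Gaussian-noise features.
    X_aug = np.hstack([X, rng.standard_normal((n, f))])
    # Standardize so the random rotation mixes features on a comparable scale.
    X_aug = (X_aug - X_aug.mean(axis=0)) / (X_aug.std(axis=0) + 1e-8)
    # Apply a random orthogonal rotation to all 2f columns.
    R = ortho_group.rvs(dim=2 * f, random_state=seed)
    return X_aug @ R
```

A rotationally invariant learner should be largely unaffected by this transformation, whereas a non-invariant one (e.g., a GBDT) is expected to degrade, which is the effect Fig. 5 measures.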

[Figure 5. Performance changes under random dataset rotations: (a) comparison of ExcelFormer, FTT, XGBoost, and CatBoost; (b) FTT with SPA and IAI added; (c) ablation on the ExcelFormer backbone.]
Table 7. Total model development time (in seconds).

Model | eye | california | house | jannis | higgs-small | average time (s)
XGboost (t) | 699.92 | 325.48 | 238.95 | 2260.57 | 425.67 | 790.12
Catboost (t) | 1241.43 | 589.10 | 387.77 | 4100.39 | 714.53 | 1406.64
ExcelFormer (d) | 39.29 | 51.80 | 59.41 | 166.57 | 160.48 | 95.51
Table 8. Hyper-parameter search space of ExcelFormer ((*) marks the data-augmentation-related hyper-parameters).

Hyper-parameter | Distribution
#. Layers L | UniformInt[2, 5]
Representation size d | {64, 128, 256}
#. Heads | {4, 8, 16, 32}
Residual dropout rate | Uniform[0, 0.5]
Learning rate | LogUniform[3e-5, 1e-3]
Weight decay | {0.0, LogUniform[1e-6, 1e-3]}
(*) Mixup type | {Feat-Mix, Hid-Mix, neither}
(*) α of Beta distribution | Uniform[0.1, 3.0]
Table 9. Hyper-parameter search space of XGBoost.

Hyper-parameter | Distribution
Booster | "gbtree"
N-estimators | Const(4096)
Early-stopping-rounds | Const(50)
Max depth | UniformInt[3, 10]
Min child weight | LogUniform[1e-8, 1e5]
Subsample | Uniform[0.5, 1.0]
Learning rate | LogUniform[1e-5, 1]
Col sample by level | Uniform[0.5, 1]
Col sample by tree | Uniform[0.5, 1]
Gamma | {0, LogUniform[1e-8, 1e2]}
Lambda | {0, LogUniform[1e-8, 1e2]}
Alpha | {0, LogUniform[1e-8, 1e2]}
#. Tuning iterations | 500
Table 10. Hyper-parameter search space of CatBoost.

Hyper-parameter | Distribution
Iterations (number of trees) | Const(4096)
Od pval | Const(0.001)
Early-stopping-rounds | Const(50)
Max depth | UniformInt[3, 10]
Learning rate | LogUniform[1e-5, 1]
Bagging temperature | Uniform[0, 1]
L2 leaf reg | LogUniform[1, 10]
Leaf estimation iterations | UniformInt[1, 10]
#. Tuning iterations | 500

Appendix C Details of Hyper-Parameter Fine-Tuning Settings

For XGBoost and CatBoost, we follow the implementations and settings of (Gorishniy et al., 2021), while increasing the number of estimators/iterations (i.e., decision trees) and the number of tuning iterations to attain better performance. For our ExcelFormer, we apply Optuna-based tuning (Akiba et al., 2019). The hyper-parameter search spaces of ExcelFormer, XGBoost, and CatBoost are reported in Table 8, Table 9, and Table 10, respectively. For ExcelFormer, the "Mix Tuned" configuration tunes only the data-augmentation-related hyper-parameters for 50 iterations, while the "Fully Tuned" configuration tunes all the hyper-parameters for 50 iterations.
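To make the procedure concrete, a minimal Optuna sketch of the ExcelFormer search space in Table 8 could look as follows; build_and_eval is a hypothetical stand-in for training the model and returning its validation score, and the parameter names are illustrative rather than those of the released code.

```python
import optuna

def objective(trial):
    params = {
        "n_layers": trial.suggest_int("n_layers", 2, 5),
        "d_token": trial.suggest_categorical("d_token", [64, 128, 256]),
        "n_heads": trial.suggest_categorical("n_heads", [4, 8, 16, 32]),
        "residual_dropout": trial.suggest_float("residual_dropout", 0.0, 0.5),
        "lr": trial.suggest_float("lr", 3e-5, 1e-3, log=True),
        # Weight decay is either disabled or searched log-uniformly.
        "weight_decay": 0.0 if trial.suggest_categorical("no_weight_decay", [True, False])
                        else trial.suggest_float("weight_decay", 1e-6, 1e-3, log=True),
        # (*) data-augmentation-related hyper-parameters ("Mix Tuned" searches only these).
        "mix_type": trial.suggest_categorical("mix_type", ["feat_mix", "hid_mix", "none"]),
        "mix_alpha": trial.suggest_float("mix_alpha", 0.1, 3.0),
    }
    return build_and_eval(params)  # hypothetical: train ExcelFormer, return validation score

study = optuna.create_study(direction="maximize")
# study.optimize(objective, n_trials=50)  # 50 tuning iterations, as described above
```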

Appendix D Development Time Cost

In the main paper, we demonstrated on a large array of datasets that, with default hyperparameters, our proposed model outperforms existing models that require time-consuming hyperparameter tuning (typically 50-100 iterations). Among these, XGBoost and CatBoost are two of the most efficient approaches. Compared with FTT, our model demands only approximately 1/100 to 1/50 of the development time: ExcelFormer's architecture is close to FTT's, so a single training run costs a similar amount of time, yet ExcelFormer is competitive or even superior without any hyperparameter tuning.

We also measured the total time invested in model development for XGBoost, CatBoost, and our proposed ExcelFormer. As shown in Table 7, our model is substantially more efficient than both, even when XGBoost and CatBoost are given only 50 iterations of hyperparameter tuning: developing an XGBoost or CatBoost model takes 8-15 times longer than developing ours, indicating that our approach is both user-friendly and environmentally friendly.

Appendix E Implementation Details of Metrics Used in this Work

Feature Importance.

In this study, we employ Normalized Mutual Information (NMI) to assess the importance of individual features, as mutual information can capture dependencies between features and targets. We implement NMI with the scikit-learn Python package: for classification tasks we use the "feature_selection.mutual_info_classif" function, and for regression tasks the "feature_selection.mutual_info_regression" function.
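A minimal sketch of this computation is given below; normalizing the raw mutual-information scores to sum to one is our assumption about the "normalized" step and may differ from the released code.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def nmi_importance(X, y, task="classification", random_state=0):
    # Mutual information between each feature column of X and the target y.
    if task == "classification":
        mi = mutual_info_classif(X, y, random_state=random_state)
    else:
        mi = mutual_info_regression(X, y, random_state=random_state)
    mi = np.clip(mi, 0.0, None)          # defensive: keep scores non-negative
    return mi / (mi.sum() + 1e-12)       # normalize so the importances sum to one
```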

Average Normalized AUC across Datasets.

To aggregate model performances across datasets, we calculate the average normalized scores (Wistuba et al., 2015) for AUC to comprehensively evaluate the models. Specifically, we first normalize the scores among the compared models on each dataset, and then average them across datasets. Formally, over the $D$ involved datasets, the average normalized score $s_m$ of a model $m$ is computed by:

(9) $s'_{m,d} = \dfrac{s_{m,d} - \min_{i \in M_0}(s_{i,d})}{\max_{i \in M_0}(s_{i,d}) - \min_{i \in M_0}(s_{i,d})}, \qquad s_m = \dfrac{\sum_{d=1}^{D} s'_{m,d}}{D}$

where $M_0$ encompasses all the compared models and $s_m$ here denotes $\text{AUC}_m$. We use only binary classification datasets in Fig. 5, since the average normalized AUC, ACC, and nRMSE must be computed separately.
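In other words, Eq. (9) is a per-dataset min-max normalization followed by a per-model average over datasets. The sketch below assumes a model-by-dataset score matrix and adds a small epsilon purely to avoid division by zero.

```python
import numpy as np

def average_normalized_score(scores):
    # scores: (M, D) array; entry [m, d] is the score of model m on dataset d (e.g., AUC).
    scores = np.asarray(scores, dtype=float)
    col_min = scores.min(axis=0)
    col_max = scores.max(axis=0)
    normalized = (scores - col_min) / (col_max - col_min + 1e-12)  # s'_{m,d}
    return normalized.mean(axis=1)                                 # s_m for each model
```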

Performance Rank.

For each dataset, we perform 5 runs with different random seeds and report the averaged result. For comparison, we also compute each model's performance rank on every dataset and report the average rank across datasets; tied values are assigned their average rank.
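Under these definitions, the ranking can be reproduced with a standard average-rank computation, e.g. with scipy; the sketch below assumes a model-by-dataset score matrix where larger scores are better.

```python
import numpy as np
from scipy.stats import rankdata

def average_ranks(scores, higher_is_better=True):
    # scores: (M, D) array of per-model, per-dataset results (already averaged over 5 seeds).
    scores = np.asarray(scores, dtype=float)
    signed = -scores if higher_is_better else scores
    # rankdata assigns rank 1 to the smallest value and gives ties their average rank.
    per_dataset_ranks = np.column_stack(
        [rankdata(signed[:, d]) for d in range(signed.shape[1])]
    )
    return per_dataset_ranks.mean(axis=1)  # average rank of each model across datasets
```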

Appendix F Detailed Description of Datasets

The details of the 96 small-scale tabular datasets used in this work are summarized in Table 11 and Table 12, and the details of the 21 large-scale datasets are summarized in Table 13. We use the same train-validation-test split for all approaches.

Dataset#. Sample#. Feature#. Num#.CatTask Type
Analytics Vidhya Loan Prediction6141156classification
Audit Data77624213classification
Automobiles201251312classification
Bigg Boss India56721615classification
Breast Cancer Dataset56930300classification
Campus Recruitment2151367classification
chronic kidney disease4001394classification
House Price50617143classification
Compositions of Glass214990classification
Credit Card Approval5901569classification
Customer Classification10001156classification
Development Index225660classification
fitbit dataset45713121classification
Horse Colic Dataset29927918classification
Penguins Classified344642classification
Pima-Indians_Diabetes768880classification
Real Estate DataSet51113112classification
Startup Success Prediction92345936classification
Store Data Performance1351679classification
The Estonia Disaster Passenger List989615classification
AAPL_stock_price_2021_2022346550regression
AAPL_stock_price_2021_2022_1347550regression
AAPL_stock_price_2021_2022_2348550regression
analcatdata_creditscore100633classification
analcatdata_homerun162261214regression
analcatdata_lawsuit264431classification
analcatdata_vineyard468312regression
auto_price15915132regression
autoPrice15915141regression
bodyfat25214140regression
boston50613112regression
boston_corrected50619154regression
Boston-house-price-data50613112regression
cholesterol3031376regression
cleveland3031376regression
cloud108532regression
cps_85_wages5341037regression
cpu209752regression
DEE365660regression
Diabetes-Data-Set768880classification
DiabeticMellitus28197691classification
disclosure_x_bias662330regression
disclosure_x_noise662330regression
disclosure_x_tampered662330regression
disclosure_z662330regression
echom*onths130972regression
EgyptianSkulls150431regression
ELE-1495220regression
fishcatch158752regression
Fish-market159651regression
Dataset#. Sample#. Feature#. Num#.CatTask Type
forest_fires5171284regression
Forest-Fire-Area5171284regression
fruitfly125422regression
HappinessRank_2015158981regression
Heart_disease_classification2961376classification
hungarian29413112classification
Indian-Liver-Patient-Patient5831192classification
Intersectional-Bias-Assessment100018144classification
liver-disorders345550regression
lowbwt189927regression
lungcancer_shedden44223203regression
machine_cpu209660regression
meta52821165regression
nki70.arff14476724classification
no2500770regression
pharynx1951037regression
Pima-Indians-Diabetes768880classification
pm10500770regression
Pokmon-Legendary-Data8011293classification
Reading_Hydro1000261115regression
residential_building3721081008regression
rmftsa_ladata50810100regression
strikes625660regression
student-grade-pass-or-fail-prediction39529425classification
Swiss-banknote-conterfeit-detection200660classification
The-Estonia-Disaster-Passenger-List989615classification
The-Office-Dataset1881028regression
tokyo195944422classification
visualizing_environmental111330regression
weather_ankara321990regression
wisconsin19432320regression
yacht_hydrodynamics308660regression
Absenteeism at work74020713classification
Audit Data77624213classification
Breast Cancer Coimbra116990classification
Cervical cancer (Risk Factors)85830255classification
Climate Model Simulation Crashes54019181classification
Early stage diabetes risk prediction52016115classification
extention of Z-Alizadeh sani dataset303572037classification
HCV data61512111classification
Heart failure clinical records2991275classification
Parkinson Dataset24046442classification
QSAR Bioconcentration classes7791174classification
Quality Assessment of DC9762620classification
User Knowledge Modeling258550classification
Z-Alizadeh Sani303542034classification

Table 13. Details of the 21 large-scale datasets.

Dataset | Abbr. | Task Type | #. Features | #. Num | #. Cat | #. Sample | Link
sulfur | SU | regression | 6 | 6 | 0 | 10,081 | https://www.openml.org/d/44145
bank-marketing | BA | classification | 7 | 7 | 0 | 10,578 | https://www.openml.org/d/44126
Brazilian_houses | BR | regression | 8 | 8 | 0 | 10,692 | https://www.openml.org/d/44141
eye | EY | multiclass | 26 | 26 | 0 | 10,936 | http://www.cis.hut.fi/eyechallenge2005
MagicTelescope | MA | classification | 10 | 10 | 0 | 13,376 | https://www.openml.org/d/44125
Ailerons | AI | regression | 33 | 33 | 0 | 13,750 | https://www.openml.org/d/44137
pol | PO | regression | 26 | 26 | 0 | 15,000 | https://www.openml.org/d/722
binarized-pol | BP | classification | 48 | 48 | 0 | 15,000 | https://www.openml.org/d/722
credit | CR | classification | 10 | 10 | 0 | 16,714 | https://www.openml.org/d/44089
california | CA | regression | 8 | 8 | 0 | 20,640 | https://www.dcc.fc.up.pt/~ltorgo/Regression/cal_housing.html
house_sales | HS | regression | 15 | 15 | 0 | 21,613 | https://www.openml.org/d/44144
house | HO | regression | 16 | 16 | 0 | 22,784 | https://www.openml.org/d/574
diamonds | DI | regression | 6 | 6 | 0 | 53,940 | https://www.openml.org/d/44140
helena | HE | multiclass | 27 | 27 | 0 | 65,196 | https://www.openml.org/d/41169
jannis | JA | multiclass | 54 | 54 | 0 | 83,733 | https://www.openml.org/d/41168
higgs-small | HI | classification | 28 | 28 | 0 | 98,049 | https://www.openml.org/d/23512
road-safety | RO | classification | 32 | 29 | 3 | 111,762 | https://www.openml.org/d/44161
medicalcharges | ME | regression | 3 | 3 | 0 | 163,065 | https://www.openml.org/d/44146
SGEMM_GPU_kernel_performance | SG | regression | 9 | 3 | 6 | 241,600 | https://www.openml.org/d/44069
covtype | CO | multiclass | 54 | 54 | 0 | 581,012 | https://www.openml.org/d/1596
nyc-taxi-green-dec-2016 | NY | regression | 9 | 9 | 0 | 581,835 | https://www.openml.org/d/44143

Appendix G Detailed Results on Small and Large Datasets

We present the average results (over five runs) of all models on each dataset. The results on the 96 small-scale datasets are reported in Table 14, and the performance on the 21 large-scale datasets is provided in Table 15.

DatasetsO.+F (d)O.+H (d)O. (MT)O.(FT)FTT (t)XGb (t)Cat (t)MLP (t)DCNv2 (t)AutoInt(t)SAINT (t)TT (d)XT(d)TapP(t)
Analytics Vidhya Loan Prediction0.74490.74210.74210.74210.72850.74860.70450.73590.74640.73500.75050.72910.72400.7331
Audit Data0.99840.99050.99910.99410.99930.99951.00000.99360.99900.99830.99950.99830.98220.9998
Automobiles0.97740.97980.98690.97500.95830.96790.97260.95120.92260.97260.95360.96790.95450.9845
Bigg Boss India1.00001.00000.98610.98610.97991.00000.98610.99540.99850.90590.97531.00000.97691.0000
Breast Cancer Dataset0.99700.99370.99210.99440.98410.99140.98510.99170.98280.97950.98280.99700.99270.9916
Campus Recruitment0.97950.98460.97950.94870.94870.97950.96670.92560.91030.97440.95900.94360.92320.9821
chronic kidney disease0.99931.00000.99670.99600.99600.99000.99400.99070.97670.98930.98530.99470.99730.9993
House Price0.89040.90150.90150.89830.90110.88180.89770.90260.89240.90130.90540.89710.88630.9007
Compositions of Glass0.89760.85950.89760.92380.85950.86550.90240.81670.90000.69290.79290.79290.79760.8905
Credit Card Approval0.95830.97190.96800.97010.96070.95950.96800.94470.94780.95500.95500.96620.93760.9553
Customer Classification0.57920.57190.52800.60140.59510.47270.58020.53760.64170.59520.63480.61250.61340.6229
Development Index1.00000.96711.00001.00000.98150.92590.98050.95880.93620.96331.00000.98560.92100.9856
fitbit dataset0.79910.82160.80920.80250.81540.81380.79750.80730.81010.76280.79390.81640.78470.8914
Horse Colic Dataset0.75230.74770.73730.74070.69210.69210.72690.70250.61690.60380.70830.71990.69050.7346
Penguins Classified0.99910.99910.99911.00001.00000.99660.99440.99830.99830.99541.00000.99400.98720.9983
Pima-Indians_Diabetes0.83670.83300.81540.81480.80480.78870.76670.82220.80690.79870.80960.81350.75460.8181
Real Estate DataSet0.91760.88940.90410.91760.90450.90100.91670.86210.87600.90140.89450.88830.84710.9029
Startup Success Prediction0.99380.99140.83970.84450.84640.99370.72770.83730.77010.84560.83630.82040.83710.8428
Store Data Performance0.76320.76320.69080.75660.63820.63160.78290.64470.67110.76270.61840.66450.65130.8487
The Estonia Disaster Passenger List0.75720.75590.77080.77130.75460.74990.73790.74640.74010.73790.74360.75050.72480.7518
analcatdata_creditscore1.00001.00001.00001.00001.00000.94000.96671.00000.92000.96670.85331.00000.84671.0000
analcatdata_lawsuit1.00001.00001.00001.00001.00000.99491.00001.00000.98980.95410.98470.97960.93951.0000
Diabetes-Data-Set0.83560.83370.82480.82940.82410.78520.77540.82150.79540.83110.82570.81200.79030.8152
DiabeticMellitus0.98780.98650.98650.98650.97840.97430.94930.74050.82420.89320.92390.90410.73460.9405
Heart_disease_classification0.89840.90960.91520.92410.93420.89900.90180.89840.93420.92410.91740.91070.91520.9252
hungarian0.84460.85960.85960.84840.88220.85780.89100.92730.90600.85340.92610.89850.92730.8722
Indian-Liver-Patient-Patient0.71330.69840.71970.72320.75510.73990.72060.71160.73100.71330.72540.73320.69350.7381
Intersectional-Bias-Assessment0.99530.99580.99600.99790.99160.99250.99770.99560.99440.99620.99820.99420.97360.9965
nki70.arff0.81580.92630.88670.87370.62630.85260.85260.68420.66840.81050.83530.82630.82630.9216
Pima-Indians-Diabetes0.83560.83370.81090.80520.84940.81400.75280.79740.79310.76380.78060.81410.79030.8152
Pokmon-Legendary-Data0.97670.98880.99810.98690.96790.95040.98400.92030.94120.99170.97720.98100.96790.9801
student-grade-pass-or-fail-prediction0.98980.98841.00001.00000.97101.00001.00000.97710.82110.99930.96080.98980.84940.9601
Swiss-banknote-conterfeit-detection0.99250.99501.00001.00000.99751.00001.00001.00000.99751.00001.00001.00000.94501.0000
The-Estonia-Disaster-Passenger-List0.75720.75590.75550.75850.77190.77230.73660.74850.75300.73900.74330.74640.69480.7518
tokyo10.97150.97400.96960.97020.96920.97060.95360.97160.97270.96650.96530.95950.95960.9705
Absenteeism at work0.85790.86650.86690.89210.82810.84020.81450.82780.79730.82990.81190.83520.77130.8339
Audit Data0.99840.99050.99950.99951.00001.00001.00000.99340.99670.99910.99950.99670.98220.9998
Breast Cancer Coimbra0.71330.76220.92760.74830.74830.77620.76920.69930.71680.65030.64340.69230.62240.5923
Cervical cancer (Risk Factors)0.71370.64310.64310.54150.61940.60390.71650.56800.58670.57200.48390.52680.53530.9798
Climate Model Simulation Crashes0.95740.97760.97420.97760.91920.95290.98320.94390.94390.85630.97330.79570.76770.9973
Early stage diabetes risk prediction0.99060.96840.97460.98200.97930.99690.99060.99570.99730.99840.99800.99220.91090.9629
extention of Z-Alizadeh sani dataset0.96380.96510.96510.91210.96380.96060.95090.95220.96510.94570.95740.91340.84161.0000
HCV data0.99820.99420.99650.99880.99820.99240.99300.97141.00000.99821.00000.99770.95230.8511
Heart failure clinical records0.86520.88830.88060.89220.91270.86330.81770.83700.83950.86910.82030.85880.79180.9214
Parkinson Dataset0.91670.92530.93920.92530.93060.85590.90710.92010.92530.92530.92010.91320.89410.9121
QSAR Bioconcentration classes0.77210.83310.83140.84540.87960.83080.83630.84190.86400.82420.81620.82930.80790.8613
Quality Assessment of DC0.54900.52940.52940.82350.39220.60780.94120.74510.74510.29410.19610.43140.39220.3137
User Knowledge Modeling0.97710.97710.97710.95590.99020.93300.96730.96410.95590.96730.97390.87130.85470.9886
Z-Alizadeh Sani0.83850.88630.88630.86180.87730.84240.88630.86180.87600.84500.87340.85920.80410.8527
AAPL_stock_price_2021_2022-2.4201-0.8485-1.1187-0.6742-1.0613-0.4714-1.2488-0.3812-2.9911-1.1469-1.2134-3.7808-3.2483n/a
AAPL_stock_price_2021_2022_1-1.3351-0.7599-0.3584-0.3108-0.6781-0.7711-1.4369-0.7036-1.2721-1.0302-2.4377-2.7802-2.5602n/a
AAPL_stock_price_2021_2022_2-1.4472-0.5832-0.3367-0.3141-0.4005-0.7059-0.9954-0.2768-1.2930-0.9359-2.2318-2.8244-2.4182n/a
analcatdata_homerun-0.7584-0.9188-0.7432-0.7514-0.7456-0.8075-0.7366-0.7425-0.7731-0.7706-0.7452-0.7574-0.8417n/a
analcatdata_vineyard-2.9582-2.7122-3.0034-2.6116-2.4820-2.1594-2.3821-2.4549-2.4602-2.4509-2.4403-3.5946-3.7126n/a
auto_price-1751.0-2103.5-1830.7-2463.1-2244.8-1720.8-1935.9-2702.4-2503.7-2020.4-2905.1-3100.4-3031.4n/a
autoPrice-1751.0-2103.5-1676.9-1831.3-2341.8-1659.6-1977.4-2623.2-4026.9-1840.2-3104.0-3093.8-2969.5n/a
bodyfat-1.0621-0.7597-0.8297-0.5658-0.5431-0.8420-1.1247-1.6816-3.9424-1.7095-1.4271-4.0332-4.3167n/a
boston-2.7724-2.9132-3.0481-3.2503-4.2412-3.0366-3.5042-3.6890-4.0360-4.2976-4.5132-4.9288-4.6731n/a
boston_corrected-3.0906-3.6336-3.3379-3.3726-3.3748-3.2464-3.6352-3.4099-3.4520-3.2328-3.8338-5.5182-5.7764n/a
Boston-house-price-data-3.1074-2.9132-3.0481-3.9133-3.4856-3.1320-3.5397-4.4732-5.6720-4.1353-3.9253-4.7881-4.6171n/a
cholesterol-63.898-63.607-62.204-61.527-61.702-60.718-61.791-62.238-61.145-62.760-62.621-61.434-64.213n/a
cleveland-0.8839-0.8765-0.8686-0.8853-0.9944-0.8863-0.8918-0.9546-0.9936-0.9455-0.8704-1.1198-1.2134n/a
cloud-0.5701-0.6858-0.4539-0.6851-0.4608-0.2720-0.3458-0.6079-0.6326-0.8258-0.7637-0.9437-1.0297n/a
cps_85_wages-4.3197-4.4261-4.2873-4.3968-4.2237-4.6278-4.6009-4.4570-4.4097-4.2979-4.4403-4.7683-4.8651n/a
cpu-79.104-76.943-76.402-91.466-95.975-62.504-104.760-74.299-68.783-122.468-123.213-137.357-131.2979n/a
DEE-0.4023-0.4255-0.4294-0.4278-0.3863-0.4051-0.4239-0.3814-0.8244-0.4174-0.4296-0.6657-0.4780n/a
disclosure_x_bias-21921-21743-21919-21876-21807-22587-21853-22022-21878-21912-22159-22481-23453n/a
disclosure_x_noise-26993-27266-26843-26919-27196-26943-27438-26992-27944-27232-27010-27412-27078n/a
disclosure_x_tampered-27168-27275-27824-27062-27245-27318-27647-27180-27227-27984-27114-27347-27018n/a
disclosure_z-21506-21374-21496-21477-21791-21911-21815-21753-30624-21764-21804-22272-23509n/a
echom*onths-8.8668-9.4200-10.1428-9.9251-11.5086-12.6059-10.4651-11.1439-12.4521-10.1922-9.5287-13.5465-14.0546n/a
EgyptianSkulls-1425.98-1393.52-1403.98-1403.98-1360.73-1460.25-1487.83-1519.55-1243.86-1480.12-1603.05-1575.59-1669.02n/a
ELE-1-736.04-736.25-758.72-749.40-749.72-770.96-782.94-761.12-739.36-779.54-737.94-838.68-816.96n/a
fishcatch-50.863-46.911-86.628-76.929-66.500-79.260-102.405-126.658-89.525-155.355-149.660-180.285-205.606n/a
Fish-market-72.073-70.419-76.484-80.037-88.557-63.291-70.877-112.847-64.847-128.376-69.060-153.163-172.934n/a
forest_fires-109.375-109.339-108.763-107.853-109.139-108.803-107.700-108.925-108.707-108.573-109.064-108.921-108.578n/a
Forest-Fire-Area-109.375-109.339-108.988-107.538-109.026-108.803-106.091-108.945-109.292-109.428-109.349-109.015-108.578n/a
fruitfly-15.856-15.752-15.858-15.732-16.438-20.724-16.224-16.251-18.620-17.561-16.023-15.829-21.850n/a
HappinessRank_2015-0.1402-0.0753-0.0640-0.0753-0.0800-0.0856-0.1765-0.2066-0.1220-0.3450-0.2596-0.9244-1.2549n/a
liver-disorders-3.1001-2.9445-2.9602-3.0291-3.2481-3.0613-2.9170-2.9184-3.5018-3.1161-3.0200-2.8904-3.2965n/a
lowbwt-419.24-421.87-417.13-419.41-451.40-422.76-443.85-447.74-486.07-406.45-420.31-501.78-580.11n/a
lungcancer_shedden-2.8148-2.5672-2.6234-2.6200-2.7049-2.7345-2.5833-2.5232-2.5298-2.8645-2.5481-2.8069-3.4472n/a
machine_cpu-71.259-85.958-90.238-82.287-92.617-78.152-107.735-73.420-89.281-125.633-129.315-187.953-177.951n/a
meta-153.09-162.95-142.67-128.98-164.11-147.92-236.52-141.32-142.79-237.33-146.03-192.56-273.34n/a
no2-0.5015-0.4864-0.4972-0.4967-0.4948-0.4912-0.5082-0.5289-0.5212-0.5127-0.4985-0.6629-0.7692n/a
pharynx-286.57-281.51-277.71-273.68-310.59-279.05-277.78-337.05-492.43-270.79-282.70-328.23-391.29n/a
pm10-0.7670-0.7650-0.7267-0.8022-0.8010-0.7487-0.7331-0.8141-0.9794-0.7942-0.8064-0.8130-0.9376n/a
Reading_Hydro-0.0039-0.0037-0.0040-0.0039-0.0038-0.0036-0.0037-0.0039-0.0041-0.0042-0.0047-0.0188-0.0081n/a
residential_building-351.46-210.01-168.25-196.44-237.14-200.64-306.75-533.02-723.89-533.37-526.71-584.36-643.16n/a
rmftsa_ladata-2.0287-1.8550-1.8238-2.0475-2.0305-2.0150-1.8144-2.0999-2.4473-2.3571-2.0216-2.5843-2.5447n/a
strikes-586.03-592.41-587.24-604.66-604.16-592.34-588.58-620.24-660.56-589.25-599.57-615.12-637.11n/a
The-Office-Dataset-0.3876-0.4189-0.3654-0.4127-0.4152-0.3350-0.3730-0.4256-0.4068-0.4148-0.4843-0.5396-0.5697n/a
visualizing_environmental-2.5584-2.8069-3.0343-3.4924-2.8716-2.3128-2.8110-2.5520-2.9296-3.1022-2.7731-3.4982-3.8523n/a
weather_ankara-1.7222-1.3999-1.5609-1.4824-1.5756-1.8900-1.5884-2.0655-2.6293-1.9707-2.3743-3.3254-2.9359n/a
wisconsin-36.915-37.548-38.315-38.603-36.128-35.500-35.720-34.429-75.613-34.541-37.677-37.229-51.180n/a
yacht_hydrodynamics-0.8310-0.9270-0.7151-1.0738-1.0881-0.7432-1.0243-1.2074-1.2786-2.3096-4.4713-5.1386-6.5417n/a

Table 15. Detailed results on the 21 large-scale datasets (dataset abbreviations as in Table 13).

Datasets | XGb (d) | Cat (d) | FTT (d) | Excel + F (d) | Excel + H (d) | XGb (t) | Cat (t) | FTT (t) | Excel (M) | Excel (F)
SU | -0.02025 | -0.01994 | -0.01825 | -0.01840 | -0.01740 | -0.01770 | -0.02200 | -0.01920 | -0.01730 | -0.01610
BA | 80.25 | 89.20 | 88.26 | 89.00 | 88.65 | 88.97 | 89.16 | 88.64 | 89.21 | 89.16
BR | -0.07667 | -0.07655 | -0.07390 | -0.11230 | -0.06960 | -0.07690 | -0.09310 | -0.07940 | -0.06270 | -0.06410
EY | 69.97 | 69.85 | 71.06 | 71.44 | 72.09 | 72.88 | 72.41 | 71.73 | 74.14 | 78.94
MA | 86.21 | 93.83 | 93.66 | 93.38 | 93.66 | 93.69 | 93.66 | 93.69 | 94.04 | 94.11
AI | -0.0001669 | -0.0001652 | -0.0001637 | -0.0001689 | -0.0001627 | -0.0001605 | -0.0001616 | -0.0001641 | -0.0001615 | -0.0001612
PO | -5.342 | -6.495 | -4.675 | -5.694 | -2.862 | -4.331 | -4.622 | -2.705 | -2.629 | -2.636
BP | 99.13 | 99.95 | 99.13 | 99.94 | 99.95 | 99.96 | 99.95 | 99.97 | 99.93 | 99.96
CR | 76.55 | 85.15 | 85.22 | 85.23 | 85.22 | 85.11 | 85.12 | 85.19 | 85.26 | 85.36
CA | -0.4707 | -0.4573 | -0.4657 | -0.4331 | -0.4587 | -0.4359 | -0.4359 | -0.4679 | -0.4316 | -0.4336
HS | -0.1815 | -0.1790 | -0.1740 | -0.1835 | -0.1773 | -0.1707 | -0.1746 | -0.1734 | -0.1726 | -0.1727
HO | -3.368 | -3.258 | -3.208 | -3.305 | -3.147 | -3.139 | -3.279 | -3.142 | -3.159 | -3.214
DI | -0.2372 | -0.2395 | -0.2378 | -0.2368 | -0.2387 | -0.2353 | -0.2362 | -0.2389 | -0.2359 | -0.2358
HE | 35.02 | 37.77 | 37.38 | 37.22 | 38.20 | 37.39 | 37.81 | 38.86 | 38.65 | 38.61
JA | 71.62 | 71.92 | 72.67 | 72.51 | 72.79 | 72.45 | 71.97 | 73.15 | 73.15 | 73.55
HI | 71.59 | 80.31 | 80.65 | 80.60 | 80.75 | 80.28 | 80.22 | 80.71 | 80.88 | 81.22
RO | 80.42 | 87.98 | 88.51 | 88.65 | 88.15 | 90.48 | 89.55 | 89.29 | 89.33 | 89.27
ME | -0.0819 | -0.0835 | -0.0845 | -0.0821 | -0.0808 | -0.0820 | -0.0829 | -0.0811 | -0.0809 | -0.0808
SG | -0.01658 | -0.03377 | -0.01866 | -0.01587 | -0.01531 | -0.01635 | -0.02038 | -0.01644 | -0.01465 | -0.01454
CO | 96.42 | 92.13 | 96.71 | 97.38 | 97.17 | 96.92 | 96.25 | 97.00 | 97.43 | 97.43
NY | -0.3805 | -0.4459 | -0.4135 | -0.3887 | -0.3930 | -0.3683 | -0.3808 | -0.4248 | -0.3710 | -0.3625
