What are Dimensions?
Dimensions are the features of a dataset, which may be dependent or independent. The concept of dimensions in the context of the curse of dimensionality is easier to understand with an example. Suppose a dataset has 100 features, and you intend to build several separate machine learning models from it: model-1, model-2, …, model-100. The difference between these models is the number of features each one uses.
Suppose we build model-1 with 3 features and model-2 with 5 features (both models are built from the same dataset). Model-2 has more information available than model-1 because it uses more features, so its accuracy is higher than model-1’s.
As the number of features increases, the model’s accuracy initially increases. However, beyond a specific threshold, the accuracy stops improving even though more features are added, because the model is fed so much information that it can no longer separate the relevant signal from the rest during training.
This phenomenon, where a machine learning model’s accuracy decreases as the number of features is increased beyond a certain threshold, is called the curse of dimensionality.
Why is it challenging to analyze high-dimensional data?
Humans are poor at finding patterns that span many dimensions. When more dimensions are added to a machine learning model, the processing power required to analyze the data increases. Moreover, adding more dimensions increases the amount of training data needed to build useful models.
The curse of dimensionality in machine learning is defined as follows:
As the number of dimensions or features increases, the amount of data needed to generalize the machine learning model accurately grows exponentially. The added dimensions make the data sparse, which makes the model harder to generalize; more training data is needed to generalize the model well.
Higher dimensions also lead to near-equidistant separation between points: the higher the dimensionality, the harder it becomes to sample meaningfully, because the sampling loses its effective randomness.
It also becomes harder to collect representative observations when there are many features, since high dimensionality pushes all observations in the dataset towards being equidistant from one another. Clustering algorithms typically use Euclidean distance to measure the similarity between observations, and no meaningful clusters can be formed when all the distances are nearly equal.
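To see this numerically, here is a minimal sketch (using NumPy, an illustrative assumption rather than anything prescribed by the text) that measures how the contrast between the nearest and farthest neighbour of a query point shrinks as the number of dimensions grows:

```python
import numpy as np

rng = np.random.default_rng(0)

for d in [2, 10, 100, 1000]:
    points = rng.uniform(size=(500, d))   # 500 random points in d dimensions
    query = rng.uniform(size=d)           # a single query point
    dists = np.linalg.norm(points - query, axis=1)
    contrast = (dists.max() - dists.min()) / dists.min()
    print(f"d={d:>4}: relative contrast = {contrast:.3f}")
```

As d grows, the relative contrast approaches zero, i.e. all points become nearly equidistant from the query, and distance-based similarity loses its meaning.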
How to solve the curse of dimensionality?
The following methods can help solve the curse of dimensionality.
1) Hughes Phenomenon
The Hughes Phenomenon states that as the number of features increases, a classifier’s performance also increases until an optimal number of features is reached; beyond that point, adding more features relative to the size of the training set degrades performance.
Let’s understand the Hughes Phenomenon with an example. Suppose a dataset consists entirely of binary features and its dimensionality is 4, meaning there are 4 features. In this case, the number of possible data points is 2^4 = 16.
If the dimensionality is 10, the number of possible data points is 2^10 = 1024. These examples show that the number of possible data points grows exponentially with the dimensionality, and so does the number of data points a machine learning model needs for training.
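A quick sketch of this counting argument in plain Python (the dimensionalities chosen below are arbitrary):

```python
# With d binary features there are 2**d distinct feature combinations
# a model may need to see during training.
for d in [4, 10, 20, 30]:
    print(f"{d} binary features -> {2**d:,} possible data points")
# 4 -> 16, 10 -> 1,024, 20 -> 1,048,576, 30 -> 1,073,741,824
```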
From the Hughes Phenomenon, it follows that for a fixed-size dataset, increasing the dimensionality leads to reduced performance of a machine learning model.
The solution to the Hughes Phenomenon is Dimensionality Reduction.
“Dimensionality Reduction” is the conversion of data from a high-dimensional space into a low-dimensional one. The idea behind this conversion is that the low-dimensional representation should retain the significant properties of the data, staying as close as possible to its intrinsic dimensionality. In other words, it means decreasing the number of dimensions in the dataset.
How does Dimensionality Reduction help solve the Curse of Dimensionality?
- It decreases the dataset’s dimensions and thus decreases the storage space.
- It significantly decreases the computation time, because fewer dimensions require less computing, so the algorithms train faster than before.
- It improves models’ accuracy.
- It decreases multicollinearity.
- It simplifies data visualization and makes meaningful patterns in the dataset easier to identify, because visualization in 1D/2D/3D space is far simpler than visualization in higher dimensions.
Note that Dimensionality Reduction is categorized into two types, i.e., Feature Selection and Feature Extraction.
2) Deep Learning Technique
Deep learning doesn’t encounter the same difficulties as other machine learning algorithms when dealing with high-dimensional applications, which makes neural network modeling quite effective. Neural networks’ resistance to the curse of dimensionality proves especially useful with big data.
The Manifold Hypothesis is one theory that explains how deep learning sidesteps the curse of dimensionality in data mining. It posits that high-dimensional data actually lies on or near a lower-dimensional manifold embedded in the higher-dimensional space.
This implies that within high-dimensional data there is an underlying lower-dimensional structure that deep learning techniques can effectively exploit. Hence, for high-dimensional inputs, neural networks can efficiently learn low-dimensional features that are not apparent in the original high-dimensional representation.
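As a concrete illustration, here is a minimal autoencoder sketch in PyTorch. Nothing here is prescribed by the text: the layer sizes, the 3-dimensional code, and the random placeholder data are all assumptions chosen to keep the example self-contained. The point is only that the network is forced to squeeze 100-dimensional inputs through a low-dimensional bottleneck and reconstruct them, which is one way deep learning can exploit low-dimensional structure:

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_features=100, n_latent=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, n_latent),       # low-dimensional code ("manifold" coordinates)
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 32), nn.ReLU(),
            nn.Linear(32, n_features),     # reconstruction of the original input
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(256, 100)                  # placeholder data for the sketch
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), x)            # reconstruction error
    loss.backward()
    optimizer.step()
```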
3) Use of Cosine Similarity
The effect of high dimensions in the curse of dimensionality in data mining can be reduced by measuring distance differently in the vector space. Specifically, you can substitute cosine similarity for Euclidean distance, since cosine similarity is less affected by high dimensionality. It is used extensively in word2vec, TF-IDF, and similar applications; a short sketch comparing the two measures follows the conditions below.
Cosine similarity works best when the points are spread randomly and uniformly. If the points are not organized uniformly and randomly, the following conditions must be considered.
i) The effect of dimensionality is high when the points are densely located and the dimensionality is high.
ii) The effect of dimensionality is low when the points are sparsely located and the dimensionality is high.
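Here is the sketch referred to above, comparing the two measures on a pair of high-dimensional vectors (NumPy assumed; the vectors are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(42)
a = rng.uniform(size=1000)   # two random 1000-dimensional vectors
b = rng.uniform(size=1000)

euclidean = np.linalg.norm(a - b)
cosine_sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(f"Euclidean distance: {euclidean:.3f}")   # grows with the number of dimensions
print(f"Cosine similarity : {cosine_sim:.3f}")  # always bounded in [-1, 1]
```

Because cosine similarity depends only on the angle between vectors, its scale does not blow up as dimensions are added, which is why it behaves better in high-dimensional text representations such as TF-IDF.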
4) PCA
One of the conventional tools for addressing the curse of dimensionality is PCA (Principal Component Analysis). It transforms the data into its most useful space, enabling the use of fewer dimensions that are more informative than the original features. Because PCA is a linear tool, nonlinear relations between the initial data components may not be preserved in this pre-processing stage.
In other words, PCA is a linear dimensionality reduction algorithm that extracts a new set of variables, known as Principal Components, from a large set of original variables.
It is important to note how principal components are extracted. The first principal component captures the maximum variance in the dataset. The second principal component captures the largest share of the remaining variance and is uncorrelated with the first. The third principal component captures the variance not explained by the first two, and so on.
In a nutshell, PCA finds the linear combinations of the variables along which the spread of the points, i.e. the variance, is greatest.
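A minimal PCA sketch with scikit-learn (the dataset here is random placeholder data; in practice you would pass your own feature matrix):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 100))      # placeholder: 500 samples, 100 features

pca = PCA(n_components=3)
X_reduced = pca.fit_transform(X)     # projected data, shape (500, 3)

# Each successive component explains a decreasing share of the total
# variance, and the components are mutually uncorrelated.
print(pca.explained_variance_ratio_)
```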
What is the curse of dimensionality?
The curse of dimensionality refers to a set of problems that arise when working with high-dimensional data. The dimension of a dataset corresponds to the number of attributes/features it contains, and a dataset with a large number of attributes, generally on the order of a hundred or more, is referred to as high-dimensional. Some of the difficulties that come with high-dimensional data manifest while analyzing or visualizing the data to identify patterns, and others manifest while training machine learning models; the difficulties related to training are what the term ‘curse of dimensionality’ most often refers to.
Domains of the curse of dimensionality
There are many domains where the direct effect of the curse of dimensionality can be seen, machine learning being the most affected.
The domains affected by the curse of dimensionality are listed below:
Anomaly Detection
Anomaly detection is used to find unforeseen items or events in a dataset. In high-dimensional data, anomalies often exhibit a remarkable number of attributes that are irrelevant in nature, and certain objects occur more frequently in neighbour lists than others.
Combinatorics
Whenever the number of possible input combinations increases, complexity grows rapidly, and the curse of dimensionality occurs.
Machine Learning
In machine learning, even a marginal increase in dimensionality requires a large increase in the volume of data to maintain the same level of performance. The curse of dimensionality is a by-product of phenomena that appear with high-dimensional data.
How To Combat The CoD?
Combating the CoD is not such a big deal, thanks to dimensionality reduction. Dimensionality reduction is the process of reducing the number of input variables in a dataset, i.e., converting high-dimensional variables into lower-dimensional ones while retaining the essential properties of the data.
The reduced dataset contains no extra variables, which makes it much simpler for analysts to analyze the data and lets algorithms produce results faster.
Data Sparsity
Supervised machine learning models are trained to predict the outcome for a given input data sample accurately. While training a model, the available data is split: part of it is used for training the model, and part is used to evaluate how the model performs on unseen data. This evaluation step helps us establish whether the model is generalized. Model generalization refers to the model’s ability to predict the outcome accurately for unseen input data. It is important to note that the unseen input data has to come from the same distribution as the data used to train the model. A generalized model’s prediction accuracy on unseen data should be very close to its accuracy on the training data. An effective way to build a generalized model is to capture the different possible combinations of the values of the predictor variables and the corresponding targets.
For instance, if we are trying to predict a target that depends on two attributes, gender and age group, we should ideally capture the targets for all possible combinations of the two attributes’ values, as shown in figure 1. If this data is used to train a model capable of learning the mapping between the attribute values and the target, its performance can generalize: as long as the future unseen data comes from this distribution (a combination of these values), the model will predict the target accurately.
In the above example, we assume that the target value depends on gender and age group only. If the target depends on a third attribute, let’s say body type, the number of training samples required to cover all the combinations increases phenomenally. The combinations are shown in figure 2. For two variables, we needed eight training samples. For three variables, we need 24 samples.
The above examples show that, as the number of attributes or the dimensions increases, the number of training samples required to generalize a model also increases phenomenally.
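The counting in this example can be reproduced with a few lines of Python. The specific category values below are assumptions, since figures 1 and 2 are not reproduced here; only the counts (2 × 4 = 8 and 2 × 4 × 3 = 24) come from the text:

```python
from itertools import product

genders = ["male", "female"]                       # 2 values
age_groups = ["child", "teen", "adult", "senior"]  # 4 values
body_types = ["slim", "average", "heavy"]          # 3 values

print(len(list(product(genders, age_groups))))              # 2 * 4 = 8
print(len(list(product(genders, age_groups, body_types))))  # 2 * 4 * 3 = 24
```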
In reality, the available training samples may not have observed targets for all combinations of the attributes, because some combinations occur more often than others. The training samples available for building the model may therefore not capture all possible combinations. This aspect, where the training samples do not capture all combinations, is referred to as ‘data sparsity’ or simply ‘sparsity’ in high-dimensional data. Data sparsity is one facet of the curse of dimensionality. Training a model with sparse data can lead to high variance, or overfitting: the model learns from the frequently occurring combinations of the attributes and predicts their outcomes accurately, but when less frequently occurring combinations are fed to it at prediction time, it may not predict the outcome accurately.
Distance Concentration
Another facet of the curse of dimensionality is ‘distance concentration’. Distance concentration refers to the problem of all the pairwise distances between different samples/points in the space converging to the same value as the dimensionality of the data increases. Several machine learning methods, such as clustering and nearest-neighbour methods, use distance-based metrics to identify the similarity or proximity of samples. Due to distance concentration, the concept of proximity or similarity may cease to be qualitatively meaningful in higher dimensions. Figure 3 shows this aspect graphically [1]: a fixed number of random points is generated from a uniform distribution on a ‘d’-dimensional torus, where ‘d’ corresponds to the number of dimensions considered at a time.
A density plot of the distances between the points against the frequency of occurrence of each distance is created for different dimensions. For the one-dimensional torus, the density is approximately uniform. As the number of dimensions increases, the spread of the frequency plot decreases, indicating that the distances between different samples or points tend towards a single value. Figure 4 shows the corresponding decrease in the standard deviation of the distribution as the number of dimensions increases.
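A simplified stand-in for this experiment (uniform points in a unit cube rather than on a torus, with NumPy and SciPy assumed) shows the same trend, i.e. the spread of pairwise distances shrinking relative to their mean as the dimension grows:

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
for d in [1, 2, 10, 100, 1000]:
    points = rng.uniform(size=(200, d))  # 200 random points in d dimensions
    dists = pdist(points)                # all pairwise Euclidean distances
    print(f"d={d:>4}: mean={dists.mean():.3f}, "
          f"relative std={dists.std() / dists.mean():.3f}")
```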