Learn Python
Learn Data Structure & Algorithm
Learn Numpy
Learn Pandas
Learn Matplotlib
Learn Seaborn
Learn Statistics
Learn Math
Learn MATLAB
Introduction
Setup
Read data
Data preprocessing
Data cleaning
Handling date-time columns
Handling outliers
Encoding
Feature Engineering
Feature selection filter methods
Feature selection wrapper methods
Multicollinearity
Data split
Feature scaling
Supervised Learning
Regression
Classification
Bias and Variance
Overfitting and Underfitting
Regularization
Ensemble learning
Unsupervised Learning
Clustering
Association Rule
Common
Model evaluation
Cross Validation
Parameter tuning
Code Exercise
Car Price Prediction
Flight Fare Prediction
Diabetes Prediction
Spam Mail Prediction
Fake News Prediction
Boston House Price Prediction
Learn Github
Learn OpenCV
Learn Deep Learning
Learn MySQL
Learn MongoDB
Learn Web scraping
Learn Excel
Learn Power BI
Learn Tableau
Learn Docker
Learn Hadoop
For the XGBoost regressor we only fit the model; there is no transform step after fitting. Transforming is what preprocessors such as scalers and encoders do — a fitted model is used with predict instead.
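A minimal pure-Python sketch of the distinction above, using the scikit-learn-style contract that XGBoost's `XGBRegressor` also follows. The two classes here are illustrative stand-ins, not real library code: a transformer has `fit`/`transform`, an estimator has `fit`/`predict`.

```python
# Illustrative sketch (not real XGBoost or sklearn code): a preprocessor
# such as a scaler is fitted and then *transforms* data, while a model
# such as a regressor is fitted and then *predicts* - no transform step.

class MinMaxScalerSketch:
    """Transformer: fit learns column min/max, transform rescales to [0, 1]."""
    def fit(self, values):
        self.lo, self.hi = min(values), max(values)
        return self
    def transform(self, values):
        span = (self.hi - self.lo) or 1.0
        return [(v - self.lo) / span for v in values]

class MeanRegressorSketch:
    """Estimator: fit learns a parameter, predict produces outputs."""
    def fit(self, X, y):
        self.mean_ = sum(y) / len(y)
        return self
    def predict(self, X):
        return [self.mean_ for _ in X]

scaler = MinMaxScalerSketch().fit([10, 20, 30])
print(scaler.transform([10, 20, 30]))            # -> [0.0, 0.5, 1.0]

model = MeanRegressorSketch().fit([[1], [2], [3]], [4.0, 6.0, 8.0])
print(model.predict([[5], [6]]))                 # -> [6.0, 6.0]
```

With the real library the shape is the same: `XGBRegressor().fit(X_train, Y_train)` followed by `model.predict(X_test)`.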
We can't use the accuracy score for a regression problem, because the predictions are continuous numerical values and an exact match with the target is not meaningful, so there is nothing to "count" as correctly predicted. Instead we compare the predicted values with the original values using error metrics such as R² (r2_score) and mean absolute error. These error metrics are used for regression problems, while metrics such as accuracy are used for classification problems.
In training_data_prediction we have our model's predictions, and in Y_train we have the actual values of the Price column. Let's use r2_score to measure the error; calculating it shows how accurate the model's results are.

In training_data_prediction we have our model's predictions, and in Y_train we have the actual values of the Price column. Let's use mean_absolute_error to measure the error; calculating it shows how accurate the model's results are.
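To make the two metrics concrete, here is a sketch that computes them by hand on made-up toy numbers. `sklearn.metrics.r2_score` and `sklearn.metrics.mean_absolute_error` follow the same definitions; the variable names `Y_train` and `training_data_prediction` mirror the note above, but the values are invented for illustration.

```python
# Hand-computed versions of the two regression error metrics, matching
# the definitions used by sklearn's r2_score and mean_absolute_error.

def r2_score_sketch(y_true, y_pred):
    """R^2 = 1 - SS_res / SS_tot; 1.0 means a perfect fit."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

def mae_sketch(y_true, y_pred):
    """Mean absolute error: average absolute gap between truth and prediction."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

Y_train = [3.0, 5.0, 7.0, 9.0]                    # actual Price values (toy data)
training_data_prediction = [2.5, 5.0, 7.5, 9.0]   # model output (toy data)

print(r2_score_sketch(Y_train, training_data_prediction))  # close to 1 -> good fit
print(mae_sketch(Y_train, training_data_prediction))       # average miss in Price units
```

R² close to 1 and a small MAE (relative to the scale of Price) both indicate the model's predictions track the actual values well.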