Pushing Constraints by Rule-Driven Pruning Techniques in Non-Uniform Minimum Support for Predicting Obstructive Sleep Apnea

Article Preview

Abstract:

The Boosted Association-Rule-Pruned Decision Tree (ARP-DT), an improved version of the Boosted Decision Tree algorithm, was developed by applying association-rule-driven pre- and post-pruning techniques that push minimum support and minimum confidence constraints into the mining process. The novelty of the association-rule pruning lies mainly in the pre-pruning stage, which bounds the maximum number of decision-tree splits, and in the post-pruning stage, which applies subtree replacement and subtree raising. The association rules (ARs) guide the mining of frequent or interesting itemsets so that the appropriate pre-pruning or subtree-pruning technique can be applied before the AdaBoost ensemble is constructed. The ARs build on the Adaptive Apriori (AA) rule definitions and theorem stated in this research, which focus on the characteristics of the datasets used in order to streamline the rule-driven pruning of the boosting algorithms developed for predicting Obstructive Sleep Apnea (OSA). Prediction accuracy improves significantly when Boosted ARP-DT is compared with classical boosting algorithms on the OSA datasets and on online databases from the University of California Irvine (UCI) data repository.
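As a rough illustration of the pipeline described above (a minimal sketch, not the published ARP-DT implementation), the following Python code mines frequent itemsets under hypothetical non-uniform minimum-support thresholds, uses the number of surviving itemsets to cap the decision tree's leaf count as a crude stand-in for the pre-pruning step, and then boosts the pruned tree with AdaBoost; the transaction data, thresholds, and synthetic feature matrix are all assumptions for illustration only.

from itertools import combinations

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def frequent_itemsets(transactions, min_support_by_size):
    """Apriori-style counting of small itemsets, each itemset size with its own
    (non-uniform) minimum-support threshold. A confidence filter would follow
    in the full method; it is omitted here for brevity."""
    n = len(transactions)
    items = {i for t in transactions for i in t}
    support = lambda s: sum(s <= t for t in transactions) / n
    freq = set()
    for size, threshold in min_support_by_size.items():
        for cand in combinations(items, size):
            cand = frozenset(cand)
            if support(cand) >= threshold:
                freq.add(cand)
    return freq

# Hypothetical binarised OSA indicators standing in for the real visual-inspection variables.
transactions = [
    {"snoring", "high_BMI", "apnea"},
    {"snoring", "apnea"},
    {"high_BMI"},
    {"snoring", "high_BMI", "apnea"},
]
itemsets = frequent_itemsets(transactions, {1: 0.5, 2: 0.4})

# Pre-pruning proxy: let the number of interesting itemsets bound the tree size.
max_leaves = max(2, len(itemsets))
pruned_tree = DecisionTreeClassifier(max_leaf_nodes=max_leaves)

# Boost the rule-pruned tree; synthetic data stands in for the OSA and UCI datasets.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
booster = AdaBoostClassifier(pruned_tree, n_estimators=50, random_state=0)
print(cross_val_score(booster, X, y, cv=5).mean())

In this sketch the itemset count is only a proxy for the paper's rule-driven split limit; the actual ARP-DT derives its pre- and post-pruning decisions directly from the mined rules.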

