Original language | English |
---|---|
Title of host publication | Trends and Applications in Knowledge Discovery and Data Mining |
Subtitle of host publication | PAKDD 2016 Workshops, BDM, MLSDA, PACC, WDMBF, Auckland, New Zealand, April 19, 2016, Revised Selected Papers |
Editors | Huiping Cao, Jinyan Li, Ruili Wang |
Place of Publication | Cham |
Publisher | Springer, Cham |
Pages | 35-47 |
Number of pages | 13 |
Volume | LNAI9794 |
ISBN (Electronic) | 9783319429960 |
ISBN (Print) | 9783319429953 |
DOIs | https://doi.org/10.1007/978-3-319-42996-0_4 |
Publication status | Published - 15 Jul 2016 |
Event | 20th Pacific-Asia Conference on Knowledge Discovery and Data Mining, Auckland, New Zealand. Duration: 19 Apr 2016 → 22 Apr 2016. Conference number: 20. https://pakdd16.wordpress.fos.auckland.ac.nz/ (Link to Conference Website) |
Publication series
Name | Lecture Notes in Computer Science |
---|---|
Publisher | Springer Verlag |
Volume | 9794 |
ISSN (Print) | 0302-9743 |
ISSN (Electronic) | 1611-3349 |
Conference
Conference | 20th Pacific-Asia Conference on Knowledge Discovery and Data Mining |
---|---|
Abbreviated title | PAKDD 2016 |
Country | New Zealand |
City | Auckland |
Period | 19/04/16 → 22/04/16 |
Internet address | https://pakdd16.wordpress.fos.auckland.ac.nz/ |
Cite this
An Improved Self-Structuring Neural Network. / Mohammad, Rami M.; Thabtah, Fadi; McCluskey, Lee.
Trends and Applications in Knowledge Discovery and Data Mining: PAKDD 2016 Workshops, BDM, MLSDA, PACC, WDMBF, Auckland, New Zealand, April 19, 2016, Revised Selected Papers. ed. / Huiping Cao; Jinyan Li; Ruili Wang. Vol. LNAI9794. Cham: Springer, Cham, 2016. p. 35-47 (Lecture Notes in Computer Science; Vol. 9794).
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution
TY - GEN
T1 - An Improved Self-Structuring Neural Network
AU - Mohammad, Rami M.
AU - Thabtah, Fadi
AU - McCluskey, Lee
PY - 2016/7/15
Y1 - 2016/7/15
N2 - Creating a neural network based classification model is traditionally accomplished using the trial and error technique. However, the trial and error structuring method normally suffers from several difficulties, including overtraining. In this article, a new algorithm that simplifies structuring neural network classification models is proposed. It aims at creating a large structure to derive classifiers from the training dataset that have generally good predictive accuracy on domain applications. The proposed algorithm tunes crucial NN model thresholds during the training phase in order to cope with the dynamic behavior of the learning process. This may reduce the chance of overfitting the training dataset or early convergence of the model. Several experiments using our algorithm as well as other classification algorithms have been conducted against a number of datasets from the University of California, Irvine (UCI) repository. The experiments were performed to assess the pros and cons of our proposed NN method. The derived results show that our algorithm outperformed the compared classification algorithms with respect to several performance measures.
AB - Creating a neural network based classification model is traditionally accomplished using the trial and error technique. However, the trial and error structuring method normally suffers from several difficulties, including overtraining. In this article, a new algorithm that simplifies structuring neural network classification models is proposed. It aims at creating a large structure to derive classifiers from the training dataset that have generally good predictive accuracy on domain applications. The proposed algorithm tunes crucial NN model thresholds during the training phase in order to cope with the dynamic behavior of the learning process. This may reduce the chance of overfitting the training dataset or early convergence of the model. Several experiments using our algorithm as well as other classification algorithms have been conducted against a number of datasets from the University of California, Irvine (UCI) repository. The experiments were performed to assess the pros and cons of our proposed NN method. The derived results show that our algorithm outperformed the compared classification algorithms with respect to several performance measures.
KW - Classification
KW - Neural network
KW - Phishing
KW - Pruning
KW - Structure
UR - http://www.scopus.com/inward/record.url?scp=84978880141&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-42996-0_4
DO - 10.1007/978-3-319-42996-0_4
M3 - Conference contribution
SN - 9783319429953
VL - LNAI9794
T3 - Lecture Notes in Computer Science
SP - 35
EP - 47
BT - Trends and Applications in Knowledge Discovery and Data Mining
A2 - Cao, Huiping
A2 - Li, Jinyan
A2 - Wang, Ruili
PB - Springer, Cham
CY - Cham
ER -