Data Mining: Concepts and Techniques
— Chapter 6 —

Jiawei Han
Department of Computer Science
University of Illinois at Urbana-Champaign
www.cs.uiuc.edu/~hanj
©2006 Jiawei Han and Micheline Kamber, All rights reserved

Chapter 6. Classification and Prediction
- What is classification? What is prediction?
- Issues regarding classification and prediction
- Classification by decision tree induction
- Bayesian classification
- Rule-based classification
- Classification by back propagation
- Support Vector Machines (SVM)
- Associative classification
- Lazy learners (or learning from your neighbors)
- Other classification methods
- Prediction
- Accuracy and error measures
- Ensemble methods
- Model selection
- Summary
Classification vs. Prediction
- Classification
  - predicts categorical class labels (discrete or nominal)
  - classifies data (constructs a model) based on the training set and the values (class labels) in a classifying attribute, and uses it in classifying new data
- Prediction
  - models continuous-valued functions, i.e., predicts unknown or missing values
- Typical applications
  - Credit approval
  - Target marketing
  - Medical diagnosis
  - Fraud detection
Classification—A Two-Step Process
- Model construction: describing a set of predetermined classes
  - Each tuple/sample is assumed to belong to a predefined class, as determined by the class label attribute
  - The set of tuples used for model construction is the training set
  - The model is represented as classification rules, decision trees, or mathematical formulae
- Model usage: for classifying future or unknown objects
  - Estimate the accuracy of the model
    - The known label of each test sample is compared with the classified result from the model
    - Accuracy rate is the percentage of test set samples that are correctly classified by the model
    - The test set is independent of the training set, otherwise over-fitting will occur
  - If the accuracy is acceptable, use the model to classify data tuples whose class labels are not known (a short sketch of the two steps follows below)
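A minimal sketch of the two-step process, assuming scikit-learn is available; the bundled Iris data stands in here for a labeled training set and is not the slides' own example.

```python
# Sketch of the two-step process: (1) construct a model on a training set,
# (2) estimate accuracy on an independent test set, then classify unseen tuples.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# Step 1: model construction from tuples with known class labels.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
model = DecisionTreeClassifier().fit(X_train, y_train)

# Step 2: model usage. The test set is independent of the training set.
accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"test-set accuracy: {accuracy:.2f}")

unseen = [[5.0, 3.4, 1.5, 0.2]]            # a new tuple whose class label is unknown
print("predicted class:", model.predict(unseen)[0])
```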
Process (1): Model Construction
- Training Data → Classification Algorithm → Classifier (Model)
- Example of a learned rule: IF rank = 'professor' OR years > 6 THEN tenured = 'yes'

Process (2): Using the Model in Prediction
- The classifier is applied to Testing Data, then to Unseen Data
- Example of an unseen tuple: (Jeff, Professor, 4) → Tenured?

Supervised vs. Unsupervised Learning
- Supervised learning (classification)
  - Supervision: the training data (observations, measurements, etc.) are accompanied by labels indicating the class of the observations
  - New data is classified based on the training set
- Unsupervised learning (clustering)
  - The class labels of the training data are unknown
  - Given a set of measurements, observations, etc., the aim is to establish the existence of classes or clusters in the data
Chapter 6. Classification and Prediction

Issues: Data Preparation
- Data cleaning
  - Preprocess data in order to reduce noise and handle missing values
- Relevance analysis (feature selection)
  - Remove the irrelevant or redundant attributes
- Data transformation
  - Generalize and/or normalize data
Issues: Evaluating Classification Methods
- Accuracy
  - classifier accuracy: predicting the class label
  - predictor accuracy: guessing the value of the predicted attribute
- Speed
  - time to construct the model (training time)
  - time to use the model (classification/prediction time)
- Robustness: handling noise and missing values
- Scalability: efficiency in disk-resident databases
- Interpretability
  - understanding and insight provided by the model
- Other measures, e.g., goodness of rules, such as decision tree size or compactness of classification rules
Chapter 6. Classification and Prediction

Decision Tree Induction: Training Dataset
- This follows an example of Quinlan's ID3 (Playing Tennis)
- [Training table: 14 tuples with attributes age, income, student, credit_rating and class label buys_computer]

Output: A Decision Tree for "buys_computer"
- [Figure: the induced decision tree]
Algorithm for Decision Tree Induction
- Basic algorithm (a greedy algorithm; the recursion is sketched in code below)
  - The tree is constructed in a top-down recursive divide-and-conquer manner
  - At the start, all the training examples are at the root
  - Attributes are categorical (if continuous-valued, they are discretized in advance)
  - Samples are partitioned recursively based on selected attributes
  - Test attributes are selected on the basis of a heuristic or statistical measure (e.g., information gain)
- Conditions for stopping partitioning
  - All samples for a given node belong to the same class
  - There are no remaining attributes for further partitioning; majority voting is employed for classifying the leaf
  - There are no samples left
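A minimal sketch of this greedy recursion, assuming categorical attributes, tuples stored as dicts, and information gain as the selection measure; helper names such as build_tree are illustrative.

```python
# Greedy top-down decision tree induction with information gain (ID3-style sketch).
from collections import Counter
from math import log2

def entropy(tuples, label):
    counts = Counter(t[label] for t in tuples)
    total = len(tuples)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def info_gain(tuples, attr, label):
    total = len(tuples)
    info_a = 0.0                             # expected information after splitting on attr
    for value in {t[attr] for t in tuples}:
        part = [t for t in tuples if t[attr] == value]
        info_a += len(part) / total * entropy(part, label)
    return entropy(tuples, label) - info_a

def build_tree(tuples, attrs, label):
    classes = {t[label] for t in tuples}
    if len(classes) == 1:                    # all samples at this node share one class
        return classes.pop()
    if not attrs:                            # no attributes left: majority voting
        return Counter(t[label] for t in tuples).most_common(1)[0][0]
    best = max(attrs, key=lambda a: info_gain(tuples, a, label))
    node = {"attribute": best, "branches": {}}
    for value in {t[best] for t in tuples}:  # partition recursively on the chosen attribute
        part = [t for t in tuples if t[best] == value]
        node["branches"][value] = build_tree(part, [a for a in attrs if a != best], label)
    return node

# Tiny hypothetical usage example:
data = [
    {"age": "<=30",   "student": "no",  "buys_computer": "no"},
    {"age": "<=30",   "student": "yes", "buys_computer": "yes"},
    {"age": "31..40", "student": "no",  "buys_computer": "yes"},
    {"age": ">40",    "student": "yes", "buys_computer": "yes"},
]
print(build_tree(data, ["age", "student"], "buys_computer"))
```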
Attribute Selection Measure: Information Gain (ID3/C4.5)
- Select the attribute with the highest information gain
- Let pi be the probability that an arbitrary tuple in D belongs to class Ci, estimated by |Ci,D|/|D|
- Expected information (entropy) needed to classify a tuple in D:
  Info(D) = -Σ (i = 1..m) pi log2(pi)
- Information needed (after using A to split D into v partitions) to classify D:
  Info_A(D) = Σ (j = 1..v) (|Dj|/|D|) × Info(Dj)
- Information gained by branching on attribute A:
  Gain(A) = Info(D) - Info_A(D)

Attribute Selection: Information Gain
- Class P: buys_computer = "yes" (9 tuples); Class N: buys_computer = "no" (5 tuples)
  Info(D) = I(9,5) = -(9/14) log2(9/14) - (5/14) log2(5/14) = 0.940
- The term (5/14) I(2,3) means "age <= 30" has 5 out of 14 samples, with 2 yes'es and 3 no's. Hence
  Info_age(D) = (5/14) I(2,3) + (4/14) I(4,0) + (5/14) I(3,2) = 0.694
  Gain(age) = Info(D) - Info_age(D) = 0.246
- Similarly, Gain(income) = 0.029, Gain(student) = 0.151, Gain(credit_rating) = 0.048 (these numbers are reproduced in the sketch below)
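A small sketch reproducing the information-gain numbers above from the per-partition class counts quoted on the slide; pure Python, no dataset needed.

```python
# Reproduce Info(D), Info_age(D), and Gain(age) from the class counts on the slide.
from math import log2

def I(*counts):
    """Expected information I(c1, c2, ...) = -sum(p_i * log2(p_i)); zero counts are skipped."""
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c)

info_D = I(9, 5)                                                   # 0.940
info_age = (5/14) * I(2, 3) + (4/14) * I(4, 0) + (5/14) * I(3, 2)  # 0.694
gain_age = info_D - info_age                                       # 0.246
print(f"Info(D) = {info_D:.3f}, Info_age(D) = {info_age:.3f}, Gain(age) = {gain_age:.3f}")
```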
Computing Information Gain for Continuous-Valued Attributes
- Let attribute A be a continuous-valued attribute
- Must determine the best split point for A
  - Sort the values of A in increasing order
  - Typically, the midpoint between each pair of adjacent values is considered as a possible split point
    - (ai + ai+1)/2 is the midpoint between the values ai and ai+1
  - The point with the minimum expected information requirement for A is selected as the split point for A (see the sketch below)
- Split:
  - D1 is the set of tuples in D satisfying A ≤ split-point, and D2 is the set of tuples in D satisfying A > split-point
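A minimal sketch of this midpoint search, assuming the attribute's values arrive as (value, class label) pairs; the example ages at the end are hypothetical.

```python
# Find the split point for a continuous attribute that minimizes the expected
# information requirement, considering midpoints of adjacent sorted values.
from math import log2
from collections import Counter

def expected_info(labels):
    total = len(labels)
    return -sum(c / total * log2(c / total) for c in Counter(labels).values())

def best_split_point(pairs):
    """Return (split_point, info_A) over all candidate midpoints."""
    pairs = sorted(pairs)                        # sort the values of A in increasing order
    values = [v for v, _ in pairs]
    labels = [c for _, c in pairs]
    n = len(pairs)
    best = (None, float("inf"))
    for i in range(n - 1):
        if values[i] == values[i + 1]:
            continue
        mid = (values[i] + values[i + 1]) / 2    # candidate midpoint (a_i + a_{i+1}) / 2
        left, right = labels[:i + 1], labels[i + 1:]   # D1: A <= mid, D2: A > mid
        info_a = len(left) / n * expected_info(left) + len(right) / n * expected_info(right)
        if info_a < best[1]:
            best = (mid, info_a)
    return best

# Hypothetical ages paired with buys_computer labels:
print(best_split_point([(24, "yes"), (28, "no"), (35, "yes"), (41, "yes"), (52, "no")]))
```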
Gain Ratio for Attribute Selection (C4.5)
- The information gain measure is biased towards attributes with a large number of values
- C4.5 (a successor of ID3) uses gain ratio to overcome the problem (a normalization of information gain)
  SplitInfo_A(D) = -Σ (j = 1..v) (|Dj|/|D|) × log2(|Dj|/|D|)
  GainRatio(A) = Gain(A)/SplitInfo_A(D)
- Ex. gain_ratio(income) = 0.029/0.926 = 0.031
- The attribute with the maximum gain ratio is selected as the splitting attribute (the computation is sketched below)
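A small sketch of the gain-ratio computation; the partition sizes in the example call are hypothetical, chosen only to exercise the formula, and Gain(A) is assumed to be precomputed.

```python
# Compute SplitInfo and GainRatio for an attribute that splits D into v partitions.
from math import log2

def split_info(partition_sizes):
    """SplitInfo_A(D) = -sum (|Dj|/|D|) * log2(|Dj|/|D|) over the v partitions."""
    total = sum(partition_sizes)
    return -sum(s / total * log2(s / total) for s in partition_sizes if s)

def gain_ratio(gain_a, partition_sizes):
    return gain_a / split_info(partition_sizes)

# Hypothetical attribute with Gain(A) = 0.029 splitting D into partitions of sizes 3, 7, 4.
print(f"GainRatio(A) = {gain_ratio(0.029, [3, 7, 4]):.3f}")
```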
Gini index (CART, IBM IntelligentMiner)
- If a data set D contains examples from n classes, the gini index gini(D) is defined as
  gini(D) = 1 - Σ (j = 1..n) pj^2
  where pj is the relative frequency of class j in D
- If a data set D is split on A into two subsets D1 and D2, the gini index gini_A(D) is defined as
  gini_A(D) = (|D1|/|D|) gini(D1) + (|D2|/|D|) gini(D2)
- Reduction in impurity:
  Δgini(A) = gini(D) - gini_A(D)
- The attribute providing the smallest gini_split(D) (or the largest reduction in impurity) is chosen to split the node (need to enumerate all the possible splitting points for each attribute)

Gini index (CART, IBM IntelligentMiner)
- Ex. D has 9 tuples in buys_computer = "yes" and 5 in "no":
  gini(D) = 1 - (9/14)^2 - (5/14)^2 = 0.459
- Suppose the attribute income partitions D into 10 tuples in D1: {low, medium} and 4 in D2: {high}:
  gini_{income ∈ {low,medium}}(D) = (10/14) gini(D1) + (4/14) gini(D2)
  but gini_{income ∈ {medium,high}} is 0.30 and thus the best since it is the lowest
- All attributes are assumed continuous-valued
- May need other tools, e.g., clustering, to get the possible split values
- Can be modified for categorical attributes (a code sketch of the Gini computations follows)
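A minimal sketch of these Gini calculations; the class counts for D (9 "yes", 5 "no") come from the slide, while the per-subset counts in the split example are hypothetical and only illustrate the formula.

```python
# Gini index of a node and of a binary split, from class counts.

def gini(counts):
    """gini(D) = 1 - sum over classes of pj^2."""
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

def gini_split(counts_d1, counts_d2):
    """gini_A(D) = |D1|/|D| * gini(D1) + |D2|/|D| * gini(D2)."""
    n1, n2 = sum(counts_d1), sum(counts_d2)
    n = n1 + n2
    return n1 / n * gini(counts_d1) + n2 / n * gini(counts_d2)

print(f"gini(D) = {gini([9, 5]):.3f}")                      # 0.459 for 9 yes / 5 no

# Hypothetical binary split: D1 with 7 yes / 3 no, D2 with 2 yes / 2 no.
split = gini_split([7, 3], [2, 2])
print(f"gini_split = {split:.3f}")
print(f"reduction in impurity = {gini([9, 5]) - split:.3f}")
```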
Comparing Attribute Selection Measures
- The three measures, in general, return good results, but:
  - Information gain: biased towards multivalued attributes
  - Gain ratio: tends to prefer unbalanced splits in which one partition is much smaller than the others
  - Gini index:
    - biased towards multivalued attributes
    - has difficulty when the number of classes is large
    - tends to favor tests that result in equal-sized partitions and purity in both partitions
Other Attribute Selection Measures
- CHAID: a popular decision tree algorithm; its measure is based on the χ2 test for independence
- C-SEP: performs better than information gain and gini index in certain cases
- G-statistic: has a close approximation to the χ2 distribution
- MDL (Minimal Description Length) principle (i.e., the simplest solution is preferred)
  - The best tree is the one that requires the fewest number of bits to both (1) encode the tree and (2) encode the exceptions to the tree
- Multivariate splits (partition based on multiple variable combinations)
  - CART: finds multivariate splits based on a linear combination of attributes
- Which attribute selection measure is the best?
  - Most give good results; none is significantly superior to the others
Overfitting and Tree Pruning
- Overfitting: an induced tree may overfit the training data
  - Too many branches, some of which may reflect anomalies due to noise or outliers
  - Poor accuracy for unseen samples
- Two approaches to avoid overfitting
  - Prepruning: halt tree construction early; do not split a node if this would result in the goodness measure falling below a threshold
    - Difficult to choose an appropriate threshold
  - Postpruning: remove branches from a "fully grown" tree, obtaining a sequence of progressively pruned trees
    - Use a set of data different from the training data to decide which is the "best pruned tree" (see the sketch below)
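Postpruning can be illustrated with scikit-learn's cost-complexity pruning, used here as a CART-style stand-in for the generic procedure on the slide: grow a full tree, derive a sequence of progressively pruned trees, and keep the one that scores best on data held out from training.

```python
# Sketch of postpruning with a held-out pruning set, assuming scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_prune, y_train, y_prune = train_test_split(X, y, test_size=0.3, random_state=0)

# Candidate pruning strengths derived from the fully grown tree.
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_train, y_train)

best_alpha, best_score = 0.0, 0.0
for alpha in path.ccp_alphas:                 # one progressively pruned tree per alpha
    tree = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha).fit(X_train, y_train)
    score = tree.score(X_prune, y_prune)      # accuracy on data not used for training
    if score > best_score:
        best_alpha, best_score = alpha, score

print(f"best ccp_alpha = {best_alpha:.5f}, pruning-set accuracy = {best_score:.3f}")
```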
Enhancements to Basic Decision Tree Induction
- Allow for continuous-valued attributes
  - Dynamically define new discrete-valued attributes that partition the continuous attribute values into a discrete set of intervals
- Handle missing attribute values
  - Assign the most common value of the attribute
  - Assign a probability to each of the possible values
- Attribute construction
  - Create new attributes based on existing ones that are sparsely represented
  - This reduces fragmentation, repetition, and replication
Classification in Large Databases
- Classification is a classical problem extensively studied by statisticians and machine learning researchers
- Scalability: classifying data sets with millions of examples and hundreds of attributes at reasonable speed
- Why decision tree induction in data mining?
  - relatively faster learning speed (than other classification methods)
  - convertible to simple and easy-to-understand classification rules
  - can use SQL queries for accessing databases
  - comparable classification accuracy with other methods
Scalable Decision Tree Induction Methods
- SLIQ (EDBT'96, Mehta et al.)
  - Builds an index for each attribute; only the class list and the current attribute list reside in memory
- SPRINT (VLDB'96, J. Shafer et al.)
  - Constructs an attribute-list data structure
- PUBLIC (VLDB'98, Rastogi & Shim)
  - Integrates tree splitting and tree pruning: stop growing the tree earlier
- RainForest (VLDB'98, Gehrke, Ramakrishnan & Ganti)
  - Builds an AVC-list (attribute, value, class label)
- BOAT (PODS'99, Gehrke, Ganti, Ramakrishnan & Loh)
  - Uses bootstrapping to create several small samples
Scalability Framework for RainForest
- Separates the scalability aspects from the criteria that determine the quality of the tree
- Builds an AVC-list: AVC (Attribute, Value, Class_label)
- AVC-set (of an attribute X)
  - Projection of the training dataset onto attribute X and the class label, where counts of the individual class labels are aggregated (a small sketch follows below)
- AVC-group (of a node n)
  - Set of AVC-sets of all predictor attributes at node n

Rainforest: Training Set and Its AVC Sets
- [Figure: training examples with AVC-sets on age, income, student, and credit_rating]
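A minimal sketch of building one AVC-set, assuming dict-shaped training tuples; the toy tuples below are hypothetical stand-ins for the training examples in the lost figure.

```python
# Build the AVC-set of one predictor attribute: per attribute value, aggregate
# the counts of each class label.
from collections import defaultdict

def avc_set(tuples, attribute, label="buys_computer"):
    counts = defaultdict(lambda: defaultdict(int))
    for t in tuples:
        counts[t[attribute]][t[label]] += 1      # (attribute value, class label) -> count
    return {value: dict(by_class) for value, by_class in counts.items()}

training = [
    {"age": "<=30",   "student": "no",  "buys_computer": "no"},
    {"age": "<=30",   "student": "yes", "buys_computer": "yes"},
    {"age": "31..40", "student": "no",  "buys_computer": "yes"},
    {"age": ">40",    "student": "yes", "buys_computer": "yes"},
    {"age": ">40",    "student": "no",  "buys_computer": "no"},
]
print(avc_set(training, "age"))
# The AVC-group of a node is simply avc_set(...) collected for every predictor attribute.
```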
Data Cube-Based Decision-Tree Induction
- Integration of generalization with decision-tree induction (Kamber et al. '97)
- Classification at primitive concept levels
  - E.g., precise temperature, humidity, outlook, etc.
  - Low-level concepts, scattered classes, bushy classification trees
  - Semantic interpretation problems
- Cube-based multi-level classification
  - Relevance analysis at multiple levels
  - Information-gain analysis with dimension + level
BOAT (Bootstrapped Optimistic Algorithm for Tree Construction)
- Uses a statistical technique called bootstrapping to create several smaller samples (subsets), each of which fits in memory (the sampling step is sketched below)
- Each subset is used to create a tree, resulting in several trees
- These trees are examined and used to construct a new tree T'
  - It turns out that T' is very close to the tree that would be generated using the whole data set
- Advantages: requires only two scans of the database; it is an incremental algorithm
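A minimal sketch of the bootstrap-sampling step only, not the full BOAT algorithm; sample sizes are illustrative, and tree construction itself is left abstract (it could reuse a builder like the build_tree sketch above).

```python
# Draw several with-replacement (bootstrap) samples small enough to fit in memory.
import random

def bootstrap_samples(data, n_samples, sample_size, seed=42):
    rng = random.Random(seed)
    return [[rng.choice(data) for _ in range(sample_size)] for _ in range(n_samples)]

data = list(range(1000))                  # stand-in for the full training set
subsets = bootstrap_samples(data, n_samples=5, sample_size=100)
# trees = [build_tree(subset, attrs, label) for subset in subsets]  # one tree per subset
print([len(s) for s in subsets])
```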
Presentation of Classification Results
- [Figure]

Visualization of a Decision Tree in SGI/MineSet 3.0
- [Figure]

Interactive Visual Mining by Perception-Based Classification (PBC)
- [Figure]

Chapter 6. Classification and Prediction