1.統(tǒng)計學(xué)習(xí)筆記
I had long heard of Li Hang's Statistical Learning Methods, but with my math stuck at the level of a 70-point undergraduate final, I approached it with trepidation. Having recently gotten hooked on AI, and after looking around, starting from this book seemed as good a choice as any, so I bit the bullet. It turns out the concepts are not as impenetrable as I feared. I did not manage to keep detailed notes myself, so links to two good sets of notes are given below.
2. Implementing the Algorithms in Python
實現(xiàn)前只作簡單介紹,理解不到位的地方炼蛤,求大神指點妖爷,此外入門語言是JAVA,python只知道一些基礎(chǔ)語法理朋,寫的比較丑陋絮识,勿噴。
2.1 Perceptron
The perceptron (a binary classifier) learns a separating hyperplane S (wx + b = 0) from the given inputs and outputs (each output is either 1 or -1), where w is the normal vector of S and b is its intercept. S is then used to predict the class of new, unseen inputs.
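For reference, the learning rule the code below implements (Algorithm 2.1 in the book, the primal form) is stochastic gradient descent on the misclassification loss over the set $M$ of misclassified points, i.e. those with $y_i (w \cdot x_i + b) \le 0$:

$$L(w, b) = -\sum_{x_i \in M} y_i (w \cdot x_i + b)$$

Picking one misclassified point at a time gives the updates

$$w \leftarrow w + \eta\, y_i x_i, \qquad b \leftarrow b + \eta\, y_i$$

which are exactly the two assignment lines inside the training loop below.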
2.1.1 The Perceptron Algorithm (Primal Form)
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.lines import Line2D
def loadData():
    """
    Load the training data from testSet.txt.
    Each row is "x1 x2 label" with label 1 or -1, e.g.:
    1 1 -1
    0 1 -1
    3 3 1
    4 3 1
    2 0.5 -1
    3 2 1
    4 4 1
    1 2 -1
    3 3 1
    3 4 1
    3 1 -1
    0.5 3 1
    2 2 -1
    3 1.8 -1
    1 3.5 1
    0.5 2.5 -1
    """
    data = np.loadtxt('testSet.txt')
    dataMat = data[:, 0:2]   # first two columns are the features
    labelMat = data[:, 2]    # third column is the label (+1 / -1)
    return dataMat, labelMat


def sign(val):
    return 1 if val >= 0 else -1
def trainPerceptron(dataMat, labelMat, eta):
    """
    Train the model.
    eta: learning rate (step size)
    """
    m, n = dataMat.shape
    weight = np.zeros(n)
    bias = 0
    flag = True
    while flag:
        for i in range(m):
            # misclassified point: y_i * (w . x_i + b) <= 0
            if labelMat[i] * (np.dot(weight, dataMat[i]) + bias) <= 0:
                weight = weight + eta * labelMat[i] * dataMat[i]
                bias = bias + eta * labelMat[i]
                print("weight, bias:", weight, bias)
                flag = True
                break
        else:
            # the for loop finished without a break: no point is
            # misclassified, so training is done
            flag = False
    return weight, bias
# 可視化展示分類結(jié)果
def plotResult(dataMat, labelMat, weight, bias):
fig = plt.figure()
axes = fig.add_subplot(111)
type1_x = []
type1_y = []
type2_x = []
type2_y = []
for i in range(len(labelMat)):
if (labelMat[i] == -1):
type1_x.append(dataMat[i][0])
type1_y.append(dataMat[i][1])
if (labelMat[i] == 1):
type2_x.append(dataMat[i][0])
type2_y.append(dataMat[i][1])
type1 = axes.scatter(type1_x, type1_y, marker='x', s=20, c='red')
type2 = axes.scatter(type2_x, type2_y, marker='o', s=20, c='blue')
y = (0.1 * -weight[0] / weight[1] + -bias / weight[1], 4.0 * -weight[0] / weight[1] + -bias / weight[1])
axes.add_line(Line2D((0.1, 4.0), y, linewidth=1, color='blue'))
plt.xlabel('X')
plt.ylabel('Y')
plt.show()
def _init_():
dataMat, labelMat = loadData()
weight, bias = trainPerceptron(dataMat, labelMat, 1)
plotResult(dataMat, labelMat, weight, bias)
return weight, bias
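If testSet.txt is not at hand, the functions above can be tried on the three-point toy dataset from Example 2.1 of the book (a minimal sketch; this driver code is mine, not part of the original script):

import numpy as np

# Example 2.1 from the book: two positive points, one negative point
dataMat = np.array([[3.0, 3.0], [4.0, 3.0], [1.0, 1.0]])
labelMat = np.array([1, 1, -1])

# uses trainPerceptron and sign defined above
weight, bias = trainPerceptron(dataMat, labelMat, 1)
print("hyperplane: %s . x + %s = 0" % (weight, bias))
print("prediction for (4, 4):", sign(np.dot(weight, np.array([4.0, 4.0])) + bias))

With eta = 1 and zero initialization this converges to w = (1, 1), b = -3, the same hyperplane x1 + x2 - 3 = 0 that the book derives.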
2.1.2運行結(jié)果
2.2 k-Nearest Neighbors
k-nearest neighbors (a basic classification and regression method) assigns classes based on the outputs of the training data. Given a new input vector X, find the k training points closest to X; the class T that occurs most often among those k points is the predicted class for X. A kd-tree is commonly used to speed up the nearest-neighbor search; see other references for how a kd-tree is built and queried, and the sketch below for what using one looks like.
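The implementation in 2.2.1 below finds neighbors with a brute-force linear scan. As a sketch of the kd-tree speedup mentioned above, SciPy's cKDTree can replace the scan (the toy points are made up; this is an illustration, not part of the original code):

import numpy as np
from scipy.spatial import cKDTree

# hypothetical toy data: two clusters labeled 1 and 2
points = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [5.2, 4.9]])
labels = np.array([1, 1, 2, 2])

tree = cKDTree(points)                    # build the kd-tree once
dist, idx = tree.query([5.1, 5.0], k=3)   # distances and indices of the 3 nearest neighbors
pred = np.bincount(labels[idx]).argmax()  # majority vote, as in classify() below
print(pred)  # 2

For a handful of points the linear scan is fine; the tree pays off when there are many training points and many queries.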
2.2.1 The k-Nearest-Neighbor Algorithm
由于導(dǎo)入訓(xùn)練數(shù)據(jù)和可視化代碼類似這里省略
import numpy as np
import matplotlib.pyplot as plt
import operator
# 根據(jù)輸入測試實例進行k-近鄰分類
def classify(in_x, data_set, labels, k):
data_set_size = data_set.shape[0]
diff_mat = np.tile(in_x, (data_set_size, 1)) - data_set
sq_diff_mat = diff_mat ** 2
sq_distances = sq_diff_mat.sum(axis=1)
distances = sq_distances ** 0.5
sorted_dist_indicies = distances.argsort()
class_count = {}
for i in range(k):
vote_ilabel = labels[sorted_dist_indicies[i]]
class_count[vote_ilabel] = class_count.get(vote_ilabel, 0) + 1
sorted_class_count = sorted(class_count.items(), key=operator.itemgetter(1), reverse=True)
return sorted_class_count[0][0]
def _init_(x, k):
dataMat, labelMat = loadData('testSet.txt')
in_x = x
plotResult(dataMat, labelMat, in_x)
result = int(classify(in_x, dataMat, labelMat, k))
return result
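A quick self-contained check of classify without testSet.txt (the arrays are invented for illustration):

import numpy as np

# three made-up clusters labeled 1, 2 and 3
dataMat = np.array([[1.0, 1.0], [1.5, 1.2], [4.0, 4.0], [4.2, 3.8], [8.0, 1.0], [8.3, 0.7]])
labelMat = np.array([1, 1, 2, 2, 3, 3])

print(int(classify([4.1, 3.9], dataMat, labelMat, 3)))  # 2: two of the three nearest neighbors are class 2

As the results below show, the prediction can change with k, because a larger k pulls in votes from farther clusters.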
2.2.2運行結(jié)果
In the result figure, class 1 is black, class 2 is green, class 3 is yellow, and the new data point is the red cross.
簡單的測試結(jié)果如下:
訓(xùn)練數(shù)據(jù):[3,5] k值:1 結(jié)果:3
訓(xùn)練數(shù)據(jù):[3,5] k值:5 結(jié)果:2
訓(xùn)練數(shù)據(jù):[3,5] k值:15 結(jié)果:1