I need to track the F1-score while tuning C and sigma in an SVM. For example, the code below tracks accuracy; I need to change it to the F1-score, but I have not been able to do it...
%# read some training data
[labels,data] = libsvmread('./heart_scale');
%# grid of parameters
folds = 5;
[C,gamma] = meshgrid(-5:2:15, -15:2:3);
%# grid search, and cross-validation
cv_acc = zeros(numel(C),1);
for i=1:numel(C)
cv_acc(i) = svmtrain(labels, data, ...
sprintf('-c %f -g %f -v %d', 2^C(i), 2^gamma(i), folds));
end
%# pair (C,gamma) with best accuracy
[~,idx] = max(cv_acc);
%# now you can train your model using best_C and best_gamma
best_C = 2^C(idx);
best_gamma = 2^gamma(idx);
%# ...
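libsvm's -v option only reports cross-validation accuracy, so the F1-score has to be computed manually. Below is a minimal sketch of one way to do it (not from the original post; it assumes the libsvm MATLAB interface's svmpredict and the Bioinformatics Toolbox's crossvalind are available, and that the positive class is labelled 1):

```matlab
%# Sketch: run the CV folds by hand and score each fold with F1
%# instead of relying on the accuracy that -v returns.
foldIdx = crossvalind('Kfold', labels, folds);
cv_f1 = zeros(numel(C),1);
for i = 1:numel(C)
    f1 = zeros(folds,1);
    for f = 1:folds
        te = (foldIdx == f);  tr = ~te;
        model = svmtrain(labels(tr), data(tr,:), ...
            sprintf('-c %f -g %f -q', 2^C(i), 2^gamma(i)));
        pred = svmpredict(labels(te), data(te,:), model, '-q');
        tp = sum(pred==1 & labels(te)==1);   %# true positives
        fp = sum(pred==1 & labels(te)~=1);   %# false positives
        fn = sum(pred~=1 & labels(te)==1);   %# false negatives
        f1(f) = 2*tp / (2*tp + fp + fn);     %# F1 of the positive class
    end
    cv_f1(i) = mean(f1);
end
[~,idx] = max(cv_f1);  %# best (C,gamma) by mean F1 instead of accuracy
```

Note that here svmtrain/svmpredict are the libsvm MEX functions, not the Statistics Toolbox svmtrain used later in this post.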
I have seen the following two links:
Retraining after Cross Validation with libsvm
10 fold cross-validation in one-against-all SVM (using LibSVM)
I understand that I must first find the best C and gamma/sigma parameters on the training data, and then use those two values for a leave-one-out cross-validation classification experiment. So what I want now is to first do a grid search to tune C and sigma. Please note that I would prefer to use MATLAB's SVM rather than LIBSVM. Below is my leave-one-out cross-validation classification code.
clc
clear all
close all
a = load('V1.csv');
X = double(a(:,1:12));
Y = double(a(:,13));
% train data
datall=[X,Y];
A=datall;
n = 40;
ordering = randperm(n);
B = A(ordering, :);
good=B;
input=good(:,1:12);
target=good(:,13);
CVO = cvpartition(target,'LeaveOut');
cp = classperf(target); %# init performance tracker
svmModel=[];
for i = 1:CVO.NumTestSets %# for each fold
trIdx = CVO.training(i);
teIdx = CVO.test(i);
%# train an SVM model over training instances
svmModel = svmtrain(input(trIdx,:), target(trIdx), ...
    'Autoscale',true, 'Showplot',false, 'Method','ls', ...
    'BoxConstraint',0.1, 'Kernel_Function','rbf', 'RBF_Sigma',0.1);
%# test using test instances
pred = svmclassify(svmModel, input(teIdx,:), 'Showplot',false);
%# evaluate and update performance object
cp = classperf(cp, pred, teIdx);
end
%# get accuracy
accuracy=cp.CorrectRate*100
sensitivity=cp.Sensitivity*100
specificity=cp.Specificity*100
PPV=cp.PositivePredictiveValue*100
NPV=cp.NegativePredictiveValue*100
%# get confusion matrix
%# columns: actual, rows: predicted, last row: unclassified instances
cp.CountingMatrix
recallP = sensitivity;
recallN = specificity;
precisionP = PPV;
precisionN = NPV;
f1P = 2*((precisionP*recallP)/(precisionP + recallP));
f1N = 2*((precisionN*recallN)/(precisionN + recallN));
aF1 = ((f1P+f1N)/2);
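One edge case worth guarding against (a hypothetical helper, not part of the original code): in small leave-one-out folds a class can end up with zero precision and zero recall, which makes the F1 formula above divide by zero. Returning 0 in that case keeps aF1 well defined:

```matlab
function f = safeF1(precision, recall)
%# Hypothetical helper: F1 with the convention that it is 0 when
%# precision + recall == 0 (the plain formula would yield NaN).
if precision + recall == 0
    f = 0;
else
    f = 2 * (precision * recall) / (precision + recall);
end
end
```

With this, the two lines above become f1P = safeF1(precisionP, recallP) and f1N = safeF1(precisionN, recallN).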
I have changed the code, but I made some mistakes and I am getting errors:
a = load('V1.csv');
X = double(a(:,1:12));
Y = double(a(:,13));
% train data
datall=[X,Y];
A=datall;
n = 40;
ordering = randperm(n);
B = A(ordering, :);
good=B;
inpt=good(:,1:12);
target=good(:,13);
k=10;
cvFolds = crossvalind('Kfold', target, k); %# get indices of 10-fold CV
cp = classperf(target); %# init performance tracker
svmModel=[];
for i = 1:k
testIdx = (cvFolds == i); %# get indices of test instances
trainIdx = ~testIdx;
C = 0.1:0.1:1;
S = 0.1:0.1:1;
fscores = zeros(numel(C), numel(S)); %// Pre-allocation
for c = 1:numel(C)
for s = 1:numel(S)
vals = crossval(@(XTRAIN, YTRAIN, XVAL, YVAL) fun(XTRAIN, YTRAIN, XVAL, YVAL, C(c), S(s)), inpt(trainIdx,:), target(trainIdx));
fscores(c,s) = mean(vals);
end
end
end
[cbest, sbest] = find(fscores == max(fscores(:)));
C_final = C(cbest);
S_final = S(sbest);
The function:
function fscore = fun(XTRAIN, YTRAIN, XVAL, YVAL, C, S)
svmModel = svmtrain(XTRAIN, YTRAIN, ...
    'Autoscale',true, 'Showplot',false, 'Method','ls', ...
    'BoxConstraint', C, 'Kernel_Function','rbf', 'RBF_Sigma', S);
pred = svmclassify(svmModel, XVAL, 'Showplot',false);
cp = classperf(YVAL, pred);
%# get accuracy
accuracy=cp.CorrectRate*100
sensitivity=cp.Sensitivity*100
specificity=cp.Specificity*100
PPV=cp.PositivePredictiveValue*100
NPV=cp.NegativePredictiveValue*100
%# get confusion matrix
%# columns: actual, rows: predicted, last row: unclassified instances
cp.CountingMatrix
recallP = sensitivity;
recallN = specificity;
precisionP = PPV;
precisionN = NPV;
f1P = 2*((precisionP*recallP)/(precisionP + recallP));
f1N = 2*((precisionN*recallN)/(precisionN + recallN));
fscore = ((f1P+f1N)/2);
end
So basically, take this line of yours:
svmModel = svmtrain(input(trIdx,:), target(trIdx), ...
    'Autoscale',true, 'Showplot',false, 'Method','ls', ...
    'BoxConstraint',0.1, 'Kernel_Function','rbf', 'RBF_Sigma',0.1);
put it inside a loop that varies your 'BoxConstraint' and 'RBF_Sigma' parameters, and then use the output of crossval to get the f1-score for each parameter combination. You can use a single for loop, as in your libsvm code example (i.e. using meshgrid and 1:numel(), which is probably faster), or nested for loops.
C = [0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100, 300] %// you must choose your own set of values for the parameters you want to test. You can either do it this way, by explicitly typing out a list,
S = 0:0.1:1 %// or this way, using the : operator
fscores = zeros(numel(C), numel(S)); %// Pre-allocation
for c = 1:numel(C)
for s = 1:numel(S)
vals = crossval(@(XTRAIN, YTRAIN, XVAL, YVAL) fun(XTRAIN, YTRAIN, XVAL, YVAL, C(c), S(s)), input(trIdx,:), target(trIdx));
fscores(c,s) = mean(vals);
end
end
%// Then establish the C and S that gave you the best f-score. Don't forget that c and s are just indexes though!
[cbest, sbest] = find(fscores == max(fscores(:)));
C_final = C(cbest);
S_final = S(sbest);
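One detail worth adding (not in the original answer): find returns every cell that attains the maximum, so C_final and S_final can come out as vectors when several (C, S) pairs tie on f-score. Asking find for only the first match keeps them scalar:

```matlab
%# Take only the first (C, S) pair that attains the best f-score
[cbest, sbest] = find(fscores == max(fscores(:)), 1, 'first');
C_final = C(cbest);
S_final = S(sbest);
```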
Now we just need to define the function fun. The documentation has this to say about fun:

fun is a function handle to a function with two inputs, the training subset of X, XTRAIN, and the test subset of X, XTEST, as follows:

testval = fun(XTRAIN,XTEST)
Each time it is called, fun should use XTRAIN to fit a model, then return some criterion testval computed on XTEST using that fitted model.

So fun needs to:
- output a single f-score
- take the training and test sets for X and Y as input. Note that both of these are subsets of your actual training set! Think of them more like training and validation subsets of your training set. Also note that crossval will split these sets up for you!
- train a classifier on the training subset (using your current C and S parameters from the loop)
- run that new classifier on the test (or validation) subset
- compute and output a performance metric (in your case, the f1-score)
You'll notice that fun can't take any extra parameters, which is why I wrapped it in an anonymous function so that we can pass in the current C and S values (i.e. all the @(...)(fun(...)) stuff above). This just 'converts' our six-parameter fun into the four-parameter function that crossval requires.
function fscore = fun(XTRAIN, YTRAIN, XVAL, YVAL, C, S)
svmModel = svmtrain(XTRAIN, YTRAIN, ...
    'Autoscale',true, 'Showplot',false, 'Method','ls', ...
    'BoxConstraint', C, 'Kernel_Function','rbf', 'RBF_Sigma', S);
pred = svmclassify(svmModel, XVAL, 'Showplot',false);
CP = classperf(YVAL, pred);
fscore = ... %// You can do this bit the same way you did earlier
end
The only problem I found was with target(trainIdx): it is a row vector, so I simply replaced target(trainIdx) with target(trainIdx)', which is a column vector.