How to calculate F-measure, precision, and recall for Naive Bayes and SVM in NLTK; error: 'str' object has no attribute 'copy'
I need to calculate precision, recall, and F-measure for Naive Bayes and SVM sentiment classifiers. The accuracy call below raises AttributeError: 'str' object has no attribute 'copy'.
In the code, preprocessedTrainingSet holds the processed training data and preprocessedTestSet holds the processed test data.
word_features = buildVocabulary(preprocessedTrainingSet)
trainingFeatures=nltk.classify.apply_features(extract_features,preprocessedTrainingSet)
NBayesClassifier=nltk.NaiveBayesClassifier.train(trainingFeatures)
accuracy = nltk.classify.util.accuracy(NBayesClassifier, preprocessedTestSet) #this returns error
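The most likely cause of the error is passing raw (string, label) pairs to nltk.classify.util.accuracy, which expects (featureset, label) pairs; the classifier calls .copy() on each featureset internally, and a plain string has no copy method. A minimal self-contained sketch of the working pattern (toy data standing in for the real tweets):

```python
import nltk

def extract_features(words):
    # Toy bag-of-words featurizer: one boolean feature per word
    return {'contains(%s)' % w: True for w in words}

# Toy labeled data standing in for the preprocessed tweet sets
train = [(['good', 'movie'], 'positive'), (['bad', 'plot'], 'negative')]
test = [(['good', 'movie'], 'positive')]

train_feats = nltk.classify.apply_features(extract_features, train)
classifier = nltk.NaiveBayesClassifier.train(train_feats)

# Wrong: accuracy(classifier, test) would call .copy() on the raw text.
# Right: featurize the test set the same way as the training set first.
test_feats = nltk.classify.apply_features(extract_features, test)
acc = nltk.classify.util.accuracy(classifier, test_feats)
print(acc)
```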
I am posting my whole code here:
import csv
import datetime
import itertools
import os
import re
import sys
from string import punctuation

import emoji
import nltk
from bs4 import BeautifulSoup
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

nltk.download('punkt')
def load_dict_smileys():
    return {
        ":‑)": "smiley",
        ":-]": "smiley",
    }

def load_dict_contractions():
    return {
        "ain't": "is not",
        "amn't": "am not",
    }
def strip_accents(text):
    if 'ø' in text or 'Ø' in text:
        # Do nothing when the text contains ø
        return text
    text = text.encode('ascii', 'ignore')
    text = text.decode("utf-8")
    return str(text)
def buildTestSet():
    Test_data = []
    for line in open('Avengers.csv', 'r'):
        cells = line.split(",")
        Test_data.append(cells[1])
    return Test_data
testData = buildTestSet()
def buildTrainingSet(corpusFile):
    trainingDataSet = []
    with open(corpusFile, "rt", encoding="utf8") as csvFile:
        lineReader = csv.reader(csvFile, delimiter=',', quotechar='"')
        for row in lineReader:
            trainingDataSet.append(row)
    return trainingDataSet
corpusFile = "trainingSet.csv"
trainingData = buildTrainingSet(corpusFile)
class PreProcessTweets:
    def __init__(self):
        self._stopwords = set(stopwords.words('english') + list(punctuation) + ['AT_USER', 'URL'])

    def processTweets(self, list_of_tweets):
        processedTweets = []
        for tweet in list_of_tweets:
            if testD == 1:
                processedTweets.append((self._processTweet(tweet), tweet[3]))
            else:
                processedTweets.append((self._processTweet(tweet[2]), tweet[3]))
        return processedTweets

    def _processTweet(self, tweet):
        tweet = BeautifulSoup(tweet, "html.parser").get_text()
        tweet = tweet.replace('\x92', "'")
        # Strip @mentions and #hashtags
        tweet = ' '.join(re.sub(r"(@[A-Za-z0-9]+)|(#[A-Za-z0-9]+)", " ", tweet).split())
        # Strip URLs
        tweet = ' '.join(re.sub(r"\w+://\S+", " ", tweet).split())
        # Strip punctuation
        tweet = ' '.join(re.sub(r"[.,!?:;\-=]", " ", tweet).split())
        # Lower case
        tweet = tweet.lower()
        CONTRACTIONS = load_dict_contractions()
        tweet = tweet.replace("’", "'")
        words = tweet.split()
        reformed = [CONTRACTIONS[word] if word in CONTRACTIONS else word for word in words]
        tweet = " ".join(reformed)
        # Collapse runs of repeated characters down to two
        tweet = ''.join(''.join(s)[:2] for _, s in itertools.groupby(tweet))
        SMILEY = load_dict_smileys()
        words = tweet.split()
        reformed = [SMILEY[word] if word in SMILEY else word for word in words]
        tweet = " ".join(reformed)
        # Deal with emojis
        tweet = emoji.demojize(tweet)
        # Strip accents
        tweet = strip_accents(tweet)
        tweet = tweet.replace(":", " ")
        tweet = ' '.join(tweet.split())
        return tweet
testD = 0
tweetProcessor = PreProcessTweets()
preprocessedTrainingSet = tweetProcessor.processTweets(trainingData)
testD = 1
preprocessedTestSet = tweetProcessor.processTweets(testData)
def buildVocabulary(preprocessedTrainingData):
    all_words = []
    for (words, sentiment) in preprocessedTrainingData:
        all_words.extend(words)
    wordlist = nltk.FreqDist(all_words)
    word_features = wordlist.keys()
    return word_features
def extract_features(tweet):
    tweet_words = set(tweet)
    features = {}
    for word in word_features:
        features['contains(%s)' % word] = (word in tweet_words)
    return features
word_features = buildVocabulary(preprocessedTrainingSet)
trainingFeatures = nltk.classify.apply_features(extract_features, preprocessedTrainingSet)
NBayesClassifier = nltk.NaiveBayesClassifier.train(trainingFeatures)
NBResultLabels = [NBayesClassifier.classify(extract_features(tweet[0])) for tweet in preprocessedTestSet]
if NBResultLabels.count('positive') > NBResultLabels.count('negative'):
    print("Overall Positive Sentiment")
    print("Positive Sentiment Percentage = " + str(100 * NBResultLabels.count('positive') / len(NBResultLabels)) + "%")
else:
    print("Overall Negative Sentiment")
    print("Negative Sentiment Percentage = " + str(100 * NBResultLabels.count('negative') / len(NBResultLabels)) + "%")
# accuracy expects (featureset, label) pairs, not (string, label) pairs,
# so featurize the test set the same way as the training set
testFeatures = nltk.classify.apply_features(extract_features, preprocessedTestSet)
accuracy = nltk.classify.util.accuracy(NBayesClassifier, testFeatures)
print(accuracy * 100)
The result should come out in this format:
precision recall f1-score support
0 0.65 1.00 0.79 17
1 0.57 0.75 0.65 16
2 0.33 0.06 0.10 17
avg / total 0.52 0.60 0.51 50
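A report in exactly that layout can be produced with scikit-learn's classification_report (an extra dependency beyond NLTK), given the gold labels of the test set and the predicted labels; the lists below are toy stand-ins for those:

```python
from sklearn.metrics import classification_report

# Toy stand-ins for the gold test labels and the classifier's predictions
y_true = ['positive', 'negative', 'positive', 'negative']
y_pred = ['positive', 'negative', 'negative', 'negative']

report = classification_report(y_true, y_pred)
print(report)
```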
python-3.x nltk svm precision naivebayes
edited Mar 26 at 11:47
asked Mar 26 at 4:56
Ashutosh Eve
418 bronze badges