FeatureHasher and DictVectorizer Comparison
Compares FeatureHasher and DictVectorizer by using both to vectorize text documents.
This example demonstrates syntax and speed only; it does not actually do anything useful with the extracted vectors. For actual learning on text documents, see the example scripts {document_classification_20newsgroups,clustering}.py.
A discrepancy between the number of terms reported by DictVectorizer and by FeatureHasher is to be expected due to hash collisions.
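The collision effect can be made visible directly by shrinking the hash space; a minimal sketch, where the toy token list and the tiny n_features value are chosen purely for illustration:

```python
from sklearn.feature_extraction import FeatureHasher

# Deliberately tiny hash space to force collisions; the script below
# defaults to 2**18, where collisions are rare but still possible.
hasher = FeatureHasher(n_features=8, input_type="string")
X = hasher.transform([["apple", "banana", "cherry", "date", "elderberry",
                       "fig", "grape", "honeydew", "kiwi", "lemon"]])
# 10 distinct tokens are mapped into only 8 columns, so at least two of
# them must share a column; counting non-zero columns therefore
# under-reports the number of distinct terms, as in the output below.
print(X.shape)  # (1, 8)
print(X.nnz)    # number of stored entries: at most 8
```

Note that FeatureHasher uses signed hashing by default (alternate_sign=True), so colliding tokens may even partially cancel each other out rather than simply summing.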
Output:
Usage: /home/circleci/project/examples/text/plot_hashing_vs_dict_vectorizer.py [n_features_for_hashing]
The default number of features is 2**18.
Loading 20 newsgroups training data
3803 documents - 6.245MB
DictVectorizer
done in 1.313812s at 4.753MB/s
Found 47928 unique terms
FeatureHasher on frequency dicts
done in 0.842164s at 7.415MB/s
Found 43873 unique terms
FeatureHasher on raw tokens
done in 0.792912s at 7.876MB/s
Found 43873 unique terms
Code:
# Author: Lars Buitinck
# License: BSD 3 clause
from collections import defaultdict
import re
import sys
from time import time
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction import DictVectorizer, FeatureHasher
def n_nonzero_columns(X):
    """Return the number of non-zero columns in a CSR matrix X."""
    return len(np.unique(X.nonzero()[1]))
def tokens(doc):
    """Extract tokens from doc.

    Here we use a simple regular expression to break strings into tokens.
    For a more principled approach, see CountVectorizer or TfidfVectorizer.
    """
    return (tok.lower() for tok in re.findall(r"\w+", doc))
def token_freqs(doc):
    """Extract a dict mapping tokens from doc to their frequencies."""
    freq = defaultdict(int)
    for tok in tokens(doc):
        freq[tok] += 1
    return freq
categories = [
'alt.atheism',
'comp.graphics',
'comp.sys.ibm.pc.hardware',
'misc.forsale',
'rec.autos',
'sci.space',
'talk.religion.misc',
]
# Uncomment the following line to use a larger set (11k+ documents)
# categories = None
print(__doc__)
print("Usage: %s [n_features_for_hashing]" % sys.argv[0])
print(" The default number of features is 2**18.")
print()
try:
    n_features = int(sys.argv[1])
except IndexError:
    n_features = 2 ** 18
except ValueError:
    print("not a valid number of features: %r" % sys.argv[1])
    sys.exit(1)
print("Loading 20 newsgroups training data")
raw_data, _ = fetch_20newsgroups(subset='train', categories=categories,
                                 return_X_y=True)
data_size_mb = sum(len(s.encode('utf-8')) for s in raw_data) / 1e6
print("%d documents - %0.3fMB" % (len(raw_data), data_size_mb))
print()
print("DictVectorizer")
t0 = time()
vectorizer = DictVectorizer()
vectorizer.fit_transform(token_freqs(d) for d in raw_data)
duration = time() - t0
print("done in %fs at %0.3fMB/s" % (duration, data_size_mb / duration))
print("Found %d unique terms" % len(vectorizer.get_feature_names_out()))
print()
print("FeatureHasher on frequency dicts")
t0 = time()
hasher = FeatureHasher(n_features=n_features)
X = hasher.transform(token_freqs(d) for d in raw_data)
duration = time() - t0
print("done in %fs at %0.3fMB/s" % (duration, data_size_mb / duration))
print("Found %d unique terms" % n_nonzero_columns(X))
print()
print("FeatureHasher on raw tokens")
t0 = time()
hasher = FeatureHasher(n_features=n_features, input_type="string")
X = hasher.transform(tokens(d) for d in raw_data)
duration = time() - t0
print("done in %fs at %0.3fMB/s" % (duration, data_size_mb / duration))
print("Found %d unique terms" % n_nonzero_columns(X))
Total running time of the script: 0 minutes 3.296 seconds.