RuntimeError: t() expects a 2D Variable, but self is 3D
As of PyTorch v0.2, the .t() function is only applicable to 2D Variables: pytorch/pytorch@8d33603.
https://github.com/facebookresearch/ParlAI/issues/270
https://discuss.pytorch.org/t/how-to-transpose-3d-matrix-in-pytorch-v0-2/7102
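Since .t() is restricted to 2-D, a 3-D tensor needs x.transpose(1, 2) (or x.permute(...)) instead. A plain-Python sketch of what swapping the last two axes does to a (B, M, N) array:

```python
def transpose_last_two(x):
    """Swap the last two axes of a nested-list 'tensor' of shape (B, M, N),
    mimicking what tensor.transpose(1, 2) does in PyTorch."""
    return [[list(row) for row in zip(*mat)] for mat in x]

x = [[[1, 2, 3],
      [4, 5, 6]]]          # shape (1, 2, 3)
y = transpose_last_two(x)  # shape (1, 3, 2)
print(y)  # [[[1, 4], [2, 5], [3, 6]]]
```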
Wednesday, October 25, 2017
UnicodeDecodeError
Loading a Python 2 pickle in Python 3
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 3: ordinal not in range(128)
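Python 3's pickle.load/loads accept an encoding argument for unpickling Python 2 str objects; the default is 'ascii', which triggers this error. A minimal self-contained reproduction and fix (the raw pickle bytes below are hand-written for illustration):

```python
import pickle

# The bytes a Python 2 pickle stores for a py2 `str` holding the UTF-8
# encoding of "é" (0xc3 0xa9):
py2_pickle = b"S'\\xc3\\xa9'\n."

# Default encoding='ascii' reproduces the error:
try:
    pickle.loads(py2_pickle)
except UnicodeDecodeError as e:
    print(e)  # 'ascii' codec can't decode byte 0xc3 ...

# Fix 1: decode py2 str bytes as latin-1 (a lossless byte-to-char map),
# then re-encode to recover the original UTF-8 bytes:
raw = pickle.loads(py2_pickle, encoding='latin1')
print(raw.encode('latin1').decode('utf-8'))  # é

# Fix 2: keep py2 str objects as raw bytes:
data = pickle.loads(py2_pickle, encoding='bytes')
print(data)  # b'\xc3\xa9'
```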
Monday, July 31, 2017
source command not found
$ls -l `which sh`
/bin/sh -> dash
$sudo dpkg-reconfigure dash #Select "no" when you're asked
[...]
$ls -l `which sh`
/bin/sh -> bash
reference:
https://stackoverflow.com/questions/13702425/source-command-not-found-in-sh-shell
caffe
Converting images to a DB
convert_imageset [FLAGS] ROOTFOLDER/ LISTFILE DB_NAME
create_filelist.sh
#!/usr/bin/env sh
DATA=examples/images
echo "Create train.txt..."
rm -rf $DATA/train.txt
find $DATA -name "*cat.jpg" | cut -d '/' -f3 | sed "s/$/ 1/">>$DATA/train.txt
find $DATA -name "*bike.jpg" | cut -d '/' -f3 | sed "s/$/ 2/">>$DATA/tmp.txt
cat $DATA/tmp.txt>>$DATA/train.txt
rm -rf $DATA/tmp.txt
echo "Done.."
create_lmdb.sh
#!/usr/bin/env sh
DATA=examples/images
rm -rf $DATA/img_train_lmdb
build/tools/convert_imageset --shuffle \
--resize_height=256 --resize_width=256 \
/home/xxx/caffe/examples/images/ $DATA/train.txt $DATA/img_train_lmdb
Compute the mean:
/opt/caffe/build/tools/compute_image_mean ./my_data/img_train_lmdb ./my_caffe/my_mean.binaryproto
convert_mean.py
#!/usr/bin/env python
import sys
import numpy as np
import caffe

if len(sys.argv) != 3:
    print("Usage: python convert_mean.py mean.binaryproto mean.npy")
    sys.exit(1)

blob = caffe.proto.caffe_pb2.BlobProto()
with open(sys.argv[1], 'rb') as f:
    blob.ParseFromString(f.read())
arr = np.array(caffe.io.blobproto_to_array(blob))  # shape (1, C, H, W)
np.save(sys.argv[2], arr[0])                       # drop the batch axis
Thursday, July 27, 2017
pyodbc connect to sql server
Install:
https://github.com/mkleehammer/pyodbc/wiki/Install
code:
import pyodbc

# connect to db
conn = pyodbc.connect(
    r'DRIVER={ODBC Driver 13 for SQL Server};'
    r'SERVER=127.0.0.1;'
    r'DATABASE=DB_table;'
    r'UID=yasam;'
    r'PWD=password'
)
cursor = conn.cursor()
sqlInsert = "INSERT INTO [dbo].[test_table](RecordID,Model,SubmitDate,CountryCode,Score,Comment) VALUES "
tList = []
for i, d in enumerate(df):
    # truncate to fit the column sizes
    if len(d[5]) > 1024:
        d[5] = d[5][:1024]
    if len(d[1]) > 32:
        d[1] = d[1][:32]
    d[5] = d[5].replace("'", "''")  # Comment: double single quotes for SQL
    d[1] = d[1].replace("'", "''")  # Model
    d[4] = str(d[4])                # float to str
    temp = "(" + ",".join(["N'" + dd + "'" for dd in d]) + ")"  # N prefix for Unicode
    tList.append(temp)
    if i == len(df) - 1:
        text = ','.join(tList)
        cursor.execute(sqlInsert + text)  # final (partial) batch
    elif i % 10 == 9:
        text = ','.join(tList)
        cursor.execute(sqlInsert + text)  # batch insert every 10 rows
        tList = []
conn.commit()
reference:
pyodbc usage
https://my.oschina.net/zhengyijie/blog/35587
Inserting multiple rows in a single SQL query
https://stackoverflow.com/questions/452859/inserting-multiple-rows-in-a-single-sql-query
Pyodbc query string quote escaping
Escape by doubling the single quote ('') or use ? parameter placeholders.
Unicode issues (not encountered here):
http://blog.csdn.net/samed/article/details/50539742
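The ? placeholder approach mentioned above avoids the manual quote-doubling entirely. A sketch using the stdlib sqlite3 driver, since it shares the DB-API interface pyodbc implements (pyodbc also uses ? as its parameter marker); table and column names here are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE test_table (RecordID TEXT, Model TEXT, Score TEXT)")

rows = [("1", "A'Model", "4.5"), ("2", "B", "3.0")]
# executemany batches the insert; quoting/escaping is handled by the
# driver, so no manual '' doubling is needed.
cur.executemany("INSERT INTO test_table VALUES (?, ?, ?)", rows)
conn.commit()
print(cur.execute("SELECT COUNT(*) FROM test_table").fetchone()[0])  # 2
```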
Thursday, July 13, 2017
python3 gensim id2token not found
import glob
from collections import defaultdict
import pandas as pd
from gensim import corpora

rspList = sorted(glob.glob('./data/*'))
df = []
for rsp in rspList:
    df.append(pd.read_csv(rsp))
df = pd.concat(df)

stoplist = set('i am you are he she is a for of the and to in'.split())
sents = df['translated_feedback'][df['translated_feedback'] != '\\N']  # remove empty responses
texts = [[word for word in sent.translate(trans_table).lower().split()
          if word not in stoplist] for sent in sents.values]  # remove stopwords and punctuation (trans_table defined elsewhere)
texts = list(filter(None, texts))  # drop empty lists

feq = defaultdict(int)
for text in texts:
    for token in text:
        feq[token] += 1
texts = [[token for token in text if feq[token] > 1] for text in texts]  # drop low-frequency tokens

dic = corpora.Dictionary(texts)  # build dictionary
dic.save('./dictionary.dict')
corpus = [dic.doc2bow(text) for text in texts]  # build bag-of-words corpus
corpora.MmCorpus.serialize('./corpus.mm', corpus)
When I try to read id2token from dic, it is an empty dictionary:
dic.id2token
{}
I have to iterate over dic once first:
for k,v in dic.items():
pass
dic.id2token
{0:'yes',1:'got',2:'it'}
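The reason is that gensim's Dictionary stores token2id eagerly but fills id2token lazily, only when an id is actually looked up (which iterating does as a side effect). The reverse map can also be built directly; a minimal sketch with names mirroring gensim's attributes:

```python
# token2id is what gensim's Dictionary always keeps populated:
token2id = {'yes': 0, 'got': 1, 'it': 2}
# The id2token reverse mapping is just the inverted dict:
id2token = {i: t for t, i in token2id.items()}
print(id2token)  # {0: 'yes', 1: 'got', 2: 'it'}
```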
Wednesday, June 28, 2017
build caffe error
Q1: Can't find hdf5.h when building caffe
Ans:
In Makefile.config, add the HDF5 serial paths:
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial/
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu/hdf5/serial/
Q2: caffe compilation error (on the make all step): [.build_release/lib/libcaffe.so.1.0.0-rc3] Error 1
Ans:
sudo ln -s /usr/lib/x86_64-linux-gnu/libboost_python-py35.so /usr/lib/x86_64-linux-gnu/libboost_python3.so
Q3: .build_release/lib/libcaffe.so: undefined reference to `cv::imdecode
Ans:
# OPENCV_VERSION := 3
To use OpenCV 3.x with Caffe, uncomment this line; see line 197 of the Makefile for the reason.
Q4: Multi-GPU
Ans:
uncomment USE_NCCL := 1 in Makefile.config and install nccl first,
$ git clone https://github.com/NVIDIA/nccl.git
$ cd nccl
$ sudo make install -j8
Q5: running caffe error: ../build/tools/caffe: error while loading shared libraries: libnccl.so.1: cannot open shared object file: No such file or directory
Ans:
add libnccl.so.1 path to LD_LIBRARY_PATH
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda-8.0/lib64:/usr/local/cuda-8.0/extras/CUPTI/lib64:/usr/local/lib"
Q6: can't import caffe in python
Ans:
make pycaffe
make distribute
export PYTHONPATH=/path/to/caffe/python:$PYTHONPATH
reference:
https://github.com/NVIDIA/DIGITS/issues/156
https://github.com/BVLC/caffe/issues/4610
https://github.com/BVLC/caffe/issues/4621
Wednesday, June 21, 2017
Garbled Chinese characters on Ubuntu
install locales:
sudo apt-get update && sudo apt-get install locales
sudo locale-gen zh_TW zh_TW.UTF-8 zh_CN.UTF-8 en_US.UTF-8
export LANG="zh_TW.UTF-8"
List installed locales:
$ locale -a
Show the current locale settings:
$ locale
Generate locale files:
sudo locale-gen zh_TW zh_TW.UTF-8 zh_CN.UTF-8 en_US.UTF-8
Select which locales to install:
sudo dpkg-reconfigure locales
sudo update-locale LANG="zh_TW.UTF-8" LANGUAGE="zh_TW"
Per-user setting:
vim ~/.bashrc
export LC_CTYPE=zh_TW.UTF-8    # enables typing UTF-8 Chinese
export LC_MESSAGES=zh_TW.UTF-8 # enables displaying UTF-8 Chinese
System-wide setting:
sudo vim /etc/default/locale
LC_CTYPE=zh_TW.UTF-8
LC_MESSAGES=zh_TW.UTF-8
or
sudo vim /etc/environment
LC_CTYPE=zh_TW.UTF-8
LC_MESSAGES=zh_TW.UTF-8
reference:
https://unix.stackexchange.com/questions/223533/cant-install-locales
http://www.davidpai.tw/ubuntu/2011/ubuntu-set-locale/
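To confirm from Python which locale settings actually took effect, the stdlib locale module can query each category (a sketch; the values printed depend on your environment):

```python
import locale

for var in ("LC_CTYPE", "LC_MESSAGES"):
    category = getattr(locale, var)
    # setlocale with no second argument only queries the current value
    print(var, "=", locale.setlocale(category))
```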
Monday, June 19, 2017
docker run jupyter notebook
docker run -ti -p 8888:8888 docker_image_name
jupyter notebook --ip '*' --port 8888 --allow-root
error:
in client:
A connection to the notebook server could not be established. The notebook will continue trying to reconnect. Check your network connection or notebook server configuration.
in server:
404 GET /nbextensions/widgets/notebook/js/extension.js?v=20170619093750
The company proxy blocks the ipython notebook from running.
Displaying images raises "IOPub data rate exceeded."; raise the limit at launch with:
--NotebookApp.iopub_data_rate_limit=1.0e10
reference
https://stackoverflow.com/questions/43288550/iopub-data-rate-exceeded-when-viewing-image-in-jupyter-notebook
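Instead of passing the flag on every launch, the same limit can live in the notebook config file (a sketch; generate the file first with `jupyter notebook --generate-config`):

```python
# ~/.jupyter/jupyter_notebook_config.py
c.NotebookApp.iopub_data_rate_limit = 1.0e10  # raise the IOPub data rate limit
```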
Wednesday, June 14, 2017
IPython can't reload a module when re-running a modified file
fileA.py:
import fileB
...
fileB.py:
print(0)
In IPython:
The first time I run fileA.py, it prints 0.
Then I modify fileB.py to:
print(1)
The second time I run fileA.py, it still prints 0,
because the %run magic does not use deepreload.
reference:
https://stackoverflow.com/questions/13150259/ipython-re-importing-modules-when-using-run
http://ipython.org/ipython-doc/dev/api/generated/IPython.lib.deepreload.html
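Outside of deepreload, the stock fix is importlib.reload, which re-executes an already-imported module. A self-contained sketch (it writes a throwaway fileB.py to a temp dir to demonstrate):

```python
import importlib
import os
import sys
import tempfile

tmpdir = tempfile.mkdtemp()
sys.path.insert(0, tmpdir)
path = os.path.join(tmpdir, "fileB.py")

with open(path, "w") as f:
    f.write("VALUE = 0\n")
import fileB
print(fileB.VALUE)       # 0

with open(path, "w") as f:
    f.write("VALUE = 1\n")
# bump the mtime so the bytecode cache can't mask the edit
st = os.stat(path)
os.utime(path, (st.st_atime, st.st_mtime + 10))

import fileB             # cached module object: unchanged
print(fileB.VALUE)       # 0
importlib.reload(fileB)  # re-executes fileB.py
print(fileB.VALUE)       # 1
```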
Wednesday, June 7, 2017
ParlAI train an attentive LSTM example error
In ParlAI
Trains an attentive LSTM model on the SQuAD dataset with a batch size of 32 examples (pytorch and regex):
```bash
python examples/drqa/train.py -t squad -bs 32
```
with an error:
version `GOMP_4.0' not found
solution:
rename or remove the libgomp.so.1 bundled with the pytorch package,
forcing it to use the libgomp.so.1 installed by the system:
sudo mv /usr/local/lib/python3.5/dist-packages/torch/lib/libgomp.so.1 /usr/local/lib/python3.5/dist-packages/torch/lib/libgomp.so.1.back
reference:
https://github.com/hughperkins/pytorch/issues/28
Monday, May 22, 2017
Install opencv 3.2.0 and opencv_contrib 3.2.0 with CUDA 7.5
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install build-essential cmake pkg-config
sudo apt-get install libjpeg8-dev libtiff5-dev libjasper-dev libpng12-dev
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
sudo apt-get install libxvidcore-dev libx264-dev
sudo apt-get install libgtk-3-dev
sudo apt-get install libatlas-base-dev gfortran
download:
https://github.com/opencv/opencv/releases
https://github.com/opencv/opencv_contrib/releases
Make sure the two versions match (both 3.2).
cd ~/opencv-3.2.0
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D WITH_TBB=ON -D BUILD_NEW_PYTHON_SUPPORT=ON -D WITH_V4L=ON -D INSTALL_C_EXAMPLES=ON -D INSTALL_PYTHON_EXAMPLES=ON -D BUILD_EXAMPLES=ON -D WITH_OPENGL=ON -D ENABLE_FAST_MATH=1 -D CUDA_FAST_MATH=1 -D WITH_CUBLAS=1 -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-3.2.0/modules ..
make -j8
sudo make install
sudo ldconfig
======================================================================
install opencv from pip
sudo pip install opencv-python
sudo pip install opencv-contrib-python
======================================================================
issues:
fatal error: LAPACKE_H_PATH-NOTFOUND when building OpenCV 3.2
sudo apt-get install liblapacke-dev checkinstall
If you use nvidia-docker to build opencv with cuda, the following issue appears:
linked by target "example_gpu_alpha_comp" in directory /home/image/opencv-3.2.0/samples/gpu
-- Configuring incomplete, errors occurred!
Don't forget to append
-DCMAKE_LIBRARY_PATH=/usr/local/cuda/lib64/stubs
to the cmake command.
reference:
https://github.com/opencv/opencv/issues/6577
If using Python, don't forget to install numpy:
pip install numpy