This article is Part 10 of the reference answers to the first assignment of the 2020 Artificial Neural Networks course.
➤ Answer to Problem 10, Question 1
1. Problem Analysis
(1) Dataset analysis
The dataset contains two directories, test and train, holding 95 and 510 license-plate images respectively. The last character of each image filename indicates the plate color.
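For example, the color label can be read straight off a filename (the name below is one of the sample files listed later):

```python
# The color label is the last character of the filename stem.
def color_of(filename):
    stem = filename.split('.')[0]   # drop the '.jpg' extension
    return stem[-1]                 # last character encodes the plate color

print(color_of('000497_蓝.jpg'))    # -> 蓝
```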
(2) Data conversion
Appendix program 1 (Convert the license-plate image directories to YUV) converts the test and train images into mean YUV values, producing two NPZ data files, test_data_npz and train_data_npz, each containing a "yuv" and a "color" array.
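The conversion amounts to one matrix product per pixel, followed by averaging over the whole image; a sketch for a single pixel, using the same coefficient matrix as appendix program 1:

```python
import numpy as np

# BT.601-style RGB -> YUV coefficient matrix, as in appendix program 1.
cm = np.array([[ 0.299,     0.587,     0.114],
               [-0.147108, -0.288804,  0.435912],
               [ 0.614777, -0.514799, -0.099978]])

rgb = np.array([0.0, 0.0, 1.0])   # a pure-blue pixel, scaled to [0, 1]
yuv = cm @ rgb                    # [Y, U, V]: blue gives low Y, high U
print(yuv)
```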
Appendix program 2 (Convert the training-data colors to one-hot codes) converts the color labels to one-hot form, whose three components correspond to (blue, white, yellow).
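A minimal sketch of this encoding, assuming the component order (blue, white, yellow) stated above:

```python
# One-hot code for the three plate colors, in the order (blue, white, yellow).
COLORS = ['蓝', '白', '黄']

def one_hot(color):
    code = [0] * len(COLORS)
    code[COLORS.index(color)] = 1
    return code

print(one_hot('白'))   # -> [0, 1, 0]
```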
(3) Network construction
A single-hidden-layer neural network is built, with the structure shown below:
- Input layer: 3 nodes
- Hidden layer: 4 nodes, transfer function: sigmoid
- Output layer: 3 nodes, transfer function: linear
▲ Network structure
The full program is appendix program 4: BP network for license-plate color recognition.
The underlying BP algorithm is appendix program 5: single-hidden-layer BP network.
2. Solution
Training parameters:
- Learning rate: η = 0.5
- Iterations: N = 5000
The figure below shows how the training error and the number of test-sample errors evolve.
▲ Convergence curve of the network training error
3. Result Analysis
Because the test set is very small, once the network error converges below 0.1 the number of test errors drops to 4 and stays at that level.
The two license plates misclassified among the training samples are:
▲ The two misclassified samples
000280_白.jpg
000497_蓝.jpg
One of them is a white plate, a class with very few samples.
The other is a sample whose label is itself wrong.
➤ Answer to Problem 10, Question 2
1. Problem Analysis
(1) Datasets
- Training set: 3269 images
- Test set: 631 images
▲ License-plate character images
(2) Data analysis
Appendix program 6 (License-plate character size statistics) collects the character-image sizes and the character distribution of the training and test sets.
- Training-set size distribution
Number of training samples: 3060
▲ Size distribution of the training characters
- Test-set size distribution
Number of test samples: 587
▲ Size distribution of the test characters
▲ Occurrence counts of each character in the test and training sets
- Character classes:
c_all=['0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z','京','冀','津']
(3) Image conversion
- Generate one-hot target codes
Using appendix program 7: Generate one-hot recognition targets.
Target code length: 39
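A sketch of how the 39-dimensional targets are formed from the character list c_all shown above:

```python
# The 39 character classes: digits, letters A-Z, and three region characters.
c_all = list('0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ') + ['京', '冀', '津']

def char_to_one_hot(c):
    code = [0] * len(c_all)          # 39-dimensional target vector
    code[c_all.index(c)] = 1
    return code

print(len(c_all))                     # -> 39
print(char_to_one_hot('A').index(1))  # -> 10
```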
- Convert the character images
Using appendix program 8 (Convert the images to data), the test and training images are converted to 8×16 grayscale images and flattened into one-dimensional vectors.
Image code length: 128
▲ Normalized character images
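The 128-dimensional input vector is just the 8×16 pixel grid read row by row; a sketch with dummy pixel data (real pixels come from PIL's resize and grayscale conversion):

```python
import numpy as np

# An 8x16 grayscale character image (16 rows, 8 columns) flattened
# into the 128-dimensional network input vector.
img = np.arange(128).reshape(16, 8)  # stand-in for real pixel data
vec = img.reshape(-1)                # row-major flattening
print(vec.shape)                     # -> (128,)
```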
2. Solution
(1) Network structure
A single-hidden-layer neural network is used:
- Input layer: 128 nodes
- Hidden layer: 20 nodes, transfer function: sigmoid(x)
- Output layer: 39 nodes, linear transfer function f(x) = x
- A SOFTMAX layer is added after the output layer.
▲ BP network structure
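The softmax layer normalizes the 39 linear outputs into a probability distribution over the character classes; a minimal sketch (the max-subtraction is a standard numerical-stability trick):

```python
import numpy as np

# Softmax: exponentiate the linear outputs and normalize to sum to 1.
def softmax(z):
    z = z - z.max()        # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

p = softmax(np.array([2.0, 1.0, 0.1]))
print(p.argmax())          # -> 0: the largest linear output wins
```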
The implementation is appendix program 11: character recognition program.
(2) Training parameters
- Training 1:
  - Learning rate: η = 0.5
  - Iterations: N = 10000
Final number of misrecognized characters: 299
▲ Network error and error-rate curves
- Training 2:
  - Learning rate: η = 0.2
  - Iterations: N = 10000
Final number of misrecognized characters: 512
▲ Network error and error-rate curves
- Training 3:
  - Learning rate: η = 0.1
  - Iterations: N = 10000
Final number of misrecognized characters: 603
▲ Network error and error-rate curves
- Training 4:
  - Learning rate: η = 0.9
  - Iterations: N = 10000
Final number of misrecognized characters: 1009
▲ Network error and error-rate curves
- Training 5:
  - Learning rate: η = 0.6
  - Iterations: N = 10000
Final number of misrecognized characters: 832
▲ Network error and error-rate curves
- Training 6:
  - Learning rate: η = 0.4
  - Iterations: N = 10000
Final number of misrecognized characters: 361
▲ Network error and error-rate curves
3. Result Analysis
(1) Training hyperparameters
The experiments show that a learning rate of 0.5 gives the best convergence, with an error rate of roughly 6%.
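As a rough consistency check, assuming the 194 errors reported below are counted over the 3269 training samples:

```python
# Error rate = misrecognized characters / total samples.
errors = 194
samples = 3269
rate = errors / samples
print(f'{rate:.1%}')   # -> 5.9%
```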
The recognition results are shown below:
▲ Recognition results
(2) Misrecognized results
There are 194 misrecognized results in total; below are the misrecognized results with their corresponding true characters:
▲ All misrecognized characters
The results above are displayed by appendix program 12: display the character-recognition results.
➤ Answer to Problem 10, Question 3
The answer to Question 3 can be found at the following link: link to the Question 3 answer
➤ ※ Assignment Programs
1. Convert the license-plate image directories to YUV
from headm import *                  # author's helper module: numpy names, printf, tspsave, ...
from PIL import Image

train_dir = r'D:\Temp\dataset\color\train'
test_dir = r'D:\Temp\dataset\color\test'

#------------------------------------------------------------
def dl_dir_data(dirstr):
    """Convert the license-plate pictures to mean YUV vectors
    In: dirstr-directory of the picture files
    Out: yuv_array, color_array
    """
    color_array = []
    yuv_array = []
    cm = array([[ 0.299,     0.587,     0.114],
                [-0.147108, -0.288804,  0.435912],
                [ 0.614777, -0.514799, -0.099978]])
    for f in os.listdir(dirstr):
        fname = os.path.join(dirstr, f)
        # printf(f, fname)
        img = Image.open(fname).convert('RGB')
        imgdata = array(img)/255.0
        imgdata = imgdata.dot(cm.T)               # per-pixel RGB -> YUV: [Y,U,V] = cm @ [R,G,B]
        yuv = mean(imgdata, (0,1))                # average YUV over the whole image
        yuv_array.append(list(yuv))
        color_array.append(f.split('.')[0][-1])   # last character of the stem: color label
    return yuv_array, color_array

#------------------------------------------------------------
if __name__ == "__main__":
    yuv, color = dl_dir_data(train_dir)
    tspsave('train_data', yuv=yuv, color=color)
    printf(shape(yuv), shape(color))
2. Convert the training-data colors to one-hot codes
from headm import *                  # author's helper module: numpy names, printf, tspload, tspsave

#------------------------------------------------------------
yuv0, color0 = tspload('test_data', 'yuv', 'color')
yuv1, color1 = tspload('train_data', 'yuv', 'color')

#------------------------------------------------------------
def color_code(colorname, cd):
    cdnum = len(cd)
    cddim = []
    for c in colorname:
        id = cd.index(c)
        code = zeros(cdnum)
        code[id] = 1
        cddim.append(code)
    return cddim

#------------------------------------------------------------
if __name__ == "__main__":
    colordim = sorted(set(hstack((color0, color1))))   # sorted for a deterministic code order
    color0_c = color_code(color0, colordim)
    color1_c = color_code(color1, colordim)
    printf(color1_c)

    tspsave('testc_data', yuv=list(yuv0), color=color0_c)
    tspsave('trainc_data', yuv=list(yuv1), color=color1_c)
4. BP network for license-plate color recognition
#!/usr/local/bin/python
# -*- coding: gbk -*-
#============================================================
# HW110_1_BP.PY -- by Dr. ZhuoQing 2020-11-19
#
# Note:
#============================================================
from headm import *
from bp1sigmoid import *

yuv0, color0 = tspload('testc_data', 'yuv', 'color')
yuv1, color1 = tspload('trainc_data', 'yuv', 'color')
x_train = yuv1
y_train = color1.T

#------------------------------------------------------------
# Define the training
DISP_STEP = 100

#------------------------------------------------------------
pltgif = PlotGIF()

#------------------------------------------------------------
def train(X, Y, num_iterations, learning_rate, print_cost=False, Hn=10):
    n_x = X.shape[1]
    n_y = Y.shape[0]
    n_h = Hn
    lr = learning_rate
    parameters = initialize_parameters(n_x, n_h, n_y)
    XX, YY = X, Y                                # shuffledata(X, Y)
    costdim = []
    errordim = []
    for i in range(0, num_iterations):
        A2, cache = forward_propagate(XX, parameters)
        cost = calculate_cost(A2, YY, parameters)
        grads = backward_propagate(parameters, cache, XX, YY)
        parameters = update_parameters(parameters, grads, lr)
        if print_cost and i % DISP_STEP == 0:
            A2test, _ = forward_propagate(yuv0, parameters)   # evaluate on the test set
            err = error(A2test, color0.T)
            printf('Cost after iteration:%i: %f, error:%d'%(i, cost, err))
            costdim.append(cost)
            errordim.append(err)
    return parameters, costdim, errordim

#------------------------------------------------------------
def error(A2, y):
    A2[A2 > 0.5] = 1
    A2[A2 <= 0.5] = 0
    return sum(A2 != y)

#------------------------------------------------------------
parameters, cost, err = train(x_train, y_train, 5000, 0.5, True, 4)
A2, cache = forward_propagate(x_train, parameters)

stepdim = arange(0, len(cost)) * DISP_STEP
plt.subplot(211)
plt.plot(stepdim, cost)
plt.xlabel("Step")
plt.ylabel("Cost")
plt.grid(True)
plt.tight_layout()

plt.subplot(212)
plt.plot(stepdim, err)
plt.xlabel("Step")
plt.ylabel("Error")
plt.grid(True)
plt.tight_layout()
plt.show()

'''
cost_dim = []
Hn_dim = []
Err_dim = []
for i in linspace(0.01, 0.5, 100):
    Hn = i + 1
    parameter, costdim = train(x_train, y_train, 2000, i, True, 10)
    cost_dim.append(costdim)
    Hn_dim.append(i)
    Err_dim.append(costdim[-1])

tspsave('data', costdim=cost_dim, Hndim=Hn_dim, err=Err_dim)
plt.plot(Hn_dim, Err_dim)
plt.xlabel("Learning Rate")
plt.ylabel("Error")
plt.grid(True)
plt.tight_layout()
plt.show()
'''

#------------------------------------------------------------
# END OF FILE : HW110_1_BP.PY
#============================================================
5. Single-hidden-layer BP network
#!/usr/local/bin/python
# -*- coding: gbk -*-
#============================================================
# BP1SIGMOID.PY -- by Dr. ZhuoQing 2020-11-17
#
# Note:
#============================================================
from headm import *

#------------------------------------------------------------
# Samples data construction
random.seed(int(time.time()))

#------------------------------------------------------------
def shuffledata(X, Y):
    id = list(range(X.shape[0]))
    random.shuffle(id)
    return X[id], (Y.T[id]).T

#------------------------------------------------------------
# Define and initialize the NN
def initialize_parameters(n_x, n_h, n_y):
    W1 = random.randn(n_h, n_x) * 0.5           # dot(W1,X.T)
    W2 = random.randn(n_y, n_h) * 0.5           # dot(W2,Z1)
    b1 = zeros((n_h, 1))                        # Column vector
    b2 = zeros((n_y, 1))                        # Column vector
    parameters = {'W1':W1,
                  'b1':b1,
                  'W2':W2,
                  'b2':b2}
    return parameters

#------------------------------------------------------------
# Forward propagation
# X:row->sample;
# Z2:col->sample
def forward_propagate(X, parameters):
    W1 = parameters['W1']
    b1 = parameters['b1']
    W2 = parameters['W2']
    b2 = parameters['b2']
    Z1 = dot(W1, X.T) + b1                      # X:row-->sample; Z1:col-->sample
    A1 = 1/(1+exp(-Z1))
    Z2 = dot(W2, A1) + b2                       # Z2:col-->sample
    A2 = Z2                                     # Linear output
    cache = {'Z1':Z1,
             'A1':A1,
             'Z2':Z2,
             'A2':A2}
    return Z2, cache

#------------------------------------------------------------
# Calculate the cost
# A2,Y: col->sample
def calculate_cost(A2, Y, parameters):
    err = [x1-x2 for x1,x2 in zip(A2.T, Y.T)]
    cost = [dot(e,e) for e in err]
    return mean(cost)

#------------------------------------------------------------
# Backward propagation
def backward_propagate(parameters, cache, X, Y):
    m = X.shape[0]                              # Number of the samples
    W1 = parameters['W1']
    W2 = parameters['W2']
    A1 = cache['A1']
    A2 = cache['A2']
    dZ2 = (A2 - Y)
    dW2 = dot(dZ2, A1.T) / m
    db2 = sum(dZ2, axis=1, keepdims=True) / m
    dZ1 = dot(W2.T, dZ2) * (A1 * (1-A1))
    dW1 = dot(dZ1, X) / m
    db1 = sum(dZ1, axis=1, keepdims=True) / m
    grads = {'dW1':dW1,
             'db1':db1,
             'dW2':dW2,
             'db2':db2}
    return grads

#------------------------------------------------------------
# Update the parameters
def update_parameters(parameters, grads, learning_rate):
    W1 = parameters['W1']
    b1 = parameters['b1']
    W2 = parameters['W2']
    b2 = parameters['b2']
    dW1 = grads['dW1']
    db1 = grads['db1']
    dW2 = grads['dW2']
    db2 = grads['db2']
    W1 = W1 - learning_rate * dW1
    W2 = W2 - learning_rate * dW2
    b1 = b1 - learning_rate * db1
    b2 = b2 - learning_rate * db2
    parameters = {'W1':W1,
                  'b1':b1,
                  'W2':W2,
                  'b2':b2}
    return parameters

#------------------------------------------------------------
# END OF FILE : BP1SIGMOID.PY
#============================================================
6. License-plate character size statistics
#!/usr/local/bin/python
# -*- coding: gbk -*-
#============================================================
# HW10_2_DATA.PY -- by Dr. ZhuoQing 2020-11-29
#
# Note:
#============================================================
from headm import *
from PIL import Image

test_dir = r'D:\Temp\dataset\number\test'
train_dir = r'D:\Temp\dataset\number\train'

#------------------------------------------------------------
def picture_statistic(dirs):
    train_file = os.listdir(dirs)
    filesize = []
    file_c = []
    for f in train_file:
        f = f.replace('\u0525', '')          # strip stray characters from the file names
        f = f.replace('\xb3', '')
        filename = os.path.join(dirs, f)
        if os.path.isfile(filename):
            img = Image.open(filename).convert('RGB')
            imgdata = array(img)
            filesize.append(imgdata.shape)
            file_c.append(f.split('.')[0][-1])   # last character of the stem: character label
    return filesize, file_c

#------------------------------------------------------------
if __name__ == "__main__":
    printf("Process train image:")
    sizedim, c_dim = picture_statistic(train_dir)
    tspsave('train_size', sizedim=sizedim, c_dim=c_dim)

    printf("Process test image:")
    sizedim, c_dim = picture_statistic(test_dir)
    tspsave('test_size', sizedim=sizedim, c_dim=c_dim)

#------------------------------------------------------------
# END OF FILE : HW10_2_DATA.PY
#============================================================
7. Generate one-hot recognition targets
#!/usr/local/bin/python
# -*- coding: gbk -*-
#============================================================
# TEST1.PY -- by Dr. ZhuoQing 2020-11-29
#
# Note:
#============================================================
from headm import *

sizedim0, c_dim0 = tspload('train_size', 'sizedim', 'c_dim')
sizedim1, c_dim1 = tspload('test_size', 'sizedim', 'c_dim')

#------------------------------------------------------------
'''
c_dim = hstack((c_dim0, c_dim1))
c_all = sorted(set(c_dim))
printf(c_all)
tspsave('code_name', c_all=c_all)

plt.hist(sorted(c_dim), label='Character distribution')
plt.xlabel("Character")
plt.ylabel("Count")
plt.grid(True)
plt.legend(loc="upper right")
plt.title("Character distribution")
plt.tight_layout()
plt.show()
'''

#------------------------------------------------------------
'''
plt.hist(sizedim1[:,0], label='Height')
plt.hist(sizedim1[:,1], label='Width')
plt.xlabel("Size")
plt.ylabel("Number")
plt.legend(loc='upper right')
plt.title('Size distribution of the test character images')
plt.grid(True)
plt.tight_layout()
plt.show()
'''

#------------------------------------------------------------
'''
c_all = tspload('code_name', 'c_all')

def c_2_code(cdim, cdict):
    codedim = []
    dict_len = cdict.shape[0]
    cc_list = list(cdict)
    for cc in cdim:
        id = cc_list.index(cc)
        one_hot_code = zeros(dict_len).astype(int)
        one_hot_code[id] = 1
        codedim.append(list(one_hot_code))
    return array(codedim)

cd = c_2_code(c_dim0, c_all)
tspsave('c_dim_one_hot0', cc=cd)             # training-set targets
cd = c_2_code(c_dim1, c_all)
tspsave('c_dim_one_hot1', cc=cd)             # test-set targets
printf('\a')
'''

#------------------------------------------------------------
# END OF FILE : TEST1.PY
#============================================================
8. Convert the images to data
from headm import *                  # author's helper module, as in the other programs
from PIL import Image

test_dir = r'D:\Temp\dataset\number\test'       # same directories as in program 6
train_dir = r'D:\Temp\dataset\number\train'

#------------------------------------------------------------
def picture_resize(dirs, width, height):
    train_file = os.listdir(dirs)
    filedata = []
    for f in train_file:
        f = f.replace('\u0525', '')
        f = f.replace('\xb3', '')
        filename = os.path.join(dirs, f)
        if os.path.isfile(filename):
            img = Image.open(filename).resize((width, height)).convert('RGB')
            imgdata = array(img).mean(2).reshape(1, -1)     # grayscale, flattened row by row
            filedata.append(squeeze(imgdata))
    return array(filedata)

#------------------------------------------------------------
if __name__ == "__main__":
    data = picture_resize(test_dir, 8, 16)      # 8x16 grayscale -> 128-dimensional vectors
    printf(data.shape)
    tspsave('test_data', image_data=data)

    data = picture_resize(train_dir, 8, 16)
    printf(data.shape)
    tspsave('train_data', image_data=data)
    printf('\a')
9. BP network with a SOFTMAX output layer
#!/usr/local/bin/python
# -*- coding: gbk -*-
#============================================================
# BP1SIGMOID.PY -- by Dr. ZhuoQing 2020-11-17
#
# Note:
#============================================================
from headm import *

#------------------------------------------------------------
# Samples data construction
random.seed(int(time.time()))

#------------------------------------------------------------
def shuffledata(X, Y):
    id = list(range(X.shape[0]))
    random.shuffle(id)
    return X[id], (Y.T[id]).T

#------------------------------------------------------------
# Define and initialize the NN
def initialize_parameters(n_x, n_h, n_y):
    W1 = random.randn(n_h, n_x) * 0.5           # dot(W1,X.T)
    W2 = random.randn(n_y, n_h) * 0.5           # dot(W2,Z1)
    b1 = zeros((n_h, 1))                        # Column vector
    b2 = zeros((n_y, 1))                        # Column vector
    parameters = {'W1':W1,
                  'b1':b1,
                  'W2':W2,
                  'b2':b2}
    return parameters

#------------------------------------------------------------
def softmax(xx):
    xx = xx - xx.max(axis=0)                    # subtract the column max for numerical stability
    expxx = exp(xx)
    return expxx/sum(expxx, 0)

#------------------------------------------------------------
# Forward propagation
# X:row->sample;
# A2:col->sample
def forward_propagate(X, parameters):
    W1 = parameters['W1']
    b1 = parameters['b1']
    W2 = parameters['W2']
    b2 = parameters['b2']
    Z1 = dot(W1, X.T) + b1                      # X:row-->sample; Z1:col-->sample
    A1 = 1/(1+exp(-Z1))
    Z2 = dot(W2, A1) + b2                       # Z2:col-->sample
    A2 = softmax(Z2)                            # SOFTMAX output
    cache = {'Z1':Z1,
             'A1':A1,
             'Z2':Z2,
             'A2':A2}
    return A2, cache

#------------------------------------------------------------
# Calculate the cost
# A2,Y: col->sample
def calculate_cost(A2, Y, parameters):
    err = [x1-x2 for x1,x2 in zip(A2.T, Y.T)]
    cost = [dot(e,e) for e in err]
    return mean(cost)

#------------------------------------------------------------
# Backward propagation
def backward_propagate(parameters, cache, X, Y):
    m = X.shape[0]                              # Number of the samples
    W1 = parameters['W1']
    W2 = parameters['W2']
    A1 = cache['A1']
    A2 = cache['A2']
    dZ2 = (A2 - Y)
    dW2 = dot(dZ2, A1.T) / m
    db2 = sum(dZ2, axis=1, keepdims=True) / m
    dZ1 = dot(W2.T, dZ2) * (A1 * (1-A1))
    dW1 = dot(dZ1, X) / m
    db1 = sum(dZ1, axis=1, keepdims=True) / m
    grads = {'dW1':dW1,
             'db1':db1,
             'dW2':dW2,
             'db2':db2}
    return grads

#------------------------------------------------------------
# Update the parameters
def update_parameters(parameters, grads, learning_rate):
    W1 = parameters['W1']
    b1 = parameters['b1']
    W2 = parameters['W2']
    b2 = parameters['b2']
    dW1 = grads['dW1']
    db1 = grads['db1']
    dW2 = grads['dW2']
    db2 = grads['db2']
    W1 = W1 - learning_rate * dW1
    W2 = W2 - learning_rate * dW2
    b1 = b1 - learning_rate * db1
    b2 = b2 - learning_rate * db2
    parameters = {'W1':W1,
                  'b1':b1,
                  'W2':W2,
                  'b2':b2}
    return parameters

#------------------------------------------------------------
# END OF FILE : BP1SIGMOID.PY
#============================================================
11. Character recognition program
#!/usr/local/bin/python
# -*- coding: gbk -*-
#============================================================
# HW10_2.PY -- by Dr. ZhuoQing 2020-11-29
#
# Note:
#============================================================
from headm import *
from bp1sigmoid import *

image_data0 = tspload('train_data', 'image_data')
image_data1 = tspload('test_data', 'image_data')
cc0 = tspload('c_dim_one_hot0', 'cc').T
cc1 = tspload('c_dim_one_hot1', 'cc').T
x_train = image_data0
y_train = cc0
x_test = image_data1
y_test = cc1

#------------------------------------------------------------
DISP_STEP = 100

def train(X, Y, number_iteration, learning_rate, print_cost=False, Hn=10):
    n_x = X.shape[1]
    n_y = Y.shape[0]
    n_h = Hn
    lr = learning_rate
    printf(n_x, n_h, n_y, lr)
    parameters = initialize_parameters(n_x, n_h, n_y)
    XX, YY = X, Y
    costdim = []
    errordim = []
    #--------------------------------------------------------
    for i in range(0, number_iteration):
        A2, cache = forward_propagate(XX, parameters)
        cost = calculate_cost(A2, YY, parameters)
        grads = backward_propagate(parameters, cache, XX, YY)
        parameters = update_parameters(parameters, grads, lr)
        if print_cost and i % DISP_STEP == 0:
            err = error_num(A2.copy(), YY)
            printf('Cost after iteration:%i, %f, error:%d'%(i, cost, err))
            costdim.append(cost)
            errordim.append(err)
    return parameters, costdim, errordim

#------------------------------------------------------------
def error_num(A2, Y):
    AA2 = A2.copy()
    THRESHOLD = 0.2
    AA2[AA2 > THRESHOLD] = 1
    AA2[AA2 < THRESHOLD] = 0
    AA2 = AA2.astype(int)
    # printf(Y[:, 0:10])
    # printf(AA2[:, 0:10])
    compare = (AA2 != Y)
    result = sum(compare, 0)
    return sum(result != 0)          # a sample is wrong if any component differs

#------------------------------------------------------------
parameters, cost, err = train(x_train, y_train, 10000, 0.4, True, 20)
A2, cache = forward_propagate(x_test, parameters)
tspsave('netresult_0.1', A2=A2)

stepdim = arange(0, len(cost)) * DISP_STEP
plt.subplot(211)
plt.plot(stepdim, cost)
plt.xlabel('Step')
plt.ylabel('Cost')
plt.grid(True)
plt.tight_layout()

plt.subplot(212)
plt.plot(stepdim, err)
plt.xlabel('Step')
plt.ylabel('Error Number')
plt.grid(True)
plt.tight_layout()
plt.show()

#------------------------------------------------------------
# END OF FILE : HW10_2.PY
#============================================================
12. Display the character-recognition results
#!/usr/local/bin/python
# -*- coding: gbk -*-
#============================================================
# ANAL1.PY -- by Dr. ZhuoQing 2020-11-29
#
# Note:
#============================================================
from headm import *
from PIL import Image

imageid = 236

#------------------------------------------------------------
image_data = tspload('train_data', 'image_data')
cc = tspload('c_dim_one_hot0', 'cc').T
c_all = tspload('code_name', 'c_all')
cc1 = tspload('c_dim_one_hot0', 'cc')
A2 = tspload('netresult_0.1', 'A2')

diff = list(where(argmax(cc, 0) != argmax(A2, 0)))[0]   # indices of the misrecognized samples
printf(diff)
# printf(c_all)

#------------------------------------------------------------
def code2name(id):
    idname = argmax(A2[:,id])
    return c_all[idname]

#------------------------------------------------------------
TEMP_FILE = r'd:\temp\1.bmp'
tsprefreshimagebuffer(imageid)

#------------------------------------------------------------
tspsetfont(imageid, 80, 16)

def show_image(data, canvas, x, y, codeid):
    im = Image.fromarray(data.reshape(-1, 8).astype(uint8))   # 8 columns -> 8x16 image
    im.save(TEMP_FILE)
    tspshowimage(canvas, x, y, x+16, y+32, TEMP_FILE)
    code = code2name(codeid)
    tsptext(imageid, x + 4, y+34, code)

IMG_ROW = 7
IMG_COL = 30

#------------------------------------------------------------
max_num = IMG_ROW * IMG_COL
for i in range(IMG_ROW):
    for j in range(IMG_COL):
        id = i * IMG_COL + j
        if id >= len(diff): continue
        id = diff[id]
        show_image(image_data[id], imageid, j * 20, i * 60, id)

#------------------------------------------------------------
'''
for i in range(IMG_ROW):
    for j in range(IMG_COL):
        id = i * IMG_COL + j
        show_image(image_data[id], imageid, j * 20, i * 40)
'''

#------------------------------------------------------------
tsprv()
printf('\a')

#------------------------------------------------------------
# END OF FILE : ANAL1.PY
#============================================================