
中文3276字 (Chinese translation: 3,276 characters)

附錄 (Appendix)

英文原文 (English original):

Chinese Journal of Electronics, Vol. 15, No. 3, July 2006

A Speaker-Independent Continuous Speech Recognition System Using Biomimetic Pattern Recognition

WANG Shoujue and QIN Hong
(Laboratory of Artificial Neural Networks, Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China)

Abstract—In speaker-independent speech recognition, the disadvantage of the most widely used technology (HMMs, or Hidden Markov models) is that it requires not only many more training samples but also a long training time. This paper describes the use of Biomimetic pattern recognition (BPR) in recognizing some Mandarin continuous speech in a speaker-independent manner. A speech database was developed for the course of the study. The vocabulary of the database consists of 15 Chinese dish names; the length of each name is 4 Chinese words. Neural networks based on the Multi-Weights Neuron (MWN) model are used to train on and recognize the speech, and the number of MWNs with which the BPR networks achieve their best performance was determined experimentally. The system works in real time and, for speakers from different Chinese provinces speaking the same Mandarin phrases, achieves a recognition rate of 98.14% for the best option and 99.81% for the best two options. Experiments were also carried out to evaluate CDHMM, DTW and BPR for speech recognition; the results show that BPR outperforms CDHMM and DTW, especially in the case of limited training samples.

Key words—Biomimetic pattern recognition, Speech recognition, Hidden Markov models (HMMs), Dynamic time warping (DTW).

I. Introduction

The main goal of Automatic speech recognition (ASR) is to produce a system that will accurately recognize normal human speech from any speaker. Recognition systems may be classified as speaker-dependent or speaker-independent. Speaker dependence requires that the system be trained on the speech of the particular person who will operate it in order to achieve a high recognition rate. For applications on public facilities, on the other hand, the system must be capable of recognizing the speech of many different people, of different genders, ages and accents; speaker-independent recognition therefore has far more applications in the public domain. In speaker-independent speech recognition, the most widely used technology, HMMs, has the disadvantage of requiring not only many more training samples but also a long training time. Since BPR was first proposed by Wang Shoujue, it has been applied to object recognition, face identification, face recognition and other tasks, and has achieved better performance. With some modifications, this modeling technique can easily be applied to speech recognition as well. In this paper, a real-time Mandarin speech recognition system based on BPR is presented; BPR outperforms HMMs especially when only a limited number of training samples is available. It is a small-vocabulary, speaker-independent continuous speech recognition system, and the whole system runs on a PC under Windows 98/2000/XP.

II. Introduction of Biomimetic Pattern Recognition and Multi-Weights Neuron Networks

1. Biomimetic pattern recognition

Traditional pattern recognition aims at the optimal classification of different classes of samples in the feature space. BPR, by contrast, intends to find the optimal coverage of the samples of one and the same class. It is based on the Principle of Homology-Continuity: if two samples belong to the same class, the difference between them must change gradually, so a sequence of gradually changing samples must exist between the two. In BPR theory, the construction of the sample subspace of a certain class depends only on that class itself; concretely, it is based on analyzing the relationship between the training samples of the class and the methods used to cover objects of complex geometrical shape in a high-dimensional space.

2. Multi-weights neuron and multi-weights neuron networks

A multi-weights neuron can be described as follows:

Y = f[Φ(W1, W2, …, Wm; X) − Th]

where W1, W2, …, Wm are the m weight vectors, X is the input vector, Φ is the neuron's computation function, Th is the threshold, and f is the activation function.

According to dimension theory, in the feature space R^n the equation Φ(W1, W2, …, Wm; X) − Th = 0 constructs an (n−1)-dimensional hypersurface in the n-dimensional space, determined by the weight vectors W1, W2, …, Wm. It divides the n-dimensional space into two parts; if it is a closed hypersurface, it encloses a finite subspace.
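The concrete forms of Φ and f are not specified in this excerpt, so the following Python sketch is only an illustration of the neuron model defined above: it assumes m = 2 weight vectors, takes Φ to be the distance from X to the segment joining them, and uses a hard-threshold activation. The function names are ours, not the authors'.

```python
import numpy as np

def phi_segment(w1, w2, x):
    """One possible computation function Φ: the Euclidean distance from the
    input vector x to the line segment joining the weight vectors w1, w2."""
    d = w2 - w1
    denom = d @ d
    t = 0.0 if denom == 0 else np.clip((x - w1) @ d / denom, 0.0, 1.0)
    closest = w1 + t * d                      # nearest point on the segment
    return np.linalg.norm(x - closest)

def mw_neuron(weights, x, th):
    """Multi-weights neuron output Y = f[Φ(W1,...,Wm; X) - Th] with a hard
    threshold f: Y = 1 means x lies inside the closed hypersurface Φ = Th."""
    w1, w2 = weights                          # m = 2 in this illustration
    return 1 if phi_segment(w1, w2, x) - th <= 0 else 0

# toy usage in a 3-dimensional feature space
w1, w2 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 1.0])
print(mw_neuron((w1, w2), np.array([0.5, 0.5, 0.6]), th=0.3))  # -> 1 (covered)
print(mw_neuron((w1, w2), np.array([3.0, 0.0, 0.0]), th=0.3))  # -> 0 (outside)
```

With this choice of Φ the surface Φ = Th is a closed "capsule" around the segment, which matches the finite-subspace property mentioned above.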

According to the principle of BPR, the subspace of a certain class of samples is determined on the basis of that class itself. If a set of multi-weights neurons (a multi-weights neuron network) can be found that covers all the training samples, the subspace of the neural network represents the sample subspace. When an unknown sample falls inside this subspace, it is judged to be of the same class as the training samples. Moreover, if a new class of samples is added, it is not necessary to retrain any of the classes that have already been trained; the training of one class is entirely independent of the others.
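Under the same assumptions as the previous sketch, the following minimal sketch shows how coverage-based recognition could work at the network level: one subnetwork per class, and an unknown sample is assigned to whichever class (if any) covers it. The data layout and names are hypothetical.

```python
import numpy as np

def covered(subnetwork, x, th):
    """A class subnetwork is a list of (w1, w2) weight-vector pairs; the
    sample x belongs to the class's subspace if at least one neuron's
    Φ(x) does not exceed the threshold Th."""
    def dist_to_segment(w1, w2):
        d = w2 - w1
        t = 0.0 if d @ d == 0 else np.clip((x - w1) @ d / (d @ d), 0.0, 1.0)
        return np.linalg.norm(x - (w1 + t * d))
    return any(dist_to_segment(w1, w2) <= th for w1, w2 in subnetwork)

def classify(subnetworks, x, th):
    """Return the labels of all classes whose coverage contains x, or
    'unknown' if no trained class covers it. Rejection comes for free, and
    adding a new class never requires retraining the existing ones."""
    hits = [label for label, net in subnetworks.items() if covered(net, x, th)]
    return hits or "unknown"
```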

III. System Description

The speech recognition system is divided into two main blocks. The first is the signal pre-processing and speech feature extraction block; the other is the multi-weights neuron network, which performs the task of BPR.

1. Speech feature extraction

Mel-based cepstral coefficients (MFCC) are used as speech features. They are calculated as follows: A/D conversion; endpoint detection using short-time energy and zero-crossing rate (ZCR); pre-emphasis and Hamming windowing; fast Fourier transform; DCT transform. The number of features extracted for each frame is 16, and 32 frames are chosen for every utterance, so a 512-dimensional Mel-cepstral feature vector (16 × 32 numerical values) represents the pronunciation of every word.
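The paper lists only the processing steps; frame length, frame step, filterbank size and FFT size are not given, so the sketch below fills them in with common defaults (25 ms frames, 10 ms step, 26 mel filters, 512-point FFT) purely for illustration, and it omits the A/D conversion and the energy/ZCR endpoint-detection stages.

```python
import numpy as np
from scipy.fftpack import dct

def mfcc_features(signal, fs=16000, n_frames=32, n_coeffs=16,
                  frame_len=400, frame_step=160, n_filters=26):
    """Rough sketch of the described front end: pre-emphasis, Hamming
    windowing, FFT, mel filterbank, log, DCT; 16 coefficients are kept per
    frame and 32 frames per utterance, giving one 512-dimensional vector."""
    emphasized = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    frames = []
    for i in range(n_frames):                       # framing + Hamming window
        start = i * frame_step
        frame = emphasized[start:start + frame_len]
        frame = np.pad(frame, (0, frame_len - len(frame)))
        frames.append(frame * np.hamming(frame_len))
    frames = np.array(frames)
    nfft = 512
    power = np.abs(np.fft.rfft(frames, nfft)) ** 2 / nfft   # power spectrum
    # triangular mel filterbank
    def hz_to_mel(hz): return 2595 * np.log10(1 + hz / 700.0)
    def mel_to_hz(mel): return 700 * (10 ** (mel / 2595.0) - 1)
    mel_points = np.linspace(hz_to_mel(0), hz_to_mel(fs / 2), n_filters + 2)
    bins = np.floor((nfft + 1) * mel_to_hz(mel_points) / fs).astype(int)
    fbank = np.zeros((n_filters, nfft // 2 + 1))
    for m in range(1, n_filters + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    log_energies = np.log(power @ fbank.T + 1e-10)
    # DCT, keep 16 coefficients per frame -> 32 x 16 = 512 values
    coeffs = dct(log_energies, type=2, axis=1, norm='ortho')[:, :n_coeffs]
    return coeffs.reshape(-1)                       # 512-dimensional vector

# e.g. features = mfcc_features(np.random.randn(16000))  # 1 s of audio, 16 kHz
```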

2. Multi-weights neuron network architecture

As a new general-purpose theoretical model of pattern recognition, BPR is realized here by multi-weights neuron networks. To train a certain class of samples, a multi-weights neuron subnetwork is established. The subnetwork consists of one input layer, one multi-weights neuron hidden layer and one output layer. Such a subnetwork can be considered as a mapping from the input vector X to the outputs Y1, Y2, …, Ym of its m hidden multi-weights neurons, where Yi (i = 1, 2, …, m) is the output of the i-th multi-weights neuron.

IV. Training for MWN Networks

1. Basics of MWN network training

Training one multi-weights neuron subnetwork requires calculating the weights of the multi-weights neuron layer. The multi-weights neuron and the training algorithm used are those of Ref.[4]. In this algorithm, if the number of training samples of each class is N, a corresponding number of neurons can be used; in this paper N = 30. Φ is a function with a multi-vector input and a single scalar output.

2. Optimization method

As noted in IV.1, if there are many training samples the number of neurons becomes very large, which reduces the recognition speed. When several classes of samples are learned, knowledge of the class membership of the training samples is available, and this information is used in a supervised training algorithm to reduce the network scale.

When training class A, the training samples of the other 14 classes are treated together as class B, so there are 30 training samples in the one set and 420 training samples in the other. First, 3 samples are selected from A to construct a neuron. The neuron's computation function is then evaluated for every sample of A (i = 1, 2, …, 30) and of B (j = 1, 2, …, 420), and a threshold value is specified. Samples of A that satisfy the threshold condition are removed from set A, giving a new, smaller set. The procedure is repeated until the number of samples remaining in the set is small enough; training then ends, and the subnetwork of class A has a hidden layer made up of the neurons constructed in this way.
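The exact neuron construction, distance function and stopping rule of this selection procedure are not given in the excerpt; the sketch below is therefore only a guess at its overall shape, under these assumptions: each new neuron is built from 3 samples of A, its threshold is shrunk so that no sample of B falls inside, and samples of A that are already covered are dropped before the next neuron is built.

```python
import numpy as np

def train_class_subnetwork(A, B, th, min_left=3):
    """Guessed sketch of the supervised selection loop: A holds the 30
    feature vectors of the class being trained, B the 420 vectors of the
    other 14 classes.  Each neuron is the segment between the two most
    distant of 3 samples taken from A; its threshold is capped so that no
    sample of B is covered, and covered A-samples are then removed."""
    def dist(w1, w2, x):                 # distance from x to segment w1-w2
        d = w2 - w1
        t = 0.0 if d @ d == 0 else np.clip((x - w1) @ d / (d @ d), 0.0, 1.0)
        return np.linalg.norm(x - (w1 + t * d))

    remaining, neurons = list(A), []
    while len(remaining) > min_left:
        three = remaining[:3]            # take 3 samples of class A
        pairs = [(p, q) for i, p in enumerate(three) for q in three[i + 1:]]
        w1, w2 = max(pairs, key=lambda s: np.linalg.norm(s[0] - s[1]))
        th_eff = min(th, min(dist(w1, w2, x) for x in B))   # keep B outside
        neurons.append((w1, w2, th_eff))
        remaining = [x for x in remaining if dist(w1, w2, x) > th_eff]
    return neurons                       # hidden layer of the class subnetwork
```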

V. Experiment Results

A speech database consisting of 15 Chinese dish names was developed for the course of the study. The length of each name is 4 Chinese words; that is, each speech sample is a continuous string of 4 words, such as "yu xiang rou si", "gong bao ji ding", etc. It was organized into two sets, a training set and a test set. The speech signal is sampled at 16 kHz with 16-bit resolution.

Table 1. Experimental results at different threshold values

450 utterances constitute the training set used to train the multi-weights neuron networks. The 450 utterances belong to 10 speakers (5 males and 5 females) who come from different Chinese provinces; each speaker uttered each of the names 3 times. The test set contained a total of 539 utterances, involving another 4 speakers who uttered the 15 names arbitrarily.

The tests made to evaluate the recognition system were carried out with the threshold varied from 0.5 to 0.95 in steps of 0.05; the experimental results at the different values are shown in Table 1. Obviously, the networks achieved full recognition of the training set at every threshold value. From the experiments it was found that, at a threshold value of 0.5, the optimized networks achieved nearly the same recognition rate as the basic algorithm, while the MWNs used in the networks are much fewer than in the basic algorithm.
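A sketch of the threshold sweep described above follows; `recognize` is a hypothetical function returning class labels ranked by how well each class subnetwork covers the utterance, and the top-1/top-2 bookkeeping mirrors the "best option" and "best two options" rates quoted in the abstract.

```python
import numpy as np

def evaluate(recognize, test_set, thresholds=np.arange(0.50, 0.96, 0.05)):
    """Run the recognizer over the test set for each threshold value and
    report top-1 and top-2 recognition rates (cf. Table 1)."""
    for th in thresholds:
        top1 = top2 = 0
        for features, label in test_set:        # (512-dim vector, dish name)
            ranked = recognize(features, th)    # labels, best match first
            top1 += bool(ranked and ranked[0] == label)
            top2 += label in ranked[:2]
        n = len(test_set)
        print(f"th={th:.2f}  top-1={top1 / n:.2%}  top-2={top2 / n:.2%}")
```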

Table 2. Experimental results of the BPR basic algorithm

Experiments were also carried out to evaluate Continuous-density hidden Markov models (CDHMM), Dynamic time warping (DTW) and Biomimetic pattern recognition (BPR) for speech recognition, emphasizing the performance of each method with decreasing amounts of training samples as well as the training time required. The CDHMM system was implemented with 5 states per word; Baum-Welch re-estimation and the Viterbi algorithm are used for training and recognition. The reference templates for the DTW system are the training samples themselves. Both the CDHMM and the DTW technique are implemented using the programs in Ref.[11]. Table 2 gives a comparison of the experimental results of the BPR basic algorithm, Dynamic time warping (DTW) and the Hidden Markov model (HMM) method; the HMM system was based on continuous-density hidden Markov models (CDHMMs) and was implemented with 5 states per name.
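The Matlab programs of Ref.[11] are not reproduced here; as a reminder of what the DTW baseline computes, a minimal Python sketch of the standard DTW distance follows (the local distance and path constraints actually used by the authors are not stated).

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two feature sequences
    a (n x d) and b (m x d).  In a template-matching recognizer of the kind
    compared above, each test utterance would be assigned the label of the
    training template with the smallest DTW distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])   # local frame distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```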

VI. Conclusions and Acknowledgments

In this paper, a Mandarin continuous speech recognition system based on BPR is established. In addition, a training-sample selection method is used to reduce the network scale. As a new general-purpose theoretical model of pattern recognition, BPR can be used in speech recognition as well, and the experimental results show that it achieves higher performance than HMMs and DTW.

References

[1] Wang Shou-jue, "Biomimetic (Topological) pattern recognition - A new model of pattern recognition theory and its application", Acta Electronics Sinica (in Chinese), Vol.30, No.10, pp.1417-1420, 2002.
[2] Wang Shoujue, Chen Xu, "Biomimetic (Topological) pattern recognition - A new model of pattern recognition theory and its application", Proceedings of the International Joint Conference on Neural Networks, Vol.3, pp.2258-2262, July 20-24, 2003.
[3] Wang Shoujue, Zhao Xingtao, "Biomimetic pattern recognition theory and its applications", Chinese Journal of Electronics, Vol.13, No.3, pp.373-377, 2004.
[4] Xu Jian, Li Weijun et al., "Architecture research and hardware implementation on simplified neural computing system for face identification", Proceedings of the International Joint Conference on Neural Networks, Vol.2, pp.948-952, July 20-24, 2003.
[5] Wang Zhihai, Mo Huayi et al., "A method of biomimetic pattern recognition for face recognition", Proceedings of the International Joint Conference on Neural Networks, Vol.3, pp.2216-2221, July 20-24, 2003.
[6] Wang Shoujue, Wang Liyan et al., "A General Purpose Neuron Processor with Digital-Analog Processing", Chinese Journal of Electronics, Vol.3, No.4, pp.73-75, 1994.
[7] Wang Shoujue, Li Zhaozhou et al., "Discussion on the basic mathematical models of neurons in general purpose neuro-computer", Acta Electronics Sinica (in Chinese), Vol.29, No.5, pp.577-580, 2001.
[8] Wang Shoujue, Wang Bainan, "Analysis and theory of high-dimension space geometry of artificial neural networks", Acta Electronics Sinica (in Chinese), Vol.30, No.1, pp.1-4, 2001.
[9] Wang Shoujue, Xu Jian et al., "Multi-camera human-face personal identification system based on the biomimetic pattern recognition", Acta Electronics Sinica (in Chinese), Vol.31, No.1, pp.1-3, 2003.
[10] Ryszard Engelking, Dimension Theory, PWN-Polish Scientific Publishers, Warszawa, 1978.
[11] Qiang He, Ying He, Matlab Programming, Tsinghua University Press, 2002.

中文翻譯 (Chinese translation):

Chinese Journal of Electronics (電子學(xué)報), Vol. 15, No. 3, July 2006

A Speaker-Independent Continuous Speech Recognition System Based on Biomimetic Pattern Recognition

WANG Shoujue, QIN Hong
(Laboratory of Artificial Neural Networks, Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China)

Abstract: In speaker-independent speech recognition, Hidden Markov models (HMMs) are the most widely used technique, but their drawback is that they require not only more training samples but also a long training time. This paper describes the application of Biomimetic pattern recognition (BPR) to small-vocabulary, speaker-independent continuous Mandarin speech recognition. A speech database was built specifically for this study; its vocabulary consists of 15 Chinese dish names, each 4 Chinese words long. Neural networks (NNs) based on the Multi-Weights Neuron (MWN) model were used to train on and recognize the speech, and the number of MWNs at which the NN-based BPR achieves its best performance was determined experimentally. The system, based on BPR, runs in real time and, for speakers from different Chinese provinces speaking the same Mandarin phrases, reaches a recognition rate of 98.14% for the best option and 99.81% for the best two options. Experiments were also carried out to evaluate the CDHMM, DTW and BPR algorithms for speech recognition; the results show that BPR outperforms CDHMM and DTW, especially in the case of limited training samples.

Keywords: Biomimetic pattern recognition, speech recognition, Hidden Markov models (HMMs), dynamic time warping (DTW)

1. Introduction

The main goal of automatic speech recognition is to build a recognition system that can accurately recognize normal speech from any speaker. Recognition systems can be divided into speaker-dependent and speaker-independent systems. To achieve a high recognition rate, a speaker-dependent system must be trained separately on the person who will operate it. For use on public facilities, on the other hand, the system must be able to recognize the speech of many people of different genders, ages, accents and so on; speaker-independent recognition therefore has many more applications in the public domain. In speaker-independent speech recognition, Hidden Markov models (HMMs) are the most widely used technique, but they require not only more training samples but also a long training time. Since Wang Shoujue first proposed Biomimetic pattern recognition (BPR), it has been applied to object recognition, face identification, face recognition and so on, with better performance. With some modifications, this modeling technique can also easily be applied to speech recognition. In this paper we present a real-time Mandarin speech recognition system based on BPR; BPR outperforms HMMs, especially in the case of limited training samples. It is a small-vocabulary, speaker-independent continuous speech recognition system, and the whole system runs on a PC under Windows 98/2000/XP.

2. Brief introduction to Biomimetic pattern recognition (BPR) and multi-weights neuron networks (MWNN)

(1) Biomimetic pattern recognition (BPR)

Traditional pattern recognition aims at the optimal classification of different classes of samples in the feature space. BPR, however, seeks the precise coverage of the samples of each individual class. Its basis is the Principle of Homology-Continuity: any two samples of the same class differ only gradually from one another, so between the two samples there must exist infinitely many sample points whose features change gradually. In BPR theory, the construction of the sample subspace of each class depends only on the class itself; concretely, constructing the sample subspace of a given class requires analyzing the relationship between that class of training samples and the methods used to cover objects of complex geometrical shape in a high-dimensional space.

(2) Multi-weights neuron networks (MWNN)

A multi-weights neuron can be described by the following expression, where W1, …, Wm are the weight vectors, X is the input vector, Φ is the neuron's computation function, Th is the threshold and f is the activation function:

Y = f[Φ(W1, W2, …, Wm; X) − Th]

According to dimension theory, in the feature space R^n the function, through the weights, constructs an (n−1)-dimensional hypersurface in the n-dimensional space, which divides the n-dimensional space into two parts. If it is a closed hypersurface, it encloses a finite subspace.

According to the principle of BPR, the subspace of a given class of samples is built on the basis of that class itself. If we can find a set of multi-weights neurons (a multi-weights neuron network) that covers all the training samples, the subspace of the neural network represents the sample subspace. When an unknown sample falls inside this subspace, we can judge it to be of the same class as the training samples. Furthermore, when a new class of samples is added, none of the classes already trained needs to be retrained; the training of one class is completely independent of the other classes.

3. System description

The speech recognition system can be divided into two modules. The first is the signal pre-processing and speech feature extraction module; the other is the multi-weights neuron network that performs the BPR task.

(1) Speech feature extraction

Mel cepstral coefficients (MFCC) are used as speech features. They are computed as follows: A/D conversion; endpoint detection using short-time energy and zero-crossing rate; pre-emphasis and Hamming windowing; fast Fourier transform; DCT transform. 16 feature values are extracted for each frame and 32 frames are selected for each utterance, so one 512-dimensional Mel-cepstral feature vector (16 × 32 values) represents the pronunciation of one word.
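The energy and zero-crossing-rate thresholds used for endpoint detection are not given in the text; the toy sketch below shows one simple way such a detector can be written, with arbitrary threshold values, complementing the MFCC sketch given earlier (which omitted this stage).

```python
import numpy as np

def endpoint_detect(signal, frame_len=400, frame_step=160,
                    energy_thresh=0.01, zcr_thresh=0.1):
    """Toy endpoint detection by short-time energy and zero-crossing rate:
    a frame counts as speech if its energy or its ZCR exceeds a threshold,
    and the utterance is cut from the first to the last speech frame."""
    n_frames = max(1, (len(signal) - frame_len) // frame_step + 1)
    speech = []
    for i in range(n_frames):
        frame = signal[i * frame_step: i * frame_step + frame_len]
        energy = np.mean(frame ** 2)
        zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2
        speech.append(energy > energy_thresh or zcr > zcr_thresh)
    if not any(speech):
        return signal
    first = speech.index(True)
    last = len(speech) - 1 - speech[::-1].index(True)
    return signal[first * frame_step: last * frame_step + frame_len]
```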

(2) Multi-weights neuron network architecture

As a new general-purpose theoretical model of pattern recognition, BPR is implemented here by multi-weights neuron networks. To train a particular class of samples we must build a multi-weights neuron subnetwork. This subnetwork consists of one input layer, one multi-weights neuron hidden layer and one output layer. Such a subnetwork can be described as a mapping from the input vector X to the outputs Y1, …, Ym of its m hidden multi-weights neurons, i = 1, 2, …, m.

4. Training the multi-weights neuron networks

(1) Basics of multi-weights neuron network training

Training a multi-weights neuron subnetwork requires computing the weights of the multi-weights neuron layer. The multi-weights neuron and the training algorithm used are described in detail in Ref.[4]. In this algorithm, if the number of training samples of each class is N, a corresponding number of neurons can be used; in this paper N = 30. The computation function takes multiple vectors as input and produces a single scalar output.

(2) Optimization method

As stated in (1) above, if there are many training samples, the number of neurons will be so large that the recognition speed decreases. When several classes of samples are learned, knowledge of the class membership of the training samples is available, and we use this information in a supervised training algorithm to reduce the network scale.

When training the class-A samples, the remaining training samples of the other 14 classes are regarded as class B. There are then 30 training samples in the one set and 420 training samples in the other. First, 3 samples are selected from A to obtain a neuron. Its computation function is evaluated for each sample of A (i = 1, 2, …, 30) and of B (j = 1, 2, …, 420), and a threshold value is assigned. Samples of A that satisfy the threshold condition are removed from set A, giving a new set. This continues until the number of samples in the set is small enough; the training process then ends, and the class-A subnetwork has a hidden layer containing the neurons obtained in this way.

5. Experiment results

We built a speech database of 15 Chinese dish names specifically for this study. Each name is 4 Chinese words long; that is, each speech sample is a continuous string of 4 words, such as "yu xiang rou si" (魚香肉絲), "gong bao ji ding" (宮保雞丁) and so on. It was divided into two sets, a training set and a test set. The speech signal is sampled at 16 kHz with 16-bit resolution.

Table 1. Experimental results at different threshold values

450 utterances make up the training set used to train the multi-weights neuron networks. These 450 utterances come from 10 speakers (5 male and 5 female) from different Chinese provinces; each speaker repeated each name 3 times. The test set contains 539 utterances in total, from another 4 speakers who uttered the 15 names arbitrarily.

We evaluated the recognition system with the threshold varied from 0.5 to 0.95 in steps of 0.05; the experimental results at the different values are given in Table 1. Obviously, the network achieved full recognition of the training set at every threshold value. The experiments show that, at a threshold of 0.5, the recognition rate obtained is almost the same as that of the basic algorithm, while the number of multi-weights neurons used in the network is much smaller than in the basic algorithm.

Table 2. Experimental results of the BPR basic algorithm

We evaluated continuous-density hidden Markov models (CDHMM), dynamic time warping (DTW) and Biomimetic pattern recognition (BPR) for speech recognition, focusing on the performance of each method under reduced numbers of training samples and on the training time required. The CDHMM system uses 5 states per word; the Viterbi algorithm and Baum-Welch re-estimation are used for training and recognition. The reference templates of the DTW system are the training samples themselves. Both the CDHMM and the DTW technique are implemented with the programs of Ref.[11]. In Table 2 we compare the experimental results of the BPR basic algorithm, DTW and HMMs; the HMM system is based on continuous-density hidden Markov models (CDHMMs) and uses 5 states per name.

6. Conclusions and acknowledgments

In this paper we have established a Mandarin continuous speech recognition system based on BPR. In addition, we used a training-sample selection method to reduce the network scale. As a new general-purpose theoretical model of pattern recognition, BPR can also be applied to speech recognition, and the experimental results show that it achieves higher performance than HMMs and DTW.
