

Foreign-Language Original and Translation

Original Text

Neural Network Introduction

1. Objectives

As you read these words you are using a complex biological neural network. You have a highly interconnected set of some 10¹¹ neurons to facilitate your reading, breathing, motion and thinking. Each of your biological neurons, a rich assembly of tissue and chemistry, has the complexity, if not the speed, of a microprocessor. Some of your neural structure was with you at birth. Other parts have been established by experience.

Scientists have only just begun to understand how biological neural networks operate. It is generally understood that all biological neural functions, including memory, are stored in the neurons and in the connections between them. Learning is viewed as the establishment of new connections between neurons or the modification of existing connections.

This leads to the following question: although we have only a rudimentary understanding of biological neural networks, is it possible to construct a small set of simple artificial "neurons" and perhaps train them to serve a useful function? The answer is "yes." This book, then, is about artificial neural networks.

The neurons that we consider here are not biological. They are extremely simple abstractions of biological neurons, realized as elements in a program or perhaps as circuits made of silicon.
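To make the "elements in a program" idea concrete, here is a minimal sketch of such an abstraction; the function names and numeric values are our own illustrative choices, not notation from this text:

```python
# Minimal artificial neuron: a weighted sum of inputs plus a bias,
# passed through a transfer (activation) function. Illustrative only.

def hardlim(n):
    """Hard-limit transfer function: output 1 if net input >= 0, else 0."""
    return 1 if n >= 0 else 0

def neuron(inputs, weights, bias, transfer=hardlim):
    """Compute transfer(w . p + b) for one neuron."""
    net = sum(w * p for w, p in zip(weights, inputs)) + bias
    return transfer(net)

# Net input: 1.0*0.4 + 0.5*0.8 - 0.5 = 0.3, so the neuron fires.
print(neuron([1.0, 0.5], [0.4, 0.8], -0.5))  # -> 1
```

Training, in this picture, amounts to nothing more than adjusting the weights and bias until the outputs become useful.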

Networks of these artificial neurons do not have a fraction of the power of the human brain, but they can be trained to perform useful functions. This book is about such neurons, the networks that contain them and their training.

2. History

The history of artificial neural networks is filled with colorful, creative individuals from many different fields, many of whom struggled for decades to develop concepts that we now take for granted. This history has been documented by various authors. One particularly interesting book is Neurocomputing: Foundations of Research by John Anderson and Edward Rosenfeld. They have collected and edited a set of some 43 papers of special historical interest. Each paper is preceded by an introduction that places it in historical perspective.

Histories of some of the main neural network contributors are included at the beginning of various chapters throughout this text and will not be repeated here. However, it seems appropriate to give a brief overview, a sample of the major developments.

At least two ingredients are necessary for the advancement of a technology: concept and implementation. First, one must have a concept, a way of thinking about a topic, some view of it that gives a clarity not there before. This may involve a simple idea, or it may be more specific and include a mathematical description. To illustrate this point, consider the history of the heart. It was thought to be, at various times, the center of the soul or a source of heat. In the 17th century medical practitioners finally began to view the heart as a pump, and scientists then set out to design experiments to study its pumping action. These experiments eventually led to the theory of the circulatory system; without the pump concept, a deep understanding of the heart was beyond reach.

Concepts and their accompanying mathematics are not sufficient for a technology to mature unless there is some way to implement the system. For instance, the mathematics necessary for the reconstruction of images from computer-aided tomography (CAT) scans was known many years before the availability of high-speed computers and efficient algorithms finally made it practical to implement a useful CAT system.

The history of neural networks has progressed through both conceptual innovations and implementation developments. These advancements, however, seem to have occurred in fits and starts rather than by steady evolution.

Some of the background work for the field of neural networks occurred in the late 19th and early 20th centuries. This consisted primarily of interdisciplinary work in physics, psychology and neurophysiology by such scientists as Hermann von Helmholtz, Ernst Mach and Ivan Pavlov. This early work emphasized general theories of learning, vision, conditioning, etc., and did not include specific mathematical models of neuron operation.

The modern view of neural networks began in the 1940s with the work of Warren McCulloch and Walter Pitts [McPi43], who showed that networks of artificial neurons could, in principle, compute any arithmetic or logical function. Their work is often acknowledged as the origin of the neural network field.
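To give a flavor of that result, a single McCulloch-Pitts-style threshold unit already computes elementary logical functions; the weights and thresholds below are illustrative choices of ours, not values from [McPi43]:

```python
# Sketch of a McCulloch-Pitts-style threshold unit: binary inputs,
# fixed weights, output 1 iff the weighted sum reaches the threshold.

def threshold_unit(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def AND(x1, x2):
    return threshold_unit([x1, x2], [1, 1], 2)  # both inputs must be on

def OR(x1, x2):
    return threshold_unit([x1, x2], [1, 1], 1)  # one active input suffices

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, AND(x1, x2), OR(x1, x2))
```

By wiring such units together, more complicated logical and arithmetic functions can be built up in the same way as with ordinary logic gates.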

McCulloch and Pitts were followed by Donald Hebb [Hebb49], who proposed that classical conditioning (as discovered by Pavlov) is present because of the properties of individual neurons. He proposed a mechanism for learning in biological neurons.

The first practical application of artificial neural networks came in the late 1950s, with the invention of the perceptron network and associated learning rule by Frank Rosenblatt [Rose58]. Rosenblatt and his colleagues built a perceptron network and demonstrated its ability to perform pattern recognition. This early success generated a great deal of interest in neural network research. Unfortunately, it was later shown that the basic perceptron network could solve only a limited class of problems.

At about the same time, Bernard Widrow and Ted Hoff [WiHo60] introduced a new learning algorithm and used it to train adaptive linear neural networks, which were similar in structure and capability to Rosenblatt's perceptron. The Widrow-Hoff learning rule is still in use today. (See Chapter 10 for more on Widrow-Hoff learning.)

Unfortunately, both Rosenblatt's and Widrow's networks suffered from the same inherent limitations, which were widely publicized in a book by Marvin Minsky and Seymour Papert [MiPa69]. Rosenblatt and Widrow were aware of these limitations and proposed new networks that would overcome them. However, they were not able to successfully modify their

learning algorithms to train the more complex networks.

Many people, influenced by Minsky and Papert, believed that further research on neural networks was a dead end. This, combined with the fact that there were no powerful digital computers on which to experiment, caused many researchers to leave the field. For a decade neural network research was largely suspended.

Some important work, however, did continue during the 1970s. In 1972 Teuvo Kohonen [Koho72] and James Anderson [Ande72] independently and separately developed new neural networks that could act as memories. Stephen Grossberg [Gros76] was also very active during this period in the investigation of self-organizing networks.

Interest in neural networks had faltered during the late 1960s because of the lack of new ideas and powerful computers with which to experiment. During the 1980s both of these impediments were overcome, and research in neural networks increased dramatically. New personal computers and workstations, which rapidly grew in capability, became widely available. In addition, important new concepts were introduced.

Two new concepts were most responsible for the rebirth of neural networks. The first was the use of statistical mechanics to explain the operation of a certain class of recurrent network, which could be used as an associative memory. This was described in a seminal paper by physicist John Hopfield [Hopf82].

The second key development of the 1980s was the backpropagation algorithm for training multilayer perceptron networks, which was discovered independently by several different researchers. The most influential publication of the backpropagation algorithm was by David Rumelhart and James McClelland [RuMc86]. This algorithm was the answer to the criticisms Minsky and Papert had made in the 1960s. (See Chapters 11 and 12 for a development of the backpropagation algorithm.)

These new developments reinvigorated the field of neural networks. In the last ten years, thousands of papers have been written

, and neural networks have found many applications. The field is buzzing with new theoretical and practical work. As noted below, it is not clear where all of this will lead us.

The brief historical account given above is not intended to identify all of the major contributors, but is simply to give the reader some feel for how knowledge in the neural network field has progressed. As one might note, the progress has not always been "slow but sure." There have been periods of dramatic progress and periods when relatively little has been accomplished.

Many of the advances in neural networks have had to do with new concepts, such as innovative architectures and training rules. Just as important has been the availability of powerful new computers on which to test these new concepts.

Well, so much for the history of neural networks to this date. The real question is, "What will happen in the next ten to twenty years?" Will neural networks take a permanent place as a mathematical/engineering tool, or will they fade away as have so many promising technologies? At present, the answer seems to be that neural networks will not only have their day but will have a permanent place, not as a solution to every problem, but as a tool to be used in appropriate situations. In addition, remember that our present understanding of the brain is still quite superficial; the most important advances in neural networks may well come as we learn more about how the brain works.

Although it is difficult to predict the future success of neural networks, the large number and wide variety of applications of this new technology are very encouraging. The next section describes some of these applications.

3. Applications

A recent newspaper article described the

use of neural networks in literature research by Aston University. It stated that "the network can be taught to recognize individual writing styles, and the researchers used it to compare works attributed to Shakespeare and his contemporaries." A popular science television program recently documented the use of neural networks by an Italian research institute to test the purity of olive oil. These examples are indicative of the broad range of applications that can be found for neural networks. The applications are expanding because neural networks are good at solving problems, not only in engineering, science and mathematics, but also in medicine, business, finance and literature. Their application to so many different fields makes them very attractive. In addition, faster computers and faster algorithms have made it possible to use neural networks to solve complex industrial problems that formerly required too much computation.

The following note and table of neural network applications are reproduced here from the Neural Network Toolbox for MATLAB with the permission of The MathWorks, Inc.

The 1988 DARPA Neural Network Study [DARP88] lists various neural network applications, beginning with the adaptive channel equalizer in about 1984. This device, an outstanding commercial success, is a single-neuron network used in long-distance telephone systems to stabilize voice signals. The DARPA report goes on to list other commercial applications, including a small word recognizer, a process monitor, a sonar classifier and a risk analysis system.

Neural networks have been applied in many fields since the DARPA report was written. A list of some applications mentioned in the literature follows.

Aerospace
High-performance aircraft autopilots, flight path simulations, aircraft control systems, autopilot enhancements, aircraft component simulations, aircraft component fault detectors

Automotive
Automobile automatic guidance systems, warranty activity analyzers

Banking
Check and other document readers, credit application evaluators

Defense
Weapon steering, target tracking, object discrimination, facial recognition, new kinds of sensors, sonar, radar and image signal processing including data compression, feature extraction and noise suppression, signal/image identification

Electronics
Code sequence prediction, integrated circuit chip layout, process control, chip failure analysis, machine vision, voice synthesis, nonlinear modeling

Entertainment
Animation, special effects, market forecasting

Financial
Real estate appraisal, loan advising, mortgage screening, corporate bond rating, credit line use analysis, portfolio trading programs, corporate financial analysis, currency price prediction

Insurance
Policy application evaluation, product optimization

Manufacturing
Manufacturing process control, product design and analysis, process and machine diagnosis, real-time particle identification, visual quality inspection systems, beer testing, welding quality analysis, paper quality prediction, computer chip quality analysis, analysis of grinding operations, chemical product design analysis, machine maintenance analysis, project bidding, planning and management, dynamic modeling of chemical process systems

Medical
Breast cancer cell analysis, EEG and ECG analysis, prosthesis design, optimization of transplant times, hospital expense reduction, hospital quality improvement, emergency room test advisement

Oil and Gas
Exploration

Robotics
Trajectory control, forklift robots, manipulator controllers, vision systems

Speech
Speech recognition, speech compression, vowel classification, text-to-speech synthesis

Securities
Market analysis, automatic bond rating, stock trading advisory systems

Telecommunications
Image and data compression, automated information services, real-time translation of spoken language, customer payment processing systems

Transportation
Truck brake diagnosis systems, vehicle scheduling, routing systems

Conclusion

The number of neural network applications, the money that has been invested in neural network software and hardware,

and the depth and breadth of interest in these devices have been growing rapidly.

4. Biological Inspiration

The artificial neural networks discussed in this text are only remotely related to their biological counterparts. In this section we will briefly describe those characteristics of brain function that have inspired the development of artificial neural networks.

The brain consists of a large number (approximately 10¹¹) of highly connected elements (approximately 10⁴ connections per element) called neurons. For our purposes these neurons have three principal components: the dendrites, the cell body and the axon. The dendrites are tree-like receptive networks of nerve fibers that carry electrical signals into the cell body. The cell body effectively sums and thresholds these incoming signals. The axon is a single long fiber that carries the signal from the cell body out to other neurons.

Figure 6.1 Schematic Drawing of Biological Neurons

Some of the neural structure is defined at birth. Other parts are developed through learning, as new connections are made and others waste away. This development is most noticeable in the early stages of life. For example, it has been shown that if a young cat is denied use of one eye during a critical window of time, it will never develop normal vision in that eye.

Neural structures continue to change throughout life. These later changes tend to consist mainly of strengthening or weakening of synaptic junctions. For instance, it is believed that new memories are formed by modification of these synaptic strengths. Thus, the process of learning a new friend's face consists of altering various synapses.

Artificial neural networks do not approach the complexity of the brain. There are, however, two key similarities between biological and artificial neural networks. First, the building blocks of both networks are simple computational devices (although artificial neurons are much simpler than biological neurons) that are highly interconnected. Second, the connections between neurons determine the function of the network. The primary objective of this book will be to determine the appropriate connections to solve particular problems.

It is worth noting that even though biological neurons are very slow when compared to electrical circuits, the brain is able to perform many tasks much faster than any conventional computer. This is in part because of the massively parallel structure of biological neural networks; all of the neurons are operating at the same time. Artificial neural networks share this parallel structure. Even though most artificial neural networks are currently implemented on conventional digital computers, their parallel structure makes them well suited to implementation in parallel hardware.

In the following chapter we will introduce our basic artificial neuron and will explain how we can combine such neurons to form networks. This will provide a background for Chapter 3, where we take our first look at neural networks in action.
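As a preview of how simple neurons combine into networks, here is an illustrative sketch (our own hand-picked weights, not an example from this text) in which three threshold neurons form a two-layer network computing XOR, a function that no single such neuron can compute:

```python
# Combining simple threshold neurons into a small two-layer network.
# A single threshold neuron cannot compute XOR, but three of them
# arranged in two layers can. All weights are hand-picked.

def unit(inputs, weights, bias):
    """Threshold neuron: fires when the weighted sum plus bias is >= 0."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias >= 0 else 0

def xor(x1, x2):
    h1 = unit([x1, x2], [1, 1], -0.5)     # fires if x1 OR x2
    h2 = unit([x1, x2], [1, 1], -1.5)     # fires if x1 AND x2
    return unit([h1, h2], [1, -1], -0.5)  # OR but not AND

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", xor(x1, x2))
```

The hidden neurons detect "at least one input on" and "both inputs on", and the output neuron fires only in the first case, which is exactly XOR.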

