Foreign Literature Translation: Learning Control of Robot Manipulators


Learning Control of Robot Manipulators

ROBERTO HOROWITZ
Department of Mechanical Engineering
University of California at Berkeley
Berkeley, CA 94720, U.S.A.
Phone: (510) 642-4675
e-mail: horowitz@canaima.berkeley.edu

Abstract

Learning control encompasses a class of control algorithms for programmable machines, such as robots, which attain, through an iterative process, the motor dexterity that enables the machine to execute complex tasks. In this paper we discuss the use of function identification and adaptive control algorithms in learning controllers for robot manipulators. In particular, we discuss the similarities and differences between betterment learning schemes, repetitive controllers and adaptive learning schemes based on integral transforms, highlight the stability and convergence properties of adaptive learning algorithms based on integral transforms, and present experimental results that illustrate some of these properties.

Key words: Learning control, adaptive control, repetitive control, robotics

Introduction

The emulation of human learning has long been among the most sought after and elusive goals in robotics and artificial intelligence. Many aspects of human learning are still not well understood. However, much progress has been achieved in robotics motion control toward emulating how humans develop the necessary motor skills to execute complex motions. In this paper we will refer to learning controllers as the class of control systems that generate a control action in an iterative manner, using function adaptation algorithms, in order to execute a prescribed task. In a typical learning control application, the controlled machine repeatedly executes the prescribed task; the adaptation algorithm updates the control input based on the measured error signal, and the performance of the control system improves from trial to trial.

The term learning control in the robot motion control context was perhaps first used by Arimoto and his colleagues (cf. (Arimoto et al., 1984; Arimoto et al., 1988)). Arimoto defined learning control as the class of control algorithms that achieve asymptotic zero-error tracking by an iterative betterment process, which Arimoto called learning. In this process a single finite-horizon tracking task is repeatedly performed by the robot, starting always from the same initial condition. The control action applied during each trial is equal to the control action applied during the previous trial, plus a term proportional to the tracking error and its time derivative.

Parallel to the development of the learning and betterment control schemes, a significant amount of research has been directed toward the application of repetitive control algorithms for robot trajectory tracking and other motion control problems (cf. (Hara et al., 1988; Tomizuka et al., 1989; Tomizuka, 1992)). The basic objective in repetitive control is to cancel an unknown periodic disturbance or to track an unknown periodic reference trajectory. In its simplest form, the periodic signal generator used in many repetitive control algorithms closely resembles the betterment learning law (Arimoto et al., 1984; Arimoto et al., 1988). However, the betterment controller acts over a finite time horizon during each learning trial, whereas the repetitive controller acts continuously. Moreover, betterment learning schemes assume that the robot starts each learning trial from the same initial condition, which is not the case in repetitive control.
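The betterment update law just described — each trial's control input equals the previous trial's input plus a term proportional to the tracking error — can be sketched as a small iterative learning loop. The first-order plant, gains and trajectory below are hypothetical illustrations chosen for the sketch, not taken from the paper:

```python
import numpy as np

def run_trial(u, a=0.4, b=1.0, x0=0.0):
    """Simulate one trial of a hypothetical first-order plant x[k+1] = a*x[k] + b*u[k]."""
    x = np.zeros(len(u) + 1)
    x[0] = x0
    for k in range(len(u)):
        x[k + 1] = a * x[k] + b * u[k]
    return x

# A single finite-horizon task, repeated from the same initial condition,
# as in Arimoto's betterment process.
N = 50
yd = np.sin(np.linspace(0.0, np.pi, N + 1))   # desired trajectory, yd[0] = x0

u = np.zeros(N)        # control input, refined from trial to trial
gamma = 1.0            # learning gain; here 1 - gamma*b = 0, giving a contraction
errors = []
for trial in range(30):
    x = run_trial(u)
    e = yd - x                     # tracking error over the whole trial
    errors.append(np.abs(e).max())
    # Betterment update: next trial's input is this trial's input plus a
    # term proportional to the (time-shifted) tracking error.
    u = u + gamma * e[1:]

print(f"peak error, trial 1:  {errors[0]:.4f}")
print(f"peak error, trial 30: {errors[-1]:.2e}")
```

With these gains the trial-to-trial error operator is a contraction, so the peak tracking error shrinks monotonically; a term proportional to the error's time derivative, as in Arimoto's law, can be added in the same way.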

My interest in learning and repetitive control arose in 1987, as a consequence of studying the stability of a class of adaptive and repetitive controllers for robot manipulators with my former student and colleague Nader Sadegh. My colleague and friend Masayoshi Tomizuka had been working very actively in the area of repetitive control, and he introduced me to this problem. At that time there was much activity in the robotics and control communities toward finding adaptive control algorithms for robot manipulators whose asymptotic stability could be rigorously proven. This problem had recently been solved, by exploiting passivity, by Slotine and Li (1986), Sadegh and Horowitz (1987) and Wen and Bayard (1988). In contrast, most of the stability results for learning and repetitive control of that period were based on several unrealistic assumptions: either the robot dynamics were assumed to be linear, or they were assumed to be at least partially linearized by feedback. In addition, it was often assumed that the actual response of the robot was periodic or repetitive, even during learning transients, and that joint accelerations could be measured directly. Nader and I had dispensed with such assumptions in our adaptive control research, and we concluded that learning controllers could be synthesized and analyzed using a similar approach. We felt that the main advantage of learning controllers lies in their simplicity and directness.

Unfortunately, as discussed in (Hara et al., 1988; Tomizuka et al., 1989; Sadegh et al., 1990), the asymptotic convergence of the basic repetitive control system can only be guaranteed under restrictive conditions on the plant dynamics or restrictions on the nature of the disturbance signals. These conditions are generally not satisfied in robot control applications. Most often, modifications of the update schemes are introduced, such as the so-called Q-filter modification (Hara et al., 1988; Tomizuka et al., 1989), which enhance the robustness of the repetitive controller at the expense of limiting its tracking performance. Similarly, the convergence of betterment learning schemes has only been proven under assumptions on the initial conditions from which the robot starts each learning trial. Another shortcoming of the betterment learning and repetitive control schemes discussed so far is that these algorithms were formulated for the iterative learning of a single task. None of the research in these areas provides a mechanism for extending the learning process so that the machine can learn a large class of tasks simultaneously, or a systematic mechanism for using the dexterity attained in learning a particular task to execute a slightly different task of a similar nature. Bill Messner and I began working on these problems after Nader left Berkeley to become a faculty member at the Georgia Institute of Technology.

Our research has revealed that the robustness limitations of the basic betterment and repetitive control laws, and the inability of these algorithms to learn multiple tasks, in part stem from the fact that all these schemes use point-to-point function adaptation algorithms. These algorithms only update the value of the control input at the current instant of time and do not provide a mechanism for updating the control input at neighboring points. However, in most applications the control function that must be identified is at least piecewise continuous. As a consequence, the value of the control input at a given point will be almost the same as its value at neighboring points, a fact that point-to-point function update laws cannot exploit. This issue also arises more broadly in learning problems involving content-addressable memories. Consider, as an example, the case of multiple-task learning control algorithms for robots. In this application several functions of many variables must be identified, namely the parameters of the robot inverse dynamics. A training trajectory of finite duration used in betterment learning cannot visit every point (or vector) in the domain of these functions. Consequently, with a point-to-point update law, the exact identification of the control input function for one task will not provide any information for generating the control input of another task, unless the corresponding trajectories intersect or some form of interpolation is used. Similarly, in content-addressable memories it is desirable for the learning algorithm to have an "interpolating" property, so that an input vector which is similar to previously experienced input vectors, but new to the system, produces an output vector similar to the previously learned output vectors.

One solution to the interpolation problem in robot learning control was presented in (Miller, 1987) with the use of the so-called "cerebellar model arithmetic computer" (CMAC). In this algorithm an input vector is mapped to several locations in an intermediate memory, and the output vector is computed by summing over the values stored in all the locations to which the input vector was mapped. The mapping of input vectors has the property that inputs near to each other map to overlapping regions in the intermediate memory, so that interpolation occurs automatically.

In (Messner et al., 1991) we introduced a class of function identification algorithms for learning control systems based on integral transforms, in order to address the robustness and interpolation problems of the point-to-point repetitive and betterment learning controllers mentioned above. In these adaptive learning algorithms, unknown functions are defined in terms of integral equations of the first kind, which consist of known kernels and unknown influence functions. The learning process involves the indirect estimation of the unknown functions through the estimation of their influence functions: the entire influence function is adjusted at each instant, in proportion to the value of the kernel at each point. The use of the kernel in the influence function and function estimate updates gives these algorithms desirable interpolation and smoothing properties, and overcomes the limitations of the point-to-point betterment and repetitive control schemes regarding the estimation of functions of several variables. Moreover, the use of integral transforms endows these learning algorithms with strong robustness and stability properties.

In the remainder of the paper we discuss the use of learning control in the robot tracking control context, and stress the similarities and differences between betterment learning schemes, repetitive control schemes and learning schemes based on integral transforms. Conclusions and reflections on some of the outstanding problems in this area are included in the last section.

Translation

Learning Control of Robot Manipulators
Roberto Horowitz

Department of Mechanical Engineering
University of California at Berkeley
Berkeley, CA 94720, U.S.A.
Phone: (510) 642-4675
e-mail: horowitz@canaima.berkeley.edu

Abstract

Learning control covers a class of control algorithms for programmable machines such as robots, which acquire, through an iterative process, the motor dexterity that enables the machine to execute complex tasks. In this paper we discuss the use of function identification and adaptive control algorithms in learning controllers for robot manipulators. In particular, we discuss the similarities and differences between betterment learning schemes, repetitive controllers and adaptive learning schemes based on integral transforms, highlight the stability and convergence properties of adaptive learning algorithms based on integral transforms, and present experimental results that demonstrate some of these properties.

Key words: learning control, adaptive control, repetitive control, robotics

Introduction

The emulation of human learning has long been among the most elusive and sought-after goals in robotics and artificial intelligence. Although many aspects of human learning are still not well understood, robot motion control has made great progress toward emulating how humans acquire the motor skills needed to execute complex motions. In this paper we refer to learning controllers as the class of control systems that generate a control action in an iterative manner, using function adaptation algorithms, in order to execute a prescribed task. In a typical learning control application, the controlled machine repeatedly executes the prescribed task; the adaptation algorithm updates the control input after the error signal has been measured, and thereby continually improves the performance of the control system.

The term learning control in the context of robot motion control was perhaps first used by Arimoto and his colleagues (cf. (Arimoto et al., 1984; Arimoto et al., 1988)). Arimoto defined learning control as the class of control algorithms that achieve asymptotic zero-error tracking through an iterative betterment process, which he also named learning. In this process the robot repeatedly performs a single finite-horizon tracking task, always starting from the same initial condition. The control action applied during each trial equals the control action applied during the previous trial, plus a term proportional to the tracking error and its time derivative.

In parallel with the development of learning and betterment control schemes, a large amount of research has been directed toward the application of repetitive control algorithms to robot trajectory tracking and other motion control problems (cf. (Hara et al., 1988; Tomizuka et al., 1989; Tomizuka, 1992)). The basic objective of repetitive control is to cancel an unknown periodic disturbance or to track an unknown periodic reference trajectory. In its simplest form, the periodic signal generator of many repetitive control algorithms closely resembles the betterment learning law (Arimoto et al., 1984; Arimoto et al., 1988). However, the betterment controller acts over a finite time horizon during each learning trial, whereas the repetitive controller acts continuously. Moreover, betterment learning methods assume that the robot starts each learning trial from the same initial condition, which is not the case for repetitive control.

My interest in learning and repetitive control began in 1987, when I studied the stability of a class of adaptive and repetitive controllers for robot manipulators together with my former student and colleague Nader Sadegh. My colleague and friend Masayoshi Tomizuka had been working very actively in the area of repetitive control, and it was he who introduced me to this problem. At that time many people in the robotics and control communities were actively searching for adaptive control algorithms for robots whose asymptotic stability could be rigorously proven. This problem had recently been solved, through the use of passivity, by Slotine and Li (1986), Sadegh and Horowitz (1987) and Wen and Bayard (1988). In contrast, most of the stability results for learning and repetitive control of that period were built on several unrealistic assumptions: either the robot dynamics were assumed to be linear, or they were assumed to be at least partially linearized by feedback. In addition, it was assumed that the actual response of the robot was periodic or repetitive even during learning transients, and that joint accelerations could be measured directly. Nader and I had dispensed with such assumptions in our adaptive control research, and we concluded that learning controllers could be synthesized and analyzed in a similar way. We felt that the main advantage of learning controllers lies in their simplicity and directness.
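The repetitive control update mentioned above — feeding back the error measured one period earlier, with a Q filter added for robustness — can be sketched for a deliberately trivial plant. The plant, gains and filter coefficients below are illustrative assumptions, not taken from the cited works:

```python
import numpy as np

def repetitive_cancel(d, period, cycles, kr=0.8, q=(0.25, 0.5, 0.25)):
    """Cancel an unknown periodic disturbance d acting on a hypothetical
    plant y[k] = u[k] + d[k], updating the stored control input once per
    period from the error observed one period earlier."""
    u = np.zeros(period)
    peaks = []
    for _ in range(cycles):
        y = u + d                        # plant output over one full period
        peaks.append(np.abs(y).max())
        # Zero-phase Q filter over the stored input (circular, since the
        # buffer represents one period): trades accuracy for robustness.
        u_f = q[0] * np.roll(u, 1) + q[1] * u + q[2] * np.roll(u, -1)
        u = u_f - kr * y                 # repetitive update: u <- Q(u) + kr*e, with e = -y
    return np.array(peaks)

period = 40
t = np.arange(period)
d = 0.7 * np.sin(2.0 * np.pi * t / period)   # unknown periodic disturbance
peaks = repetitive_cancel(d, period, cycles=25)
print(f"peak |y|: {peaks[0]:.3f} initially, {peaks[-1]:.4f} after 25 periods")
```

Without the Q filter the residual error would converge to zero for this idealized plant; the filter leaves a small residual at higher harmonics in exchange for robustness to modeling error, which is exactly the trade-off discussed above.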

Unfortunately, as discussed in (Hara et al., 1988; Tomizuka et al., 1989; Sadegh et al., 1990), the asymptotic convergence of the basic repetitive control system can only be guaranteed under severely restrictive conditions on the plant dynamics or on the nature of the disturbance signals. These conditions are generally not satisfied in robot control applications. Most often, the update schemes are modified accordingly, for example with the so-called Q-filter modification (Hara et al., 1988; Tomizuka et al., 1989), which strengthens the robustness of the repetitive controller at the cost of limiting its tracking performance. Similarly, the convergence of betterment learning schemes has only been proven under assumptions on the initial conditions from which the robot starts each learning trial. Another drawback of the betterment learning and repetitive control schemes discussed so far is that these algorithms were proposed for the iterative learning of a single task. No research in these areas provides a mechanism for extending the learning process so that the machine can learn a large number of tasks simultaneously, or a systematic mechanism for using the dexterity acquired in learning a particular task to execute a slightly different task of a similar nature. Bill Messner and I began studying these problems after Nader left Berkeley to become a faculty member at the Georgia Institute of Technology.

Our research showed that the robustness limitations of the basic betterment and repetitive control laws, and the inability of these algorithms to learn multiple tasks, stem in part from the fact that all of these schemes use point-to-point function adaptation algorithms. Such algorithms update only the value of the control input at the current instant of time and provide no mechanism for updating the control input at neighboring points. However, the control function that must be identified in most applications is usually at least piecewise continuous. Hence the value of the control input at a given point is almost the same as its value at neighboring points, a fact that point-to-point function update laws cannot exploit. This issue affects learning problems and content-addressable memory problems more broadly. Consider, as an example, the case of multiple-task learning control algorithms for robots. In this application several functions of many variables must be identified, namely the parameters of the robot inverse dynamics. A training trajectory of finite duration used for betterment learning cannot visit every point (or vector) in the domain of these functions. Consequently, with point-to-point update laws, the exact identification of the control input function during the execution of one task provides no information for generating the control input of another task, unless the corresponding trajectories intersect or some kind of interpolation is used. Similarly, in content-addressable memories it is desirable for the learning algorithm to have an "interpolating" property, so that an input vector which is similar to previously experienced input vectors, yet new to the system, produces an output vector similar to the previously learned output vectors.

One solution to the interpolation problem in robot learning control was presented in (Miller, 1987), using the so-called "cerebellar model arithmetic computer" (CMAC). In this algorithm an input vector is mapped to several locations in an intermediate memory, and the output vector is computed by summing the values stored in all the locations to which the input vector was mapped. The mapping of input vectors has the property that inputs close to each other map to overlapping regions of the intermediate memory, so that interpolation occurs automatically.

In (Messner et al., 1991) we introduced a class of function identification algorithms for learning control systems based on integral transforms, in order to address the robustness and interpolation problems of the point-to-point repetitive and betterment learning controllers mentioned above. In these adaptive learning algorithms, unknown functions are defined in terms of integral equations of the first kind, consisting of known kernels and unknown influence functions. The learning process involves the indirect estimation of the unknown functions through the estimation of their influence functions: the entire influence function is adjusted in proportion to the value of the kernel at each point. The use of the kernel in updating the influence functions and function estimates therefore gives these algorithms desirable interpolation and smoothing properties, and overcomes the limitations of the earlier point-to-point betterment and repetitive control schemes concerning the estimation of functions of several variables. Moreover, the use of integral transforms endows these learning algorithms with strong robustness and stability properties.
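A discretized sketch of this integral-transform idea: the unknown function is represented as an integral of a known kernel against an unknown influence function, and the entire (discretized) influence function is updated in proportion to the kernel centered at the currently visited point. The kernel shape, gain and target function below are illustrative assumptions, not the algorithm of (Messner et al., 1991):

```python
import numpy as np

def gaussian_kernel(x, xi, width=0.15):
    """Known kernel K(x, xi) of the integral representation (assumed shape)."""
    return np.exp(-0.5 * ((x - xi) / width) ** 2)

f_true = lambda x: np.sin(2.0 * np.pi * x)   # unknown function to identify

# Discretized influence function c_hat on a grid of kernel centers.
xi = np.linspace(0.0, 1.0, 101)
dxi = xi[1] - xi[0]
c_hat = np.zeros_like(xi)

def f_hat(x):
    """Function estimate: discretized integral of kernel times influence function."""
    return np.sum(gaussian_kernel(x, xi) * c_hat) * dxi

gamma = 0.5                              # adaptation gain (illustrative)
rng = np.random.default_rng(0)
for _ in range(20000):
    x_t = rng.uniform(0.0, 1.0)          # point visited at time t
    e_t = f_true(x_t) - f_hat(x_t)       # estimation error at that point
    # The WHOLE influence function is adjusted, in proportion to the kernel
    # centered at the visited point; neighboring points are updated too,
    # which is what yields the interpolation and smoothing properties.
    c_hat += gamma * e_t * gaussian_kernel(x_t, xi)

for x in (0.25, 0.5, 0.75):
    print(f"f({x}) = {f_true(x):+.3f}, estimate = {f_hat(x):+.3f}")
```

Because each update also moves the estimate at neighboring points, the learned function is smooth and can be evaluated at points that were never visited verbatim during training, in contrast to a point-to-point update law.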

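The CMAC scheme of (Miller, 1987) described earlier can likewise be sketched in one dimension: each input activates several overlapping memory cells, the output is the sum of their weights, and training corrections are distributed over the active cells, so nearby inputs generalize automatically. The cell counts, gains and target function below are illustrative assumptions, not Miller's implementation:

```python
import numpy as np

class TinyCMAC:
    """Minimal 1-D CMAC sketch: each input activates one cell per offset
    layer, the output is the sum of the active cells' weights, and nearby
    inputs share most of their active cells."""

    def __init__(self, n_cells=64, n_assoc=8, x_min=0.0, x_max=1.0):
        self.n_assoc = n_assoc
        self.res = (x_max - x_min) / n_cells         # quantization step
        self.x_min = x_min
        self.w = np.zeros((n_assoc, n_cells // n_assoc + 2))

    def active(self, x):
        q = int((x - self.x_min) / self.res)         # quantized input
        # One active cell per offset layer; neighboring inputs overlap in
        # most layers, which produces the automatic interpolation.
        return [(layer, (q + layer) // self.n_assoc) for layer in range(self.n_assoc)]

    def predict(self, x):
        return sum(self.w[l, c] for l, c in self.active(x))

    def train(self, x, target, beta=1.0):
        err = target - self.predict(x)
        for l, c in self.active(x):                  # distribute the correction
            self.w[l, c] += beta * err / self.n_assoc

target_fn = lambda x: np.sin(2.0 * np.pi * x)
net = TinyCMAC()
rng = np.random.default_rng(1)
for _ in range(5000):
    x = rng.uniform(0.0, 1.0)
    net.train(x, target_fn(x))

# A query never trained on verbatim still gets a sensible answer, because it
# shares most of its active cells with nearby trained inputs.
print(f"estimate at 0.30: {net.predict(0.30):+.3f} (true {target_fn(0.30):+.3f})")
```

The overlapping-receptive-field structure is what distinguishes this from a plain lookup table: training at one input also adjusts the prediction at its neighbors.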