
AI Magazine, Winter 2015

Research Priorities for Robust and Beneficial Artificial Intelligence
Stuart Russell, Daniel Dewey, Max Tegmark

Copyright © 2015, Association for the Advancement of Artificial Intelligence. All rights reserved. ISSN 0738-4602

Abstract: Success in the quest for artificial intelligence has the potential to bring unprecedented benefits to humanity, and it is therefore worthwhile to investigate how to maximize these benefits while avoiding potential pitfalls. This article gives numerous examples (which should by no means be construed as an exhaustive list) of such worthwhile research aimed at ensuring that AI remains robust and beneficial.

Artificial intelligence (AI) research has explored a variety of problems and approaches since its inception, but for the last 20 years or so has been focused on the problems surrounding the construction of intelligent agents: systems that perceive and act in some environment. In this context, the criterion for intelligence is related to statistical and economic notions of rationality, colloquially, the ability to make good decisions, plans, or inferences. The adoption of probabilistic representations and statistical learning methods has led to a large degree of integration and cross-fertilization between AI, machine learning, statistics, control theory, neuroscience, and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems.

Economic Measures
It is possible that economic measures such as real GDP per capita do not accurately capture the benefits and detriments of heavily AI-and-automation-based economies, making these metrics unsuitable for policy purposes (Mokyr 2014). Research on improved metrics could be useful for decision making.

Law and Ethics Research
The development of systems that embody significant amounts of intelligence and autonomy leads to important legal and ethical questions whose answers affect both producers and consumers of AI technology. These questions span law, public policy, professional ethics, and philosophical ethics, and will require expertise from computer scientists, legal experts, political scientists, and ethicists. For example:

Liability and Law for Autonomous Vehicles
If self-driving cars cut the roughly 40,000 annual U.S. traffic fatalities in half, the car makers might get not 20,000 thank-you notes, but 20,000 lawsuits. In what legal framework can the safety benefits of autonomous vehicles such as drone aircraft and self-driving cars best be realized (Vladeck 2014)? Should legal questions about AI be handled by existing (software- and Internet-focused) cyberlaw, or should they be treated separately (Calo 2014b)? In both military and commercial applications, governments will need to decide how best to bring the relevant expertise to bear; for example, a panel or committee of professionals and academics could be created, and Calo has proposed the creation of a Federal Robotics Commission (Calo 2014a).

Machine Ethics
How should an autonomous vehicle trade off, say, a small probability of injury to a human against the near certainty of a large material cost? How should lawyers, ethicists, and policymakers engage the public on these issues? Should such trade-offs be the subject of national standards?

Autonomous Weapons
Can lethal autonomous weapons be made to comply with humanitarian law (Churchill and Ulfstein 2000)? If, as some organizations have suggested, autonomous weapons should be banned (Docherty 2012), is it possible to develop a precise definition of autonomy for this purpose, and can such a ban practically be enforced? If it is permissible or legal to use lethal autonomous weapons, how should these weapons be integrated into the existing command-and-control structure so that responsibility and liability remain associated with specific human actors? What technical realities and forecasts should inform these questions, and how should meaningful human control over weapons be defined (Roff 2013, 2014; Anderson, Reisner, and Waxman 2014)? Are autonomous weapons likely to reduce political aversion to conflict, or perhaps result in accidental battles or wars (Asaro 2008)? Would such weapons become the tool of choice for oppressors or terrorists? Finally, how can transparency and public discourse best be encouraged on these issues?

Privacy
How should the ability of AI systems to interpret the data obtained from surveillance cameras, phone lines, emails, and so on, interact with the right to privacy? How will privacy risks interact with cybersecurity and cyberwarfare (Singer and Friedman 2014)? Our ability to take full advantage of the synergy between AI and big data will depend in part on our ability to manage and preserve privacy (Manyika et al. 2011; Agrawal and Srikant 2000).

Professional Ethics
What role should computer scientists play in the law and ethics of AI development and use? Past and current projects to explore these questions include the AAAI 2008–09 Presidential Panel on Long-Term AI Futures (Horvitz and Selman 2009), the EPSRC Principles of Robotics (Boden et al. 2011), and recently announced programs such as Stanford's One-Hundred Year Study of AI and the AAAI Committee on AI Impact and Ethical Issues.

Policy Questions
From a public policy perspective, AI (like any powerful new technology) enables both great new benefits and novel pitfalls to be avoided, and appropriate policies can ensure that we can enjoy the benefits while risks are minimized. This raises policy questions such as: (1) What is the space of policies worth studying, and how might they be enacted? (2) Which criteria should be used to determine the merits of a policy? Candidates include verifiability of compliance, enforceability, ability to reduce risk, ability to avoid stifling desirable technology development, likelihood of being adopted, and ability to adapt over time to changing circumstances.

Computer Science Research for Robust AI
As autonomous systems become more prevalent in society, it becomes increasingly important that they robustly behave as intended. The development of autonomous vehicles, autonomous trading systems, autonomous weapons, and so on, has therefore stoked interest in high-assurance systems where strong robustness guarantees can be made; Weld and Etzioni (1994) have argued that "society will reject autonomous agents unless we have some credible means of making them safe." Different ways in which an AI system may fail to perform as desired correspond to different areas of robustness research:

Verification: How to prove that a system satisfies certain desired formal properties. (Did I build the system right?)
Validity: How to ensure that a system that meets its formal requirements does not have unwanted behaviors and consequences. (Did I build the right system?)
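The verification question can be made concrete on a toy scale. The sketch below (illustrative only; the article itself proposes no algorithm) exhaustively explores the reachable states of a small two-light intersection controller and checks a safety property in every one of them, the essence of explicit-state model checking. All names here (`step`, `verify`, the state encoding) are hypothetical, not from the article.

```python
from collections import deque

# Toy system: two traffic lights guarding one intersection.
# A state is a pair (light_ns, light_ew), each "green", "yellow", or "red".
# Safety property to verify: the two lights are never green simultaneously.

def step(state):
    """Hypothetical transition relation: each light cycles
    green -> yellow -> red -> green, but a light may only turn
    green while the cross light is red (an interlock)."""
    nxt = {"green": "yellow", "yellow": "red", "red": "green"}
    successors = []
    for i in range(2):
        cur = list(state)
        new = nxt[cur[i]]
        if new == "green" and cur[1 - i] != "red":
            continue  # interlock: refuse to go green unless cross light is red
        cur[i] = new
        successors.append(tuple(cur))
    return successors

def verify(initial, prop):
    """Explicit-state reachability check: return True iff prop holds
    in every state reachable from initial (breadth-first search)."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        s = queue.popleft()
        if not prop(s):
            return False  # counterexample state found
        for t in step(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return True

def never_both_green(state):
    return state != ("green", "green")

print(verify(("green", "red"), never_both_green))  # True: the interlock preserves safety
```

Real systems have state spaces far too large to enumerate this way, which is precisely why verification of AI systems is posed as a research problem rather than a solved one; industrial tools rely on symbolic representations and abstraction rather than brute-force search.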
