Zhejiang University Social Science Research Platform

Lecture Announcement: Dialogical Argumentation Mechanisms in the Ethical Control of Artificial Intelligence
Author: System Administrator    Published: Sunday, October 15, 2017

Academic Lecture: Dialogical Argumentation Mechanisms in the Ethical Control of Artificial Intelligence

On legal aspects and argumentation for ethical standardization of responsible AI

Speaker: Leon van der Torre, University of Luxembourg


Date: October 16, 2017

Venue: TBD (for the specific time and venue, please contact Mr. Wang Zhijian: 13905815529)


Abstract: Ethical standardization for responsible AI is a key success factor for future industrial AI applications. Regulating ethical standardization requires a legal framework that bridges the gap between ethical theory and industrial AI applications by implementing ethical norms. At the same time, the different stakeholders of AI (government, industry, academia, social groups, customers, etc.) have different viewpoints, so ontologies and argumentation are needed to take all of their viewpoints and concerns into account. In this talk, the speaker will first introduce legal aspects of robotics and AI, based on several ongoing projects, including the H2020 RISE project "Mining and Reasoning with Legal Texts" (2016-2019), the H2020 ERC Consolidator project "Responsible Intelligent Systems" (2015-2020), and an H2020 ERC Advanced project. He will then introduce formal and informal argumentation (modern dialogical logic) that can be used to relate the different viewpoints (communication, negotiation) among and within these disciplines, taking the concerns of all the different stakeholders into account (Government, Industry, Academia, Politics, AIIA, International, etc.), including computational tools for shared ontologies and for checking compliance with regulations, standards, and guidelines.
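
To make the role of formal argumentation concrete, below is a minimal, hypothetical sketch in Python (not the speaker's own tooling) of a Dung-style abstract argumentation framework: stakeholder claims are modeled as arguments, conflicts between them as attacks, and the grounded extension picks out the most skeptical set of collectively acceptable arguments. All argument names and the attack relation are invented for illustration.

# Minimal sketch of a Dung-style abstract argumentation framework (illustrative only).
# Stakeholder claims are arguments, conflicts are attacks, and the grounded extension
# is the most skeptical set of collectively acceptable arguments.

def grounded_extension(arguments, attacks):
    """Return the grounded extension of the framework (arguments, attacks).

    arguments: iterable of argument labels
    attacks:   set of (attacker, target) pairs
    """
    arguments = set(arguments)
    attackers_of = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    accepted = set()
    while True:
        # An argument is defended if each of its attackers is attacked by an accepted argument.
        defended = {
            a for a in arguments
            if all(any((d, b) in attacks for d in accepted) for b in attackers_of[a])
        }
        if defended == accepted:      # least fixed point reached
            return accepted
        accepted = defended

# Invented stakeholder arguments about deploying an AI system:
#   I (Industry):   "deploy now, the system passes the benchmark"
#   G (Government): "deployment requires a certified compliance audit"  -> attacks I
#   A (Academia):   "the benchmark alone is insufficient evidence"      -> attacks I
#   S (Standards):  "an accredited audit procedure is available"        (unattacked)
args = {"I", "G", "A", "S"}
atts = {("G", "I"), ("A", "I")}
print(sorted(grounded_extension(args, atts)))   # ['A', 'G', 'S']; argument I is not accepted

Grounded semantics is used here because it is unique and maximally cautious; other semantics from the formal argumentation literature (preferred, stable, semi-stable) can accept additional arguments when the attack graph contains cycles.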


Keywords: social responsibility, formal argumentation, normative multi-agent systems, deontic logic


References:

[1] van der Torre et al., Handbook of Deontic Logic, College Publications, London, Volume 1, 2013; Volume 2, 2018. http://deonticlogic.org

[2] van der Torre et al., Handbook of Normative Multiagent Systems, College Publications, London, 2018. http://normativeMAS.org

[3] van der Torre et al., Handbook of Formal Argumentation, College Publications, London, 2018.

[4] Broersen, Responsible Intelligent Systems, Künstliche Intelligenz, 2014, 28(3): 209-214.


About the speaker: Leon van der Torre joined the University of Luxembourg as a full professor of Intelligent Systems in 2006 and has been head of the Computer Science and Communication department since 2016. He founded the Interdisciplinary Centre for Security, Reliability and Trust (SnT) in 2009, as well as the AI Robolab in the same year, and he has been a member of the university's ethics advisory committee since its creation in 2012. His work is concerned with legal and ethical reasoning for autonomous intelligent systems, in particular normative reasoning, formal argumentation, regulatory compliance, and social robotics.

Leon van der Torre developed the BOID agent architecture (with colleagues from Vrije Universiteit Amsterdam), input/output logic (with David Makinson), and the game-theoretic approach to normative multiagent systems (with Guido Boella). He is an editor of the Handbook of Deontic Logic and Normative Systems (first volume 2013, second volume in preparation), of the Handbook of Formal Argumentation (in preparation), and of the Handbook of Normative Multi-Agent Systems (in preparation), the deontic logic corner editor of the Journal of Logic and Computation, and a member of the editorial boards of the Logic Journal of the IGPL, the IfCoLog Journal of Logics and their Applications, and the EPiC Series in Computer Science. Moreover, he is the local coordinator of the H2020 Erasmus+ Joint International Doctoral degree in Law, Science and Technology (LASTJD, 2012-2018) and coordinator of the H2020 Marie Curie RISE network "Mining and Reasoning with Legal Texts" (MIREL, 2016-2019).
