Foreign Enterprise Headlines: AI Must Answer to Humans
How can AI products be made safe, robust, and explainable? Bosch has published an AI code of ethics, a set of guidelines governing its use of artificial intelligence technology.
See the full report below ↓↓↓
Bosch has established ethical “red lines” for the use of artificial intelligence. The company has now issued guidelines governing the use of AI in its intelligent products. Bosch’s AI code of ethics is based on the following maxim: humans should be the ultimate arbiters of any AI-based decisions.
“Artificial intelligence should serve people. Our AI code of ethics provides our associates with clear guidance on the development of intelligent products. Our goal is for people to trust our AI-based products,” said Bosch CEO Volkmar Denner.
AI is a technology of vital importance for Bosch. By 2025, the aim is for every Bosch product either to contain AI or to have been developed or manufactured with its help.
The company wants its AI-based products to be safe, robust, and explainable.
“If AI is a black box, then people won’t trust it. In a connected world, however, trust will be essential,” said Michael Bolle, Bosch’s CDO and CTO.
Bosch is aiming to produce AI-based products that are trustworthy. The new code of ethics is rooted in Bosch’s “Invented for life” ethos, which combines a quest for innovation with a sense of social responsibility.
Over the next two years, Bosch plans to train nearly 20,000 of its associates in the use of AI. The AI code of ethics governing the responsible use of the technology will be part of this training program.
AI offers major potential
Artificial intelligence is a global engine of economic progress and growth. The consultancy PwC, for example, projects that by 2030 AI will boost GDP by around 26 percent in China, 14 percent in North America, and 10 percent in Europe.
The technology can help overcome challenges such as the need for climate action, and it can further optimize outcomes in areas such as transportation, medicine, and agriculture.
By analyzing huge volumes of data, algorithms draw conclusions that serve as the basis for AI-based decisions.
Well in advance of binding EU standards, Bosch has therefore decided to actively engage with the ethical questions raised by the use of this technology. The moral foundation for this work is provided by the values enshrined in the Universal Declaration of Human Rights.
Humans should retain control
Bosch’s AI code of ethics stipulates that artificial intelligence should not make any decisions about humans without some form of human oversight. Instead, artificial intelligence should serve people as a tool.
The code describes three possible approaches, all of which have the following in common: in AI-based products developed by Bosch, humans retain ultimate control over any decisions the technology makes.
In the first approach (human control), artificial intelligence is purely an aid, applicable wherever AI serves only as a supporting tool.
One example is decision-support applications, in which AI helps people classify objects or organisms.
In the second approach (human intervention during use), an intelligent system makes decisions autonomously, but humans can override them at any time.
Examples include partially automated driving, where the driver can directly intervene in the decisions of, say, a parking assistance system.
The third approach (human intervention at the design stage) concerns intelligent technology such as emergency braking systems.
Here, engineers define certain parameters during development, and there is no scope for human intervention in the decision-making process itself; the parameters provide the basis on which the AI decides whether or not to activate the system.
Engineers can, however, retrospectively check whether the machine has kept to the defined parameters when making its decisions and, if necessary, adjust those parameters.
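To make this third mechanism concrete, here is a minimal, hypothetical Python sketch; it is not drawn from Bosch’s guidelines, and the parameter names and threshold values are invented purely for illustration. Engineers fix the parameters at design time, the system decides autonomously at runtime, and every decision is logged so that it can be audited afterwards.

from dataclasses import dataclass

# Illustrative only: engineer-defined limits the AI must stay within.
@dataclass(frozen=True)
class BrakingParameters:
    min_time_to_collision_s: float = 1.2       # brake if impact is predicted sooner than this
    max_speed_for_full_brake_kmh: float = 60.0

def decide_emergency_brake(time_to_collision_s, speed_kmh, params, audit_log):
    # Autonomous runtime decision, taken without human intervention.
    brake = (time_to_collision_s < params.min_time_to_collision_s
             and speed_kmh <= params.max_speed_for_full_brake_kmh)
    audit_log.append({"ttc": time_to_collision_s, "speed": speed_kmh, "brake": brake})
    return brake

def audit(audit_log, params):
    # Retrospective check: did every braking decision respect the parameters?
    return all(entry["ttc"] < params.min_time_to_collision_s
               and entry["speed"] <= params.max_speed_for_full_brake_kmh
               for entry in audit_log if entry["brake"])

log = []
params = BrakingParameters()
decide_emergency_brake(0.8, 45.0, params, log)   # imminent collision: system brakes
decide_emergency_brake(3.0, 45.0, params, log)   # no imminent collision: system does not brake
assert audit(log, params)

If the audit fails, or the observed behavior proves unsatisfactory, the engineers adjust the parameters rather than intervene in individual decisions.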
Building trust together
Bosch also hopes its AI code of ethics will contribute to the public debate on artificial intelligence.
“AI will change every aspect of our lives,” Denner said. “For this reason, such a debate is vital.”
Establishing trust in intelligent systems will take more than technical know-how; it will also require close dialogue among policymakers, the scientific community, and the general public.
This is why Bosch has joined the High-Level Expert Group on Artificial Intelligence, a body appointed by the European Commission to examine issues such as the ethical dimension of AI.
Bosch has also set up AI centers at seven locations worldwide. Within this global network, and in collaboration with the University of Amsterdam and Carnegie Mellon University in Pittsburgh, Pennsylvania, the company is working to develop AI applications that are safer and more trustworthy.
Similarly, as a founding member of the Cyber Valley research alliance in Baden-Württemberg, Bosch is investing 100 million euros in the construction of an AI campus, where 700 of its own experts will soon be working side by side with external researchers and startup associates. “Our shared objective is to make the internet of things safe and trustworthy,” Bolle said.