Deloitte’s State of AI in the Enterprise report highlights ethical and regulatory risks in artificial intelligence adoption.
As artificial intelligence has become pervasive throughout the world, enterprise tech leaders have moved beyond asking what they can do with this powerful new technology to asking how doing so will affect their companies and the other things they care about: individual privacy, workers' jobs, misuse by authoritarian governments, transparency, social responsibility, accountability, and even the future of work itself.
In short, some very human elements have now become part of the algorithm. That is perhaps the key finding of Deloitte's just-published third annual State of AI in the Enterprise report, a survey of 2,737 IT and line-of-business executives in nine countries examining their sentiments and practices regarding AI technologies.
The study of enterprise AI adopters found that 95 percent of respondents have concerns about the ethical risks of the technology, and 56 percent agree that their organization is slowing adoption of AI technologies because of emerging risks.
The authors of the report write:
Despite strong enthusiasm for their AI efforts, adopters face reservations as well. In fact, they rank managing AI-related risks as the top challenge for their AI initiatives, tied with persistent difficulties of data management and integrating AI into their company’s processes.
Additionally, a troubling preparedness gap exists for adopters across a wide range of these potential strategic, operational, and ethical risks. More than half of adopters report “major” or “extreme” concerns about these potential risks for their AI initiatives, while only four in 10 adopters rate their organization as “fully prepared” to address them.
The high level of fear of emerging risks appears to be inhibiting adoption of AI. Safety concerns were cited by a quarter of respondents as the single biggest ethical risk. Other concerns include a lack of explainability and transparency in AI-derived decisions, the elimination of jobs due to AI-driven automation, and the use of AI to manipulate people’s thinking and behavior.
Despite these worries, only about a third of adopters are actively addressing the risks — 36 percent are establishing policies or a board to guide AI ethics, and the same portion say they’re collaborating with external parties on leading practices.
You will probably not be surprised to learn that Deloitte is one of those external parties ready to lend a hand.
In addition to the new enterprise AI report, the firm has also recently unveiled the Deloitte AI Institute, intended to corral the best thinking and best practices on AI, as well as a new “Trustworthy AI” framework to guide organizations in applying AI responsibly and ethically within their businesses.
The framework addresses common risks and challenges related to AI ethics and governance, including fair and impartial use checks, implementing transparent and explainable AI, responsibility and accountability, security, reliability, and privacy. Said Beena Ammanath, Deloitte AI Institute executive director:
“Organizations ready to embrace AI must start by putting trust at the center. We are devoted to not only helping our clients navigate AI ethics, but also in maintaining an ethical mindset within our own organization.”
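As a rough illustration only, the framework's dimensions could be tracked as a simple project review checklist. The sketch below is a hypothetical rendering of that idea; the pass/fail review logic and function names are assumptions, not part of Deloitte's actual framework:

```python
# Hypothetical sketch: the "Trustworthy AI" dimensions named in the
# article, encoded as a review checklist. The gap-finding logic is an
# illustrative assumption, not Deloitte's implementation.
TRUSTWORTHY_AI_DIMENSIONS = [
    "fair and impartial use",
    "transparency and explainability",
    "responsibility and accountability",
    "security",
    "reliability",
    "privacy",
]

def review_gaps(assessment: dict) -> list:
    """Return the dimensions a project has not yet demonstrably addressed."""
    return [d for d in TRUSTWORTHY_AI_DIMENSIONS if not assessment.get(d, False)]

# Example: a project that has so far covered only security and privacy.
project = {"security": True, "privacy": True}
print(review_gaps(project))
# → ['fair and impartial use', 'transparency and explainability',
#    'responsibility and accountability', 'reliability']
```

A checklist like this makes the governance dimensions concrete enough to gate a product release on, which is the kind of "ethics-by-design" practice the article describes next.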
One company cited as getting ethical AI adoption right is Workday, a provider of cloud-based enterprise software for financial management and human capital management. It has committed to a set of principles to ensure that its AI-derived recommendations are impartial and that it practices good data stewardship. Workday is also embedding “ethics-by-design controls” into its product development process. Said Barbara Cosgrove, chief privacy officer at Workday:
“Integrating ‘ethics’ into technology products can feel abstract for engineers and developers. While many technology companies are working independently on ways to do this in concrete and tangible ways, it is imperative that we break out of those silos and share best practices. By working collaboratively to learn from each other, we can raise the bar for the industry as a whole — and a good place to start is focusing on the things that earn trust.”
Of all the modern dual-use technologies, it is probably fair to say that artificial intelligence has the most potential to do both good and evil. The same algorithms that are used to run factory floors, automate tedious business processes, help farmers be more productive, support science and innovation, monitor extreme weather and climate change, improve health care delivery, enhance safety, and power thousands of other useful tools can also be used to invade the privacy of private citizens and track their behavior. It is a dream tool for law enforcement agencies and authoritarian regimes that want to keep their knees on the necks of their people. It can also be biased in dangerous ways by assumptions that are built in, either accidentally or on purpose.
And it is everywhere. One of the funny/not-funny findings of the Deloitte report is that many organizations have no idea how much AI they are using, or where:
Knowing where AI exists is a prerequisite to managing its risks. One key step for mitigating risk is to keep a formal inventory of all of the organization’s AI models, algorithms, and systems. It can be difficult for companies to track all uses of AI — one bank “made an inventory of all their models that use advanced or AI-powered algorithms and found a staggering total of 20,000.”
That is truly frightening.
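The formal inventory the report recommends can be sketched as a minimal model registry. Everything below — the record fields, the risk tiers, and the sample entries — is an illustrative assumption, not drawn from the report or from any real bank's system:

```python
from dataclasses import dataclass

# Hypothetical sketch of a formal AI model inventory. Field names and
# risk tiers are illustrative assumptions only.
@dataclass
class AIModelRecord:
    name: str       # unique identifier for the model
    owner: str      # team accountable for the model
    purpose: str    # business function the model serves
    risk_tier: str  # e.g. "low", "medium", "high"

class ModelInventory:
    """A minimal registry of all AI models in use across an organization."""

    def __init__(self):
        self._records = {}

    def register(self, record: AIModelRecord) -> None:
        self._records[record.name] = record

    def high_risk(self) -> list:
        return [r for r in self._records.values() if r.risk_tier == "high"]

    def __len__(self) -> int:
        return len(self._records)

inventory = ModelInventory()
inventory.register(AIModelRecord("credit-scoring", "risk-team", "loan approval", "high"))
inventory.register(AIModelRecord("email-triage", "it-ops", "ticket routing", "low"))
print(len(inventory))                           # → 2
print([r.name for r in inventory.high_risk()])  # → ['credit-scoring']
```

Even a registry this simple answers the two questions the report says organizations cannot: how much AI they are running, and where the high-risk uses sit.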