From Nest’s thermostat and Roomba’s robot vacuum to Tesla’s self-driving cars, internet search predictions, and voice assistants such as Apple’s Siri and Amazon’s Alexa, artificial intelligence (AI) is already woven into daily life. It’s largely understood that AI will continue to influence our lives for generations to come. Businesses have navigated different stages of maturing automation and embedding best practices into technical processes. We’ve seen security-by-design and privacy-by-design programs, and more recently, the initiative of “trust-by-design” has gained increased traction in the market.
The growth and maturity of AI is undeniable. A survey by the management consultancy McKinsey estimates that AI could boost annual global GDP by as much as 16%, a contribution of around 13 trillion USD.
The International Organization for Standardization (ISO) has taken up this challenge: on 7 July 2020, it released ISO/IEC TR 24028:2020, Information Technology – Artificial Intelligence – Overview of Trustworthiness in Artificial Intelligence.
ISO Guidance & Safety Precautions for Emerging Technology
The report analyzes factors that can impact the trustworthiness of systems providing or using AI. Additionally, it surveys existing approaches that can support or improve trustworthiness in technical systems and discusses their potential application to AI systems.
It describes the properties, related risk factors, available methods, and processes relating to:
- Use of AI inside a safety-related function to realize the functionality.
- Use of non-AI safety-related functions to ensure safety for AI-controlled equipment.
- Use of AI systems to design and develop safety-related functions.
The report can be leveraged by businesses of all sizes, sectors, and jurisdictions to help establish trust in AI systems by providing transparency.
Investing in Your Future: A System of Trust Across People, Process & Technology
Why does it matter?
Ultimately, consumers and businesses want to work with organizations that they trust. To support trustworthy outputs and transparent internal business practices, you need to be able to rely confidently on the technology you have in place. This is imperative for GRC when tracking and measuring risk, potentially harmful business impacts, or growth opportunities.
As people increase their dependence on technology, there is an air of caution around investing in automation that cannot re-engineer itself when the unexpected happens. This is a key consideration that the new ISO guidance looks to address. For AI to work effectively, it requires vast datasets to learn patterns and the context to understand trends, outliers, and relevant business insights. The caveat to this is data hygiene: as you manage more and more data, the chance of errors grows. To confidently rely on any automation, particularly AI, you need high-quality data for accuracy and consistency. These are the core aspects that allow businesses to confidently rely upon – and trust – AI technology.
What does OneTrust GRC do?
OneTrust GRC’s Athena AI helps businesses enhance their risk program with focused insights across their digital enterprise. It translates risk data from ambiguous scores into value-based business impact. OneTrust Athena AI can map your risk exposure scope and monitor your compliance standing based on your internal controls and updates from OneTrust DataGuidance, the largest global database of privacy and security regulations.
- Measure Corporate Compliance
- Suggest Mitigating Controls
- Synchronize Risk Updates
- Prioritize Audit Investigations
The evolution of AI in GRC has progressed from an abstract concept to early solutions requiring significant enterprise support and engineering. Fast-forward to today, when practical applications and reputable guidance are both available to businesses of all sizes. As you evaluate your GRC technology, consider what additional long-term gains you could realize with an investment in AI.
To learn more about OneTrust Athena and how it can help your business stay ahead of risk blind spots, visit this blog.