Objectivity
- A PA is required to be objective,149 which means to exercise professional or business judgment without being compromised by:
- Bias;
- Conflict of interest; or
- Undue influence of, or undue reliance on, individuals, organizations, technology or other factors.
In this regard, the Code prohibits a PA from undertaking a professional activity if a circumstance or relationship unduly influences the PA’s professional judgment regarding that activity.
- Stakeholder outreach indicated that while relying on technology brings about many significant opportunities for value creation, a careful balance needs to be struck to ensure there is no undue reliance on technology. Stakeholders highlighted several circumstances perceived as increasing the risk of threats to compliance with the principle of objectivity, including:
- Bias – Objective decision-making is hampered by bias in PAs. Stakeholders also remarked that bias can manifest in numerous technology implementations, such as in the data used as inputs or in the programming of the technology. Accordingly, PAs using or relying on the output of technology should be aware of the potential for such bias when assessing the reasonableness of relying on, or using, that output.
- Over-reliance – Reliance on technology tools and outputs is an important aspect of decision-making. However, objective decision-making is impeded by PAs becoming over-reliant on technology, especially where there is a lack of technical competence and/or where the technology lacks transparency and explainability.
- Transparency and Explainability – For technology to be relied upon, it needs to be understandable (i.e., the PA is able, directly or with access to a technology expert, to understand, assess the reasonableness of, and explain the output of the technology, having regard to the purpose for which it is to be used). For example, this might include assessing the appropriateness of how data is processed, understanding the rationale for automated decisions, and being able to justify the reliance on, or use of, the outputs of the tool.
Bias
- Bias is driven by human behavior and societal values that are impacted by, among other factors, one’s education, experience, and cultural upbringing. As a consequence, bias is inherent in all datasets, technology programming, and laws and regulations.
- Stakeholders stressed the importance of recognizing that there is inherent bias in data, which is particularly relevant to implementing and using AI systems. This includes data used to train or test the system, or provided as inputs for the system to process. Apart from data, AI systems also suffer from bias introduced by human programming. There is increasing litigation on the basis of algorithmic bias leading to unfair judgments, for example, credit applications declined due to racial profiling or the inappropriate use of facial recognition.150
- Furthermore, stakeholders noted that PAs should seek to understand how bias is identified, considered, and mitigated in the creation, capture, and analysis of data in systems, including how the “human element” impacts AI training. Asking appropriate questions151 and analyzing output to facilitate such understanding are key to mitigating the effect of bias. Stakeholders also emphasized that additional guidance related to such risk, and how it can be identified and mitigated, is needed.
- The discussion in Technology Landscape: Artificial Intelligence outlines some actions for PAs to combat bias in AI systems. These actions are summarized as: (a) understanding the data going into the model, (b) understanding how the model operates, what the intended outputs are, and the potential unintended consequences of the model, (c) having the ability and competence to ask effective questions, (d) ensuring a “human-in-the-loop” approach, and (e) promoting an ethics-based organizational culture.
Over-reliance
- Stakeholders reported that since the beginning of the COVID-19 pandemic, daily decisions have become more challenging with the increase in remote meetings and reliance on technology.152 For example, this reliance on technology can impact the PA’s ethics obligations to act with due care, be objective, and maintain confidentiality (including respecting data privacy). In particular, stakeholders noted that:
- People are increasingly simply assuming that the machine is “correct” (i.e., displaying automation bias).153 This calls into question how various accounting or auditing matters are decided – by the human or the machine. It also highlights the importance of assessing the effectiveness of the tool or system being used, and of mitigating bias (i.e., ensuring that the algorithms do not make inappropriate judgments).
- Reliance on technology, for example, using an automatically generated report, reduces foundational training of less experienced team members and might deepen automation bias. Less experienced team members, who were never involved in creating the report and who do not understand its purpose, will have less ability to recognize or identify what might be unreasonable or incorrect, and likely will not be able to explain the report’s basis. See also the discussion on Competence Need in the Digital Age.
It was also noted that if such automatic reports are generated regularly enough, even more experienced team members will stop noticing what might be incorrect or omitted.
- Organizations and firms are looking for technology that can easily and rapidly increase revenues and/or reduce costs and decision-making time. Some smaller and middle market PAPPs, for example, are looking for technology to shorten their project timeframes, believing that it will immediately alleviate the impact of competitive fee pricing in the face of staff shortages and ever-tighter deadlines. Stakeholders noted, however, that such “silver bullet” technology is often not fully tested and not yet proven. This means that its use could raise data integrity and security issues, and create material impacts on workflows that might result in unintended consequences, such as audit failures and reputational damage. It is important to recognize that whereas a mistake by one staff member on a single client might have relatively few long-term implications, implementing untested or unproven technology risks poorly automating an entire process, potentially impacting numerous clients before the defects are caught.
- Technology tools and systems developed by recognizable “brand names” are often immediately trusted, even though technology developers generally do not make available documentation of the technology’s source code or of the detailed quality assessment processes underpinning its development. This is seen as a particular concern for small- and mid-sized organizations and firms in terms of sufficiently understanding the technology being used, given that they have less “bargaining power” than larger organizations to obtain such valuable (i.e., proprietary) information.
- When third-party tools are implemented by external consultants, organizations often lack the internal competence and resultant accountability to take responsibility for such tools and related outputs after the consultants complete the engagement.
- Analytical tools and digital assistants are becoming increasingly commonplace and improving with time and technological advancement.
Some stakeholders, particularly technologists, wondered at what point it becomes possible to stop trying to learn about the underlying technology and simply place trust in the system. They drew a parallel between relying on a digital tool (or digital assistant, see discussion on Technology Landscape: Robotic Process Automation) and relying on a supervised human staff member.
These stakeholders believed that the level of “trust” should be the same threshold used to assess reliance on the work of others in the Code. Some stakeholders also noted that this issue of “distrusting” technology is related to the ability to explain the decisions made by, or the outputs of, autonomous and intelligent systems and tools. They cautioned that this would be of increasing significance as developments in cognitive AI advance.
- Finally, stakeholders noted that to mitigate automation bias and over-reliance on technology, PAs need to be aware of the various blind spots where errors could occur when digitalizing. For example, using unstructured data in AI to evaluate anomalies in contracts might result in optical character recognition (OCR) errors due to poor keywords and structuring, as well as issues in machine learning processes such as natural language processing (NLP).154
Transparency and Explainability
- Many current AI systems that are more rules-based and do not rely on machine learning are relatively explainable (see also discussion on Technology Landscape: Artificial Intelligence). Nevertheless, it was observed that documentation on such systems from technology developers remains lacking in detail and often does not explain the process of analysis followed by such technology tools, particularly when coupled with big data sets.
- As AI systems, and machine learning in particular, continue to advance and be deployed, explainability will become an even more significant issue. The sheer volume of data consumed by such advanced systems as input, together with the computational power that drives their machine learning, leaves humans unable to keep pace with them or to oversee them effectively by manual means. Systems matching these criteria already exist, and firms and organizations are likely to need their own AI systems to test other AI systems.
- Lack of explainability is amplified in situations where the output of one AI algorithm becomes an input to another AI algorithm, creating a cascading effect.155 Not only does this exponentially increase the potential for unintended consequences, but it also increases the probability that the system’s “reasoning” cannot be explained by humans. Once again, this underscores the need for systems to be transparent and explainable.
- Some approaches to developing transparent and explainable AI systems include:
- Developing systems that are more linear and transparent. Assessing the reasonableness of AI inferentially (i.e., by evaluating inputs and outputs) yields only limited comfort compared with the comfort gained from being able to explain an AI system that is linear and transparent (a minimal sketch of such a transparent model follows this list).
- Embedding checkpoints in AI machine learning processes. The more quality data an intelligent agent has access to, the better and faster it learns. These checkpoints could take the form of logic and reasonableness tests conducted periodically (as frequently as multiple times per hour, depending on the volume of data ingested and the speed of learning) so that a human can understand what the intelligent agent is doing. It is also important to “pause” the learning of the intelligent agent during these checkpoints (see the checkpointed training-loop sketch after this list).
- Ensuring that there is adequate documentation of the logic and rationale for the AI system’s processing and decision-making. This is important so that an independent third party, such as an auditor or regulator, can understand, explain, and validate the system. As mentioned previously, however, third-party technology is often inherently a “black box” because of challenges in obtaining access to source code, which is typically the intellectual property of the third party.
- Performing sensitivity analyses, for example, by altering a single input and measuring the change in model output. This gives a local, feature-specific, linear approximation of the model’s response. By repeating this process for many values, a more extensive picture of model behavior can be built up (a finite-difference sketch follows this list).156
- Model evaluation to validate that AI systems meet the intended purpose and functional requirements. For example, evaluation can be done by testing models on a “held-out” portion of the data (i.e., historical data inputs not used to train the AI), comparing the model outputs with the actual data, and reporting the “error” (a held-out evaluation sketch follows this list).157
- Continuous evaluation by programming in “common sense” safeguards against outputs that clearly do not make sense by a large margin (a sketch of such a safeguard follows this list).158
- Being aware of, and being able to identify and mitigate, inherent bias or incorrect assumptions used in the AI.159 See discussion on Objectivity: Bias.
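To illustrate the first approach, the following minimal Python sketch (the feature names, weights, and values are hypothetical, not drawn from this report) shows why a linear model is transparent: each input’s contribution to the output can be read, and explained, directly from its coefficient.

```python
# A minimal sketch of a transparent, linear scoring model.
# All names and weights below are illustrative assumptions.

FEATURES = ["invoice_amount", "days_overdue", "prior_adjustments"]
WEIGHTS = {"invoice_amount": 0.002, "days_overdue": 0.03, "prior_adjustments": 0.5}
BIAS = -1.0

def risk_score(record: dict) -> float:
    """Linear model: the score is an explainable sum of weighted inputs."""
    return BIAS + sum(WEIGHTS[f] * record[f] for f in FEATURES)

def explain(record: dict) -> dict:
    """Per-feature contributions: exactly why the model produced its score."""
    return {f: WEIGHTS[f] * record[f] for f in FEATURES}

record = {"invoice_amount": 1200.0, "days_overdue": 45, "prior_adjustments": 2}
print(risk_score(record))   # overall output
print(explain(record))      # transparent attribution per input
```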
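The checkpointing approach might look like the following sketch, in which a toy learner’s training loop is paused at fixed intervals so a human-understandable reasonableness test can run before learning resumes; the learner, checkpoint frequency, and plausibility bounds are all assumptions for illustration.

```python
# A minimal sketch of pausing learning at checkpoints for reasonableness tests.
import random

random.seed(0)

w = 0.0                      # single learned weight for a toy model y ≈ w * x
CHECK_EVERY = 500            # checkpoint frequency (assumed; depends on data volume)

def reasonableness_test(w: float) -> bool:
    """Logic/reasonableness check: predictions must stay in a plausible range."""
    probe_inputs = [1.0, 10.0, 100.0]
    return all(0.0 <= w * x <= 1_000.0 for x in probe_inputs)

for step in range(1, 5_001):
    x = random.uniform(0.0, 10.0)
    y = 3.0 * x + random.gauss(0.0, 0.1)       # toy data generated by y = 3x + noise
    w += 0.001 * (y - w * x) * x               # one stochastic gradient update

    if step % CHECK_EVERY == 0:                # learning is "paused" here:
        if not reasonableness_test(w):         # no updates occur during the check
            raise RuntimeError(f"Checkpoint failed at step {step}: w={w:.3f}")
        print(f"Checkpoint at step {step}: w={w:.3f} looks reasonable")
```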
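A sensitivity analysis of the kind described above can be sketched as a finite-difference probe: alter a single input by a small amount, measure the change in output, and repeat across many values. The model and feature names below are hypothetical stand-ins, not a real system.

```python
# A minimal sketch of single-input sensitivity analysis by finite differences.

def model(features: dict) -> float:
    """Stand-in for an opaque model under review (assumed for illustration)."""
    return 0.5 * features["amount"] ** 0.5 + 2.0 * features["age_days"]

def sensitivity(model, features: dict, name: str, eps: float = 1e-4) -> float:
    """Finite-difference estimate of d(output)/d(feature) at this point."""
    bumped = dict(features)
    bumped[name] += eps
    return (model(bumped) - model(features)) / eps

point = {"amount": 400.0, "age_days": 30.0}
for name in point:                              # local, feature-specific view
    print(name, round(sensitivity(model, point, name), 4))

# Repeating the probe over many values builds a broader picture of behavior:
for amount in (100.0, 400.0, 1600.0):
    s = sensitivity(model, {"amount": amount, "age_days": 30.0}, "amount")
    print(f"amount={amount}: local sensitivity {s:.4f}")
```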
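Held-out evaluation can be sketched as follows: fit a model only on part of the historical data, then compare its outputs with the actual values it never saw and report the error. The toy dataset and simple least-squares fit are illustrative assumptions.

```python
# A minimal sketch of evaluating a model on a "held-out" portion of the data.

historical = [(x, 3.0 * x + 1.0) for x in range(100)]   # (input, actual) pairs

split = int(len(historical) * 0.8)
train, held_out = historical[:split], historical[split:]

# "Train": least-squares fit of y = a*x + b on the training portion only.
n = len(train)
sx = sum(x for x, _ in train); sy = sum(y for _, y in train)
sxx = sum(x * x for x, _ in train); sxy = sum(x * y for x, y in train)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

# Evaluate: compare model outputs with actual held-out data and report error.
errors = [abs((a * x + b) - y) for x, y in held_out]
print(f"fitted y = {a:.2f}x + {b:.2f}; "
      f"mean absolute error = {sum(errors) / len(errors):.4f}")
```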
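Finally, a “common sense” safeguard can be sketched as a run-time wrapper that rejects outputs falling clearly outside plausible limits; the bounds and the stand-in model below are assumptions for illustration.

```python
# A minimal sketch of a continuous "common sense" safeguard on model outputs.

PLAUSIBLE_RANGE = (0.0, 1_000_000.0)   # assumed domain limits for the output

def model(x: float) -> float:
    """Stand-in for an AI system's output (assumed for illustration)."""
    return x * x - 50.0

def guarded(x: float) -> float:
    """Reject outputs that clearly do not make sense for the domain."""
    y = model(x)
    lo, hi = PLAUSIBLE_RANGE
    if not (lo <= y <= hi):
        raise ValueError(f"Output {y} fails common-sense check for input {x}")
    return y

print(guarded(100.0))   # passes: 9950.0 is within the plausible range
try:
    guarded(1.0)        # fails: -49.0 is negative, clearly implausible here
except ValueError as e:
    print(e)
```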
Endnotes
149 Paragraphs R112.1 and R112.2 of the Code
150 See, for example, Meeker, Heather J., and Amit Itai. “Bias in Artificial Intelligence: Is Your Bot Bigoted?” Bloomberg Law, 19 October 2020, https://news.bloomberglaw.com/tech-and-telecom-law/bias-in-artificial-intelligence-is-your-bot-bigoted; “AI Litigation Database.” George Washington University, https://blogs.gwu.edu/law-eti/ai-litigation-database/; and Joizil, Karine, et al. “Could AI get you sued? Artificial intelligence and litigation risk.” McCarthy Tétrault, 26 April 2022, https://www.mccarthy.ca/en/insights/blogs/techlex/could-ai-get-you-sued-artificial-intelligence-and-litigation-risk.
151 See, for example, “Exploring the IESBA Code, A Focus on Technology – Artificial Intelligence.” IFAC, 11 March 2022, https://www.ifac.org/knowledge-gateway/supporting-international-standards/publications/exploring-iesba-code-focus-technology-artificial-intelligence.
152 “COVID-19: Ethics & Independence Considerations.” IESBA, https://www.ethicsboard.org/focus-areas/covid-19-ethics-independence-considerations.
153 Automation bias is a tendency to favor output generated from automated systems, even when human reasoning or contradictory information raises questions as to whether such output is reliable or fit for purpose.
154 “IAASB Digital Technology Market Scan: Natural Language Processing.” IAASB, 22 June 2022, https://www.iaasb.org/news-events/2022-06/iaasb-digital-technology-market-scan-natural-language-processing.
155 See, for example, Sambasivan, Nithya, et al. “‘Everyone wants to do the model work, not the data work’: Data Cascades in High-Stakes AI.” Google Research, 8 May 2021, https://storage.googleapis.com/pub-tools-public-publication-data/pdf/0d556e45afc54afeb2eb6b51a9bc1827b9961ff4.pdf; and Hao, Karen. “Error-riddled data sets are warping our sense of how good AI really is.” MIT Technology Review, 1 April 2021, https://www.technologyreview.com/2021/04/01/1021619/ai-data-errors-warp-machine-learning-progress/.
156 Páez, Andrés. “The Pragmatic Turn in Explainable Artificial Intelligence (XAI).” Minds and Machines 29, 441-459, September 2019, https://doi.org/10.1007/s11023-019-09502-w.
157 Supra note 44
158 Supra note 44
159 See, for example, the challenges related to these issues in the results of a research study that found people both over-relied on the outputs from an AI system and misinterpreted what those outputs meant, even when they had knowledge about how AI systems work. Wiggers, Kyle. “Even experts are too quick to rely on AI explanations, study finds.” VentureBeat, 25 August 2021, https://venturebeat.com/business/even-experts-are-too-quick-to-rely-on-ai-explanations-study-finds/.