
AI risks and opportunities

The integration of AI into various sectors presents a unique blend of risks and opportunities that organisations must skilfully navigate. This section explores the dual nature of AI’s impact, focusing on the practical implications for business and society and offering a pragmatic view of how AI reshapes industries and introduces new challenges.

AI technologies introduce risks that require vigilant management, including the risk of disseminating or relying on inaccurate, unreliable or biased/discriminatory content, intellectual property infringement risks, and data privacy risks where personal information might be compromised. If not designed and deployed with clear ethical guidelines, generative AI can have unintended consequences and potentially cause real harm.

The opportunities presented by AI are transformative. AI facilitates the automation of complex tasks, yields insightful analytics from large datasets, and enables unprecedented levels of personalisation in customer service and product offerings. For instance, generative AI revolutionises creative processes and content generation, leading to significant operational efficiencies, cost reductions and enhanced market competitiveness.

By strategically leveraging AI, organisations can improve their internal processes, redefine customer interactions and expand into new markets, driving substantial business growth.

Governments understand the need to balance innovation and risk-taking with precautionary principles that factor in the risks associated with AI. Governments internationally have taken different approaches to managing these risks and opportunities, with some jurisdictions adopting a more precautionary stance and setting clear limits on high-risk activities, such as those outlined in the EU AI Act. The US Government also recognised the potential for significant harm from unregulated AI by issuing the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The Australian Government has published a Mandatory Guardrails Proposals Paper open for consultation16. Governments are aware of the tremendous benefits AI can bring to society and the economy but remain cognisant of the potential harms of unabated AI development.

Australia is also following this precautionary approach. AI regulation has been on the Government’s agenda since the release of its discussion paper ‘Safe and responsible AI in Australia’ in June 2023, which was followed by an interim response paper in January 2024 signalling that mandatory regulation would follow. At the time of writing this paper in September 2024, the Australian Government has released a consultation paper proposing regulations that would require organisations developing or deploying high-risk AI systems to adopt 10 mandatory guardrails focused on testing, transparency and accountability.

AI risks

The development, deployment and use of AI technologies present a complex interplay between risks and opportunities, necessitating vigilant management and strategic governance. These risks are intertwined with opportunities, highlighting the need for a balanced approach to harness AI’s full potential while meeting ethical standards.

Integrated risk and opportunity management

Managing AI involves a nuanced approach that starts with testing initial value propositions iteratively, then adjusting strategies based on comprehensive risk and impact assessments. This incremental and iterative method helps identify and mitigate risks early, ensuring that AI implementations are deliberate and considered. For example, the deployment of AI in loan approval processes can be tested in controlled environments to identify unintended biases against certain demographics before full-scale implementation.
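
To make this concrete, a minimal sketch (in Python) of such a pre-deployment bias check is shown below. It compares approval rates across demographic groups and flags the model for review when the gap exceeds a tolerance. The `approve` stand-in model, the synthetic test set and the 0.1 threshold are all illustrative assumptions, not a prescribed method.

```python
# Minimal sketch of a pre-deployment bias check for a loan-approval model.
# Illustrative assumptions: `approve` stands in for the model under test,
# and the 0.1 demographic parity threshold is a placeholder policy choice.
from collections import defaultdict

def approve(applicant: dict) -> bool:
    """Stand-in for the AI model under test."""
    return applicant["income"] >= 50_000

def demographic_parity_gap(applicants: list[dict], group_key: str) -> float:
    """Largest difference in approval rates between any two groups."""
    approvals, totals = defaultdict(int), defaultdict(int)
    for a in applicants:
        group = a[group_key]
        totals[group] += 1
        approvals[group] += approve(a)  # bool counts as 0 or 1
    rates = [approvals[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Controlled test set: synthetic applicants spanning the groups of interest.
test_set = [
    {"income": 60_000, "group": "A"},
    {"income": 45_000, "group": "A"},
    {"income": 70_000, "group": "B"},
    {"income": 40_000, "group": "B"},
    {"income": 30_000, "group": "B"},
]

gap = demographic_parity_gap(test_set, "group")
THRESHOLD = 0.1  # placeholder risk-appetite threshold
status = "FLAG FOR REVIEW" if gap > THRESHOLD else "within tolerance"
print(f"Approval-rate gap: {gap:.2f} ({status})")
```

In practice, the test set would be a curated, representative sample, and the fairness metric and threshold would be chosen to match the organisation’s risk appetite and applicable anti-discrimination laws.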

Scaling AI risk management

As AI technology becomes more accessible, the challenge of managing its risks also scales. The expansion of AI capabilities increases the scope of potential impact: errors or missteps in AI applications can extend far beyond small groups of individuals, potentially affecting entire populations or causing systemic failures across networks and sectors. This underscores the need for robust standards and training for the safe development, deployment and use of AI – and these must be comprehensible to non-specialists. As AI becomes more accessible to individuals and employees, the risk of improper use and integration of AI into organisational tasks and activities requires greater training and accountability across all levels of the organisation.

Governance and legal frameworks

Effective governance requires anticipation of AI’s broader impacts, necessitating structures that can adapt to change. As AI applications permeate various sectors, organisations must ensure their governance frameworks can handle evolving legal and market dynamics. The role of voluntary guidance in shaping responses to AI challenges is another crucial consideration for proactive governance planning.

AI is currently regulated in a non-specific way in Australia by many existing laws, including the Privacy Act 1988 (Cth), Competition and Consumer Act 2010 (Cth), Copyright Act 1968 (Cth), Online Safety Act 2021 (Cth), Corporations Act 2001 (Cth), administrative laws, anti-discrimination laws, sector-specific laws and general laws like contract and tort laws. Establishing a clear understanding of these laws and how they apply to AI is critical to ensure compliance.

The Australian Government is also proposing to introduce AI specific regulation and various AI related reforms to existing laws. 

Given the application of existing laws to AI, a solid foundation of compliance infrastructure is likely already in place within organisations. That infrastructure should be reviewed for its application to AI. Then, AI-specific issues and cross-framework integration can be handled by an AI-specific framework.

Supply chain risk assessment

The establishment of common industry standards and frameworks is essential for facilitating negotiations and ensuring equitable agreements between parties of varying sizes. Adopting global best practices to update existing frameworks with AI-specific elements is crucial. For example, traditional supplier risk management evaluates vendors based on financial stability and compliance with general business regulations. However, AI-specific supplier risk management expands these evaluations to address unique AI challenges such as ethical data usage, algorithmic transparency and bias mitigation. It involves auditing AI development processes, scrutinising the data sets used for training algorithms, and continuously monitoring AI outputs to ensure they adhere to ethical standards and regulatory requirements. This comprehensive approach ensures that third-party vendors meet the stringent standards necessary for the safe and ethical deployment of AI technologies, aligning with both existing regulations and emerging AI-specific standards.

EXPERT TIP:
Establish an understanding of existing laws and how they apply to AI. Review existing compliance frameworks to make sure they work with AI. Also, implement an AI policy and compliance framework that deals with AI-specific issues and integrates with existing frameworks.

Complexities in AI procurement and contractual issues

The procurement of AI technologies presents unique challenges that differ from those associated with more traditional rule-based technologies. This requires rethinking some traditional approaches to procurement, such as how to develop requirements specifications (favouring an outcome-based approach), iterative development, testing processes, and addressing data provenance and management. Procurement contracts should be written to handle these new approaches and also address AI-specific risks, for example, the risk of copyright or privacy infringement associated with the use or creation of data using generative AI.

Contracts for the deployment of AI should explicitly incorporate ethical considerations and values to proactively address risks such as bias, security and transparency. This ensures that accountability measures are embedded throughout the AI lifecycle, from development to deployment and operation, enhancing both compliance and ethical integrity in AI applications.

Addressing the challenge of non-specialist understanding

It is crucial that AI risks are understandable to a broad audience. Simplifying complex technical details without losing the necessary depth requires careful communication strategies that enhance transparency and facilitate broader engagement.

Balancing AI opportunities with risk mitigation

AI offers transformative opportunities for businesses and society, though it comes with inherent risks that necessitate strategic management. By embracing AI opportunities strategically and managing associated risks diligently, organisations can not only improve their operational efficiencies but also drive innovation responsibly.

Balancing these aspects requires continuous assessment, adaptive strategies and an inclusive approach to AI integration, ensuring AI deployments enhance capabilities while safeguarding against potential risks.

Strategic adoption and deployment

The potential of AI to catalyse significant business growth is increasingly acknowledged across industries. Adopting AI strategically, with a balanced focus on both its risks and opportunities, is critical in cultivating a culture that prioritises responsible exploration and innovation. Tools like Microsoft Copilot, for instance, are being integrated into business workflows to enhance productivity and decision-making, showing how AI can be incorporated into daily operations to drive efficiency and competitive advantage when its integration is strategic.

EXPERT TIP:
Embed AI-related legal, ethical and commercial considerations and practices by design into AI procurement and contracts.

To ensure this integration is both effective and secure, organisations must implement a proactive risk management strategy. This strategy extends beyond addressing immediate risks, such as data breaches or ethical concerns, to include the anticipation of long-term challenges like the impact of AI on business processes and compliance requirements. Effective risk management requires continuous evaluation of AI systems for vulnerabilities, enabling organisations to adapt to new threats and maintain control over AI operations. By aligning risk management with strategic deployment, organisations can safely and effectively harness AI’s capabilities, leading to progressive advancements rather than stagnation.

Adopting a holistic approach to AI deployment involves forming multi-functional teams to comprehensively scrutinise AI applications. Developing guidelines for responsible AI use, especially in customer-facing and internal business operations, is vital for compliance without compromising operational integrity.

The strategic appointment of people with diverse viewpoints, including sceptics, for AI initiatives can enrich governance processes, ensuring rigorous scrutiny and a balanced perspective on AI’s potential and pitfalls.

By integrating strategic deployment, risk management and comprehensive governance, organisations can achieve a robust framework that not only supports innovative AI applications but also ensures they are ethically sound and compliant with regulatory standards. This integrated approach ensures that AI deployments enhance capabilities while safeguarding against potential risks, promoting a responsible and sustainable advancement in AI technology.

“If you approach something with a ‘what can we do, how do we do this responsibly’ mindset, it changes the dynamic.” – Kenneth Weldin

Expanding AI opportunities across diverse sectors

AI is playing a transformative role across various sectors, including the not-for-profit sector, where its impact is particularly notable in enhancing operational efficiency even in resource-constrained environments. In these settings, AI excels in data analysis, streamlining governance processes and facilitating nuanced policy development. For instance, AI tools can automate the analysis of large volumes of donor data to identify trends and insights that drive more targeted and effective fundraising strategies. AI can be employed to monitor and report on governance practices, ensuring compliance and transparency to strengthen donor trust and engagement. These capabilities not only revolutionise how services are provided and how donor relationships are managed but also enhance overall organisational efficiency, making AI a valuable asset in extending the reach and impact of not-for-profits.

EXPERT TIP:
Implement a continuous evaluation model to detect AI system vulnerabilities and threats to maintain effective control over AI operations.
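
As one illustration of what such a continuous evaluation loop might look like, the sketch below compares a deployed model’s recent output scores against a baseline established during testing and raises an alert when drift exceeds a tolerance. The `fetch_recent_outputs` telemetry stand-in, the baseline values and the tolerance are all illustrative assumptions.

```python
# Minimal sketch of a continuous-evaluation check for a deployed AI system.
# Illustrative assumptions: `fetch_recent_outputs` stands in for your
# telemetry source, and the baseline and tolerance are placeholder settings.
import random
import statistics

def fetch_recent_outputs(n: int = 200) -> list[float]:
    """Stand-in for pulling the model's recent prediction scores."""
    return [random.gauss(0.55, 0.1) for _ in range(n)]

def drift_alert(baseline_mean: float, baseline_stdev: float,
                recent: list[float], tolerance: float = 2.0) -> bool:
    """Flag drift when the recent mean moves more than `tolerance`
    baseline standard deviations away from the baseline mean."""
    recent_mean = statistics.mean(recent)
    return abs(recent_mean - baseline_mean) > tolerance * baseline_stdev

# Baseline established during pre-deployment testing (illustrative values).
BASELINE_MEAN, BASELINE_STDEV = 0.50, 0.02

recent_scores = fetch_recent_outputs()
if drift_alert(BASELINE_MEAN, BASELINE_STDEV, recent_scores):
    print("ALERT: model outputs have drifted; trigger review and re-validation.")
else:
    print("Model outputs within expected range.")
```

A production version would typically run on a schedule, track several metrics (accuracy, bias indicators, input distributions) and feed alerts into the organisation’s incident and review processes.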

For any assistance with addressing the AI governance needs of your organisation, do not hesitate to contact your local PKF Audit and Governance expert.

