CPD: Artificial intelligence and ethics in the advice practice

AI can be implemented in an advice practice while still adhering to obligations under the Financial Planners and Advisers Code of Ethics.

The integration of tools driven by artificial intelligence (AI) into financial advice practices will transform the industry. This article, proudly sponsored by GSFM, explores the ethical implications of using AI in financial advice and offers some practical strategies to maintain professional and ethical standards and safeguard client trust in a rapidly evolving digital landscape.

AI is set to revolutionise just about every industry on the planet, from manufacturing and health through to tech and finance. Financial advice is no different. The integration of AI into advice practices will enable the automation of routine tasks and forever change how advisers analyse data, generate insights, provide personalised recommendations and manage client data.

The tools powered by AI promise enhanced efficiency and productivity, deeper analytical capabilities and client solutions tailored to each individual. Their benefits include freeing advisers from routine tasks to focus on client conversations, meeting client objectives and other business-building activities. However, alongside the advancements that can be delivered by AI come ethical considerations that advisers must address to maintain trust, transparency and professional integrity.

AI’s ability to process vast amounts of financial and behavioural data offers unprecedented opportunities for improving client outcomes. Yet questions remain about bias, accountability and data security, and the role of human oversight remains central to its responsible use. Financial advisers, bound by fiduciary duties and a strong code of ethics, face a unique challenge: leveraging AI’s potential while, at the same time, ensuring that its application aligns with the principles of fairness, honesty and client best interests.

To fully realise the benefits of AI, ASIC comments that it’s important for licensees and advisers to balance innovation and protection. As detailed in a report[1] published in October 2024, the integrity of our financial system – and the safety of the consumers who interact with it – relies on finding that right balance. ASIC also notes that it has been reminding licensees that existing obligations apply to their use of AI. Current licensee obligations, consumer protection laws and director duties are technology neutral, and licensees must ensure that their use of AI does not breach any of these provisions.

“As the race to maximise the benefits of AI intensifies, it is critical that safeguards match the sophistication of the technology and how it is deployed. All entities who use AI have a responsibility to do so safely and ethically.”
ASIC, Beware the gap: Governance arrangements in the face of AI innovation

ASIC has stated its intention to engage with and monitor licensees’ AI use on an ongoing basis, particularly as the regulator considers how licensees and practices embed the requirements of any future AI-specific regulatory obligations.

AI in the advice process

Firstly, a quick recap on artificial intelligence. AI is an all-encompassing term that refers to the simulation of human intelligence in machines that are programmed to think, learn and make decisions. It encompasses a wide range of technologies, from rule-based systems to advanced machine learning models, enabling machines to perform tasks that typically require human thought. Examples include problem-solving, pattern recognition, understanding language and synthesising information to make predictions.

At its core, AI uses algorithms and large datasets to process information, identify trends and generate insights. Examples of AI include virtual assistants like Apple’s Siri, chatbots used by banks, telcos and airlines, recommendation systems like those used by Spotify or Netflix, as well as autonomous systems such as self-driving cars. Its versatility and adaptability will make AI a transformative force in our personal and professional lives.

Generative AI focuses on creating or generating original content such as images, text, music, video, designs or recommendations. Unlike traditional AI techniques, which produce output programmed or drawn directly from existing data, generative AI techniques are designed to generate new output based on patterns, structures and examples learned from large datasets during the training process.

AI will impact the end-to-end advice process, especially the more time-consuming activities. Licensees and advice practices are introducing AI solutions to record and summarise client meetings, produce Statements of Advice, create and manage portfolios, handle back office administration and support compliance. As well as delivering significant cost savings, AI frees up advisers to focus on building long-term client relationships.

Generative AI – such as Microsoft’s Copilot, ChatGPT or Napkin – may help to provide more relatable content for presentations and advice documents, with plain English and visual representations of complex information.

An increasing number of licensees and practices recognise AI’s potential to progress and improve their processes. The opportunities for AI in advice practices are many and include:

Enhanced efficiency and productivity

  • Automation of routine tasks: AI can automate tasks such as data entry, note taking and compliance checks, which in turn frees up time for advisers to focus on client relationships, business development and strategic decision making.
  • Faster analysis: AI accelerates the processing and analysis of large datasets, enabling quicker insights and recommendations.
  • Servicing more clients: AI enables advisers to manage a larger client base efficiently, including those clients with lower net wealth or less complex needs.
  • Cost reduction: Automation and streamlined processes reduce operational costs, making financial advice more affordable and firms more profitable.
  • Marketing: AI can be used to create written and visual content, target prospective clients and optimise digital advertising.
  • Advice: AI can be harnessed to create financial plans and statements of advice, and to support the delivery of regular client reviews.
  • Investments: AI tools can support portfolio construction and management, spanning investment research, investment selection and ongoing portfolio monitoring.

Improved personalisation

  • Tailored solutions: AI can analyse individual client preferences, risk tolerance and financial goals to generate personalised advice.
  • Dynamic adjustments: Machine learning algorithms can adapt to market conditions and client circumstances in real-time, flagging where and when a review of a client’s plan might be required.
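
The “flag a review” idea above can be sketched as a simple tolerance rule. The sketch below is illustrative only — the thresholds, field names and data structure are hypothetical, not drawn from any particular advice platform:

```python
# Hypothetical sketch: flag a client review when asset allocations drift
# beyond a tolerance band around the agreed target mix.
# Thresholds and asset-class names are illustrative only.

TOLERANCE = 0.05  # flag when any asset class drifts more than 5 percentage points


def flag_review(target: dict, current: dict, tolerance: float = TOLERANCE) -> list:
    """Return the asset classes whose current weight has drifted
    further than `tolerance` from the client's target weight."""
    return [
        asset for asset, target_weight in target.items()
        if abs(current.get(asset, 0.0) - target_weight) > tolerance
    ]


target_mix = {"equities": 0.60, "fixed_income": 0.30, "cash": 0.10}
current_mix = {"equities": 0.68, "fixed_income": 0.24, "cash": 0.08}

drifted = flag_review(target_mix, current_mix)
if drifted:
    print(f"Review recommended - drifted asset classes: {drifted}")
```

In practice, a machine learning tool would weigh many more signals (market conditions, cash flows, changed client circumstances), but the output is the same kind of prompt: a flag for the adviser, not an automated decision.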

Behavioural insights

  • Investor behaviour: AI tools can assess client behaviour and spending patterns to offer proactive financial insights and strategies.

Enhanced risk management

  • Fraud detection: AI systems can identify unusual patterns and flag potential fraud or security concerns.
  • Portfolio optimisation: AI tools can help mitigate investment risk by balancing portfolios and forecasting market volatility.
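
To make the fraud-detection point concrete, the sketch below flags transaction amounts that sit far outside a client’s historical pattern using a simple z-score threshold. This is a toy illustration under assumed data — production fraud systems use far richer features and models:

```python
# Hypothetical sketch: flag transactions whose amounts deviate far from a
# client's historical pattern, using a simple z-score threshold.
# Real fraud systems use many more signals; this only shows the idea.
from statistics import mean, stdev


def flag_unusual(history: list, new_amounts: list, z_threshold: float = 3.0) -> list:
    """Return new transaction amounts more than `z_threshold`
    standard deviations from the historical mean."""
    mu, sigma = mean(history), stdev(history)
    return [a for a in new_amounts if sigma and abs(a - mu) / sigma > z_threshold]


history = [120.0, 95.0, 110.0, 130.0, 105.0, 90.0, 115.0]
flagged = flag_unusual(history, [118.0, 5000.0])
print(flagged)  # a large debit against a ~$110 average is flagged; $118 is not
```

As with the review flag above, the appropriate use of such output is to alert a human for investigation rather than to act on clients’ accounts automatically.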

Regulatory compliance

  • Automated monitoring: AI tools can ensure compliance with regulations by identifying discrepancies and generating necessary reports.
  • Audit trails: AI systems provide detailed records of decisions, aiding transparency and accountability.

Challenges integrating AI into the advice process

As with all new technologies, the advancements promised by the integration of AI come with challenges. As noted by ASIC, licensees and advisers using AI platforms could experience compliance issues and face regulatory consequences in the event of a breach.

While the use of AI in an advice practice can provide significant benefits to both the business and its clients, it can magnify existing risks and create new risks for both business and clients. As highlighted in the above quote from ASIC’s 2024 report, all entities who use AI have a responsibility to do so safely and ethically. When implementing AI solutions, licensees and advisers must ensure the outcome will uphold the twelve standards of the Financial Planners and Advisers Code of Ethics 2019 (Code) (figure one).

Generative AI models have certain characteristics that make them particularly prone to risks of harm. For example, they tend to be trained on large amounts of data, and incomplete or unrepresentative training data means a model can produce biased or inappropriate results, or generate outputs that are false or inaccurate. Generative models can also use complex techniques that are not easily interpretable or explainable, and can be subject to cyber-attacks.

Some of the challenges inherent in AI – and the Code’s standards they could potentially breach – are as follows.

Bias and fairness

AI systems, particularly generative AI, use a huge amount of data. Consequently, they can inherit or amplify biases present in the data used to ‘train’ the AI or create its algorithms. Algorithmic bias describes systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one category over another in ways different from the intended function of the algorithm[2].

Because AI platforms will increasingly be used for roles traditionally undertaken by humans – such as assessing client risk profiles, recommending financial products or considering a client’s long-term requirements – the question of best interests comes into play.

Bias in AI can lead to unfair outcomes, such as:

  • Marginalisation of specific groups; for example, where data predominantly represents male clients, the AI might overlook or undervalue the needs of female clients.
  • Privileging of other groups; for example, a greater focus on servicing clients in the accumulation rather than decumulation phase.
  • Disproportionate or negative impacts on specific clients or groups of clients, such as a specific cohort being denied access to insurance or having to pay a higher price for it.
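
One basic check a practice could run – illustrative only, and far from a complete fairness audit – is to compare an AI tool’s positive-outcome rates across client groups. The sketch below uses invented data and hypothetical field names to show a simple demographic-parity comparison:

```python
# Hypothetical sketch: compare an AI tool's positive-outcome rate across
# client groups as a basic demographic-parity check. Real bias testing
# needs larger samples, multiple metrics and statistical significance tests.


def outcome_rates(records: list, group_key: str, outcome_key: str) -> dict:
    """Positive-outcome rate per group, e.g. share of clients recommended a product."""
    totals, positives = {}, {}
    for r in records:
        g = r[group_key]
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + (1 if r[outcome_key] else 0)
    return {g: positives[g] / totals[g] for g in totals}


# Invented output from a recommendation tool (illustrative data only)
records = [
    {"gender": "female", "recommended": True},
    {"gender": "female", "recommended": False},
    {"gender": "female", "recommended": False},
    {"gender": "male", "recommended": True},
    {"gender": "male", "recommended": True},
    {"gender": "male", "recommended": False},
]

rates = outcome_rates(records, "gender", "recommended")
disparity = max(rates.values()) - min(rates.values())
print(rates, f"disparity={disparity:.2f}")  # a large gap warrants investigation
```

A persistent gap between groups does not prove the tool is biased, but it is exactly the kind of signal that should trigger the deeper testing ASIC found most licensees were not performing.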

ASIC’s research found that few licensees proactively identified risks of algorithmic bias or indicated that they actively tested for bias in the AI deployed in their business. However, licensees and advisers must ensure that their AI systems don’t unfairly discriminate against particular clients or client groups. Failure to do so could breach the following standards:

Informed consent

AI systems can make decisions that impact individuals without their direct involvement or consent, which can remove human agency. ASIC describes it as “a crucial compliance aspect that should not be ignored.” Further, ASIC is advising licensees and advisers that if AI is being used in an advice practice, explicit client consent is required. Clients need to understand how their data will be used and stored, and they should affirmatively acknowledge and consent to this.

A failure to obtain informed consent for the use of AI could breach the following standards:

Transparency and accountability

Many AI systems, particularly those based on deep learning, can make decisions in ways that are difficult to understand or explain to clients. This raises concerns about:

  • Accountability – who is responsible for information generated or decisions made by AI? Clients need to understand this.
  • Trust – clients may be uncomfortable with AI systems they don’t understand and that their adviser has trouble explaining.
  • Transparency – clients must be informed when they are interacting with AI or AI-generated information, and when AI has been used to make decisions that impact them.
  • Incorrect information – AI models can provide information or advice that appears correct but contains factual errors and exposes clients to the risk of harm arising from relying upon misleading or false information.
  • Contestability – clients need to be provided with a process and the necessary information to contest the outcome of a decision facilitated by AI. Contestability is undermined if clients are unaware that AI is being used.

Failing to be transparent and accountable could breach the following standards:

Privacy

AI often relies on vast amounts of data, including personal information, to function effectively. Consequently, the use of AI can lead to data breaches and unauthorised access, the use of personal data without informed consent and the erosion of privacy.

The use of AI in financial advice practices presents significant challenges related to privacy and data security. Financial advisers handle sensitive client information which makes practices attractive targets for cyberattacks. Because AI systems generally require access to large datasets to provide accurate and personalised advice, the risk of data breaches or unauthorised access is increased, particularly where robust security measures are not in place. Additionally, compliance with data protection regulations can be complex, especially when AI systems process and store data across multiple jurisdictions.

Under the Notifiable Data Breaches (NDB) scheme, a business must notify affected individuals and the Office of the Australian Information Commissioner[3] when a data breach is likely to result in serious harm to an individual whose personal information is involved. A data breach occurs when your clients’ personal information is lost or subjected to unauthorised access or disclosure. ASIC advises that advisers using AI tools could face serious compliance issues if a breach occurs.

It is also important to understand and be transparent as to how AI algorithms handle client data; this is critical to maintain clients’ trust, as clients may fear misuse or exploitation of their information. Addressing these challenges requires a combination of advanced cybersecurity protocols, strict data governance policies and ongoing monitoring to safeguard both personal information (PI) and personally identifiable information (PII).

Although ASIC’s report found licensees generally had documented policies or procedures for managing risks relevant to those associated with privacy, security and data quality, most had not considered these risks through the lens of AI.

Inadequate management of client privacy could breach the following standards:

Regulatory obligations

ASIC observed instances where AI use cases could have potential implications for licensees’ compliance with existing conduct and consumer protection obligations. For example, customer segmentation by AI models in a marketing program could potentially identify prospective or existing clients not in a product’s target market and lead to breaches of the design and distribution obligations.

Failure to consider the impact on regulatory obligations is particularly a risk where decisions about AI models or use cases are made without input or oversight by risk and compliance functions. ASIC noted that the effectiveness of governance and risk management frameworks in relation to AI is a key factor in determining what risks a licensee’s AI use poses. The regulator also stated that licensees should regularly review and update their governance and risk management arrangements to ensure the arrangements do not fall behind their evolving AI use.

A failure to meet regulatory obligations could breach the following standards:

Strategies to ensure AI does not compromise ethical obligations for advisers

It is evident that the regulatory framework around AI in Australia is still evolving. The Australian government’s AI Ethics Principles outline a set of guidelines that aim to ensure AI systems are safe, fair and reliable (figure two).

In the absence of enforceable regulations specific to the use of AI in the financial advice industry, advisers and licensees should proactively align with these principles and the Code of Ethics. This will enable the delivery of positive client outcomes and avoid the scrutiny of the regulator. Licensees and advisers using AI tools are best placed to uphold the Code of Ethics through adherence to best practice, which includes:

Understand the AI tools and how they meet your needs

  • Develop a strategy: It’s important not to implement AI tools just because you can; you should identify clear business objectives before you do your due diligence on the available tools.
  • Due diligence: Fully understand how the AI tool works. This includes its inputs, outputs and decision-making processes. What data does it use and how is it protected?
  • Limitations awareness: Recognise the limitations of each AI tool, including potential biases in data or algorithms. Have strategies in place to monitor and mitigate any limitations and the impact of those on both the business and clients.
  • Audit AI providers: Ensure your AI tools meet global cybersecurity standards and review the providers’ policies on data storage and encryption protocols to verify their claims.
  • Regular evaluation: Periodically review the AI tool’s performance to ensure it aligns with ethical and professional standards and ensures positive client outcomes.

Client-centric approach

  • Prioritise client interests: Ensure the recommendations made by AI serve the best interests of the client, not the financial adviser, licensee or AI provider.
  • Customisation: Avoid relying solely on generic AI outputs; tailor financial plans and financial product recommendations to the unique circumstances, needs and objectives of each client.
  • Transparency: Clearly explain how AI tools are used in the planning process and obtain client consent before applying them.

Maintain human oversight

  • Final responsibility: Advisers must always remain accountable for financial advice provided, even when it is informed by AI tools.
  • Validation of outputs: Maintain a balance between AI automation and human judgment to verify and validate any recommendations made by AI tools before presenting them to clients.
  • Ethical judgment: Use professional judgment to override AI recommendations when they conflict with ethical or fiduciary responsibilities.

Foster transparency and explainability

  • Disclose AI usage: Inform clients when and where AI tools are used in the planning process and explain the role of each.
  • Simplify explanations: Utilise AI to translate complex financial ideas and outputs into clear, understandable language for clients. Consider using AI tools to create visual explanations that may work better to educate some clients.
  • Audit trail: Maintain detailed records of how AI tools were used in formulating financial advice.

Ensure data privacy and security

  • Client consent: Obtain explicit consent before using client data in AI tools; clients need to understand how their data will be used and stored. They should affirmatively acknowledge and consent to this.
  • Data governance: Implement robust data security protocols and ensure compliance with privacy regulations.
  • Minimal data use: Use only the data necessary for the AI tool to function effectively and ethically.
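
The minimal-data principle above can be illustrated with a simple allow-list filter applied before any client record is passed to an external AI tool. The field names below are hypothetical; a real practice would map this to its own data model and privacy obligations:

```python
# Hypothetical sketch: strip direct identifiers from a client record and keep
# only the fields an external AI tool actually needs (an allow-list approach).
# Field names are illustrative only.

FIELDS_NEEDED = {"age_band", "risk_profile", "investment_horizon_years"}


def minimise(record: dict, fields_needed: set = FIELDS_NEEDED) -> dict:
    """Return a copy of the record containing only the approved fields."""
    return {k: v for k, v in record.items() if k in fields_needed}


client = {
    "name": "Jane Citizen",    # direct identifier - never sent
    "tfn": "123 456 789",      # direct identifier - never sent
    "age_band": "45-54",
    "risk_profile": "balanced",
    "investment_horizon_years": 15,
}

print(minimise(client))
```

An allow-list (send only approved fields) is safer than a deny-list (strip known identifiers), because a new or unexpected field defaults to being withheld rather than disclosed.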

Avoid conflicts of interest

  • AI vendor independence: Ensure that any AI tool used in the practice is not biased toward promoting specific products or services due to financial arrangements between the vendor and licensee.
  • Unbiased recommendations: Regularly audit the AI tool to ensure its outputs are impartial and unbiased.

Stay informed about ethical standards

  • Ongoing education: Stay up to date with developments in AI ethics, data governance and financial regulations. It’s likely that more explicit regulatory requirements will be introduced as AI becomes more embedded in the advice process.
  • Adopt industry best practices: Follow ethical guidelines and standards from professional bodies such as the FAAA or CFP Board.

Monitor and mitigate bias

  • Diverse data sources: Use AI tools trained on diverse, representative datasets to reduce bias.
  • Regular testing: Periodically test the AI tool for biases or inaccuracies and address them promptly.
  • Inclusive practices: Ensure recommendations account for the diverse financial circumstances and backgrounds of clients.

Plan for errors and discrepancies

  • Error handling: Have a process in place to address errors in AI outputs and mitigate their impact on clients.
  • Communicate with clients: Be proactive in informing clients if an AI tool’s recommendation is found to be flawed.

Regulatory compliance

  • Adhere to laws: Ensure the AI tools comply with all relevant financial regulations and ethical standards.
  • Regular audits: Conduct periodic compliance audits of the AI tools to ensure accuracy and reliability, and confirm they meet legal and ethical requirements.
  • Independent reviews: Engage third-party experts to evaluate the AI tools for alignment with ethical and regulatory standards.

The potential of AI is immense. However, without due diligence and safeguards, the risks can also be considerable. By combining AI’s capabilities with ethical diligence and professional judgment, financial advisers can enhance their practice while maintaining the trust, accountability and integrity important for an ethical approach to business.

Advisers who proactively align their practices with the highest ethical and compliance standards will not only protect themselves but also build a stronger, more trusted relationship with their clients – something AI, for all its power, cannot replicate.

As artificial intelligence continues to reshape the financial advice industry, advisers must strike a delicate balance between leveraging innovation and adhering to their ethical obligations. While AI tools can enhance decision-making, improve efficiency and deliver personalised insights, their use comes with inherent risks that must be carefully managed.

By prioritising transparency, accountability and fairness, financial advisers can integrate AI into their practices in ways that uphold their professional and ethical standards and safeguard client trust. This involves maintaining human oversight, ensuring data privacy and remaining vigilant against biases or conflicts of interest embedded in AI systems.

Ultimately, the ethical use of AI in financial advice is not just about compliance with regulations but also about fostering long-term relationships built on integrity and trust. By embracing a client-first approach and staying informed about the evolving AI landscape, financial advisers can harness the power of AI responsibly, ensuring its benefits serve both their practice and their clients effectively.

 

Take the FAAA accredited quiz to earn 0.75 CPD hour:

CPD Quiz

The following CPD quiz is accredited by the FAAA at 0.75 hour.

Legislated CPD Area: Professionalism & Ethics (0.75 hrs)

ASIC Knowledge Requirements: Ethics (0.75 hrs)

 

 

———-

Notes:
[1] Beware the gap: Governance arrangements in the face of AI innovation, ASIC, October 2024
[2] Ibid.
[3] https://www.oaic.gov.au/privacy/notifiable-data-breaches
