The integration of artificial intelligence has revolutionized various industries, offering efficiency, accuracy and convenience. In estate planning and family offices, AI technologies likewise promise greater efficiency and precision. However, AI comes with unique risks and challenges.
Let’s consider the risks associated with using AI in estate planning and family offices. We’ll focus specifically on concerns surrounding privacy, confidentiality and fiduciary responsibility.
Using AI in the Family Office Context
Why should practitioners use AI? AI and large language models are advanced technologies capable of understanding and generating human-like text. They operate by processing vast amounts of data to identify patterns and make predictions. In the family office context, AI can streamline processes and enhance decision-making. On the investment management side, AI can identify patterns in financial records, asset values and tax implications through data analysis, facilitating better-informed asset allocation and distribution strategies. Its predictive analytics capabilities enable AI to forecast market trends and potential risks, helping family offices optimize investment strategies for long-term wealth preservation and succession planning.
AI may also help prepare documents relating to estate planning. If given a set of information, AI can function as a quasi-search engine or prepare summaries of documents. It can also draft communications synthesizing complex topics. Overall, AI offers the potential to enhance efficiency, accuracy and foresight in estate planning and family office services. That being said, concerns about its use remain.
Privacy and Confidentiality
Family offices deal with highly sensitive information, including financial data, investment strategies, family dynamics and personal preferences. Sensitive client information can include intimate insight into one’s estate plan (for example, inconsistent treatment of various family members) or the succession plans and trade secrets of a family business. Using AI to manage and process this information introduces a new dimension of risk to privacy and confidentiality.
AI systems, by their nature, require vast amounts of data to function effectively and train their models. In a public AI model, information given to the model may be used to generate responses to other users. For example, if an estate plan for John Smith, founder of ABC Corporation, is uploaded to an AI tool by a family office employee asked to summarize his 110-page trust instrument, a subsequent user who asks about the future of ABC Corporation may be told that the company will be sold after John Smith’s death.
Inadequate data anonymization practices also exacerbate the privacy risks associated with AI. Even anonymized data can often be de-anonymized, for example by linking quasi-identifiers such as birth dates and ZIP codes to publicly available records, potentially exposing individuals to identity theft, extortion or other malicious activities. Thus, the indiscriminate collection and use of personal data by AI systems without robust anonymization protocols pose serious threats to client confidentiality.
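As a concrete illustration of this linkage risk, the sketch below joins a hypothetical “anonymized” family office extract against equally hypothetical public records that share the same quasi-identifiers. Every name, field and figure is invented for illustration; the point is simply that stripping names from a dataset isn’t the same as anonymizing it.

```python
# Minimal sketch of a "linkage" re-identification attack on hypothetical data.
# An "anonymized" extract (names removed, quasi-identifiers retained) is joined
# against a public record that shares those quasi-identifiers, re-attaching
# identities to supposedly anonymous rows.
import pandas as pd

# "Anonymized" family office extract: names stripped, sensitive field kept.
anonymized = pd.DataFrame({
    "zip_code":       ["10021", "94027", "60611"],
    "birth_date":     ["1948-03-02", "1961-07-19", "1975-11-30"],
    "net_worth_band": ["$50M+", "$10-50M", "$5-10M"],  # sensitive attribute
})

# Publicly available records (e.g., a voter roll) with overlapping fields.
public_records = pd.DataFrame({
    "name":       ["John Smith", "Jane Doe", "Alex Lee"],
    "zip_code":   ["10021", "94027", "60611"],
    "birth_date": ["1948-03-02", "1961-07-19", "1975-11-30"],
})

# Joining on the quasi-identifiers re-identifies every "anonymous" row.
reidentified = anonymized.merge(public_records, on=["zip_code", "birth_date"])
print(reidentified[["name", "net_worth_band"]])
```

Robust anonymization therefore means more than deleting names; it requires suppressing or coarsening the quasi-identifiers that make this kind of join possible.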
Even if a client’s data is sufficiently anonymized, data used by AI is often stored in cloud-based systems, which aren’t impervious to breaches. Cybersecurity threats, such as hacking and data theft, pose a significant risk to clients’ privacy. The centralized storage of data in AI platforms increases the likelihood of large-scale data breaches. A breach could expose sensitive information, causing reputational damage and potential legal repercussions.
The best practice for family offices looking to use AI is to ensure that the AI tool under consideration has been vetted for security and confidentiality. As the AI landscape continues to evolve, family offices exploring AI should work with trusted providers with reliable privacy policies for their AI models.
Fiduciary Responsibility
Fiduciary responsibility is a cornerstone of estate planning and family offices. Professionals in these fields are obligated to act in the best interests of their clients (or beneficiaries) and to do so with care, diligence and loyalty, duties that could be compromised by the use of AI. AI systems are designed to make decisions based on patterns and correlations in data. However, they currently lack the human ability to understand context, exercise judgment and consider ethical implications. Fundamentally speaking, they lack empathy. This limitation could lead to decisions that, while ostensibly consistent with the data, aren’t in the best interests of the client (or beneficiaries).
The reliance on AI-driven algorithms for decision-making may also compromise the fiduciary duty of care. While AI systems excel at processing vast datasets and identifying patterns, they aren’t immune to errors or biases inherent in the data they analyze. Additionally, AI is designed to please the user and has infamously made up (or “hallucinated”) case law when asked legal research questions. In the financial context, inaccurate or biased algorithms could lead to suboptimal recommendations or decisions, potentially undermining the fiduciary’s obligation to manage assets prudently. For instance, an AI system might recommend a particular investment based on historical data but fail to account for factors a human advisor would weigh, such as the client’s risk tolerance, ethical preferences or long-term goals.
In addition, AI is prone to errors resulting from inaccuracy, oversimplification and lack of contextual understanding. AI is often recommended for summarizing difficult concepts and drafting client communications. Giving AI a classic summary question, such as “explain the rule against perpetuities in a simple manner,” demonstrates these issues. When given that prompt, ChatGPT summarized the time when perpetuity periods usually expire as “around 21 years after the person who set up the arrangement has died.” As estate planners know, that’s a vast oversimplification to the point of being inaccurate in most circumstances. Correcting ChatGPT generated an improved explanation: “within a reasonable amount of time after certain people who were alive when the arrangement was made have passed away.” However, this summary would still be inaccurate in certain contexts. This exchange highlights the limitations of AI and the importance of human review.
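For readers who work with these tools programmatically, the sketch below shows the prompt, review and correct loop just described, using the OpenAI Python client. The model name and the wording of the prompts are assumptions made for illustration; the essential step is that a practitioner reviews each draft before anything reaches a client.

```python
# Minimal sketch of a prompt-review-correct loop, assuming the OpenAI Python
# client and an illustrative model name. Outputs are drafts for human review,
# not client-ready advice.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

messages = [{"role": "user",
             "content": "Explain the rule against perpetuities in a simple manner."}]
first = client.chat.completions.create(model="gpt-4o", messages=messages)
draft = first.choices[0].message.content
print(draft)  # the practitioner reviews this draft and spots the oversimplification

# The human correction is fed back as a follow-up turn.
messages += [
    {"role": "assistant", "content": draft},
    {"role": "user",
     "content": "That overstates it; the period runs from lives in being at the "
                "creation of the interest, plus 21 years. Please restate."},
]
second = client.chat.completions.create(model="gpt-4o", messages=messages)
print(second.choices[0].message.content)  # still subject to human review before use
```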
Given AI’s propensity to err, delegating decision-making authority to AI systems presumably wouldn’t absolve the fiduciary of legal responsibility for resulting errors or misconduct. As reliance on AI expands throughout professional life, fiduciaries may become more likely to use AI to perform their duties. An unchecked reliance on AI could lead to errors for which clients and beneficiaries would seek to hold the fiduciary liable.
Lastly, the nature of AI’s algorithms can undermine fiduciary transparency and disclosure. Clients entrust fiduciaries with their financial affairs with the expectation of full transparency and informed decision-making. However, AI systems often operate as “black boxes,” meaning their decision-making processes lack transparency. Unlike traditional software systems where the logic is transparent and auditable, AI operates through complex algorithms that are often proprietary and inscrutable. The black-box nature of AI algorithms obscures the rationale behind recommendations or decisions, making it difficult to assess their validity or challenge their outcomes. This lack of transparency could undermine the fiduciary’s duty to communicate openly and honestly with clients or beneficiaries, eroding trust and confidence in the fiduciary relationship.
Mitigate Risks
While AI offers many potential benefits, its use in estate planning and family offices isn’t without risk. Privacy and confidentiality concerns, coupled with the impact on fiduciary responsibility, highlight the need for careful consideration and regulation.
It’s crucial that professionals in these fields understand these risks and take steps to mitigate them. Those steps could include implementing robust cybersecurity measures, counteracting the lack of transparency in AI decision-making processes and, above all, maintaining a human element in any decision that requires the exercise of judgment.