Hello to all CTF members—
Here at the Foundation, we spend considerable time thinking about the evolution of professional services. One of our most important responsibilities is staying attuned to emerging trends so that we can both deliver timely, relevant services to our members today and adapt those services to support your future success. The disruptive effect of technology is central to this effort, and interesting issues continue to unfold.
Earlier this year, the Corporate Management Tax Conference was held, with tax and technology as its central theme. All of the sessions were interesting, but among the most provocative was a panel discussion on artificial intelligence (AI) and, in particular, the question of what principles and safeguards should guide the development and application of these tools by practitioners and by the judiciary. A related question was whether existing ethical rules and codes of professional conduct provide sufficient guidance. It was clear from the discussion that, although the path forward remains somewhat uncertain, robust responses are needed to avoid the potential pitfalls and unintended consequences of these powerful applications and to ensure that professional judgment and oversight retain their rightful place.
Events have continued to unfold since then, in Canada and elsewhere. Tax law is an obvious candidate for these technologies, given its pervasive nature, constant state of change, and inherent complexity. I recently came across another example of innovative technology: the Artificial Intelligence Tax Treaty Assistant, a predictive algorithm designed by Błażej Kuźniacki, a Polish academic and lawyer. On the basis of a series of questions, the tool estimates the likelihood of treaty abuse under the principal purpose test (PPT) in the multilateral instrument in respect of international arrangements or transactions. Opportunities for innovation are everywhere!
Some have focused on restraining the use of these new technologies; others have embraced the potential they offer to analyze difficult issues more effectively. A good example of the former approach is a recent change to the law in France that makes it illegal to use the names of judges to “assess, analyze, compare or predict” how decisions are made. The measure targets technology companies that use analytics of publicly available legal data to compare judges and to predict the likely outcome of cases. The rationale for the amendment was concern that such analytics could undermine public confidence in the integrity of the judicial system.
It is not clear that banning judicial analytics will achieve that objective. A more appropriate response may be to combine a cautious adoption of the inevitable with a call for greater transparency and understanding of the algorithms (particularly any built-in biases) that underpin these analytics. Such an approach may offer the opportunity to improve, not erode, public confidence. Two recent high-profile examples in the United States involving immigration law and differentiated access to bail illustrate the power of legal analytics to expose systemic problems and to open avenues for remediation.
This balanced approach seems to be the one favoured in Canada. The federal government is actively exploring the use of AI in government programs and services and has developed a set of guiding principles (available online), with the goal of ensuring effective and ethical use of AI. Transparency, training, and impact measurement are specifically incorporated into these principles. Although the relevant web page includes a statement about AI being governed by “clear values, ethics, and laws,” further details on that front remain outstanding. Last fall, as part of this exploratory effort, Justice Canada initiated a pilot project covering immigration, pension, and tax law. Many of us await the outcome with great interest, particularly because adoption of these technologies in the private sector is growing rapidly.
Returning to the issue of professional responsibility, I am encouraged by the significant time now devoted to ethical issues by those working in the area of AI. The issues most relevant to the use of these new tools—including transparency, oversight, the role of human intervention in the decision-making process, and ultimate accountability—are all critical. I look forward to continued engagement in this important debate.
See you next month.
Heather L. Evans,
Executive Director and CEO