Need for Safe and Productive Development and Use of Artificial Intelligence—Inquiry

By: The Hon. Marty Klyne


Hon. Marty Klyne: Honourable senators, I rise to speak to Senator Moodie’s inquiry on a matter of urgency: the regulation of artificial intelligence in Canada.

At the outset, I note that the Oxford philosopher Nick Bostrom has argued that, given the potential power of advanced computers to run simulations, our reality is very likely a simulation. With that in mind, you might want to take my speech today with a grain of salt. But hold that thought until the finish, because maybe not.

Colleagues, we stand on the cusp of a profound transformation. Artificial intelligence, or AI, is a change on the scale of the Industrial Revolution, nuclear energy or the advent of language itself. AI is not just another emerging technology; it is a force multiplier — a general-purpose capability evolving faster than our institutions, laws and imagination.

Today, I’ll speak to you about four aspects of this subject. First, I will outline the far-reaching implications of AI, and of its potential regulation, for our democracy and society. Second, I’ll explore containment — technical, normative and legal — as a framework for governance. Third, I’ll survey global approaches, from the EU to the U.S. and China, before turning to Canada’s efforts through Bill C-27 and the artificial intelligence and data act. Finally, I’ll offer concrete proposals for strengthening our approach so that AI remains a tool that serves the public interest, not one that undermines it.

First, there are the social implications of AI and of its potential regulation. Some argue that regulation stifles innovation. In the context of AI, however, what we want is innovation accompanied by robust risk management. This is where regulation comes in. A strong regulatory framework can attract investment, foster innovation and bolster public confidence. That is why any regulation must address civil rights, including the rights to privacy, free speech and transparency, as well as concerns around safety and accountability.

The stakes could not be higher. AI is no longer on the margins. It’s reshaping how we work, learn, communicate and govern. In its most advanced forms, AI won’t merely assist us, but it will replace, optimize, predict and sometimes outpace us. From finance to health care to defence and justice, AI’s reach is expanding rapidly. Its growing power brings opportunities but also serious risks to our society.

Colleagues, this is not alarmism. The modern democratic state once promised us security, prosperity and democratic rights. AI now threatens to topple those pillars.

How about security? Imagine a world where autonomous drones and other weapons controlled by algorithms wage wars — machines out-thinking machines in conflicts we can’t even comprehend. Where is human accountability? Where does it stop? At the same time, without advanced AI, our country and our allies would be vulnerable to the advanced AI capabilities of potential adversaries.

As for our prosperity, AI-driven systems could come to dominate — and even manipulate — financial markets. A small number of firms may end up controlling the machines that influence your mortgage, pension or employment. Technology has already displaced many forms of physical labour, and now even the domains of human thought, creativity and artistic expression are at risk. In a society whose social contract has long balanced the rewards of free enterprise with the protections of a social safety net, what new challenges will AI pose to this fragile balance?

AI could also undermine social justice. A 2019 study in the journal Science found that AI in U.S. health care was far less likely to recommend care for Black patients than White patients with similar conditions — not due to malice but because it mirrored past discrimination baked into the data. This isn’t conscious bias; it’s structural harm, and it’s invisible until it becomes systemic.

As for democratic rights, algorithmically curated content and social media manipulation threaten to undermine their meaningful exercise. Generative AI systems, like ChatGPT, Claude, Gemini, Llama, Grok and Copilot, are embedded in our browsers, messengers and productivity tools. These tools don’t just generate text — they wield influence. They direct information flows, frame debates and set emotional tones. With minor tweaks, bad actors can weaponize them, cheaply flooding public spaces with misinformation, “deepfakes” and propaganda. In a world where AI may undermine the development of critical thinking skills in young people, this is all the more dangerous.

One danger is that AI could become malevolent, but the more pressing concern is that it is indifferent. It doesn’t care. It optimizes. It reflects the world as it is, not as it should be. Without deliberate choices about the values and limits we encode, AI will default to the logic of profit, power and prediction.

There is another danger — subtler still — that the machines will misunderstand. Humans are walking contradictions. We are nothing if not inconsistent. We want both adventure and safety, privacy and convenience. We lie. We change our minds. We change our moods. We exaggerate. We regret. How can machines understand us when we contain multitudes?

Yet, we are poised to hand over not just our tasks but our judgments — even our ethics — to machines. How will machines balance the rights of individuals with the greater good, a dilemma that humans continue to debate in many contexts?

This is not just a technical issue. It is a political challenge, a moral test, a crisis of good governance.

So how should we respond?

This brings me to my second point: containment, not as suppression, but stewardship. Not through fear, but through responsibility. “Containment” means democratic control over the tools we create. It rests on three principles.

First, technical containment, referring to what happens in a lab or an R&D facility. In the context of AI, this includes using air‑gapped systems, secure sandboxes, controlled simulations, emergency shut-off mechanisms and robust built-in safety and security protocols. These tools help ensure a system’s safety, integrity and freedom from compromise, and allow it to be shut down if necessary.

Second, normative containment, a culture among developers and institutions that values ethics over velocity. Power without reflection is dangerous.

Third, legal containment: regulation that crosses borders, with laws ensuring transparency, civil rights, liability, oversight, integrity, values, ethics and sustainability.

Let me be clear: regulation alone is not enough. A summit or a Silicon Valley press release is no substitute for binding rules. We must bring together government, industry, academia and civil society to co-create a Canadian vision of AI rooted in integrity, values and ethics, transparency and sustainability, not to mention fairness, inclusion and peace.

We must act proactively, before we’re forced to react: before the next discriminatory algorithm, job loss or erosion of trust.

To my third point, globally, governments are taking divergent approaches.

The European Union adopted a comprehensive Artificial Intelligence Act — a tiered, risk-based system with clear obligations for high-risk systems and enforceable transparency rules for generative AI.

The U.S. is taking a sectoral, market-led approach, encouraging cooperation, but with uneven results.

China is at once a leader in regulation and an outlier in practice. On paper, China looks proactive, regulating social media, banning crypto and publishing AI ethics guidelines. Its draft rules for large language models — LLMs — go further than the West’s. But in reality, civilian AI is tightly controlled while military and surveillance AI operate with few limits. AI there is not just a tool; it is state power. That is the future we must avoid.

To close, where then does Canada stand?

Our most significant step was Bill C-27, the digital charter implementation act, which included the artificial intelligence and data act, or AIDA. AIDA proposed risk-based oversight for high-impact systems, including generative models. But AIDA didn’t pass before Parliament dissolved. Canada now lacks binding legal safeguards, leaving a critical governance gap.

In response, the government introduced a voluntary code of conduct for generative AI developers. It encourages fairness, transparency and accountability, but it is non-binding and unenforceable. It is no substitute for legislation.

More recently, Canada appointed its first Minister of Artificial Intelligence and Digital Innovation, announced by Prime Minister Carney on May 13, 2025. The Honourable Evan Solomon’s appointment signals growing recognition of AI’s importance, but the minister’s mandate is still undefined. According to a May 17 CBC report, the Prime Minister’s Office referred inquiries to the Liberal Party’s platform, Canada Strong, where AI is mainly tied to economic growth and public service reform. These are worthy goals, but they leave many questions unanswered.

Contrast that with the EU’s AI Act, which requires developers to disclose copyrighted training data, prevent illegal content generation and comply with privacy rules at the level of the General Data Protection Regulation. Canada’s approach via AIDA and the voluntary code remains vague and toothless. The gap is especially clear in one critical area: privacy.

Privacy needs urgent attention. AI is transforming how data is collected, inferred and used. In Quebec, a 2022 ruling found that AI‑generated dropout predictions counted as personal information even when based on anonymized data. The Privacy Commissioner has called for mandatory privacy impact assessments for high-risk AI systems. This chamber should support that call.

We must ensure that AI serves people, not the other way around. That means enforceable standards for defining and regulating generative AI; mandatory privacy safeguards and impact assessments; public disclosure rules for high-risk applications; and independent oversight with enforcement power.

It also means broad and inclusive consultation with technologists, ethicists, labour leaders, Indigenous communities and Canadians.

Honourable senators, AI governance is a global challenge, but our response must be distinctly Canadian, rooted in dignity, equality, transparency and the rule of law.

AI is not just a tool. It changes how we make decisions, assign accountability and define human agency. We must meet this moment with clarity and resolve.

If we delay, we risk falling behind, letting digital systems evolve faster than our laws, leaving Canadians exposed to discrimination, misinformation and privacy violations.

Let us commit to making Canadian innovation a force not only for economic development but for justice and well-being.

In short, as science fiction becomes reality, let us remember the lesson of The Terminator franchise. In the words of John Connor, “There’s no fate but what we make for ourselves.”

Thank you, hiy kitatamîhin.