Howard Law

The Libs’ Online Harms Bill Can’t Be Rushed


Facebook whistleblower Frances Haugen probably doesn’t know anything about Canada’s forthcoming Online Harms Bill. But her timing is dead on.


While the American public reacts wide-eyed to Haugen’s revelations about Facebook CEO Mark Zuckerberg’s indifference to dangerous online content, in Canada the federal Liberals have promised to table legislation regulating online harms by Christmas time.


Just this summer, Heritage Minister Steven Guilbeault cut the ribbon on the pre-legislation consultation phase. The government’s Discussion Paper targets five kinds of harmful online content to be monitored and potentially banned from the major social media platforms Facebook, TikTok, Instagram and Twitter (it exempts e-mails and messaging). The five harms are sexual exploitation of children, terrorism, incitement to violence, hate speech, and non-consensual distribution of intimate images. A special federal regulator, composed of a Digital Safety Commissioner and an appeal tribunal, will be in charge.


At the core of the legislation, the government would impose a legal obligation on the big social media platforms to exercise “reasonable care” in rooting out and de-platforming harmful content. If they don’t make those reasonable efforts, they could be fined. Uploaders whose content is taken down can appeal to the Tribunal.


Haugen’s evidence reveals Facebook as a textbook example of a social media platform purposely failing to exercise reasonable care, ignoring its own Civic Integrity Unit’s warnings in the name of profit.


But aside from exposing Facebook’s greed, Haugen’s revelations highlight a practical problem for any government: how do you police online platforms hosting 5 billion posts every day, of which (at least in Facebook’s case) approximately 5 million are ugly enough to be flagged by its AI-driven monitoring program and 15,000 moderators, with about 10% false positives according to Zuckerberg? Even when Zuckerberg finally decided in favour of the public good over audience engagement, clamping down on anti-vaxxer disinformation, Facebook struggled to clean up its platform.


A handful of public reactions to Heritage’s Discussion Paper (in an Orwellian twist, the Department is keeping submissions confidential) have been voluntarily posted on Internet advocate Michael Geist’s blog. While some submissions are hostile to any policing of online content, even the critics who agree that online harms are serious enough to warrant regulation make important criticisms of the Heritage Paper. An insightful blog from University of Calgary law professors Emily Laidlaw and Darryl Carmichael asks whether the Discussion Paper contemplates regulation that is too narrow in scope yet too intrusive in its enforcement:


  • By limiting regulation to the five harms (sexual exploitation of children, terrorism, incitement to violence, hate speech, and non-consensual intimate images), the government omits bullying, harassment, and defamation. As well, the Paper contemplates a narrow definition of hate tied to the Criminal Code: vile abuse of women, LGBT+, and racialized Canadians may not be offside if the hate isn’t expressed in bigoted vocabulary. The Canadian public, say the authors, probably expects a much broader definition of what’s harmful enough to be regulated.


  • The proposed 24-hour deadline for platforms to take down harmful content might be too long for clearly illegal content like child porn or terrorism (they might have added revenge porn), but not nearly long enough for more ambiguous content that deserves an uploader’s explanation before a post is removed.


  • For the same reason, the obligation for platforms to report flagged content to law enforcement could capture too much ambiguous content and too many innocent Canadians.


Most importantly, Laidlaw and Carmichael challenge the wisdom of imposing a general duty of care on platforms to proactively monitor all their content, billions of posts daily worldwide (millions in Canada’s case), under the threat of fines if they don’t do a sufficient cleanse.


This will likely encourage the platforms to over-delete content. Laidlaw and Carmichael view this not only as a general threat of censorship, but also as a specific threat to the vulnerable communities the Paper seeks to protect from online harm: posts about sexual freedom or the dangers of white supremacy could themselves be flagged and removed.


The implication of the professors’ opposition to proactive monitoring is that AI systems should not be used even to perform a rough partition of the vile from the civilized. Instead of proactive monitoring, the authors propose a complaint-based system, or perhaps a more “proportional” monitoring regime designed by the platforms themselves.


It’s difficult to imagine what less intrusive methods the authors have in mind that could replace Facebook’s own system of AI monitoring supplemented by 15,000 moderators and still cope with the enormous volume, but perhaps Facebook will come to the table to discuss it: the company recently claimed significant improvements to its AI-driven content sieve.


The social harm and censorship issues raised by the Discussion Paper and the public submissions are compelling, even incendiary. Recalling the pyrotechnic politics of the Bill C-10 debate, in which the Conservatives and their free-speech allies distinguished themselves, the online harms legislation is not a Bill to rush to the starting line by Christmas for the sake of keeping an election promise.


The Liberals should slow down and get the design right before pushing such an important Bill into the mosh pit of Parliamentary politics.
