Screening for ideals: Social credit is alive and well in Canada

Signalling the emergence of a dystopian panopticon, predictive tech is ascending the most profitable rungs in the civic ladder


“Subway,” Fortunato Depero, 1930

You will be perfect, you will be machine-equal. The path to one-hundred-percent happiness is clear.
—Yevgenii Zamyatin, We

On bad behaviour

In a bloated market for the collection and sale of personal data, predictive tech has been quickly ascending the most profitable rungs in the civic ladder. It’s the emergence of a dystopian panopticon. In addition to keeping populations under surveillance for possible criminal activity and extreme “social threats,” every aspect of daily life is subject to quantifiable measurement: what your consumer activity says about your character, how likely you are to be radicalized through your book club or knitting circle, whether those rising medical bills make you more likely to move out of your apartment, how likely you are to stay in that job where you are routinely harassed and what that means for your future employer’s profit margins. Somewhere, the breadcrumbs of data processed through daily transactions, banking statements, bills, employment history, and social media activity are writing a narrative about each of us—inferring a personality based on automated assessments. If we saw how this narrative characterized us, would we agree?

Over the years, China’s deeply integrated “social credit” system has attracted much scrutiny, raising the profile of privacy discussions and prompting debate on whether a similar social credit system would come to Canada—and, of course, being conveniently politicized in attacks on the country’s so-called communist government. The National Post disparaged China’s use of “increased computing power, artificial intelligence and other technology to track and control the Chinese public”; Rabble raised the alarm about centralized, state social control coming to Canada; CTV decried social credit blacklists and arbitrary court decisions.

In 2018, CSIS published the summary Rethinking Security: China and the Age of Strategic Rivalry, which states that the “function of social credit in the CCP’s management methodology is to automate ‘individual responsibility,’ a concept according to which each citizen upholds social stability and national security.” The social credit system and big data do not replace the government, but augment its authority over everyday social and economic activity, and introduce a punishing “social credit blacklisting system.”

Perhaps Canadians would like to deceive themselves into thinking that this is the exceptional, Frankensteinian creature of a pseudo-communist government. But despite the fear-mongering around the Chinese social credit system coming to invade Canada, we are already living with its more banal, made-in-Canada analogues—except instead of being referred to as “social credit scores,” they may be called something like a “risk score” or a “risk evaluation.” Silicon Valley aspirants have found a profitable niche in defining what exactly that “risk” may be, while claiming that their automated products will provide their clients with accurate profiles by inferring behaviour from otherwise generalized factors. And the companies that are collecting data to determine such scores are becoming increasingly integrated through information sharing partnerships.

Predictive technologies are routinely used in policing and have long raised concerns over racism, with algorithms built and trained on discriminatory data. The Saskatchewan Police Predictive Analytics Lab, for example, centralizes data collected by police, social services and social media. And as policing extends further to social media users and content, algorithmic monitoring and information collection in cyberspace continue to raise concerns over privacy rights.

As predictive software takes its natural course from military and policing applications to wider adoption at the consumer level—for ordinary tasks like renting an apartment or finding a job—it raises a few key issues around the generated risk scores. Using an algorithm to evaluate candidates for housing and employment, rather than person-to-person assessments, could have significant implications for accessibility, as those who are most vulnerable are subjected to an automated decision-making process that may be inherently biased against them—for which there is no meaningful form of legal recourse, at least for the moment.

“New Babel,” Fortunato Depero, 1930

Screening for “ideal” candidates

With a booming homegrown AI industry, Canadian app developers are making headway in the market for automated screening tools and predictive technologies. While such applications can’t yet be described as ubiquitous, two Canadian AI-based apps in particular, Naborly and Ideal, have attracted the attention of CIFAR’s AI Futures policy labs, as well as mainstream media. They have met with substantial commercial success through wide user bases in both Canada and the United States, increasing services for enterprise clients and raising crucial seed investment. This makes them worth examining critically, not only for their purported solutions, but for the ethical quandaries their technologies present as these companies grow.

Naborly, which has offices in Toronto and San Francisco, generates risk scores for rental applicants by evaluating their perceived willingness to pay rent, drawing on both private and publicly available data. Naborly announced its services for landlords in 2016 and quickly expanded its tenant screening services in Toronto. The app claims to identify “patterns of risk that could only be detected by our AI”—an algorithm previously known as SHIRLY.

On August 7, the company launched the Naborly Credit Report after what Naborly CEO Dylan Lenz referred to as a process of “soul searching,” which would complement the app’s own Naborly Tenant Insights. It proved difficult to obtain any details about this reporting system: a support worker on the company’s phone line directed Canadian Dimension to the support email, which in turn led to a response from Brenda Manea, of the marketing firm and “storytelling agency” BAM, who replied on behalf of Naborly: “this isn’t a good time for an interview on this topic.” They did not clarify the decision-making processes behind the new Naborly report, where data is stored, how Canadian and US data are identified, or even the algorithm’s new name.

But based on the information that can be gleaned from Naborly’s website, the primary data points for Naborly Tenant Insights are identity confirmation, employment and income verification, credit score and eviction history. The data points that follow are far less straightforward: the app claims to analyze an applicant’s income and employment stability with “rent to income” and “debt to income” ratios, “property suitability,” and how much disposable income an applicant has.

The app also considers consumer behaviour based on information furnished by Equifax, as well as “personal information that the individual has made available to the public,” “including but not limited to” social media content or other “information accessible via the internet.” The Naborly score considers international eviction and rental records, in addition to data on medical bills and whether an applicant has health insurance. Unverified information collected through Naborly will negatively impact a person’s score; rental history is one example, since it may be impossible to officially verify a previous landlord (and the same applies to any other information collected).
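Naborly does not disclose how these inputs are weighted, so any reconstruction is guesswork. What follows is a purely hypothetical sketch, in Python, of how a rule-based score might combine the kinds of data points described above: rent-to-income and debt-to-income ratios, credit score, eviction history, and a penalty for unverified information. Every field name, weight, and threshold is invented for illustration and bears no relation to Naborly’s proprietary model.

from dataclasses import dataclass

# Hypothetical sketch only: Naborly's actual model is proprietary and undisclosed.
# All fields and weights below are invented assumptions, not the real algorithm.
@dataclass
class Applicant:
    monthly_income: float
    monthly_rent: float
    monthly_debt_payments: float
    credit_score: int        # e.g. 300-900 on Canadian bureau scales
    prior_evictions: int
    unverified_fields: int   # e.g. rental history a previous landlord never confirmed

def risk_score(a: Applicant) -> float:
    """Return a 0-100 'risk' number from crude, hand-picked heuristics."""
    rent_to_income = a.monthly_rent / max(a.monthly_income, 1)
    debt_to_income = a.monthly_debt_payments / max(a.monthly_income, 1)
    score = 0.0
    score += min(rent_to_income, 1.0) * 40                 # heavier rent burden, higher risk
    score += min(debt_to_income, 1.0) * 20                 # existing debt load
    score += (900 - min(a.credit_score, 900)) / 900 * 20   # lower credit, higher risk
    score += min(a.prior_evictions, 3) * 5                 # each recorded eviction
    score += min(a.unverified_fields, 3) * 5               # unverifiable information is penalized
    return round(min(score, 100.0), 1)

applicant = Applicant(monthly_income=3200, monthly_rent=1500, monthly_debt_payments=400,
                      credit_score=640, prior_evictions=0, unverified_fields=1)
print(risk_score(applicant))  # 32.0: a single opaque number stands in for a person

Whatever the real weights may be, the output has the same character as this sketch: a single number, produced by thresholds an applicant never sees and cannot contest.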

Taking a prophetic stance on an app-driven future of renting, Lenz announced in 2016 that his company would “completely change the rental industry.”

It is the widespread adoption of privately owned, automated screening apps to make decisions on rental applications that should be raising concerns. The addition of further screening and barriers by a third party—not to mention the invasive level of data collection—carries the risk of even greater marginalization of people with less-than-perfect profiles.

Credit checks are already a flawed system and don’t always reflect a person’s ability to pay rent. The working class and poor sometimes have to resort to an underground economy, especially within the service industry, and may have to routinely negotiate decisions for survival that don’t reflect well on credit reports. Tenants may have to compromise on other payments in order to pay rent in difficult times, or their personal scores may be affected by unrelated and even contradictory factors—such as their work credit cards, paying off the entirety of an outstanding debt, or even not having credit cards in the first place.

Lenz and his company also faced accusations of blacklisting tenants by allowing landlords to report them to a central database. In response to Jack Hauen of QP Briefing, Lenz denied the claim, saying that laws across Canadian jurisdictions prevent landlords from seeing other landlords’ feedback about tenants.

Lenz is himself a landlord in San Francisco, considered to be the most gentrified city in the US, with over 8,000 people experiencing homelessness in 2019. The clean and easy solutions that are offered to landlords by automated tools—what Lenz reframes as “beneficial for tenants”—pose much more serious consequences for rental applicants in cities besieged by gentrification.

It’s notable that Naborly also acquired LRANK.com in 2016, “a location intelligence engine” that analyzes civic data about property locations and neighbourhoods—apparently to “make new predictions about the people who live there, what they do, what they want, how they are changing.” Naborly claims that data from LRANK factors into “property suitability analysis,” alongside the company’s own credit report, to determine “how happy a tenant will be in a given location.” Such predictive analysis by Naborly’s LRANK, however, threatens not only to make neighbourhoods and tenants more homogeneous based on indeterminate decisions that claim to “build stronger communities,” but can also guide and exacerbate the gentrification of neighbourhoods based on speculative projections of future development—as LRANK boasts, from transit to the number of young families in the area.

This year, Montréal’s famous July 1 moving day left more than 370 families without a lease (and those are just the reported numbers). The unprecedented toll of the pandemic has not stopped evictions, only paused them, as Québec lifted its moratorium on July 20. For the many who have exhausted or will soon exhaust their financial options, missed rent or bill payments, faced eviction, or experienced unemployment, each of these events may leave a permanent mark on their profiles. And what if they were just starting to recover from previous “patterns of risk”?

Sachil Singh, a fellow at the Surveillance Studies Centre at Queen’s University who has worked on the Big Data Surveillance project, stated in an email to Canadian Dimension: “The point is that willingness is a complicated subjective term, yet risk scores are often applied as if they are objective.”

Employment histories and other data points that make up a candidate’s profile could already be shaped by systemic discrimination—the absence of an “ideal” data point revealing more about the labour or housing market than about a candidate’s abilities. While purportedly eliminating bias, an algorithm may not be able to account for a candidate’s previous experience with racism or other systemic discrimination, in addition to any conscious or unconscious bias in the development of the algorithm itself.

Singh has also conducted research into discrimination in the Canadian healthcare system on the basis of racial data, and into essentialist positions that tie racial attributes to behaviours. “The broader point is that algorithms are designed by people whose biases are an inherent part of their products,” said Singh, “so their lived experiences with race can implicitly shape their design of the platforms.” These biases have also been noted by the AI Now Institute at New York University, which has pointed out that gender and racial diversity in AI development remains low, affecting both the methodology of development and the data used to train algorithms.
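A toy example, written as a hypothetical sketch with invented data rather than any vendor’s actual code, makes the training-data problem concrete: a naive screener that simply learns the majority outcome from past human decisions will reproduce whatever pattern of rejection those decisions contain, even for equally qualified applicants.

from collections import Counter, defaultdict

# Toy illustration only: invented records, not any company's model or data.
# Each record is (experience, neighbourhood, hired?), standing in for a history
# of human decisions that disadvantaged applicants from neighbourhood "B".
history = [
    ("3-5 yrs", "A", True), ("3-5 yrs", "A", True), ("3-5 yrs", "A", False),
    ("3-5 yrs", "B", False), ("3-5 yrs", "B", False), ("3-5 yrs", "B", True),
]

# "Train" by recording the majority outcome per neighbourhood.
outcomes = defaultdict(Counter)
for _, area, hired in history:
    outcomes[area][hired] += 1

def predict(area: str) -> bool:
    # The learned rule ignores qualifications entirely and simply repeats
    # the historical pattern attached to the applicant's neighbourhood.
    return outcomes[area].most_common(1)[0][0]

print(predict("A"), predict("B"))  # True False: identical credentials, different outcomes

No one wrote an explicitly discriminatory rule here; the bias arrives with the historical labels, which is precisely why “learning from past decisions” is not a neutral design choice.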

Meghan McDermott, Acting Policy Director at the BC Civil Liberties Association (BCCLA), confirmed that tenant screening apps “can exacerbate the discrimination that already occurs in the rental housing market by collecting information about an applicant that is irrelevant, inaccurate or stigmatizing.”

Such screening tools often embody faceless and automated application processes: Naborly proudly advertises that the landlord doesn’t even have to meet the tenant.

For Ideal, a Toronto-based AI program for screening job applicants spearheaded by multi-millionaire Somen Mondal, automation is extended across multiple stages of recruitment. Marketed toward enterprise teams such as Bell, Staples and Purolator, Ideal claims to screen, shortlist and grade candidates using AI to “efficiently manage thousands of applications,” while its chatbot “can easily replace phone screens to further qualify candidates.” All this is meant to benefit the client by “cutting down on HR,” reducing the volume of applications, and minimizing other “repetitive recruiting tasks.”

“One argument is that automation is the only way for an employer to deal efficiently with high volumes of job applications,” noted Singh, adding that this automation also devalues such positions. However, the wider adoption of automated recruitment tools could pose increasingly difficult hurdles for job applicants who have non-traditional career paths and experience, or who have worked precarious jobs before having the chance to work in their chosen industry. Unlike a trained HR department, an AI-based screening process may also fail to consider an applicant’s history in toxic workplaces where they may have faced harassment, as McDermott mentioned. And as Singh noted, this automation could come at the expense of applicants’ accessibility requirements.

These screening tools are about optimizing human capital, reducing people to the discrete, uncomplicated, computable, and uncontroversial parts of themselves. This has also attracted scrutiny in the United States, where the Utah-based HireVue, a similar “pre-employment testing and video interviewing platform,” analyzes facial movements, word choice and how candidates speak in order to grade their employability. In an article featured in the Washington Post, Meredith Whittaker of the AI Now Institute stated, “It’s a profoundly disturbing development that we have proprietary technology that claims to differentiate between a productive worker and a worker who isn’t fit, based on their facial movements, their tone of voice, their mannerisms.”

The marketing point that truly drives this industry of automated screening apps is the promise of eliminating various forms of discrimination and bias in fraught decision-making processes—a proprietary piece of software will magically imbue a company with a culture of “diversity and inclusion.”

Ideal claims to remove “variables that commonly lead to biased screening and matching, laying out the foundation for unprejudiced hiring practices.” Another AI-based screening company, Toronto-based Knockin’ AI Video Recruiting, targets its product at Fortune 500 companies while claiming to “diminish bias” and act as a “catalyst for social change.”

But the pitfalls of training conversational AI for chatbots using publicly available data—such as social media or forum posts—have been repeatedly denounced, from the infamous crowdsourced misogyny and racism of Microsoft’s Tay to racial bias in chatbots’ selective understanding and use of language and dialects. Singh emphasized the potential of AI tools to discriminate against accents, voices and word choices that it cannot recognize, noting that “impersonal screening methods limit workforce diversity in favour of those individuals who communicate a certain way.”

Regulation and privacy

From the content used to train AI apps to the grounds for rejecting an applicant or candidate, predictive apps allow the screening and decision-making processes to be obscured in ways that human-based decisions cannot be. With rental apps like Naborly, McDermott explains, “the basis for rejecting a prospective tenant is obscured and may not be understood or even defensible by a landlord should they be challenged to explain their decision at a human rights, or landlord and tenant, tribunal.”

This past February, the Office of the Privacy Commissioner of Canada (OPC) launched a consultation on the regulation of artificial intelligence through the Personal Information Protection and Electronic Documents Act (PIPEDA). The OPC found that PIPEDA “falls short in its application to AI systems.” The proposals emphasized regulations around privacy laws in the private sector for AI and predictive technology, on the grounds that “the impacts to privacy, data protection and, by extension, human rights will be immense if clear rules are not enshrined in legislation that protect these rights against the possible negative outcomes of AI and machine learning processes.”

Crucially, the OPC found that this also includes viewing the “right to object and to be free from automated decisions as analogous to the right to withhold consent.”

As it stands, the Canadian Privacy Commissioner “cannot issue binding orders or fines,” as McDermott stated, “which is a significant gap in privacy protection and allows non-compliance to persist.”

“Sbornia Monumental,” Fortunato Depero, 1945

In the Privacy Commissioner’s 2019 report, the OPC argued that Canadian legislation should further enshrine privacy as a human right through laws that protect “the right to be forgotten” and require algorithmic transparency. As the OPC notes, “reformed legislation must incorporate rights that protect against harms that are unique to the digital era, including but not limited to ubiquitous surveillance, discrimination in profiling, automated decision-making, and behavioural data analytics.”

With wider adoption of screening apps, it is also crucial to regulate the collection and sale of private data, even though oversight mechanisms are easier to implement for federal government tools than for private companies that may exploit differences in privacy laws across Canada.

“Generally speaking, it is unlawful to even ask a prospective tenant for their age or to seek their credit history and criminal record,” said McDermott. Québec tenants, however, know well that credit checks are almost standard in order to rent apartments.

Naborly has partnered with credit bureaus and other private startups, such as Equifax and RentMoola, a Vancouver-based platform integrating payment and credit reporting systems, which in turn recently partnered with PayPal. Such partnerships ultimately create more vectors for sharing, and possibly hacking, vulnerable private data such as all those useful credit and consumer behaviour reports, payment information, business transactions and possibly criminal record checks from foreign jurisdictions.

There is also the issue of data storage. Ideal, for example, hosts data on Amazon cloud servers. While Amazon operates several servers in Canada, the company has previously signed contracts with the CIA and the Pentagon, and was a leading bidder for the controversial JEDI cloud contract. While Naborly and Ideal are just two relatively small examples of Canadian companies collecting and analyzing data through their screening tools, the recent US-Mexico-Canada Agreement (USMCA), which replaces NAFTA, prevents the governments of Canada, the United States and Mexico from restricting the cross-border transfer of personal data “for business purposes.”

Here too, the OPC has criticized the current complaints based model as “insufficient to give individuals the assurance and trust they need that their privacy will continue to be protected when their personal information is transferred for processing.”

When it comes to private companies, the security of data is hardly reliable: a start-up can be bought out at any moment, along with ownership of all its collected data. A sobering reminder of this is Montréal’s encrypted messaging company Peerio, which was acquired by WorkJam in 2019. Peerio’s co-founder and cryptographer Nadim Kobeïssi had quit controversially three years earlier, by which point the company had already signaled that it was moving toward serving enterprise-level clients and could provide administrative backdoors.

Ultimately, the benefits of AI-powered screening apps are intended to maximize profit margins for companies and landlords, while shifting real accountability from people with institutional power and capital to the apps that serve them. With neat “storytelling” marketing about how an algorithm should replace human conscientiousness—and is therefore not only morally superior, but a means to achieve a more equitable future—these companies are positioning themselves as indispensable. They seek to normalize the outsourcing of ethical and legal accountability in some of the most fraught selection processes. With USMCA failing to provide sufficient protections for Canadian data, we should be demanding accountability from the federal government around domestic privacy protections, and looking critically at the profit-driven, AI-based programs that are increasingly supplanting human judgement in every sphere of life.

Lital Khaikin is an author and journalist based in Tiohtiá:ke (Montréal). She has published articles in Toward Freedom, Warscapes, Briarpatch, and the Media Co-op, and has appeared in literary publications like 3:AM Magazine, Berfrois, Tripwire, and Black Sun Lit’s “Vestiges” journal. She also runs The Green Violin, a slow-burning samizdat-style literary press for the free distribution of literary paraphernalia.
