Sweden’s algorithmically powered welfare system is disproportionately targeting marginalised groups in Swedish society for benefit fraud investigations, and must be immediately discontinued, Amnesty International has said.
An investigation published by Lighthouse Reports and Svenska Dagbladet (SvD) on 27 November 2024 found that the machine learning (ML) system used by Försäkringskassan, Sweden’s Social Insurance Agency, is disproportionately flagging certain groups for further investigation over social benefits fraud, including women, individuals with “foreign” backgrounds, low-income earners and people without university degrees.
Based on an analysis of aggregate data on the outcomes of fraud investigations where cases were flagged by the algorithms, the investigation also found the system was largely ineffective at identifying men and wealthy people who had actually committed some form of social security fraud.
To detect social benefits fraud, the ML-powered system – introduced by Försäkringskassan in 2013 – assigns risk scores to social security applicants, which then automatically triggers an investigation if the risk score is high enough.
Those with the highest risk scores are referred to the agency’s “control” department, which takes on cases where there is suspicion of criminal intent, while those with lower scores are referred to case workers, where they are investigated without the presumption of criminal intent.
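Försäkringskassan has not disclosed how its model scores applications, but the routing described above amounts to simple threshold-based triage. The sketch below is a minimal, hypothetical illustration of that logic; the function name, score scale and cut-off values are assumptions, not the agency’s actual parameters.

```python
# Hypothetical sketch of threshold-based routing of risk scores.
# Score scale and thresholds are invented for illustration only.

def route_application(risk_score: float,
                      control_threshold: float = 0.9,
                      case_worker_threshold: float = 0.6) -> str:
    """Route an application to a handling track based on its risk score."""
    if risk_score >= control_threshold:
        # Highest scores: "control" department, suspicion of criminal intent
        return "control department"
    if risk_score >= case_worker_threshold:
        # Lower scores: case worker review, no presumption of criminal intent
        return "case worker"
    return "no investigation"

print(route_application(0.95))  # -> control department
print(route_application(0.70))  # -> case worker
print(route_application(0.30))  # -> no investigation
```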
Once cases are flagged to fraud investigators, they have the power to trawl through a person’s social media accounts, obtain data from institutions such as schools and banks, and even interview an individual’s neighbours as part of their investigations. Those incorrectly flagged by the social security system have complained they then end up facing delays and legal hurdles in accessing their welfare entitlement.
“The entire system is akin to a witch hunt against anyone who is flagged for social benefits fraud investigations,” said David Nolan, senior investigative researcher at Amnesty Tech. “One of the main issues with AI [artificial intelligence] systems being deployed by social security agencies is that they can aggravate pre-existing inequalities and discrimination. Once an individual is flagged, they’re treated with suspicion from the start. This can be extremely dehumanising. This is a clear example of people’s right to social security, equality and non-discrimination, and privacy being violated by a system that is clearly biased.”
Testing against fairness metrics
Using the aggregate data – which was only obtainable because Sweden’s Inspectorate for Social Security (ISF) had previously requested the same data – SvD and Lighthouse Reports were able to test the algorithmic system against six standard statistical fairness metrics, including demographic parity, predictive parity and false positive rates.
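For readers unfamiliar with these metrics, the sketch below shows how two of them – demographic parity (selection rate) and false positive rate – can be computed per group from aggregate counts. It is not the journalists’ actual methodology or data: the group names and figures are invented purely to illustrate the comparison.

```python
# Illustrative computation of two fairness metrics from aggregate counts.
# All numbers below are made up; they do not reflect the investigation's data.

from dataclasses import dataclass

@dataclass
class GroupOutcomes:
    flagged: int           # applications from the group flagged by the algorithm
    total: int             # all applications from the group
    flagged_innocent: int  # flagged applications where no fraud was found
    innocent: int          # all applications from the group with no fraud

def selection_rate(g: GroupOutcomes) -> float:
    """Demographic parity compares this share of flagged applications across groups."""
    return g.flagged / g.total

def false_positive_rate(g: GroupOutcomes) -> float:
    """Share of a group's non-fraudulent applications that were still flagged."""
    return g.flagged_innocent / g.innocent

group_a = GroupOutcomes(flagged=300, total=10_000, flagged_innocent=270, innocent=9_800)
group_b = GroupOutcomes(flagged=150, total=10_000, flagged_innocent=100, innocent=9_700)

# Comparable groups should show similar rates; large gaps point to disparate treatment.
print(selection_rate(group_a), selection_rate(group_b))
print(false_positive_rate(group_a), false_positive_rate(group_b))
```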
They noted that while the findings showed the Swedish system is disproportionately targeting already marginalised groups in Swedish society, Försäkringskassan has not been fully transparent about the inner workings of the system, having rejected a number of freedom of information (FOI) requests submitted by the investigators.
They added that when they presented their analysis to Anders Viseth, head of analytics at Försäkringskassan, he did not question it, and instead argued there was no problem.
“The selections we make, we don’t consider them to be a disadvantage,” he said. “We look at individual cases and assess them based on the likelihood of error, and those who are selected receive a fair trial. These models have proven to be among the most accurate we have. And we have to use our resources in a cost-effective way. At the same time, we do not discriminate against anyone, but we follow the discrimination law.”
Computer Weekly contacted Försäkringskassan about the investigation and Amnesty’s subsequent call for the system to be discontinued.
“Försäkringskassan bears a significant responsibility to prevent criminal activities targeting the Swedish social security system,” said a spokesperson for the agency. “This machine learning-based system is one of several tools used to safeguard Swedish taxpayers’ money.
“Importantly, the system operates in full compliance with Swedish law. It is worth noting that the system does not flag individuals but rather specific applications. Moreover, being flagged does not automatically lead to an investigation. And if an applicant is entitled to benefits, they will receive them regardless of whether their application was flagged. We understand the interest in transparency; however, revealing the specifics of how the system operates could enable individuals to bypass detection. This position has been upheld by the Administrative Court of Appeal (Stockholms Kammarrätt, case no. 7804-23).”
Nolan said if use of the system continues, Sweden may be sleepwalking into a scandal similar to the one in the Netherlands, where tax authorities used algorithms to falsely accuse tens of thousands of parents and caregivers from mostly low-income families of fraud, which also disproportionately harmed people from ethnic minority backgrounds.
“Given the opaque response from the Swedish authorities, not allowing us to understand the inner workings of the system, and the vague framing of the social scoring ban under the AI Act, it is difficult to determine where this specific system would fall under the AI Act’s risk-based classification of AI systems,” he said. “However, there is enough evidence to suggest that the system violates the right to equality and non-discrimination. Therefore, the system must be immediately discontinued.”
Under the AI Act – which came into force on 1 August 2024 – the use of AI systems by public authorities to determine access to essential public services and benefits must meet strict technical, transparency and governance rules, including an obligation on deployers to carry out an assessment of human rights risks and guarantee there are mitigation measures in place before using them. Specific systems that are considered tools for social scoring are prohibited.
Sweden’s ISF previously found in 2018 that the algorithm used by Försäkringskassan “in its current design [the algorithm] does not meet equal treatment”, although the agency pushed back at the time by arguing the analysis was flawed and based on dubious grounds.
A data protection officer who previously worked for Försäkringskassan also warned in 2020 that the system’s operation violates the European General Data Protection Regulation, because the authority has no legal basis for profiling people.
On 13 November, Amnesty International exposed how AI tools used by Denmark’s welfare agency are creating pernicious mass surveillance, risking discrimination against people with disabilities, racialised groups, migrants and refugees.