Companies that use services like this often mention it in their privacy policies, as Airbnb does—but how many of us realize our account behaviors are being shared with companies we’ve never heard of, in the name of security? How much of the information one company shares with these fraud-detection services is used by other clients of that service? And why can’t we access any of this data ourselves, to update, correct or delete it?
According to Sift and competitors such as SecureAuth, which has a similar scoring system, this practice complies with regulations such as the European Union’s General Data Protection Regulation, which mandates that companies not store data that can be used to identify real human beings unless those people give permission.
Unfortunately, GDPR, which went into effect a year ago, has rules that are often vaguely worded, says Lisa Hawke, vice president of security and compliance at the legal tech startup Everlaw. All of this will have to get sorted out in court, she adds.
Another concern for companies using fraud-detection software is just how stringent to be about flagging suspicious behavior. When the algorithms are not zealous enough, they let fraudsters through; when they’re overzealous, they lock out legitimate customers. Sift and its competitors market themselves as being better and smarter at distinguishing “good” customers from “bad” ones.
In the gap between who is taking responsibility for user data—Sift or its clients—there appears to be ample room for the kind of slip-ups that could run afoul of privacy laws. Without an audit of such a system, it’s impossible to know. Companies live under increasing threat of prosecution, but as just-released research on biases in Facebook’s advertising algorithm suggests, even the most sophisticated operators don’t seem to be fully aware of how their systems are behaving.