@ignaloidas I wasn’t talking about literal gambling. and honestly, even if the net outcome of that situation is positive, the amount of power and control involved is nonetheless alarming.
but again, without the ability to audit the process, nobody can truly consent to your service. and unless the people who are consenting have the option of a non-automated process instead, even their nominal consent is in fact coerced.
@ignaloidas but you’re just guessing you can trust everyone to Do The Right Thing. you can’t be sure, and yet you’re gambling other people’s privacy. what gives you the right?
@ignaloidas yes, but it would be slower and more error-prone, and more importantly, you wouldn’t be enabling them.
as for IP and facial recognition... if your technology gets embedded in an app or a website, it becomes trivial to record the IP address along with the results of the verification check. you know that.
and laws aren’t “mostly” based on ethical principles; many are first and foremost about protecting wealthy property owners, not ending things like hunger, homelessness, and war. it’s not at all a safe assumption.
yes, you’re talking about identity verification, I know that. the question ICE would ask is not “who is this person?” but rather “is this [NAME], the person we’re looking for?” and the question data companies would ask is not “who is the user of IP address X?” but rather “can we verify that IP address X was physically being used by [NAME] at that time?”, which lets them build a reliable timeline of people’s actions by connecting data from many sources.
and yeah, we were originally talking about the EU, but now we are talking about whether your example of “ethical” facial recognition is in fact ethical. you cited the GDPR as a reason to expect only ethical behavior from your clients, and I simply pointed out that that only works if you operate entirely within the EU and if you assume companies would never break the law and get away with it.
it’s also troubling that you seem to be implying that just because something is unlawful, that makes it ethical to prevent it. that’s not necessarily the case.
@ignaloidas I mean the application in the case of ICE is pretty obvious. they knock at someone’s door, use their phone’s camera to identify who answers, and thanks to the cloud, they get a match and can arrest people before they have a chance to flee. the insurance angle is also pretty clear: at least in the US, insurers are constantly working on profiling people to adjust their rates, and successful facial ID check-ins are very valuable data points.
saying that large-scale facial recognition is illegal in the EU doesn’t mean much. are your clients strictly based in the EU? do you have mechanisms to ensure they won’t lie about how they use your software? would your company really risk bankruptcy if it came out that your biggest clients were conducting unlawful surveillance? or would people just look the other way and collect their checks?
I admit I do not have statistics about scam victims, but again, think about who’s gonna be paying you to build this tech: large corporations. by definition, you are not serving the interests of regular people, except when those interests happen to align with those of large companies. incidental altruism is not really altruism at all.
@ignaloidas an increase in efficiency is not necessarily a good thing. yes, individuals may find their paperwork goes faster in some cases. they also will find the process of having their insurance fees hiked, or being detained by ICE, to be much more streamlined as well.
furthermore, yes, adversarial imagery can be created, but think about who’s most at risk here. the targets of genuine fraud tend to be either corporations or wealthy people, who have the resources to recover easily. meanwhile, the disenfranchised and impoverished stand to gain little from this tradeoff of “denying scammers”, as they have few to no assets to protect. and it is these same minorities who are most at risk of being targets of government persecution!
so yes, sharing the model opens up risk, but consider who bears that risk, and who benefits from it. and if your AI is so vulnerable to exploitation that publishing the details compromises the entire use case, that may be a sign that AI isn’t the right approach to solving this “problem”
@ignaloidas if the consumers cannot know _how_ exactly you are ID-ing them, they cannot truly consent. this goes doubly so if the ID-ing is required for use of a service the user wants (effectively meaning they have little choice), and triply so if the entity using your product is a large organization, i.e. a government or a megacorp, that has significant power over people’s lives
@Nowak *pets and scritches behind your ears*
@roxy I absolutely adore little details like this
@PK I’m breathing out, the c h e m i c a l s
re: mario maker
@TJCheetah excellent! lemme know what ya think when you do!
friendly pixie-fox thing that doesn’t bite, I swear! they/them pronouns please!
The Vulpine Club is a friendly and welcoming community of foxes and their associates, friends, and fans! =^^=